id (string, 10 chars) | title (string, 19-145 chars) | abstract (string, 273-1.91k chars) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64, 0-887)
---|---|---|---|---|---|---|---|---|---|
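The columns above appear to pair QASPER-style paper records (full text, QA annotations, figure/table captions) with a question, retrieval ground truth, and answer ground truth. Below is a minimal sketch of iterating such rows; the file name `qasper_rows.parquet` is a placeholder for however this table is exported on your side (a Parquet/JSON dump or a `datasets` load), and the field access simply mirrors the schema rather than any official loader.

```python
# Minimal sketch of iterating rows with the schema above. The file name
# "qasper_rows.parquet" is a placeholder for a local export of this table;
# substitute your own Parquet/JSON dump or a datasets.load_dataset(...) call.
import pandas as pd

df = pd.read_parquet("qasper_rows.parquet")  # hypothetical local export

for _, row in df.head(3).iterrows():
    full_text = row["full_text"]                      # dict: paragraphs + section_name
    print(row["id"], "-", row["title"])
    print("  sections:     ", ", ".join(full_text["section_name"]))
    print("  question:     ", row["question"][0])     # sequence of question strings
    print("  gold evidence:", list(row["retrieval_gt"][0]))  # e.g. figure/table files
    print("  gold answer:  ", row["answer_gt"][0])
    print("  row index:    ", row["__index_level_0__"])
```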
1806.07711 | Categorization of Semantic Roles for Dictionary Definitions | Understanding the semantic relationships between terms is a fundamental task in natural language processing applications. While structured resources that can express those relationships in a formal way, such as ontologies, are still scarce, a large number of linguistic resources gathering dictionary definitions is becoming available, but understanding the semantic structure of natural language definitions is fundamental to make them useful in semantic interpretation tasks. Based on an analysis of a subset of WordNet's glosses, we propose a set of semantic roles that compose the semantic structure of a dictionary definition, and show how they are related to the definition's syntactic configuration, identifying patterns that can be used in the development of information extraction frameworks and semantic models. | {
"paragraphs": [
[
"This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ Many natural language understanding tasks such as Text Entailment and Question Answering systems are dependent on the interpretation of the semantic relationships between terms. The challenge on the construction of robust semantic interpretation models is to provide a model which is both comprehensive (capture a large set of semantic relations) and fine-grained. While semantic relations (high-level binary predicates which express relationships between words) can serve as a semantic interpretation model, in many cases, the relationship between words cannot be fully articulated as a single semantic relation, depending on a contextualization that involves one or more target words, their corresponding semantic relationships and associated logical operators (e.g. modality, functional operators).",
"Natural language definitions of terms, such as dictionary definitions, are resources that are still underutilized in the context of semantic interpretation tasks. The high availability of natural language definitions in different domains of discourse, in contrast to the scarcity of comprehensive structured resources such as ontologies, make them a candidate linguistic resource to provide a data source for fine-grained semantic models.",
"Under this context, understanding the syntactic and semantic “shape” of natural language definitions, i.e., how definitions are usually expressed, is fundamental for the extraction of structured representations and for the construction of semantic models from these data sources. This paper aims at filling this gap by providing a systematic analysis of the syntactic and semantic structure of natural language definitions and proposing a set of semantic roles for them. By semantic role here we mean entity-centered roles, that is, roles representing the part played by an expression in a definition, showing how it relates to the entity being defined. WordNet BIBREF0 , one of the most employed linguistic resources in semantic applications, was used as a corpus for this task. The analysis points out the syntactic and semantic regularity of definitions, making explicit an enumerable set of syntactic and semantic patterns which can be used to derive information extraction frameworks and semantic models.",
"The contributions of this paper are: (i) a systematic preliminary study of syntactic and semantic relationships expressed in a corpus of definitions, (ii) the derivation of semantic categories for the classification of semantic patterns within definitions, and (iii) the description of the main syntactic and semantic shapes present in definitions, along with the quantification of the distribution of these patterns.",
"The paper is organized as follows: Section \"Structural Aspects of Definitions\" presents the basic structural aspects of definitions according to the classic theory of definitions. Section \"Semantic Roles for Lexical Definitions\" introduces the proposed set of semantic roles for definitions. Section \"Identifying Semantic Roles in Definitions\" outlines the relationship between semantic and syntactic patterns. Section \"Related Work\" lists related work, followed by the conclusions and future work in Section \"Conclusion\" ."
],
[
"Swartz swartz2007definitions describe lexical, or dictionary definitions as reports of common usage (or usages) of a term, and argue that they allow the improvement and refinement of the use of language, because they can be used to increase vocabulary (introducing people to the meaning and use of words new to them), to eliminate certain kinds of ambiguity and to reduce vagueness. A clear and properly structured definition can also provide the necessary identity criteria to correctly allocate an entity in an ontologically well-defined taxonomy BIBREF1 .",
"Some linguistic resources, such as WordNet, organize concepts in a taxonomy, so the genus-differentia definition pattern would be a suitable way to represent the subsumption relationship among them. The genus and differentia concepts date back to Aristotle's writings concerning the theory of definition BIBREF2 , BIBREF3 , BIBREF4 and are most commonly used to describe entities in the biology domain, but they are general enough to define concepts in any field of knowledge. An example of a genus-differentia based definition is the Aristotelian definition of a human: “a human is a rational animal”. Animal is the genus, and rational is the differentia distinguishing humans from other animals.",
"Another important aspect of the theory of definition is the distinction between essential and non-essential properties. As pointed by Burek burek2004adoption, stating that “a human is an animal” informs an essential property for a human (being an animal), but the sentence “human is civilized” does not communicate a fundamental property, but rather something that happens to be true for humans, that is, an incidental property.",
"Analyzing a subset of the WordNet definitions to investigate their structure, we noticed that most of them loosely adhere to the classical theory of definition: with the exception of some samples of what could be called ill-formed definitions, in general they are composed by a linguistic structure that resembles the genus-differentia pattern, plus optional and variable incidental properties. Based on this analysis, we derived a set of semantic roles representing the components of a lexical definition, which are described next."
],
[
"Definitions in WordNet don't follow a strict pattern: they can be constructed in terms of the entity's immediate superclass or rather using a more abstract ancestral class. For this reason, we opted for using the more general term supertype instead of the classical genus. A supertype is either the immediate entity's superclass, as in “footwear: clothing worn on a person's feet”, being footwear immediately under clothing in the taxonomy; or an ancestral, as in “illiterate: a person unable to read”, where illiterate is three levels below person in the hierarchy.",
"Two different types of distinguishing features stood out in the analyzed definitions, so the differentia component was split into two roles: differentia quality and differentia event. A differentia quality is an essential, inherent property that distinguishes the entity from the others under the same supertype, as in “baseball_coach: a coach of baseball players”. A differentia event is an action, state or process in which the entity participates and that is mandatory to distinguish it from the others under the same supertype. It is also essential and is more common for (but not restricted to) entities denoting roles, as in “roadhog: a driver who obstructs others”.",
"As any expression describing events, a differentia event can have several subcomponents, denoting time, location, mode, etc. Although many roles could be derived, we opted to specify only the ones that were more recurrent and seemed to be more relevant for the definitions' classification: event time and event location. Event time is the time in which a differentia event happens, as in “master_of_ceremonies: a person who acts as host at formal occasions”; and event location is the location of a differentia event, as in “frontiersman: a man who lives on the frontier”.",
"A quality modifier can also be considered a subcomponent of a differentia quality: it is a degree, frequency or manner modifier that constrain a differentia quality, as in “dart: run or move very quickly or hastily”, where very narrows down the differentia quality quickly associated to the supertypes run and move.",
"The origin location role can be seen as a particular type of differentia quality that determines the entity's location of origin, but in most of the cases it doesn't seem to be an essential property, that is, the entity only happens to occur or come from a given location, and this fact doesn't account to its essence, as in “Bartramian_sandpiper: large plover-like sandpiper of North American fields and uplands”, where large and plover-like are essential properties to distinguish Bartramian_sandpiper from other sandpipers, but occurring in North American fields and uplands is only an incidental property.",
"The purpose role determines the main goal of the entity's existence or occurrence, as in “redundancy: repetition of messages to reduce the probability of errors in transmission”. A purpose is different from a differentia event in the sense that it is not essential: in the mentioned example, a repetition of messages that fails to reduce the probability of errors in transmission is still a redundancy, but in “water_faucet: a faucet for drawing water from a pipe or cask””, for drawing water is a differentia event, because a faucet that fails this condition is not a water faucet.",
"Another event that is also non-essential, but rather brings only additional information to the definition is the associated fact, a fact whose occurrence is/was linked to the entity's existence or occurrence, accounting as an incidental attribute, as in “Mohorovicic: Yugoslav geophysicist for whom the Mohorovicic discontinuity was named”.",
"Other minor, non-essential roles identified in our analysis are: accessory determiner, a determiner expression that doesn't constrain the supertype-differentia scope, as in “camas: any of several plants of the genus Camassia”, where the expression any of several could be removed without any loss in the definition meaning; accessory quality, a quality that is not essential to characterize the entity, as in “Allium: large genus of perennial and biennial pungent bulbous plants”, where large is only an incidental property; and [role] particle, a particle, such as a phrasal verb complement, non-contiguous to the other role components, as in “unstaple: take the staples off”, where the verb take off is split in the definition, being take the supertype and off a supertype particle.",
"The conceptual model in Figure 1 shows the relationship among roles, and between roles and the definiendum, that is, the entity being defined."
],
[
"Once the relevant semantic roles were identified in the manual analysis, the following question emerged: is it possible to extend this classification to the whole definitions database through automated Semantic Role Labelling? Although most SRL systems rely on efficient machine learning techniques BIBREF5 , an initial, preferably large, amount of annotated data is necessary for the training phase.",
"Since manual annotation is expensive, an alternative would be a rule-based mechanism to automatically label the definitions, based on their syntactic structure, followed by a manual curation of the generated data. As shown in an experimental study by Punyakanok et al. punyakanok2005necessity, syntactic parsing provides fundamental information for event-centered SRL, and, in fact, this is also true for entity-centered SRL.",
"To draw the relationship between syntactic and semantic structure (as well as defining the set of relevant roles described earlier), we randomly selected a sample of 100 glosses from the WordNet nouns+verbs database, being 84 nouns and 16 verbs (the verb database size is only approximately 17% of the noun database size). First, we manually annotated each of the glosses, assigning to each segment in the sentence the most suitable role. Example sentences and parentheses were not included in the classification. Figure 2 shows an example of annotated gloss. Then, using the Stanford parser BIBREF6 , we generated the syntactic parse trees for all the 100 glosses and compared the semantic patterns with their syntactic counterparts.",
"Table 1 shows the distribution of the semantic patterns for the analyzed sample. As can be seen, (supertype) (differentia quality) and (supertype) (differentia event) are the most frequent patterns, but many others are composed by a combination of three or more roles, usually the supertype, one or more differentia qualities and/or differentia events, and any of the other roles. Since most of them occurred only once (29 out of 42 identified patterns), it is easier to analyze the roles as independent components, regardless of the pattern where they appear. The context can always give some hint about what a role is, but we would expect the role's main characteristics not to change when their “companions” in the sentence varies. The conclusions are as follows, and are summarized in Table 2 :",
"Supertype: it's mandatory in a well-formed definition, and indeed 99 out of the 100 sentences analyzed have a supertype (the gloss for Tertiary_period – “from 63 million to 2 million years ago” – lacks a supertype and could, then, be considered an ill-formed definition). For verbs, it is the leftmost VB and, in some cases, subsequent VBs preceded by a CC (“or” or “and”). This is the case whenever the parser correctly classifies the gloss' head word as a verb (11 out of 16 sentences). For nouns, in most cases (70 out of 83) the supertype is contained in the innermost and leftmost NP containing at least one NN. It is the whole NP (discarding leading DTs) if it exists as an entry in WN, or the largest rightmost sequence that exists in WN otherwise. In the last case, the remaining leftmost words correspond to one or more differentia qualities. If the NP contains CCs, more than one supertype exist, and can be identified following the same rules just described. The 13 sentences that don't fit this scenario include some non-frequent grammatical variations, parser errors and the presence of accessory determiners, described later.",
"Differentia quality: for verbs, this is the most common identifying component in the definition. It occurs in 14 out of the 16 sentences. The other two ones are composed by a single supertype (that would better be seen as a synonym), and by a conjunction of two supertypes. The differentia quality is usually a PP (5 occurrences) or a NP (4 occurrences) coming immediately after the supertype. JJs inside ADJPs (3 occurrences) or RBs inside ADVPs (1 occurrence) are also possible patterns, where the presence of CCs indicates the existence of more than one differentia quality. For nouns, two scenarios outstand: the differentia quality preceding the supertype, where it is composed by the leftmost words in the same NP that contains the supertype but are not part of the supertype itself, as described above; and the differentia quality coming after the supertype, predominantly composed by a PP, where the prevailing introductory preposition is “of”. These two scenarios cover approximately 90% of all analyzed sentences where one or more differentia qualities occur.",
"Differentia event: differentia events occurs only for nouns, since verbs can't represent entities that can participate in an event (i.e., endurants in the ontological view). They are predominantly composed by either an SBAR or a VP (under a simple clause or not) coming after the supertype. This is the case in approximately 92% of the analyzed sentences where differentia events occur. In the remaining samples, the differentia event is also composed by a VP, but under a PP and immediately after the introductory preposition.",
"Event location: event locations only occur in conjunction with a differentia event, so they will usually be composed by a PP appearing inside a SBAR or a VP. Being attached to a differentia event helps to distinguish an event location from other roles also usually composed by a PP, but additional characteristics can also provide some clues, like, for example, the presence of named entities denoting locations, such as “Morocco” and “Lake District”, which appear in some of the analyzed glosses.",
"Event time: the event time role has the same characteristics of event locations: only occurs in conjunction with a differentia event and is usually composed by a PP inside a SBAR or a VP. Again, additional information such as named entities denoting time intervals, for example, “the 19th century” in one of the analyzed glosses, is necessary to tell it apart from other roles.",
"Origin location: origin locations are similar to event locations, but occurring in the absence of an event, so it is usually a PP that does not appear inside a SBAR or a VP and that frequently contains named entities denoting locations, like “United States”, “Balkan Peninsula” and “France” in our sample glosses. A special case is the definition of entities denoting instances, where the origin location usually comes before the supertype and is composed by a NP (also frequently containing some named entity), like the definitions for Charlotte_Anna_Perkins_Gilman – “United States feminist” – and Joseph_Hooker – “United States general [...]”, for example.",
"Quality modifier: quality modifiers only occur in conjunction with a differentia quality. Though this role wasn't very frequent in our analysis, it is easily identifiable, as long as the differentia quality component has already been detected. A syntactic dependency parsing can show whether some modifier (usually an adjective or adverb) references, instead of the supertype, some of the differentia quality's elements, modifying it.",
"Purpose: the purpose component is usually composed by a VP beginning with a TO (“to”) or a PP beginning with the preposition “for” and having a VP right after it. In a syntactic parse tree, a purpose can easily be mistaken by a differentia event, since the difference between them is semantic (the differentia event is essential to define the entity, and the purpose only provide additional, non-essential information). Since it provides complementary information, it should always occur in conjunction with an identifying role, that is, a differentia quality and/or event. Previously detecting these identifying roles in the definition, although not sufficient, is necessary to correctly assign the purpose role to a definition's segment.",
"Associated fact: an associated fact has characteristics similar to those of a purpose. It is usually composed by a SBAR or by a PP not beginning with “for” with a VP immediately after it (that is, not having the characteristics of a purpose PP). Again, the difference between an associated fact and a differentia event is semantic, and the same conditions and principles for identifying a purpose component also apply.",
"Accessory determiner: accessory determiners come before the supertype and are easily recognizable when they don't contain any noun, like “any of several”, for example: it will usually be the whole expression before the supertype, which, in this case, is contained in the innermost and leftmost NP having at least one NN. If it contains a noun, like “a type of”, “a form of”, “any of a class of”, etc., the recognition becomes more difficult, and it can be mistaken by the supertype, since it will be the leftmost NP in the sentence. In this case, a more extensive analysis in the WN database to collect the most common expressions used as accessory determiners is necessary in order to provide further information for the correct role assignment.",
"Accessory quality: the difference between accessory qualities and differentia qualities is purely semantic. It is usually a single adjective, but the syntactic structure can't help beyond that in the accessory quality identification. Again, the presence of an identifying element in the definition (preferably a differentia quality) associated with knowledge about most common words used as accessory qualities can provide important evidences for the correct role detection.",
"[Role] particle: although we believe that particles can occur for any role, in our analysis it was very infrequent, appearing only twice and only for supertypes. It is easily detectable for phrasal verbs, for example, take off in “take the staples off”, since the particle tends to be classified as PRT in the syntactic tree. For other cases, it is necessary a larger number of samples such that some pattern can be identified and a suitable extraction rule can be defined."
],
[
"The task described in this work is a form of Semantic Role Labeling (SRL), but centered on entities instead of events. Typically, SRL has as primary goal to identify what semantic relations hold among a predicate (the main verb in a clause) and its associated participants and properties BIBREF7 . Focusing on determining “who” did “what” to “whom”, “where”, “when”, and “how”, the labels defined for this task include agent, theme, force, result and instrument, among others BIBREF8 .",
"Liu and Ng liu2007learning perform SRL focusing on nouns instead of verbs, but most noun predicates in NomBank, which were used in the task, are verb nominalizations. This leads to the same event-centered role labeling, and the same principles and labels used for verbs apply.",
"Kordjamshidi et al. kordjamshidi2010spatial describe a non-event-centered semantic role labeling task. They focus on spatial relations between objects, defining roles such as trajectory, landmark, region, path, motion, direction and frame of reference, and develop an approach to annotate sentences containing spatial descriptions, extracting topological, directional and distance relations from their content.",
"Regarding the structural aspects of lexical definitions, Bodenreider and Burgun bodenreider2002characterizing present an analysis of the structure of biological concept definitions from different sources. They restricted the analysis to anatomical concepts to check to what extent they fit the genus-differentia pattern, the most common method used to classify living organisms, and what the other common structures employed are, in the cases where that pattern doesn't apply.",
"Burek burek2004adoption also sticks to the Aristotelian classic theory of definition, but instead of analyzing existing, natural language definitions, he investigates a set of ontology modeling languages to examine their ability to adopt the genus-differentia pattern and other fundamental principles, such as the essential and non-essential property differentiation, when defining a new ontology concept by means of axioms, that is, in a structured way rather than in natural language. He concludes that Description Logic (DL), Unified Modeling Language (UML) and Object Role Modeling (ORM) present limitations to deal with some issues, and proposes a set of definitional tags to address those points.",
"The information extraction from definitions has also been widely explored with the aim of constructing structured knowledge bases from machine readable dictionaries BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . The development of a Lexical Knowledge Base (LKB) also used to take into account both semantic and syntactic information from lexical definitions, which were processed to extract the definiendum's genus and differentiae. To populate the LKB, typed-feature structures were used to store the information from the differentiae, which were, in turn, transmitted by inheritance based on the genus information. A feature structure can be seen as a set of attributes for a given concept, such as “origin”, “color”, “smell”, “taste” and “temperature” for the concept drink (or for a more general concept, such as substance, from which drink would inherit its features), for example, and the differentiae in a definition for a particular drink would be the values that those features assume for that drink, for example, “red”, “white”, “sweet”, “warm”, etc. As a result, concepts could be queried using the values of features as filters, and words defined in different languages could be related, since they were represented in the same structure. To build the feature structures, restricted domains covering subsets of the vocabulary were considered, since having every relevant attribute for every possible entity defined beforehand is not feasible, being more overall strategies required in order to process definitions in large scale."
],
[
"We proposed a set of semantic roles that reflect the most common structures of dictionary definitions. Based on an analysis of a random sample composed by 100 WordNet noun and verb glosses, we identified and named the main semantic roles and their compositions present on dictionary definitions. Moreover, we compared the identified semantic patterns with the definitions' syntactic structure, pointing out the features that can serve as input for automatic role labeling. The proposed semantic roles list is by no means definitive or exhaustive, but a first step at highlighting and formalizing the most relevant aspects of widely used intensional level definitions.",
"As future work, we intend to implement a rule-based classifier, using the identified syntactic patterns to generate an initial annotated dataset, which can be manually curated and subsequently feed a machine learning model able to annotate definitions in large scale. We expect that, through a systematic classification of their elements, lexical definitions can bring even more valuable information to semantic tasks that require world knowledge."
],
[
"Vivian S. Silva is a CNPq Fellow – Brazil."
]
],
"section_name": [
"Introduction",
"Structural Aspects of Definitions",
"Semantic Roles for Lexical Definitions",
"Identifying Semantic Roles in Definitions",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"5b14ad0f7646da78ffd961e218bcaea78e93f937"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Most common syntactic patterns for each semantic role."
],
"extractive_spans": [],
"free_form_answer": "12",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Most common syntactic patterns for each semantic role."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How many roles are proposed?"
],
"question_id": [
"0c09a0e8f9c5bdb678563be49f912ab6e3f97619"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semantic roles"
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 1: Conceptual model for the semantic roles for lexical definitions. Relationships between [role] particle and every other role in the model are expressed as dashed lines for readability.",
"Figure 2: Example of role labeling for the definition of the “lake poets” synset.",
"Table 1: Distribution of semantic patterns for the analyzed definitions. “Other” refers to patterns that ocurred only once. (role)+ indicated the occurrence of two or more consecutive instances of the role, and OR(role)+ indicates the same, but with the conjunction “or” connecting the instances.",
"Table 2: Most common syntactic patterns for each semantic role."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png"
]
} | [
"How many roles are proposed?"
] | [
[
"1806.07711-6-Table2-1.png"
]
] | [
"12"
] | 605 |
1912.03457 | Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities | In this paper, we examine and analyze the challenges associated with developing and introducing language technologies to low-resource language communities. While doing so, we bring to light the successes and failures of past work in this area, challenges being faced in doing so, and what they have achieved. Throughout this paper, we take a problem-facing approach and describe essential factors which the success of such technologies hinges upon. We present the various aspects in a manner which clarify and lay out the different tasks involved, which can aid organizations looking to make an impact in this area. We take the example of Gondi, an extremely-low resource Indian language, to reinforce and complement our discussion. | {
"paragraphs": [
[
"Technology pervades all aspects of society and continues to change the way people access and share information, learn and educate, as well as provide and access services. Language is the main medium through which such transformational technology can be integrated into the socioeconomic processes of a community. Natural Language Processing (NLP) and Speech systems, therefore, break down barriers and enable users and whole communities with easy access to information and services. However, the current trend in building language technology is designed to work on languages with very high resources in terms of data and infrastructure.",
"Also, as Machine Learning (ML) and NLP practitioners, we get caught up in an information-theoretic view of the problem, e.g., focusing on incremental improvements of performance on benchmarks or capturing accurate distributions over data, and tend to forget that the raison d'être of NLP is to build systems that add value to its users BIBREF0. We want to build models that enable people to read the news that was not written in their language, ask questions about their health when they do not have access to a doctor, etc. And while these technology applications are more and more ubiquitous for languages with a lot of data, a larger majority of languages remain resource-poor and bereft of such systems. As discussed in the United Nations e-government survey BIBREF1, “one of the most important obstacles to e-inclusion, particularly among vulnerable groups with little education, is language”. Thus, by excluding these languages from reaping the benefits of the advancements in language technology, we marginalize the already vulnerable groups even further.",
"India is a highly multilingual society and home to some of the largest language communities in the world. 6 out of 20 most-spoken (native) languages in the world are Indic. Ethnologue BIBREF2 records 461 tongues in India out of 6912 worldwide (6%), the 4th largest belonging to any single country in the world. 122 of these languages are spoken by more than 10,000 people. 29 languages have more than 1 million speakers, which include indigenous tribal languages like Gondi and Mundari, some without a supported writing system or script. Despite the large numbers of users, most of these languages have very little data available. Figure FIGREF1 shows that as compared to some of the much lesser spoken languages like German, Indic languages are severely low resourced. In a vast country like India, access to information thus becomes a huge concern. This lack of information means that not only do these communities not have information in domains like agriculture, health, weather etc., which could improve their quality of lives, but they may also not be aware of their basic rights as citizens of the country.",
"In this paper, we take the position that the current direction of advanced language technology towards extremely high data requirements can have severe socio-economic implications for a majority of language communities in the world. We focus on specific aspects of designing and building systems and applications for low resource languages and their speech communities to exemplify viable social impact through language technology. We begin by discussing the aspect of information exchange, which is the core motivation behind enabling low-resource language communities. We then steer our analysis towards the design and creation of an interface for people in these communities to simplify and enrich the process of information exchange. Finally, we gather insights about how to deploy these technologies to ensure extensive impact by studying and taking inspiration from existing technological deployments.",
"We use Gondi, a South-Central Dravidian language in the vulnerable category on UNESCO's Atlas of the Worlds Languages in Danger BIBREF3, as an example wherever possible. Spoken by nearly 3 million people BIBREF4 in the Indian states of Chhattisgarh, Andhra, Odisha, Maharashtra and Karnataka, it is heavily influenced by the dominant state language. However, it is also one of the least resourced languages in India, with very little available data and technology.",
"We believe that the components discussed in the sections below encapsulate the spectrum of issues surrounding this field and that all future discussions in this area will also fall under the umbrella of these categories. We believe that by focusing on Gondi, we will not only empower the Gondi community but more importantly, understand and create a pipeline or framework which can serve as a clear guide for potential ventures which plan on introducing disruptive language technologies in under-served communities."
],
[
"The primary element in communication is information exchange. People living in less connected areas are often unable to get the kind of information they need, due to various socio-economical and technological barriers. As a result, they miss out on crucial knowledge required to improve their well-being. There are three co-dependent aspects woven into the fabric of information exchange - access of information, quality and coverage of the information and methods to create and digitize available knowledge (generation)."
],
[
"This section refers to past work and current ventures of making digital resources adequately available and accessible to people."
],
[
"Less-connected and technologically underdeveloped areas often suffer from the limited accessibility of up-to-date information. Providing more individuals access to the online repositories of information can often help them improve their well-being.",
"There are some situations particularly during natural calamities where the absence of notifications about potentially disaster-prone areas can result in life and death situations of individuals. People in regions with sparse connectivity often fall victim to these incidents due to lack of timely updates. Using technical platforms to support the spread of information to these regions is an important goal to keep in mind. LORELEI BIBREF5 is a DARPA funded initiative with the goal of the building of technologies for dealing and responding to disasters in low resource language communities. Similar initiatives in India would be capable of saving lives.",
"The daily function and health of individuals in a community can be influenced positively by the dissemination of relevant information. For example, healthcare and agricultural knowledge can affect the prosperity of a rural household, making them aware of potential solutions and remedies which can be acquired. There has been a considerable body of work focused on technology for healthcare access, which includes telemedicine BIBREF6 and remote diagnosis BIBREF7. While the use of telecenters to spread information on agricultural practises has been employed, persuading users to regularly use the telecenters BIBREF8 is a challenge, which could be addressed by the use of language technologies to simplify access. VideoKheti BIBREF9 is an example of a voice-based application which provides educational videos to farmers about effective agricultural practices. Similar studies have been carried out to assess the effectiveness of voice-activated applications for farming BIBREF10. There are considerable challenges, however, to ensure that these solutions are inclusive and accessible to low-literate and less-connected users.",
"Similarly, there are situations where there are certain rights and duties which an individual as a citizen of India is entitled to. Some communities have long been exploited and ill-treated BIBREF11, and providing them information regarding their rights as well as accurate news could foster a sense of solidarity within the community and encourage them to make their voice heard. An extensive study on the impact of CGNet Swara BIBREF12 showed that this citizen journalism platform inspired people in rural communities, gave them a feeling of being heard, and provided a venue to voice their grievances. There are also other promising ventures such as Awaaz De BIBREF10 and Gram Vaani BIBREF13 which aim to boost social activism in a similar manner."
],
[
"The process of enabling more low-resource language communities with tools to access online information alone is not sufficient. There need to be steps taken to make more of the content which exists online interpretable to people in these communities. For example, The Indian Constitution and other similar official communications from the government are written in 22 scheduled languages of India. Lack of access to other related documents deprives them of basic information. This is where building robust machine translation tools for low resource languages can help. Cross-language information retrieval makes extensive use of these translation mechanisms BIBREF14 where information is retrieved in a language different from the language of the user's query. BIBREF15 describes a system making use of minimal resources to perform the same.",
"There is huge potential for language technologies to be involved in content creation and information access. Further, more accurate retrieval methods can help the user get relevant information specific to their needs and context in their own language."
],
[
"Often, many state-of-the-art tools cannot be applied to low-resource languages due to the lack of data. Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools. Machine Translation can potentially be used as a fix to bridge the gap. Translation engines can help in translating documents from minority languages to majority languages. This allows the pool of data to be used in a number of NLP tasks like sentiment analysis and summarization. Doing so allows us to leverage the existing body of work in NLP done on resource-rich languages and subsequently apply it to the resource-poor languages, thereby foregoing any attempt to reinvent the wheel for these languages. This ensures a quicker and wider impact.BIBREF16 performs sentiment analysis on Chinese customer reviews by translating them to English. They observe that the quality of machine translation systems are sufficient for sentiment analysis to be performed on the automatically translated texts without a substantial trade-off in accuracy."
],
[
"This section refers to the generation of digital content which enriches online repositories with more diverse sets of information."
],
[
"There is a need to generate digital information and content for low-resource languages. It not only benefits the community by creating digital content for their needs, but it also provides data which can be used to train data-driven language technologies, such as ASRs, translation systems, and optical character recognition systems. Efforts to digitize content in India have been conducted in the past few years. The Government of India launched the Digital India initiative in 2015, which aims to digitize government documents in one of India's 120+ local languages. Such initiatives have evidently been useful before. For instance, the IMPACT project by the European Union was a large scale digitization project which helped push a lot of innovative work towards OCR and language technology for historical text retrieval and processing. IMPRINT is a similar initiative created by the Ministry of Human Resource Development (MHRD) to drive further research towards addressing such challenges.",
"The recent advancements in OCR technologies can propel efforts to digitize more handwritten documents. Such initiatives are already being undertaken to digitize and revive historical languages in Japan BIBREF17. Digital India library is a project that aims towards digitizing books and making them available online. Apart from printed books, a lot of ancient literature is written on palm leaves. The Regional Mega Scan Centre (RMSC) at IIIT Hyderabad has digitized over 100,000 books, one-third of which are in Indian Languages and additionally, they have also digitized text from scans of palm leaves. More initiatives such as these will help preserve and revive a number of languages that are part of the Indian heritage."
],
[
"Data collection via crowdsourcing can be a challenge for low resource languages, primarily due to the expensive nature of the task coupled with the lack of commercial demand for such data. Thus, collecting this data at low cost becomes an important priority. Project Karya is a crowdsourcing platform which provides digital work to low-income workers. Although the data quality can be a concern, promising results have shown otherwise. BIBREF18 tested the quality of crowdsourced data in rural regions of India, tasking individuals with the digitization of Hindi/Marathi handwritten documents. A 96.7% accuracy of annotation was yielded, proving that there is potential in this area. Recently, collection of Marathi speech data is also being conducted. In a similar fashion, Navana Tech, a startup, has been collecting data in mid and low-resource languages of verbal banking queries so that they can be integrated into various banking application platforms for financial inclusion. Such crowdsourcing platforms not only act as a potential data for low-resource communities, they also benefit low-income workers by increasing their current daily wage. Such ventures would enhance the inclusion of such workers in the digitization process, something which aligns with the aims of the Digital India mission.",
"The collection of data in an extremely low-resource language like Gondi can be particularly tricky, additionally considering the fact that Gondi does not have an official script. Pratham Books is a non-profit organization which aims to democratize access to books for children. They recently hosted a workshop where they trained members of the local community to translate books on StoryWeaver, their open-source publication platform. At the end of this workshop, approximately 200 books were translated from Hindi to Gondi (Devanagiri script). This was the first time children's books were made available in Gondi, and it also sparked the creation of parallel data for Hindi-Gondi translation systems."
],
[
"The design of a user-friendly interface plays a very crucial role in ensuring that the deployed technology encompasses all strata of society. It is often seen that a majority of target users have not had the privilege of education, and show varying levels of literacy, both foundational and digital. In such scenarios, text-based modalities pose several limitations from both the user and designer perspectives, and graphical user interfaces have been the preferred choice in these applications. BIBREF19 reports that text-based interfaces were completely redundant for illiterate users and severely error-prone for literate but novice users. Further, several languages do not have unique keyboard standards or fonts, and some do not have a script at all BIBREF20.",
"To overcome these issues with text, speech as a modality has also been deployed with varying success. `CGNet Swara', a citizen-run journalism portal, uses a phone-based IVR system to educate illiterate users BIBREF21. ’Avaaj Utalo’ allows users to make simple phone calls to ask questions or browse questions and answers asked on agricultural topics BIBREF10. ’Spoken Web’ is another application wherein users can create ’voice sites’ analogous to ’websites’ which can then be easily accessed through voice interaction on mobile phones BIBREF22. These serve to provide farmers with relevant crop and market information. An attempt to leverage the complementarity of voice and graphic-based inputs was made by VideoKheti, a mobile system with a multi-modal interface for low-literate farmers providing agricultural extension videos on command in their own language or dialect BIBREF9. They report that people in these communities find it difficult to use softkey type keyboards that are extremely common on modern smartphones. Instead, they proposed a system comprising of large buttons, graphics and some voice input. Such a system for delivering information to farmers was made and they showed that the farmers were very comfortable using it. Their results also show that a speech interface alone was not enough for that scenario, except in cases where the search list was long and the results were dependent on keywords or short phrases. Similarly, the Adivasi Radio App , based on text-to-speech (TTS) technology, is developed to read out written reports in Gondi, one of the main tribal languages in Chhattisgarh. Bolo is another mobile application which uses a very simple interface to improve children's literacy in India. Project Karya also proposes to divide massive digital tasks into ”microwork” and crowdsource this work to millions of people in rural India via phones .",
"While voice might solve the foundational literacy problems, the lack of digital literacy is often more challenging to overcome. BIBREF23 demonstrate the use of an app to teach the Mundari language to children. The app comprised of a series of games designed with the help of the community. The content was delivered in the Bangla script, which was what the children were taught in school. Their study noted that children from such communities found the usage of a smartphone to be difficult.",
"Relying on voice-based systems also poses a few challenges. It is not easy to build robust ASR systems for these languages due to severe lack of data, dialect variations and several such constraints. An attempt to resolve this was made with the development of the SALAAM ASR BIBREF24 which uses the acoustic model of an existing ASR and performs a cross-lingual phoneme mapping between the source and target language. This, however, is limited to recognition of a very small set of vocabulary, but finds use due to its' cost-effective and low resource setting."
],
[
"After developing technologies to provide information, and ensuring that the applications are designed in such a way that they are accessible to the population, the technology must be effectively deployed. Specialized applications are useless if they are not deployed properly in a way that accesses their end-users. When deploying a specially developed technology, the application must be deployed with consideration of the existing community dynamics. For any deployment to be successful, it usually must be able to be purposefully integrated into the lifestyle of community members - or have strong utilization incentives if it is a transformative technology. In this section, we will review examples of technology dissemination to low-resource/rural communities, and the impacts of effective deployment. While some technologies that we examine are not deployed utilizing low-resource languages specifically, the types of rural communities and villages in which they are deployed are analogous to the contexts in which low-resource languages exist, and clear parallels can be drawn.",
"Integrating the usage of a language technology intervention into a community in a low-resource context requires much more simply introducing the technology. Unlike hardware interventions and innovations like solar panels or new agricultural tools, language technologies often rely on the delivery, exchange, and utilization of information, which is much less tangible than physical solutions. This is especially for people with limited previous exposure to digital technology. Upon observing a selection of language-based interventions that were deployed in low-resource contexts, we observed that the most successful deployments of technologies tended to have three components of success. They: 1) Initially launched by seeding with target communities, 2) Worked closely to engage the community itself with the technology and information, and 3) Provided a strong incentive structure to adapt the technology - this incentive could be as simple as payments or as complex as communicated benefits from the technology."
],
[
"In this section, we will be reviewing and comparing three separate technological systems, Learn2Earn BIBREF25, Mobile Vaani BIBREF13, and the Climate and Agriculture Information Service (CAIS)BIBREF26, and see how they utilized the rules of successful deployment outlined above. Learn2Earn, developed by Microsoft Research, is a simple IVR based mobile language technology app which uses quizzes to educate people and spread public awareness campaigns, launched initially in rural central India. Mobile Vaani is a large-scale and broad community-based IVR media exchange platform, developed by the NGO Gram Vaani. It currently has over 100,000 unique monthly active users, and processes 10,000 calls per day across the three Indian states of Bihar, Jharkhand, and Madhya Pradesh. Finally, the CAIS system is an SMS-based information delivery system designed for farmers who live in a rural, low-resource and no connectivity agricultural village on the Char Islands in the Bangladeshi Chalan Beel Wetland. This application provides weather data and agricultural advice to farmers on a periodic basis and was developed by a collaboration between mPower and two local NGOs. After designing the platform in accessible ways, each deployment process began with the seeding within a target community within itself. All examples that we studied became successful only after a small scale launch of their product. These launches occurred in different ways but were all based on targeting a starting group of users and incentivizing them to utilize and share the product. The initial users were people who were somewhat fluent in the technology (either through training or existing knowledge), and who knew of or had specialized needs that the technology could address."
],
[
"Learn2Earn was built as a tech-enabled information dissemination system; its original information awareness campaign centred on informing farmers about their rights as guaranteed in India’s Forest Rights act. Because of the nature of their message content and delivery, the researchers decided to seed the platform with a single advertisement on an existing IVR channel already utilized by farmers. This advertisement reached 150 people, and provided them with distinct financial incentives to both call the platform, and invite friends to the platform. While only 17 of the original listeners of the advertisement went on to call the number, those respondents were members of the relevant community (farmers who were familiar with IVR technology) and were networked through family and friendships to additional ideal users of the platform. Within 7 weeks, the incentive structure allowed the platform to spread from the original 17 users to over 17,000, with little influence from the platform respondents. BIBREF25"
],
[
"Mobile Vaani initially tried to launch in 2011 by encouraging employees from their partner NGOs to distribute their platform. The platform was initially imagined as a voice-based, inclusive medium for communities to express their grievances and communicate with each other digitally. The initial employees who recruited for the platform were not from the community but did work closely with them regularly. While there was some initial success in the launch, the mobile Vaani Team were unable to grow at a significant pace because they informed the end-users about their intended design and usage of the technology, which “set unrealistic expectations of the platform in the minds of the participating users” after the technology could not be used in the exact way that it was encouraged. A few months later, the platform decided to re-launch and expand by recruiting a series of trained and compensated volunteers from a variety of communities that they hoped to engage. During the second “launch,” the community members were able to learn about the platform, and adapt it to their specific use cases. The platform began to gain popularity during a teacher’s strike in the state of Jharkhand – where a specific use case for expressing grievances powered by the community arose."
],
[
"The CAIS platform launched in direct collaboration with the NGO partner for the village – every available farmer registered their name, number, and crop type with the NGO partner – and consequently the target population was integrated from the start. As the programs grew, each engaged with the community on a high level. In the case of all three platforms, a specific population, and a very specific understanding of that population’s needs had to be identified before the platform could be relatively effective. Even after the deployment of the platform, care and close integration with community systems had to be done. The village in which CAIS worked had a system of self-empowerment groups, that had been organized by the NGO. Each group had a leader; and while not every villager in each group had a basic phone, every group leader did. Consequently, the researchers behind CAIS worked to ensure that every group leader was engaged in with CAIS and that they would relay their CAIS informational updates to the villagers that they lead. Similarly, the CAIS researchers worked closely with village leaders to determine who was able to access the information in SMS form and deployed informational physical posters as a substitute to those portions of the village population who could not. This intensive work led to the successful adaptation of technology to the benefit of the farmer’s yields. The researchers behind the Vaani system also continued to expand the system through a local network of volunteers. BIBREF13"
],
[
"CGNet Swara, which we introduced earlier in this paper, also increased their initial participation by engaging with the wider community by holding in-person training and awareness sessions. They have conducted over 50 workshops, and have trained more than 2,000 members of various communities BIBREF12. Outreach activities such as these also allowed an increased spread of awareness via word-of-mouth. From these examples, it is clear to see that community engagement is the absolute key to spreading technology. Incentives –both monetary and situational – are a huge way that these platforms were able to engage their initial users. Incentives served to empower individuals to become champions of the platform and increased the enabled them to use their knowledge of the community and existing peer networks to deliver the technology where it was needed. All platforms used incentives of some sort; Learn2Earn used a direct payment for recruitment + participation, and also delivered relevant topics to the users. Mobile Vaani provided financial incentives to the volunteers who mobilized to evangelize the product. CAIS did not provide monetary incentives but instead brought technology that had an actionable and tangential impact on the daily lives of farmers. With the deployment of these technologies, direct needs of the population were solved."
],
[
"The boost in recent advancements in NLP research has started breaking down communication and information barriers. This, coupled with in-depth studies on the socio-economic benefits of enabling less-connected communities with technology, provides a strong argument for increasing investment in this area. It is promising to observe increased innovation and steady progress in the empowerment of rural communities using language tools. Increased exposure to the challenges and works in this area can catalyse developments in improving inclusion and information dissemination. We hope that this paper will provide pointers in the right direction for potential ventures that plan on introducing disruptive language technologies to marginalized communities."
]
],
"section_name": [
"Introduction",
"Information Exchange",
"Information Exchange ::: Access",
"Information Exchange ::: Access ::: Making information accessible to people",
"Information Exchange ::: Access ::: Making more digital content available",
"Information Exchange ::: Access ::: Making NLP models more accessible to low resource languages",
"Information Exchange ::: Generation",
"Information Exchange ::: Generation ::: Digitization of Documents",
"Information Exchange ::: Generation ::: Crowdsourcing",
"Interface",
"Deployment and Impact",
"Deployment and Impact ::: Case Studies",
"Deployment and Impact ::: Case Studies ::: Learn2Earn",
"Deployment and Impact ::: Case Studies ::: Mobile Vaani",
"Deployment and Impact ::: Case Studies ::: CAIS",
"Deployment and Impact ::: Case Studies ::: CGNET Swara",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"5b98808d0b03d51a2887b93f2a89cb413dafbe82"
],
"answer": [
{
"evidence": [
"Often, many state-of-the-art tools cannot be applied to low-resource languages due to the lack of data. Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools. Machine Translation can potentially be used as a fix to bridge the gap. Translation engines can help in translating documents from minority languages to majority languages. This allows the pool of data to be used in a number of NLP tasks like sentiment analysis and summarization. Doing so allows us to leverage the existing body of work in NLP done on resource-rich languages and subsequently apply it to the resource-poor languages, thereby foregoing any attempt to reinvent the wheel for these languages. This ensures a quicker and wider impact.BIBREF16 performs sentiment analysis on Chinese customer reviews by translating them to English. They observe that the quality of machine translation systems are sufficient for sentiment analysis to be performed on the automatically translated texts without a substantial trade-off in accuracy.",
"FLOAT SELECTED: Table 1: Enabling language technologies, their availability and quality ( ? ? ? - excellent quality technology, ?? - moderately good but usable, ? - rudimentary and not practically useful) for differently resourced languages, and their data/knowledge requirements (? ? ? - very high data/expertise, ?? - moderate, ? - nominal and easily procurable). This information is based on authors’ analysis and personal experience."
],
"extractive_spans": [],
"free_form_answer": "- Font & Keyboard\n- Speech-to-Text\n- Text-to-Speech\n- Text Prediction\n- Spell Checker\n- Grammar Checker\n- Text Search\n- Machine Translation\n- Voice to Text Search\n- Voice to Speech Search",
"highlighted_evidence": [
"Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools.",
"FLOAT SELECTED: Table 1: Enabling language technologies, their availability and quality ( ? ? ? - excellent quality technology, ?? - moderately good but usable, ? - rudimentary and not practically useful) for differently resourced languages, and their data/knowledge requirements (? ? ? - very high data/expertise, ?? - moderate, ? - nominal and easily procurable). This information is based on authors’ analysis and personal experience."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
""
],
"paper_read": [
"no"
],
"question": [
"What language technologies have been introduced in the past?"
],
"question_id": [
"50716cc7f589b9b9f3aca806214228b063e9695b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: The points represent the disparity between number of wikipedia articles in comparison with the number of native speakers for a particular language.",
"Table 1: Enabling language technologies, their availability and quality ( ? ? ? - excellent quality technology, ?? - moderately good but usable, ? - rudimentary and not practically useful) for differently resourced languages, and their data/knowledge requirements (? ? ? - very high data/expertise, ?? - moderate, ? - nominal and easily procurable). This information is based on authors’ analysis and personal experience."
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What language technologies have been introduced in the past?"
] | [
[
"1912.03457-Information Exchange ::: Access ::: Making NLP models more accessible to low resource languages-0",
"1912.03457-4-Table1-1.png"
]
] | [
"- Font & Keyboard\n- Speech-to-Text\n- Text-to-Speech\n- Text Prediction\n- Spell Checker\n- Grammar Checker\n- Text Search\n- Machine Translation\n- Voice to Text Search\n- Voice to Speech Search"
] | 606 |
1910.11491 | Attention Optimization for Abstractive Document Summarization | Attention plays a key role in the improvement of sequence-to-sequence-based document summarization models. To obtain a powerful attention helping with reproducing the most salient information and avoiding repetitions, we augment the vanilla attention model from both local and global aspects. We propose an attention refinement unit paired with local variance loss to impose supervision on the attention model at each decoding step, and a global variance loss to optimize the attention distributions of all decoding steps from the global perspective. The performances on the CNN/Daily Mail dataset verify the effectiveness of our methods. | {
"paragraphs": [
[
"Abstractive document summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 attempts to produce a condensed representation of the most salient information of the document, aspects of which may not appear as parts of the original input text. One popular framework used in abstractive summarization is the sequence-to-sequence model introduced by BIBREF5. The attention mechanism BIBREF6 is proposed to enhance the sequence-to-sequence model by allowing salient features to dynamically come to the forefront as needed to make up for the incapability of memorizing the long input source.",
"However, when it comes to longer documents, basic attention mechanism may lead to distraction and fail to attend to the relatively salient parts. Therefore, some works focus on designing various attentions to tackle this issue BIBREF2, BIBREF7. We follow this line of research and propose an effective attention refinement unit (ARU). Consider the following case. Even with a preliminary idea of which parts of source document should be focused on (attention), sometimes people may still have trouble in deciding which exact part should be emphasized for the next word (the output of the decoder). To make a more correct decision on what to write next, people always adjust the concentrated content by reconsidering the current state of what has been summarized already. Thus, ARU is designed as an update unit based on current decoding state, aiming to retain the attention on salient parts but weaken the attention on irrelevant parts of input.",
"The de facto standard attention mechanism is a soft attention that assigns attention weights to all input encoder states, while according to previous work BIBREF8, BIBREF9, a well-trained hard attention on exact one input state is conducive to more accurate results compared to the soft attention. To maintain good performance of hard attention as well as the advantage of end-to-end trainability of soft attention, we introduce a local variance loss to encourage the model to put most of the attention on just a few parts of input states at each decoding step. Additionally, we propose a global variance loss to directly optimize the attention from the global perspective by preventing assigning high weights to the same locations multiple times. The global variance loss is somewhat similar with the coverage mechanism BIBREF10, BIBREF11, which is also designed for solving the repetition problem. The coverage mechanism introduces a coverage vector to keep track of previous decisions at each decoding step and adds it into the attention calculation. However, when the high attention on certain position is wrongly assigned during previous timesteps, the coverage mechanism hinders the correct assignment of attention in later steps.",
"We conduct our experiments on the CNN/Daily Mail dataset and achieve comparable results on ROUGE BIBREF12 and METEOR BIBREF13 with the state-of-the-art models. Our model surpasses the strong pointer-generator baseline (w/o coverage) BIBREF11 on all ROUGE metrics by a large margin. As far as we know, we are the first to introduce explicit loss functions to optimize the attention. More importantly, the idea behind our model is simple but effective. Our proposal could be applied to improve other attention-based models, which we leave these explorations for the future work."
],
[
"We adopt the Pointer-Generator Network (PGN) BIBREF11 as our baseline model, which augments the standard attention-based seq2seq model with a hybrid pointer network BIBREF14. An input document is firstly fed into a Bi-LSTM encoder, then an uni-directional LSTM is used as the decoder to generate the summary word by word. At each decoding step, the attention distribution $a_t$ and the context vector $c_t$ are calculated as follows:",
"where $h_i$ and $s_t$ are the hidden states of the encoder and decoder, respectively. Then, the token-generation softmax layer reads the context vector $c_t$ and current hidden state $s_t$ as inputs to compute the vocabulary distribution. To handle OOVs, we inherit the pointer mechanism to copy rare or unseen words from the input document (refer to BIBREF11 for more details).",
"To augment the vanilla attention model, we propose the Attention Refinement Unit (ARU) module to retain the attention on the salient parts while weakening the attention on the irrelevant parts of input. As illustrated in Figure FIGREF5, the attention weight distribution $a_t$ at timestep $t$ (the first red histogram) is fed through the ARU module. In the ARU module, current decoding state $s_t$ and attention distribution $a_t$ are combined to calculate a refinement gate $r_t$:",
"where $\\sigma $ is the sigmoid activation function, $W_{s}^{r}$, $W_{a}^r$ and $b_r$ are learnable parameters. $r_t$ represents how much degree of the current attention should be updated. Small value of $r_{ti}$ indicates that the content of $i$-th position is not much relevant to current decoding state $s_t$, and the attention on $i$-th position should be weakened to avoid confusing the model. The attention distribution is updated as follows (the symbol $\\odot $ means element-wise product):"
],
[
"As discussed in section SECREF1, the attention model putting most of attention weight on just a few parts of the input tends to achieve good performance. Mathematically, when only a small number of values are large, the shape of the distribution is sharp and the variance of the attention distribution is large. Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:",
"where $\\hat{\\cdot }$ is a median operator and $\\epsilon $ is utilized to avoid zero in the denominator."
],
[
"To avoid the model attending to the same parts of the input states repeatedly, we propose another variance loss to adjust the attention distribution globally. Ideally, the same locations should be assigned a relatively high attention weight once at most. Different from the coverage mechanism BIBREF11, BIBREF10 tracking attention distributions of previous timesteps, we maintain the sum of attention distributions over all decoder timesteps, denoted as $A$. The $i$-th value of $A$ represents the accumulated attention that the input state at $i$-th position has received throughout the whole decoding process. Without repeated high attention being paid to the same location, the difference between the sum of attention weight and maximum attention weight of $i$-th input state among all timesteps should be small. Moreover, the whole distribution of the difference over all input positions should have a flat shape. Similar to the definition of local variance loss, the global variance loss is formulated as:",
"where $g_i$ represents the difference between the accumulated attention weight and maximum attention weight at $i$-th position."
],
[
"The model is firstly pre-trained to minimize the maximum-likelihood loss, which is widely used in sequence generation tasks. We define $y^* = \\lbrace y^*_1, \\cdots , y_T^*\\rbrace $ as the ground-truth output sequence for a given input sequence $x$, then the loss function is formulated as:",
"After converging, the model is further optimized with local variance loss and global variance loss. The mix of loss functions is:",
"where $\\lambda _1$ and $\\lambda _2$ are hyper-parameters.",
"-0.13cm"
],
[
"We conduct our model on the large-scale dataset CNN/Daily Mail BIBREF19, BIBREF1, which is widely used in the task of abstractive document summarization with multi-sentences summaries. We use the scripts provided by BIBREF11 to obtain the non-anonymized version of the dataset without preprocessing to replace named entities. The dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs in total. We use the full-length ROUGE F1 and METEOR as our main evaluation metrics."
],
[
"The data preprocessing is the same as PGN BIBREF11, and we randomly initialize the word embeddings. The hidden states of the encoder and the decoder are both 256-dimensional and the embedding size is also 256. Adagrad with learning rate 0.15 and an accumulator with initial value 0.1 are used to train the model. We conduct experiments on a single Tesla P100 GPU with a batch size of 64 and it takes about 50000 iterations for pre-training and 10000 iterations for fine-tuning. Beam search size is set to 4 and trigram avoidance BIBREF17 is used to avoid trigram-level repetition. Tuned on validation set, $\\lambda _1$ and $\\lambda _2$ in the loss function (Equation. DISPLAY_FORM12) is set to 0.3 and 0.1, respectively."
],
[
"As shown in Table TABREF13 (the performance of other models is collected from their papers), our model exceeds the PGN baseline by 3.85, 2.1 and 3.37 in terms of R-1, R-2 and R-L respectively and receives over 3.23 point boost on METEOR. FastAbs BIBREF3 regards ROUGE scores as reward signals with reinforcement learning, which brings a great performance gain. DCA BIBREF4 proposes deep communicating agents with reinforcement setting and achieves the best results on CNN/Daily Mail. Although our experimental results have not outperformed the state-of-the-art models, our model has a much simpler structure with fewer parameters. Besides, these simple methods do yield a boost in performance compared with PGN baseline and may be applied on other models with attention mechanism.",
"We further evaluate how these optimization approaches work. The results at the bottom of Table TABREF13 verify the effectiveness of our proposed methods. The ARU module has achieved a gain of 0.97 ROUGE-1, 0.35 ROUGE-2, and 0.64 ROUGE-L points; the local variance loss boosts the model by 3.01 ROUGE-1, 1.6 ROUGE-2, and 2.58 ROUGE-L. As shown in Figure FIGREF22, the global variance loss helps with eliminating n-gram repetitions, which verifies its effectiveness."
],
[
"We also conduct human evaluation on the generated summaries. Similar to the previous work BIBREF3, BIBREF20, we randomly select 100 samples from the test set of CNN/Daily Mail dataset and ask 3 human testers to measure relevance and readability of each summary. Relevance is based on how much salient information does the summary contain, and readability is based on how fluent and grammatical the summary is. Given an article, different people may have different understandings of the main content of the article, the ideal situation is that more than one reference is paired with the articles. However, most of summarization datasets contain the pairs of article with a single reference summary due to the cost of annotating multi-references. Since we use the reference summaries as target sequences to train the model and assume that they are the gold standard, we give both articles and reference summaries to the annotator to score the generated summaries. In other words, we compare the generated summaries against the reference ones and the original article to obtain the (relative) scores in Table 3. Each perspective is assessed with a score from 1 (worst) to 5 (best). The result in Table TABREF21 demonstrate that our model performs better under both criteria w.r.t. BIBREF11. Additionally, we show the example of summaries generated by our model and baseline model in Table TABREF23. As can be seen from the table, PGN suffers from repetition and fails to obtain the salient information. Though with coverage mechanism solving saliency and repetition problem, it generates many trivial facts. With ARU, the model successfully concentrates on the salient information, however, it also suffers from serious repetition problem. Further optimized by the variance loss, our model can avoid repetition and generate summary with salient information. Besides, our generated summary contains fewer trivial facts compared to the PGN+Coverage model."
]
],
"section_name": [
"Introduction",
"Proposed model ::: Model Architecture",
"Proposed model ::: Local Variance Loss",
"Proposed model ::: Global Variance Loss",
"Proposed model ::: Model Training",
"Experiments ::: Preliminaries ::: Dataset and Metrics.",
"Experiments ::: Preliminaries ::: Implementation Details.",
"Experiments ::: Automatic Evaluation Result",
"Experiments ::: Human Evaluation and Case Study"
]
} | {
"answers": [
{
"annotation_id": [
"ff98d4939cd68b712139fa8a48dc47d3c25467dc"
],
"answer": [
{
"evidence": [
"We conduct our model on the large-scale dataset CNN/Daily Mail BIBREF19, BIBREF1, which is widely used in the task of abstractive document summarization with multi-sentences summaries. We use the scripts provided by BIBREF11 to obtain the non-anonymized version of the dataset without preprocessing to replace named entities. The dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs in total. We use the full-length ROUGE F1 and METEOR as our main evaluation metrics."
],
"extractive_spans": [
"ROUGE F1",
"METEOR"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the full-length ROUGE F1 and METEOR as our main evaluation metrics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"5c5cc47913004866046ff226f5bc5c6653084f71"
],
"answer": [
{
"evidence": [
"As discussed in section SECREF1, the attention model putting most of attention weight on just a few parts of the input tends to achieve good performance. Mathematically, when only a small number of values are large, the shape of the distribution is sharp and the variance of the attention distribution is large. Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:",
"where $\\hat{\\cdot }$ is a median operator and $\\epsilon $ is utilized to avoid zero in the denominator."
],
"extractive_spans": [],
"free_form_answer": "The reciprocal of the variance of the attention distribution",
"highlighted_evidence": [
"Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:\n\nwhere $\\hat{\\cdot }$ is a median operator and $\\epsilon $ is utilized to avoid zero in the denominator."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What evaluation metrics do they use?",
"How do they define local variance?"
],
"question_id": [
"be7b375b22d95d1f6c68c48f57ea87bf82c72123",
"c4b5cc2988a2b91534394a3a0665b0c769b598bb"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"long document summarization",
"long document summarization"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The process of attention optimization (better view in color). The original attention distribution (red bar on the left) is updated by the refinement gate rt and attention on some irrelevant parts are lowered. Then the updated attention distribution (blue bar in the middle) is further supervised by a local variance loss and get a final distribution (green bar on the right).",
"Table 1: Performance on CNN/Daily Mail test dataset.",
"Table 2: Human Evaluation: pairwise comparison between our final model and PGN model.",
"Figure 2: With global variance loss, our model (green bar) can avoid repetitions and achieve comparable percentage of duplicates with reference summaries."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png"
]
} | [
"How do they define local variance?"
] | [
[
"1910.11491-Proposed model ::: Local Variance Loss-0",
"1910.11491-Proposed model ::: Local Variance Loss-1"
]
] | [
"The reciprocal of the variance of the attention distribution"
] | 609 |
1909.03023 | Annotating Student Talk in Text-based Classroom Discussions | Classroom discussions in English Language Arts have a positive effect on students' reading, writing and reasoning skills. Although prior work has largely focused on teacher talk and student-teacher interactions, we focus on three theoretically-motivated aspects of high-quality student talk: argumentation, specificity, and knowledge domain. We introduce an annotation scheme, then show that the scheme can be used to produce reliable annotations and that the annotations are predictive of discussion quality. We also highlight opportunities provided by our scheme for education and natural language processing research. | {
"paragraphs": [
[
"Current research, theory, and policy surrounding K-12 instruction in the United States highlight the role of student-centered disciplinary discussions (i.e. discussions related to a specific academic discipline or school subject such as physics or English Language Arts) in instructional quality and student learning opportunities BIBREF0 , BIBREF1 . Such student-centered discussions – often called “dialogic\" or “inquiry-based” – are widely viewed as the most effective instructional approach for disciplinary understanding, problem-solving, and literacy BIBREF2 , BIBREF3 , BIBREF4 . In English Language Arts (ELA) classrooms, student-centered discussions about literature have a positive impact on the development of students' reasoning, writing, and reading skills BIBREF5 , BIBREF6 . However, most studies have focused on the role of teachers and their talk BIBREF7 , BIBREF2 , BIBREF8 rather than on the aspects of student talk that contribute to discussion quality.",
"Additionally, studies of student-centered discussions rarely use the same coding schemes, making it difficult to generalize across studies BIBREF2 , BIBREF9 . This limitation is partly due to the time-intensive work required to analyze discourse data through qualitative methods such as ethnography and discourse analysis. Thus, qualitative case studies have generated compelling theories about the specific features of student talk that lead to high-quality discussions, but few findings can be generalized and leveraged to influence instructional improvements across ELA classrooms.",
"As a first step towards developing an automated system for detecting the features of student talk that lead to high quality discussions, we propose a new annotation scheme for student talk during ELA “text-based\" discussions - that is, discussions that center on a text or piece of literature (e.g., book, play, or speech). The annotation scheme was developed to capture three aspects of classroom talk that are theorized in the literature as important to discussion quality and learning opportunities: argumentation (the process of systematically reasoning in support of an idea), specificity (the quality of belonging or relating uniquely to a particular subject), and knowledge domain (area of expertise represented in the content of the talk). We demonstrate the reliability and validity of our scheme via an annotation study of five transcripts of classroom discussion."
],
[
"One discourse feature used to assess the quality of discussions is students' argument moves: their claims about the text, their sharing of textual evidence for claims, and their warranting or reasoning to support the claims BIBREF10 , BIBREF11 . Many researchers view student reasoning as of primary importance, particularly when the reasoning is elaborated and highly inferential BIBREF12 . In Natural Language Processing (NLP), most educationally-oriented argumentation research has focused on corpora of student persuasive essays BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . We instead focus on multi-party spoken discussion transcripts from classrooms. A second key difference consists in the inclusion of the warrant label in our scheme, as it is important to understand how students explicitly use reasoning to connect evidence to claims. Educational studies suggest that discussion quality is also influenced by the specificity of student talk BIBREF19 , BIBREF20 . Chisholm and Godley found that as specificity increased, the quality of students' claims and reasoning also increased. Previous NLP research has studied specificity in the context of professionally written newspaper articles BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . While the annotation instructions used in these studies work well for general purpose corpora, specificity in text-based discussions also needs to capture particular relations between discussions and texts. Furthermore, since the concept of a sentence is not clearly defined in speech, we annotate argumentative discourse units rather than sentences (see Section SECREF3 ).",
"The knowledge domain of student talk may also matter, that is, whether the talk focuses on disciplinary knowledge or lived experiences. Some research suggests that disciplinary learning opportunities are maximized when students draw on evidence and reasoning that are commonly accepted in the discipline BIBREF25 , although some studies suggest that evidence or reasoning from lived experiences increases discussion quality BIBREF26 . Previous related work in NLP analyzed evidence type for argumentative tweets BIBREF27 . Although the categories of evidence type are different, their definition of evidence type is in line with our definition of knowledge domain. However, our research is distinct from this research in its application domain (i.e. social media vs. education) and in analyzing knowledge domain for all argumentative components, not only those containing claims."
],
[
"Our annotation scheme uses argument moves as the unit of analysis. We define an argument move as an utterance, or part of an utterance, that contains an argumentative discourse unit (ADU) BIBREF28 . Like Peldszus and Stede Peldszus:15, in this paper we use transcripts already segmented into argument moves and focus on the steps following segmentation, i.e., labeling argumentation, specificity, and knowledge domain. Table TABREF2 shows a section of a transcribed classroom discussion along with labels assigned by a human annotator following segmentation."
],
[
"The argumentation scheme is based on BIBREF29 and consists of a simplified set of labels derived from Toulmin's Toulmin:58 model: INLINEFORM0 Claim: an arguable statement that presents a particular interpretation of a text or topic. INLINEFORM1 Evidence: facts, documentation, text reference, or testimony used to support or justify a claim. INLINEFORM2 Warrant: reasons explaining how a specific evidence instance supports a specific claim. Our scheme specifies that warrants must come after claim and evidence, since by definition warrants cannot exist without them.",
"The first three moves in Table TABREF2 show a natural expression of an argument: a student first claims that Willy's wife is only trying to protect him, then provides a reference as evidence by mentioning something she said to her kids at the end of the book, and finally explains how not caring about her kids ties the evidence to the initial claim. The second group shows the same argument progression, with evidence given as a direct quote."
],
[
"Specificity annotations are based on BIBREF19 and have the goal of capturing text-related characteristics expressed in student talk. Specificity labels are directly related to four distinct elements for an argument move: (1) it is specific to one (or a few) character or scene; (2) it makes significant qualifications or elaborations; (3) it uses content-specific vocabulary (e.g. quotes from the text); (4) it provides a chain of reasons. Our annotation scheme for specificity includes three labels along a linear scale: INLINEFORM0 Low: statement that does not contain any of these elements. INLINEFORM1 Medium: statement that accomplishes one of these elements. INLINEFORM2 High: statement that clearly accomplishes at least two specificity elements. Even though we do not explicitly use labels for the four specificity elements, we found that explicitly breaking down specificity into multiple components helped increase reliability when training annotators.",
"The first three argument moves in Table TABREF2 all contain the first element, as they refer to select characters in the book. However, no content-specific vocabulary, clear chain of reasoning, or significant qualifications are provided; therefore all three moves are labeled as medium specificity. The fourth move, however, accomplishes the first and fourth specificity elements, and is labeled as high specificity. The fifth move is also labeled high specificity since it is specific to one character/scene, and provides a direct quote from the text. The last move is labeled as low specificity as it reflects an overgeneralization about all humans."
],
[
"The possible labels for knowledge domain are: INLINEFORM0 Disciplinary: the statement is grounded in knowledge gathered from a text (either the one under discussion or others), such as a quote or a description of a character/event. INLINEFORM1 Experiential: the statement is drawn from human experience, such as what the speaker has experienced or thinks that other humans have experienced.",
"In Table TABREF2 the first six argument moves are labeled as disciplinary, since the moves reflect knowledge from the text currently being discussed. The last move, however, draws from a student's experience or perceived knowledge about the real world."
],
[
"We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. The annotators were trained by coding one transcript at a time and discussing disagreements. Five text-based discussions were used for testing reliability after training: pair P1 annotated discussions of The Bluest Eye, Death of a Salesman, and Macbeth, while pair P2 annotated two separate discussions of Ain't I a Woman. 250 argument moves (discussed by over 40 students and consisting of over 8200 words) were annotated. Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels.",
"Table TABREF6 shows that kappa for argumentation ranges from INLINEFORM0 , which generally indicates substantial agreement BIBREF30 . Kappa values for specificity and knowledge domain are in the INLINEFORM1 range which generally indicates almost perfect agreement BIBREF30 . These results show that our proposed annotation scheme can be used to produce reliable annotations of classroom discussion with respect to argumentation, specificity, and knowledge domain.",
"Table TABREF7 shows confusion matrices for annotator pair P1 (we observed similar trends for P2). The argumentation section of the table shows that the largest number of disagreements happens between the claim and warrant labels. One reason may be related to the constraint we impose on warrants - they require the existence of a claim and evidence. If a student tries to provide a warrant for a claim that happened much earlier in the discussion, the annotators might interpret the warrant as new claim. The specificity section shows relatively few low-high label disagreements as compared to low-med and med-high. This is also reflected in the quadratic-weighted kappa as low-high disagreements will carry a larger penalty (unweighted kappa is INLINEFORM0 ). The main reasons for disagreements over specificity labels come from two of the four specificity elements discussed in Section 3.2: whether an argument move is related to one character or scene, and whether it provides a chain of reasons. With respect to the first of these two elements we observed disagreements in argument moves containing pronouns with an ambiguous reference. Of particular note is the pronoun it. If we consider the argument move “I mean even if you know you have a hatred towards a standard or whatever, you still don't kill it\", the pronoun it clearly refers to something within the move (i.e. the standard) that the student themselves mentioned. In contrast, for argument moves such as “It did happen\" it might not be clear to what previous move the pronoun refers, therefore creating confusion on whether this specificity element is accomplished. Regarding specificity element (4) we found that it was easier to determine the presence of a chain of reasons when discourse connectives (e.g. because, therefore) were present in the argument move. The absence of explicit discourse connectives in an argument move might drive annotators to disagree on the presence/absence of a chain of reasons, which is likely to result in a different specificity label. Additionally, annotators found that shorter turns at talk proved harder to annotate for specificity. Finally, as we can see from the third section in the table, knowledge domain has the lowest disagreements with only one.",
"We also BIBREF32 explored the validity of our coding scheme by comparing our annotations of student talk to English Education experts' evaluations (quadratic-weighted kappa of 0.544) of the discussion's quality. Using stepwise regressions, we found that the best model of discussion quality (R-squared of INLINEFORM0 ) included all three of our coding dimensions: argumentation, specificity, and knowledge domain."
],
[
"Our annotation scheme introduces opportunities for the educational community to conduct further research on the relationship between features of student talk, student learning, and discussion quality. Although Chisholm and Godley Chisholm:11 and we found relations between our coding constructs and discussion quality, these were small-scale studies based on manual annotations. Once automated classifiers are developed, such relations between talk and learning can be examined at scale. Also, automatic labeling via a standard coding scheme can support the generalization of findings across studies, and potentially lead to automated tools for teachers and students.",
"The proposed annotation scheme also introduces NLP opportunities and challenges. Existing systems for classifying specificity and argumentation have largely been designed to analyze written text rather than spoken discussions. This is (at least in part) due to a lack of publicly available corpora and schemes for annotating argumentation and specificity in spoken discussions. The development of an annotation scheme explicitly designed for this problem is the first step towards collecting and annotating corpora that can be used by the NLP community to advance the field in this particular area. Furthermore, in text-based discussions, NLP methods need to tightly couple the discussion with contextual information (i.e., the text under discussion). For example, an argument move from one of the discussions mentioned in Section 4 stated “She's saying like free like, I don't have to be, I don't have to be this salesman's wife anymore, your know? I don't have to play this role anymore.\" The use of the term salesman shows the presence of specificity element (3) (see Section 3.2) because the text under discussion is indeed Death of a Salesman. If the students were discussing another text, the mention of the term salesman would not indicate one of the specificity elements, therefore lowering the specificity rating. Thus, using existing systems is unlikely to yield good performance. In fact, we previously BIBREF31 showed that while using an off-the-shelf system for predicting specificity in newspaper articles resulted in low performance when applied to classroom discussions, exploiting characteristics of our data could significantly improve performance. We have similarly evaluated the performance of two existing argument mining systems BIBREF18 , BIBREF33 on the transcripts described in Section SECREF4 . We noticed that since the two systems were trained to classify only claims and premises, they were never able to correctly predict warrants in our transcripts. Additionally, both systems classified the overwhelming majority of moves as premise, resulting in negative kappa in some cases. Using our scheme to create a corpus of classroom discussion data manually annotated for argumentation, specificity, and knowledge domain will support the development of more robust NLP prediction systems."
],
[
"In this work we proposed a new annotation scheme for three theoretically-motivated features of student talk in classroom discussion: argumentation, specificity, and knowledge domain. We demonstrated usage of the scheme by presenting an annotated excerpt of a classroom discussion. We demonstrated that the scheme can be annotated with high reliability and reported on scheme validity. Finally, we discussed some possible applications and challenges posed by the proposed annotation scheme for both the educational and NLP communities. We plan to extend our annotation scheme to label information about collaborative relations between different argument moves, and release a corpus annotated with the extended scheme."
],
[
"We want to thank Haoran Zhang, Tazin Afrin, and Annika Swallen for their contribution, and all the anonymous reviewers for their helpful suggestions.",
"This work was supported by the Learning Research and Development Center at the University of Pittsburgh."
]
],
"section_name": [
"Introduction",
"Related Work",
"Annotation Scheme",
"Argumentation",
"Specificity",
"Knowledge Domain",
"Reliability and Validity Analyses",
"Opportunities and Challenges",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"f9141dde220ce8e554abf2839de0997367b4fdab"
],
"answer": [
{
"evidence": [
"We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. The annotators were trained by coding one transcript at a time and discussing disagreements. Five text-based discussions were used for testing reliability after training: pair P1 annotated discussions of The Bluest Eye, Death of a Salesman, and Macbeth, while pair P2 annotated two separate discussions of Ain't I a Woman. 250 argument moves (discussed by over 40 students and consisting of over 8200 words) were annotated. Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels."
],
"extractive_spans": [
"a reliability study for the proposed scheme "
],
"free_form_answer": "",
"highlighted_evidence": [
"We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. ",
"Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5c7ffd3f58747a0246e25e813e82766725877ff0"
],
"answer": [
{
"evidence": [
"Our annotation scheme introduces opportunities for the educational community to conduct further research on the relationship between features of student talk, student learning, and discussion quality. Although Chisholm and Godley Chisholm:11 and we found relations between our coding constructs and discussion quality, these were small-scale studies based on manual annotations. Once automated classifiers are developed, such relations between talk and learning can be examined at scale. Also, automatic labeling via a standard coding scheme can support the generalization of findings across studies, and potentially lead to automated tools for teachers and students.",
"The proposed annotation scheme also introduces NLP opportunities and challenges. Existing systems for classifying specificity and argumentation have largely been designed to analyze written text rather than spoken discussions. This is (at least in part) due to a lack of publicly available corpora and schemes for annotating argumentation and specificity in spoken discussions. The development of an annotation scheme explicitly designed for this problem is the first step towards collecting and annotating corpora that can be used by the NLP community to advance the field in this particular area. Furthermore, in text-based discussions, NLP methods need to tightly couple the discussion with contextual information (i.e., the text under discussion). For example, an argument move from one of the discussions mentioned in Section 4 stated “She's saying like free like, I don't have to be, I don't have to be this salesman's wife anymore, your know? I don't have to play this role anymore.\" The use of the term salesman shows the presence of specificity element (3) (see Section 3.2) because the text under discussion is indeed Death of a Salesman. If the students were discussing another text, the mention of the term salesman would not indicate one of the specificity elements, therefore lowering the specificity rating. Thus, using existing systems is unlikely to yield good performance. In fact, we previously BIBREF31 showed that while using an off-the-shelf system for predicting specificity in newspaper articles resulted in low performance when applied to classroom discussions, exploiting characteristics of our data could significantly improve performance. We have similarly evaluated the performance of two existing argument mining systems BIBREF18 , BIBREF33 on the transcripts described in Section SECREF4 . We noticed that since the two systems were trained to classify only claims and premises, they were never able to correctly predict warrants in our transcripts. Additionally, both systems classified the overwhelming majority of moves as premise, resulting in negative kappa in some cases. Using our scheme to create a corpus of classroom discussion data manually annotated for argumentation, specificity, and knowledge domain will support the development of more robust NLP prediction systems."
],
"extractive_spans": [
"Our annotation scheme introduces opportunities for the educational community to conduct further research ",
"Once automated classifiers are developed, such relations between talk and learning can be examined at scale",
" automatic labeling via a standard coding scheme can support the generalization of findings across studies, and potentially lead to automated tools for teachers and students",
"collecting and annotating corpora that can be used by the NLP community to advance the field in this particular area"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our annotation scheme introduces opportunities for the educational community to conduct further research on the relationship between features of student talk, student learning, and discussion quality.",
"Once automated classifiers are developed, such relations between talk and learning can be examined at scale. Also, automatic labeling via a standard coding scheme can support the generalization of findings across studies, and potentially lead to automated tools for teachers and students.",
" The development of an annotation scheme explicitly designed for this problem is the first step towards collecting and annotating corpora that can be used by the NLP community to advance the field in this particular area."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b8b2b24120750b1a4fd6be2072d78608135c8ba5"
],
"answer": [
{
"evidence": [
"As a first step towards developing an automated system for detecting the features of student talk that lead to high quality discussions, we propose a new annotation scheme for student talk during ELA “text-based\" discussions - that is, discussions that center on a text or piece of literature (e.g., book, play, or speech). The annotation scheme was developed to capture three aspects of classroom talk that are theorized in the literature as important to discussion quality and learning opportunities: argumentation (the process of systematically reasoning in support of an idea), specificity (the quality of belonging or relating uniquely to a particular subject), and knowledge domain (area of expertise represented in the content of the talk). We demonstrate the reliability and validity of our scheme via an annotation study of five transcripts of classroom discussion."
],
"extractive_spans": [],
"free_form_answer": "Measuring three aspects: argumentation, specificity and knowledge domain.",
"highlighted_evidence": [
" The annotation scheme was developed to capture three aspects of classroom talk that are theorized in the literature as important to discussion quality and learning opportunities: argumentation (the process of systematically reasoning in support of an idea), specificity (the quality of belonging or relating uniquely to a particular subject), and knowledge domain (area of expertise represented in the content of the talk)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"f98ad16fc16c082664499e50012f96cfa7f53a85"
],
"answer": [
{
"evidence": [
"We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. The annotators were trained by coding one transcript at a time and discussing disagreements. Five text-based discussions were used for testing reliability after training: pair P1 annotated discussions of The Bluest Eye, Death of a Salesman, and Macbeth, while pair P2 annotated two separate discussions of Ain't I a Woman. 250 argument moves (discussed by over 40 students and consisting of over 8200 words) were annotated. Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what experiments are conducted?",
"what opportunities are highlighted?",
"how do they measure discussion quality?",
"do they use a crowdsourcing platform?"
],
"question_id": [
"65ebed1971dca992c3751ed985fbe294cbe140d7",
"b24b56ccc5d4b04fee85579b2dee77306ec829b2",
"3bfdbf2d4d68e01bef39dc3371960e25489e510e",
"9378b41f7e888e78d667e9763883dd64ddb48728"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Examples of argument moves and their respective annotations from a discussion of the book Death of a Salesman. As shown by the argument move numbers, boxes for students S1, S2, and S3 indicate separate, non contiguous excerpts of the discussion.",
"Table 2: Inter-rater reliability for pairs P1 and P2.",
"Table 3: Confusion matrices for argumentation, specificity, and knowledge domain, for annotator pair P1."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"how do they measure discussion quality?"
] | [
[
"1909.03023-Introduction-2"
]
] | [
"Measuring three aspects: argumentation, specificity and knowledge domain."
] | 611 |
1901.05280 | Dependency or Span, End-to-End Uniform Semantic Role Labeling | Semantic role labeling (SRL) aims to discover the predicateargument structure of a sentence. End-to-end SRL without syntactic input has received great attention. However, most of them focus on either span-based or dependency-based semantic representation form and only show specific model optimization respectively. Meanwhile, handling these two SRL tasks uniformly was less successful. This paper presents an end-to-end model for both dependency and span SRL with a unified argument representation to deal with two different types of argument annotations in a uniform fashion. Furthermore, we jointly predict all predicates and arguments, especially including long-term ignored predicate identification subtask. Our single model achieves new state-of-the-art results on both span (CoNLL 2005, 2012) and dependency (CoNLL 2008, 2009) SRL benchmarks. | {
"paragraphs": [
[
"The purpose of semantic role labeling (SRL) is to derive the meaning representation for a sentence, which is beneficial to a wide range of natural language processing (NLP) tasks BIBREF0 , BIBREF1 . SRL can be formed as four subtasks, including predicate detection, predicate disambiguation, argument identification and argument classification. For argument annotation, there are two formulizations. One is based on text spans, namely span-based SRL. The other is dependency-based SRL, which annotates the syntactic head of argument rather than entire argument span. Figure FIGREF1 shows example annotations.",
"Great progress has been made in syntactic parsing BIBREF2 , BIBREF3 , BIBREF4 . Most traditional SRL methods rely heavily on syntactic features. To alleviate the inconvenience, recent works BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 propose end-to-end models for SRL, putting syntax aside and still achieving favorable results. However, these systems focus on either span or dependency SRL, which motivates us to explore a uniform approach.",
"Both span and dependency are effective formal representations for semantics, though for a long time it has been kept unknown which form, span or dependency, would be better for the convenience and effectiveness of semantic machine learning and later applications. Furthermore, researchers are interested in two forms of SRL models that may benefit from each other rather than their separated development. This topic has been roughly discussed in BIBREF19 , who concluded that the (best) dependency SRL system at then clearly outperformed the span-based (best) system through gold syntactic structure transformation. However, BIBREF19 johansson2008EMNLP like all other traditional SRL models themselves had to adopt rich syntactic features, and their comparison was done between two systems in quite different building styles. Instead, this work will develop full syntax-agnostic SRL systems with the same fashion for both span and dependency representation, so that we can revisit this issue under a more solid empirical basis.",
"In addition, most efforts focus on argument identification and classification since span and dependency SRL corpora have already marked predicate positions. Although no predicate identification is needed, it is not available in many downstream applications. Therefore, predicate identification should be carefully handled in a complete practical SRL system. To address this problem, BIBREF9 he2018jointly proposed an end-to-end approach for jointly predicting predicates and arguments for span SRL. Likewise, BIBREF11 cai2018full introduced an end-to-end model to naturally cover all predicate/argument identification and classification subtasks for dependency SRL.",
"To jointly predict predicates and arguments, we present an end-to-end framework for both span and dependency SRL. Our model extends the span SRL model of BIBREF9 he2018jointly, directly regarding all words in a sentence as possible predicates, considering all spans or words as potential arguments and learning distributions over possible predicates. However, we differ by (1) introducing unified argument representation to handle two different types of SRL tasks, and (2) employing biaffine scorer to make decisions for predicate-argument relationship.",
"The proposed models are evaluated on span SRL datasets: CoNLL 2005 and 2012 data, as well as the dependency SRL dataset of CoNLL 2008 and 2009 shared tasks. For span SRL, our single model outperforms the previous best results by 0.3% and 0.5% F INLINEFORM0 -score on CoNLL 2005 and 2012 test sets respectively. For dependency SRL, we achieve new state-of-the-art of 85.3% F INLINEFORM1 and 90.4% F INLINEFORM2 on CoNLL 2008 and 2009 benchmarks respectively."
],
[
"SRL is pioneered by BIBREF20 gildea2002, which uses the PropBank conventions BIBREF21 . Conventionally, span SRL consists of two subtasks, argument identification and classification. The former identifies the arguments of a predicate, and the latter assigns them semantic role labels, namely, determining the relation between arguments and predicates. The PropBank defines a set of semantic roles to label arguments, falling into two categories: core and non-core roles. The core roles (A0-A5 and AA) indicate different semantics in predicate-argument structure, while the non-core roles are modifiers (AM-adj) where adj specifies the adjunct type, such as temporal (AM-TMP) and locative (AM-LOC) adjuncts. For example shown in Figure FIGREF1 , A0 is a proto-agent, representing the borrower.",
"Slightly different from span SRL in argument annotation, dependency SRL labels the syntactic heads of arguments rather than phrasal arguments, which was popularized by CoNLL-2008 and CoNLL-2009 shared tasks BIBREF22 , BIBREF23 . Furthermore, when no predicate is given, two other indispensable subtasks of dependency SRL are predicate identification and disambiguation. One is to identify all predicates in a sentence, and the other is to determine the senses of predicates. As the example shown in Figure FIGREF1 , 01 indicates the first sense from the PropBank sense repository for predicate borrowed in the sentence."
],
[
"The traditional approaches on SRL were mostly about designing hand-crafted feature templates and then employ linear classifiers such as BIBREF24 , BIBREF25 , BIBREF12 . Even though neural models were introduced, early work still paid more attention on syntactic features. For example, BIBREF14 Fitzgerald2015 integrated syntactic information into neural networks with embedded lexicalized features, while BIBREF15 roth2016 embedded syntactic dependency paths between predicates and arguments. Similarly, BIBREF16 marcheggianiEMNLP2017 leveraged the graph convolutional network to encode syntax for dependency SRL. Recently, BIBREF17 Strubell2018 presented a multi-task neural model to incorporate auxiliary syntactic information for SRL, BIBREF18 li2018unified adopted several kinds of syntactic encoder for syntax encoding while BIBREF10 he:2018Syntax used syntactic tree for argument pruning.",
"However, using syntax may be quite inconvenient sometimes, recent studies thus have attempted to build SRL systems without or with little syntactic guideline. BIBREF5 zhou-xu2015 proposed the first syntax-agnostic model for span SRL using LSTM sequence labeling, while BIBREF7 he-acl2017 further enhanced their model using highway bidirectional LSTMs with constrained decoding. Later, BIBREF8 selfatt2018 presented a deep attentional neural network for applying self-attention to span SRL task. Likewise for dependency SRL, BIBREF6 marcheggiani2017 proposed a syntax-agnostic model with effective word representation and obtained favorable results. BIBREF11 cai2018full built a full end-to-end model with biaffine attention and outperformed the previous state-of-the-art.",
"More recently, joint predicting both predicates and arguments has attracted extensive interest on account of the importance of predicate identification, including BIBREF7 , BIBREF17 , BIBREF9 , BIBREF11 and this work. In our preliminary experiments, we tried to integrate the self-attention into our model, but it does not provide any significant performance gain on span or dependency SRL, which is not consistent with the conclusion in BIBREF8 and lets us exclude it from this work.",
"Generally, the above work is summarized in Table TABREF2 . Considering motivation, our work is most closely related to the work of BIBREF14 Fitzgerald2015, which also tackles span and dependency SRL in a uniform fashion. The essential difference is that their model employs the syntactic features and takes pre-identified predicates as inputs, while our model puts syntax aside and jointly learns and predicts predicates and arguments."
],
[
"Given a sentence INLINEFORM0 , we attempt to predict a set of predicate-argument-relation tuples INLINEFORM1 , where INLINEFORM2 is the set of all possible predicate tokens, INLINEFORM3 includes all the candidate argument spans or dependencies, and INLINEFORM6 is the set of the semantic roles. To simplify the task, we introduce a null label INLINEFORM7 to indicate no relation between arbitrary predicate-argument pair following BIBREF9 he2018jointly. As shown in Figure FIGREF5 , our uniform SRL model includes four main modules:",
" INLINEFORM0 token representation component to build token representation INLINEFORM1 from word INLINEFORM2 ,",
" INLINEFORM0 a BiHLSTM encoder that directly takes sequential inputs,",
" INLINEFORM0 predicate and argument representation module to learn candidate representations,",
" INLINEFORM0 a biaffine scorer which takes the candidate representations as input and predicts semantic roles."
],
[
"We follow the bi-directional LSTM-CNN architecture BIBREF26 , where convolutional neural networks (CNNs) encode characters inside a word INLINEFORM0 into character-level representation INLINEFORM1 then concatenated with its word-level INLINEFORM2 into context-independent representation. To further enhance the word representation, we leverage an external representation INLINEFORM3 from pretrained ELMo (Embeddings from Language Models) layers according to BIBREF27 ELMo. Eventually, the resulting token representation is concatenated as DISPLAYFORM0 "
],
[
"The encoder in our model adopts the bidirectional LSTM with highway connections (BiHLSTM) to contextualize the representation into task-specific representation: INLINEFORM0 , where the gated highway connections is used to alleviate the vanishing gradient problem when training very deep BiLSTMs."
],
[
"We employ contextualized representations for all candidate arguments and predicates. As referred in BIBREF2 , applying a multi-layer perceptron (MLP) to the recurrent output states before the classifier has the advantage of stripping away irrelevant information for the current decision. Therefore, to distinguish the currently considered predicate from its candidate arguments in SRL context, we add an MLP layer to contextualized representations for argument INLINEFORM0 and predicate INLINEFORM1 candidates specific representations respectively with ReLU BIBREF28 as its activation function: INLINEFORM2 INLINEFORM3 ",
"To perform uniform SRL, we introduce unified argument representation. For dependency SRL, we assume single word argument span by limiting the length of candidate argument to be 1, so our model uses the INLINEFORM0 as the final argument representation INLINEFORM1 directly. While for span SRL, we utilize the approach of span representation from BIBREF29 lee2017end. Each candidate span representation INLINEFORM2 is built by DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are boundary representations, INLINEFORM2 indicates a span, INLINEFORM3 is a feature vector encoding the size of span, and INLINEFORM4 is the specific notion of headedness which is learned by attention mechanism BIBREF30 over words in each span (where INLINEFORM5 is the position inside span) as follows : INLINEFORM6 INLINEFORM7 "
],
[
"For predicate and arguments, we introduce two unary scores on their candidates: INLINEFORM0 INLINEFORM1 ",
"For semantic role, we adopt a relation scorer with biaffine attention BIBREF2 : DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 respectively denote the weight matrix of the bi-linear and the linear terms and INLINEFORM2 is the bias item.",
"The biaffine scorer differs from feed-forward networks scorer in bilinear transformation. Since SRL can be regarded as a classification task, the distribution of classes is uneven and the problem comes worse after the null labels are introduced. The output layer of the model normally includes a bias term designed to capture the prior probability of each class, with the rest of the model focusing on learning the likelihood of every classes occurring in data. The biaffine attention as Dozat and Manning (2017) in our model directly assigns a score for each specific semantic role and would be helpful for semantic role prediction. Actually, (He et al., 2018a) used a scorer as Equation (2), which is only a part of our scorer including both Equations ( EQREF14 ) and (). Therefore, our scorer would be more informative than previous models such as BIBREF9 ."
],
[
"The model is trained to optimize the probability INLINEFORM0 of the predicate-argument-relation tuples INLINEFORM1 given the sentence INLINEFORM2 , which can be factorized as: DISPLAYFORM0 ",
" where INLINEFORM0 represents the model parameters, and INLINEFORM1 , is the score for the predicate-argument-relation tuple, including predicate score INLINEFORM2 , argument score INLINEFORM3 and relation score INLINEFORM4 .",
"Our model adopts a biaffine scorer for semantic role label prediction, which is implemented as cross-entropy loss. Moreover, our model is trained to minimize the negative likehood of the golden structure INLINEFORM0 : INLINEFORM1 . The score of null labels are enforced into INLINEFORM2 . For predicates and arguments prediction, we train separated scorers ( INLINEFORM3 and INLINEFORM4 ) in parallel fed to the biaffine scorer for predicate and argument predication respectively, which helps to reduce the chance of error propagation."
],
[
"The number of candidate arguments for a sentence of length INLINEFORM0 is INLINEFORM1 for span SRL, and INLINEFORM2 for dependency. As the model deals with INLINEFORM3 possible predicates, the computational complexity is INLINEFORM4 for span, INLINEFORM5 for dependency, which is too computationally expensive. To address this issue, we attempt to prune candidates using two beams for storing the candidate arguments and predicates with size INLINEFORM6 and INLINEFORM7 inspired by BIBREF9 he2018jointly, where INLINEFORM8 and INLINEFORM9 are two manually setting thresholds. First, the predicate and argument candidates are ranked according to their predicted score ( INLINEFORM10 and INLINEFORM11 ) respectively, and then we reduce the predicate and argument candidates with defined beams. Finally, we take the candidates from the beams to participate the label prediction. Such pruning will reduce the overall number of candidate tuples to INLINEFORM12 for both types of tasks. Furthermore, for span SRL, we set the maximum length of candidate arguments to INLINEFORM13 , which may decrease the number of candidate arguments to INLINEFORM14 ."
],
[
"According to PropBank semantic convention, predicate-argument structure has to follow a few of global constraints BIBREF25 , BIBREF7 , we thus incorporate constraints on the output structure with a dynamic programing decoder during inference. These constraints are described as follows:",
" INLINEFORM0 Unique core roles (U): Each core role (A0-A5, AA) should appear at most once for each predicate.",
" INLINEFORM0 Continuation roles (C): A continuation role C-X can exist only when its base role X is realized before it.",
" INLINEFORM0 Reference roles (R): A reference role R-X can exist only when its base role X is realized (not necessarily before R-X).",
" INLINEFORM0 Non-overlapping (O): The semantic arguments for the same predicate do not overlap in span SRL.",
"As C and R constraints lead to worse performance in our models from our preliminary experiments, we only enforce U and O constraints on span SRL and U constraints on dependency SRL."
],
[
"Our models are evaluated on two PropBank-style SRL tasks: span and dependency. For span SRL, we test model on the common span SRL datasets from CoNLL-2005 BIBREF32 and CoNLL-2012 BIBREF31 shared tasks. For dependency SRL, we experiment on CoNLL 2008 BIBREF22 and 2009 BIBREF23 benchmarks. As for the predicate disambiguation in dependency SRL task, we follow the previous work BIBREF15 .",
"We consider two SRL setups: end-to-end and pre-identified predicates. For the former setup, our system jointly predicts all the predicates and their arguments in one shot, which turns into CoNLL-2008 setting for dependency SRL. In order to compare with previous models, we also report results with pre-identified predicates, where predicates have been beforehand identified in corpora. Therefore, the experimental results fall into two categories: end-to-end results and results with pre-identified predicates."
],
[
"CoNLL 2005 and 2012 The CoNLL-2005 shared task focused on verbal predicates only for English. The CoNLL-2005 dataset takes section 2-21 of Wall Street Journal (WSJ) data as training set, and section 24 as development set. The test set consists of section 23 of WSJ for in-domain evaluation together with 3 sections from Brown corpus for out-of-domain evaluation. The larger CoNLL-2012 dataset is extracted from OntoNotes v5.0 corpus, which contains both verbal and nominal predicates.",
"CoNLL 2008 and 2009 CoNLL-2008 and the English part of CoNLL-2009 shared tasks use the same English corpus, which merges two treebanks, PropBank and NomBank. NomBank is a complement to PropBank with similar semantic convention for nominal predicate-argument structure annotation. Besides, the training, development and test splits of English data are identical to that of CoNLL-2005."
],
[
"In our experiments, the word embeddings are 300-dimensional GloVe vectors BIBREF33 . The character representations with dimension 8 randomly initialized. In the character CNN, the convolutions have window sizes of 3, 4, and 5, each consisting of 50 filters. Moreover, we use 3 stacked bidirectional LSTMs with 200 dimensional hidden states. The outputs of BiLSTM employs two 300-dimensional MLP layers with the ReLU as activation function. Besides, we use two 150-dimensional hidden MLP layers with ReLU to score predicates and arguments respectively. For candidates pruning, we follow the settings of BIBREF9 he2018jointly, modeling spans up to length INLINEFORM0 for span SRL and INLINEFORM1 for dependency SRL, using INLINEFORM2 for pruning predicates and INLINEFORM3 for pruning arguments.",
"Training Details During training, we use the categorical cross-entropy as objective, with Adam optimizer BIBREF34 initial learning rate 0.001. We apply 0.5 dropout to the word embeddings and character CNN outputs and 0.2 dropout to all hidden layers and feature embeddings. In the LSTMs, we employ variational dropout masks that are shared across timesteps BIBREF35 , with 0.4 dropout rate. All models are trained for up to 600 epochs with batch size 40 on a single NVIDIA GeForce GTX 1080Ti GPU, which occupies 8 GB graphic memory and takes 12 to 36 hours."
],
[
"We present all results using the official evaluation script from the CoNLL-2005 and CoNLL-2009 shared tasks, and compare our model with previous state-of-the-art models.",
"Span SRL Table TABREF15 shows results on CoNLL-2005 in-domain (WSJ) and out-of-domain (Brown) test sets, as well as the CoNLL-2012 test set (OntoNotes). The upper part of table presents results from single models. Our model outperforms the previous models with absolute improvements in F INLINEFORM0 -score of 0.3% on CoNLL-2005 benchmark. Besides, our single model performs even much better than all previous ensemble systems.",
"Dependency SRL Table TABREF19 presents the results on CoNLL-2008. J & N (2008b) BIBREF36 was the highest ranked system in CoNLL-2008 shared task. We obtain comparable results with the recent state-of-the-art method BIBREF11 , and our model surpasses the model BIBREF10 by 2% in F INLINEFORM0 -score."
],
[
"To compare with to previous systems with pre-identified predicates, we report results from our models as well.",
"Span SRL Table TABREF22 shows that our model outperforms all published systems, even the ensemble model BIBREF8 , achieving the best results of 87.7%, 80.5% and 86.0% in F INLINEFORM0 -score respectively.",
"Dependency SRL Table TABREF29 compares the results of dependency SRL on CoNLL-2009 English data. Our single model gives a new state-of-the-art result of 90.4% F INLINEFORM0 on WSJ. For Brown data, the proposed syntax-agnostic model yields a performance gain of 1.7% F INLINEFORM1 over the syntax-aware model BIBREF18 ."
],
[
"To investigate the contributions of ELMo representations and biaffine scorer in our end-to-end model, we conduct a series of ablation studies on the CoNLL-2005 and CoNLL-2008 WSJ test sets, unless otherwise stated.",
"Table TABREF31 compares F INLINEFORM0 scores of BIBREF9 he2018jointly and our model without ELMo representations. We observe that effect of ELMo is somewhat surprising, where removal of the ELMo dramatically declines the performance by 3.3-3.5 F INLINEFORM1 on CoNLL-2005 WSJ. However, our model gives quite stable performance for dependency SRL regardless of whether ELMo is concatenated or not. The results indicate that ELMo is more beneficial to span SRL.",
"In order to better understand how the biaffine scorer influences our model performance, we train our model with different scoring functions. To ensure a fair comparison with the model BIBREF9 , we replace the biaffine scorer with their scoring functions implemented with feed-forward networks, and the results of removing biaffine scorer are also presented in Table TABREF31 . We can see 0.5% and 1.6% F INLINEFORM0 performance degradation on CoNLL 2005 and 2008 WSJ respectively. The comparison shows that the biaffine scorer is more effective for scoring the relations between predicates and arguments. Furthermore, these results show that biaffine attention mechanism is applicable to span SRL."
],
[
"It is very hard to say which style of semantic formal representation, dependency or span, would be more convenient for machine learning as they adopt incomparable evaluation metric. Recent researches BIBREF37 have proposed to learn semantic parsers from multiple datasets in Framenet style semantics, while our goal is to compare the quality of different models in the span and dependency SRL for Propbank style semantics. Following BIBREF19 johansson2008EMNLP, we choose to directly compare their performance in terms of dependency-style metric through a transformation way. Using the head-finding algorithm in BIBREF19 which used gold-standard syntax, we may determine a set of head nodes for each span. This process will output an upper bound performance measure about the span conversion due to the use of gold syntax.",
"We do not train new models for the conversion and the resulted comparison. Instead, we do the job on span-style CoNLL 2005 test set and dependency-style CoNLL 2009 test set (WSJ and Brown), considering these two test sets share the same text content. As the former only contains verbal predicate-argument structures, for the latter, we discard all nomial predicate-argument related results and predicate disambiguation results during performance statistics. Table TABREF33 shows the comparison.",
"On a more strict setting basis, the results from our same model for span and dependency SRL verify the same conclusion of BIBREF19 johansson2008EMNLP, namely, dependency form is in a favor of machine learning effectiveness for SRL even compared to the conversion upper bound of span form."
],
[
"This paper presents an end-to-end neural model for both span and dependency SRL, which may jointly learn and predict all predicates and arguments. We extend existing model and introduce unified argument representation with biaffine scorer to the uniform SRL for both span and dependency representation forms. Our model achieves new state-of-the-art results on the CoNLL 2005, 2012 and CoNLL 2008, 2009 benchmarks. Our results show that span and dependency SRL can be effectively handled in a uniform fashion, which for the first time enables us to conveniently explore the useful connection between two types of semantic representation forms."
]
],
"section_name": [
"Introduction",
"Background",
"Related Work",
"Overview",
"Token Representation",
"Deep Encoder",
"Predicate and Argument Representation",
"Scorers",
"Training Objective",
"Candidates Pruning",
"SRL Constraints",
"Experiments",
"Datasets",
"Setup",
"End-to-end Results",
"Results with Pre-identified Predicates",
"Ablation",
"Dependency or Span?",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"5c9801d2dbcb7c4a1b5bb88fbe952c02bb1fd9d6"
],
"answer": [
{
"evidence": [
"Generally, the above work is summarized in Table TABREF2 . Considering motivation, our work is most closely related to the work of BIBREF14 Fitzgerald2015, which also tackles span and dependency SRL in a uniform fashion. The essential difference is that their model employs the syntactic features and takes pre-identified predicates as inputs, while our model puts syntax aside and jointly learns and predicts predicates and arguments.",
"FLOAT SELECTED: Table 1: A chronicle of related work for span and dependency SRL. SA represents syntax-aware system (no + indicates syntaxagnostic system) and ST indicates sequence tagging model. F1 is the result of single model on official test set."
],
"extractive_spans": [],
"free_form_answer": "2008 Punyakanok et al. \n2009 Zhao et al. + ME \n2008 Toutanova et al. \n2010 Bjorkelund et al. \n2015 FitzGerald et al. \n2015 Zhou and Xu \n2016 Roth and Lapata \n2017 He et al. \n2017 Marcheggiani et al.\n2017 Marcheggiani and Titov \n2018 Tan et al. \n2018 He et al. \n2018 Strubell et al. \n2018 Cai et al. \n2018 He et al. \n2018 Li et al. \n",
"highlighted_evidence": [
"Generally, the above work is summarized in Table TABREF2 . Considering motivation, our work is most closely related to the work of BIBREF14 Fitzgerald2015, which also tackles span and dependency SRL in a uniform fashion. The essential difference is that their model employs the syntactic features and takes pre-identified predicates as inputs, while our model puts syntax aside and jointly learns and predicts predicates and arguments.",
"FLOAT SELECTED: Table 1: A chronicle of related work for span and dependency SRL. SA represents syntax-aware system (no + indicates syntaxagnostic system) and ST indicates sequence tagging model. F1 is the result of single model on official test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what were the baselines?"
],
"question_id": [
"73bbe0b6457423f08d9297a0951381098bd89a2b"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: Examples of annotations in span (above) and dependency (below) SRL.",
"Table 1: A chronicle of related work for span and dependency SRL. SA represents syntax-aware system (no + indicates syntaxagnostic system) and ST indicates sequence tagging model. F1 is the result of single model on official test set.",
"Figure 2: The framework of our end-to-end model for uniform SRL.",
"Table 2: End-to-end span SRL results on CoNLL-2005 and CoNLL-2012 data, compared with previous systems in terms of precision (P), recall (R), F1-score. The CoNLL-2005 contains two test sets: WSJ (in-domain) and Brown (out-of-domain).",
"Table 3: Dependency SRL results on CoNLL-2008 test sets.",
"Table 4: Span SRL results with pre-identified predicates on CoNLL-2005 and CoNLL-2012 test sets.",
"Table 5: Dependency SRL results with pre-identified predicates on CoNLL-2009 English benchmark.",
"Table 6: Effectiveness of ELMo representations and biaffine scorer on the CoNLL 2005 and 2008 WSJ sets.",
"Table 7: Dependency vs. Span-converted Dependency on CoNLL 2005, 2009 test sets with dependency evaluation."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
} | [
"what were the baselines?"
] | [
[
"1901.05280-2-Table1-1.png",
"1901.05280-Related Work-3"
]
] | [
"2008 Punyakanok et al. \n2009 Zhao et al. + ME \n2008 Toutanova et al. \n2010 Bjorkelund et al. \n2015 FitzGerald et al. \n2015 Zhou and Xu \n2016 Roth and Lapata \n2017 He et al. \n2017 Marcheggiani et al.\n2017 Marcheggiani and Titov \n2018 Tan et al. \n2018 He et al. \n2018 Strubell et al. \n2018 Cai et al. \n2018 He et al. \n2018 Li et al. \n"
] | 612 |
1909.11297 | Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis (ABSA) is to predict the sentiment polarity towards a particular aspect in a sentence. Recently, this task has been widely addressed by the neural attention mechanism, which computes attention weights to softly select words for generating aspect-specific sentence representations. The attention is expected to concentrate on opinion words for accurate sentiment prediction. However, attention is prone to be distracted by noisy or misleading words, or opinion words from other aspects. In this paper, we propose an alternative hard-selection approach, which determines the start and end positions of the opinion snippet, and selects the words between these two positions for sentiment prediction. Specifically, we learn deep associations between the sentence and aspect, and the long-term dependencies within the sentence by leveraging the pre-trained BERT model. We further detect the opinion snippet by self-critical reinforcement learning. Especially, experimental results demonstrate the effectiveness of our method and prove that our hard-selection approach outperforms soft-selection approaches when handling multi-aspect sentences. | {
"paragraphs": [
[
"Aspect-based sentiment analysis BIBREF0, BIBREF1 is a fine-grained sentiment analysis task which has gained much attention from research and industries. It aims at predicting the sentiment polarity of a particular aspect of the text. With the rapid development of deep learning, this task has been widely addressed by attention-based neural networks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. To name a few, wang2016attention learn to attend on different parts of the sentence given different aspects, then generates aspect-specific sentence representations for sentiment prediction. tay2018learning learn to attend on correct words based on associative relationships between sentence words and a given aspect. These attention-based methods have brought the ABSA task remarkable performance improvement.",
"Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. This usually results in attention distraction BIBREF7, i.e., attending on noisy or misleading words, or opinion words from other aspects. Take Figure FIGREF1 as an example, for the aspect place in the sentence “the food is usually good but it certainly is not a relaxing place to go”, we visualize the attention weights from the model ATAE-LSTM BIBREF2. As we can see, the words “good” and “but” are dominant in attention weights. However, “good” is used to describe the aspect food rather than place, “but” is not so related to place either. The true opinion snippet “certainly is not a relaxing place” receives low attention weights, leading to the wrong prediction towards the aspect place.",
"Therefore, we propose an alternative hard-selection approach by determining two positions in the sentence and selecting words between these two positions as the opinion expression of a given aspect. This is also based on the observation that opinion words of a given aspect are usually distributed consecutively as a snippet BIBREF8. As a consecutive whole, the opinion snippet may gain enough attention weights, avoid being distracted by other noisy or misleading words, or distant opinion words from other aspects. We then predict the sentiment polarity of the given aspect based on the average of the extracted opinion snippet. The explicit selection of the opinion snippet also brings us another advantage that it can serve as justifications of our sentiment predictions, making our model more interpretable.",
"To accurately determine the two positions of the opinion snippet of a particular aspect, we first model the deep associations between the sentence and aspect, and the long-term dependencies within the sentence by BERT BIBREF9, which is a pre-trained language model and achieves exciting results in many natural language tasks. Second, with the contextual representations from BERT, the two positions are sequentially determined by self-critical reinforcement learning. The reason for using reinforcement learning is that we do not have the ground-truth positions of the opinion snippet, but only the polarity of the corresponding aspect. Then the extracted opinion snippet is used for sentiment classification. The details are described in the model section.",
"The main contributions of our paper are as follows:",
"We propose a hard-selection approach to address the ABSA task. Specifically, our method determines two positions in the sentence to detect the opinion snippet towards a particular aspect, and then uses the framed content for sentiment classification. Our approach can alleviate the attention distraction problem in previous soft-selection approaches.",
"We model deep associations between the sentence and aspect, and the long-term dependencies within the sentence by BERT. We then learn to detect the opinion snippet by self-critical reinforcement learning.",
"The experimental results demonstrate the effectiveness of our method and also our approach significantly outperforms soft-selection approaches on handling multi-aspect sentences."
],
[
"Traditional machine learning methods for aspect-based sentiment analysis focus on extracting a set of features to train sentiment classifiers BIBREF10, BIBREF11, BIBREF12, which usually are labor intensive. With the development of deep learning technologies, neural attention mechanism BIBREF13 has been widely adopted to address this task BIBREF14, BIBREF2, BIBREF15, BIBREF3, BIBREF16, BIBREF4, BIBREF17, BIBREF6, BIBREF5, BIBREF18, BIBREF19, BIBREF20, BIBREF21. wang2016attention propose attention-based LSTM networks which attend on different parts of the sentence for different aspects. Ma2017Interactive utilize the interactive attention to capture the deep associations between the sentence and the aspect. Hierarchical models BIBREF4, BIBREF17, BIBREF6 are also employed to capture multiple levels of emotional expression for more accurate prediction, as the complexity of sentence structure and semantic diversity. tay2018learning learn to attend based on associative relationships between sentence words and aspect.",
"All these methods use normalized attention weights to softly select words for generating aspect-specific sentence representations, while the attention weights scatter across the whole sentence and can easily result in attention distraction. wang2018learning propose a hard-selection method to learn segmentation attention which can effectively capture the structural dependencies between the target and the sentiment expressions with a linear-chain conditional random field (CRF) layer. However, it can only address aspect-term level sentiment prediction which requires annotations for aspect terms. Compared with it, our method can handle both aspect-term level and aspect-category level sentiment prediction by detecting the opinion snippet."
],
[
"We first formulate the problem. Given a sentence $S=\\lbrace w_1,w_2,...,w_N\\rbrace $ and an aspect $A=\\lbrace a_1,a_2,...,a_M\\rbrace $, the ABSA task is to predict the sentiment of $A$. In our setting, the aspect can be either aspect terms or an aspect category. As aspect terms, $A$ is a snippet of words in $S$, i.e., a sub-sequence of the sentence, while as an aspect category, $A$ represents a semantic category with $M=1$, containing just an abstract token.",
"In this paper, we propose a hard-selection approach to solve the ABSA task. Specifically, we first learn to detect the corresponding opinion snippet $O=\\lbrace w_{l},w_{l+1}...,w_{r}\\rbrace $, where $1\\le l\\le r\\le N$, and then use $O$ to predict the sentiment of the given aspect. The network architecture is shown in Figure FIGREF5."
],
[
"Accurately modeling the relationships between sentence words and an aspect is the key to the success of the ABSA task. Many methods have been developed to model word-aspect relationships. wang2016attention simply concatenate the aspect embedding with the input word embeddings and sentence hidden representations for computing aspect-specific attention weights. Ma2017Interactive learn the aspect and sentence interactively by using two attention networks. tay2018learning adopt circular convolution of vectors for performing the word-aspect fusion.",
"In this paper, we employ BERT BIBREF9 to model the deep associations between the sentence words and the aspect. BERT is a powerful pre-trained model which has achieved remarkable results in many NLP tasks. The architecture of BERT is a multi-layer bidirectional Transformer Encoder BIBREF22, which uses the self-attention mechanism to capture complex interaction and dependency between terms within a sequence. To leverage BERT to model the relationships between the sentence and the aspect, we pack the sentence and aspect together into a single sequence and then feed it into BERT, as shown in Figure FIGREF5. With this sentence-aspect concatenation, both the word-aspect associations and word-word dependencies are modeled interactively and simultaneously. With the contextual token representations $T_S=T_{[1:N]}\\in \\mathbb {R}^{N\\times {H}}$ of the sentence, where $N$ is the sentence length and $H$ is the hidden size, we can then determine the start and end positions of the opinion snippet in the sentence."
],
[
"To fairly compare the performance of soft-selection approaches with hard-selection approaches, we use the same word-aspect fusion results $T_{S}$ from BERT. We implement the attention mechanism by adopting the approach similar to the work BIBREF23.",
"where $v_1\\in \\mathbb {R}^{H}$ and $W_1\\in \\mathbb {R}^{H\\times {H}}$ are the parameters. The normalized attention weights $\\alpha $ are used to softly select words from the whole sentence and generate the final aspect-specific sentence representation $g$. Then we make sentiment prediction as follows:",
"where $W_2\\in \\mathbb {R}^{C\\times {H}}$ and $b\\in \\mathbb {R}^{C}$ are the weight matrix and bias vector respectively. $\\hat{y}$ is the probability distribution on $C$ polarities. The polarity with highest probability is selected as the prediction."
],
[
"Our proposed hard-selection approach determines the start and end positions of the opinion snippet and selects the words between these two positions for sentiment prediction. Since we do not have the ground-truth opinion snippet, but only the polarity of the corresponding aspect, we adopt reinforcement learning BIBREF24 to train our model. To make sure that the end position comes after the start position, we determine the start and end sequentially as a sequence training problem BIBREF25. The parameters of the network, $\\Theta $, define a policy $p_{\\theta }$ and output an “action” that is the prediction of the position. For simplicity, we only generate two actions for determining the start and end positions respectively. After determining the start position, the “state\" is updated and then the end is conditioned on the start.",
"Specifically, we define a start vector $s\\in \\mathbb {R}^{H}$ and an end vector $e\\in \\mathbb {R}^{H}$. Similar to the prior work BIBREF9, the probability of a word being the start of the opinion snippet is computed as a dot product between its contextual token representation and $s$ followed by a softmax over all of the words of the sentence.",
"We then sample the start position $l$ based on the multinomial distribution $\\beta _l$. To guarantee the end comes after the start, the end is sampled only in the right part of the sentence after the start. Therefore, the state is updated by slicing operation ${T_S}^r=T_S[l:]$. Same as the start position, the end position $r$ is also sampled based on the distribution $\\beta _r$:",
"Then we have the opinion snippet $T_O=T_S{[l:r]}$ to predict the sentiment polarity of the given aspect in the sentence. The probabilities of the start position at $l$ and the end position at $r$ are $p(l)=\\beta _l[l]$ and $p(r)=\\beta _r[r]$ respectively."
],
[
"After we get the opinion snippet $T_O$ by the sampling of the start and end positions, we compute the final representation $g_o$ by the average of the opinion snippet, $g_o=avg(T_O)$. Then, equation DISPLAY_FORM9 with different weights is applied for computing the sentiment prediction $\\hat{y_o}$. The cross-entropy loss function is employed for computing the reward.",
"where $c$ is the index of the polarity class and $y$ is the ground truth."
],
[
"In this paper, we use reinforcement learning to learn the start and end positions. The goal of training is to minimize the negative expected reward as shown below.",
"where $\\Theta $ is all the parameters in our architecture, which includes the base method BERT, the position selection parameters $\\lbrace s,e\\rbrace $, and the parameters for sentiment prediction and then for reward calculation. Therefore, the state in our method is the combination of the sentence and the aspect. For each state, the action space is every position of the sentence.",
"To reduce the variance of the gradient estimation, the reward is associated with a reference reward or baseline $R_b$ BIBREF25. With the likelihood ratio trick, the objective function can be transformed as.",
"The baseline $R_b$ is computed based on the snippet determined by the baseline policy, which selects the start and end positions greedily by the $argmax$ operation on the $softmax$ results. As shown in Figure FIGREF5, the reward $R$ is calculated by sampling the snippet, while the baseline $R_b$ is computed by greedily selecting the snippet. Note that in the test stage, the snippet is determined by $argmax$ for inference."
],
[
"In this section, we compare our hard-selection model with various baselines. To assess the ability of alleviating the attention distraction, we further conduct experiments on a simulated multi-aspect dataset in which each sentence contains multiple aspects."
],
[
"We use the same datasets as the work by tay2018learning, which are already processed to token lists and released in Github. The datasets are from SemEval 2014 task 4 BIBREF26, and SemEval 2015 task 12 BIBREF27, respectively. For aspect term level sentiment classification task (denoted by T), we apply the Laptops and Restaurants datasets from SemEval 2014. For aspect category level sentiment prediction (denoted by C), we utilize the Restaurants dataset from SemEval 2014 and a composed dataset from both SemEval 2014 and SemEval 2015. The statistics of the datasets are shown in Table TABREF20."
],
[
"Our proposed models are implemented in PyTorch. We utilize the bert-base-uncased model, which contains 12 layers and the number of all parameters is 100M. The dimension $H$ is 768. The BERT model is initialized from the pre-trained model, other parameters are initialized by sampling from normal distribution $\\mathcal {N}(0,0.02)$. In our experiments, the batch size is 32. The reported results are the testing scores that fine-tuning 7 epochs with learning rate 5e-5."
],
[
"LSTM: it uses the average of all hidden states as the sentence representation for sentiment prediction. In this model, aspect information is not used.",
"TD-LSTM BIBREF14: it employs two LSTMs and both of their outputs are applied to predict the sentiment polarity.",
"AT-LSTM BIBREF2: it utilizes the attention mechanism to produce an aspect-specific sentence representation. This method is a kind of soft-selection approach.",
"ATAE-LSTM BIBREF2: it also uses the attention mechanism. The difference with AT-LSTM is that it concatenates the aspect embedding to each word embedding as the input to LSTM.",
"AF-LSTM(CORR) BIBREF5: it adopts circular correlation to capture the deep fusion between sentence words and the aspect, which can learn rich, higher-order relationships between words and the aspect.",
"AF-LSTM(CONV) BIBREF5: compared with AF-LSTM(CORR), this method applies circular convolution of vectors for performing word-aspect fusion to learn relationships between sentence words and the aspect.",
"BERT-Original: it makes sentiment prediction by directly using the final hidden vector $C$ from BERT with the sentence-aspect pair as input."
],
[
"BERT-Soft: as described in Section SECREF7, the contextual token representations from BERT are processed by self attention mechanism BIBREF23 and the attention-weighted sentence representation is utilized for sentiment classification.",
"BERT-Hard: as described in Section SECREF10, it takes the same input as BERT-Soft. It is called a hard-selection approach since it employs reinforcement learning techniques to explicitly select the opinion snippet corresponding to a particular aspect for sentiment prediction."
],
[
"In this section, we evaluate the performance of our models by comparing them with various baseline models. Experimental results are illustrated in Table TABREF21, in which 3-way represents 3-class sentiment classification (positive, negative and neutral) and Binary denotes binary sentiment prediction (positive and negative). The best score of each column is marked in bold.",
"Firstly, we observe that BERT-Original, BERT-Soft, and BERT-Hard outperform all soft attention baselines (in the first part of Table TABREF21), which demonstrates the effectiveness of fine-tuning the pre-trained model on the aspect-based sentiment classification task. Particularly, BERT-Original outperforms AF-LSTM(CONV) by 2.63%$\\sim $9.57%, BERT-Soft outperforms AF-LSTM(CONV) by 2.01%$\\sim $9.60% and BERT-Hard improves AF-LSTM(CONV) by 3.38%$\\sim $11.23% in terms of accuracy. Considering the average score across eight settings, BERT-Original outperforms AF-LSTM(CONV) by 6.46%, BERT-Soft outperforms AF-LSTM(CONV) by 6.47% and BERT-Hard outperforms AF-LSTM(CONV) by 7.19% respectively.",
"Secondly, we compare the performance of three BERT-related methods. The performance of BERT-Original and BERT-Soft are similar by comparing their average scores. The reason may be that the original BERT has already modeled the deep relationships between the sentence and the aspect. BERT-Original can be thought of as a kind of soft-selection approach as BERT-Soft. We also observe that the snippet selection by reinforcement learning improves the performance over soft-selection approaches in almost all settings. However, the improvement of BERT-Hard over BERT-Soft is marginal. The average score of BERT-Hard is better than BERT-Soft by 0.68%. The improvement percentages are between 0.36% and 1.49%, while on the Laptop dataset, the performance of BERT-Hard is slightly weaker than BERT-Soft. The main reason is that the datasets only contain a small portion of multi-aspect sentences with different polarities. The distraction of attention will not impact the sentiment prediction much in single-aspect sentences or multi-aspect sentences with the same polarities."
],
[
"On the one hand, the attention distraction issue becomes worse in multi-aspect sentences. In addition to noisy and misleading words, the attention is also prone to be distracted by opinion words from other aspects of the sentence. On the other hand, the attention distraction impacts the performance of sentiment prediction more in multi-aspect sentences than in single-aspect sentences. Hence, we evaluate the performance of our models on a test dataset with only multi-aspect sentences.",
"A multi-aspect sentence can be categorized by two dimensions: the Number of aspects and the Polarity dimension which indicates whether the sentiment polarities of all aspects are the same or not. In the dimension of Number, we categorize the multi-aspect sentences as 2-3 and More. 2-3 refers to the sentences with two or three aspects while More refers to the sentences with more than three aspects. The statistics in the original dataset shows that there are much more sentences with 2-3 aspects than those with More aspects. In the dimension Polarity, the multi-aspect sentences can be categorized into Same and Diff. Same indicates that all aspects in the sentence have the same sentiment polarity. Diff indicates that the aspects have different polarities.",
"Multi-aspect test set. To evaluate the performance of our models on multi-aspect sentences, we construct a new multi-aspect test set by selecting all multi-aspect sentences from the original training, development, and test sets of the Restaurants term-level task. The details are shown in Table TABREF37.",
"Multi-aspect training set. Since we use all multi-aspect sentences for testing, we need to generate some “virtual” multi-aspect sentences for training. The simulated multi-aspect training set includes the original single-aspect sentences and the newly constructed multi-aspect sentences, which are generated by concatenating multiple single-aspect sentences with different aspects. We keep the balance of each subtype in the new training set (see Table TABREF38). The number of Neutral sentences is the least among three sentiment polarities in all single-aspect sentences. We randomly select the same number of Positive and Negative sentences. Then we construct multi-aspect sentences by combining single-aspect sentences in different combinations of polarities. The naming for different combinations is simple. For example, 2P-1N indicates that the sentence has two positive aspects and one negative aspect, and P-N-Nu means that the three aspects in the sentence are positive, negative, and neutral respectively. For simplicity, we only construct 2-asp and 3-asp sentences which are also the majority in the original dataset.",
"Results and Discussions. The results on different types of multi-aspect sentences are shown in Table TABREF40. The performance of BERT-Hard is better than BERT-Original and BERT-Soft over all types of multi-aspect sentences. BERT-Hard outperforms BERT-Soft by 2.11% when the aspects have the same sentiment polarities. For multi-aspect sentences with different polarities, the improvements are more significant. BERT-Hard outperforms BERT-Soft by 7.65% in total of Diff. The improvements are 5.07% and 12.83% for the types 2-3 and More respectively, which demonstrates the ability of our model on handling sentences with More aspects. Particularly, BERT-Soft has the poorest performance on the subset Diff among the three methods, which proves that soft attention is more likely to cause attention distraction.",
"Intuitively, when multiple aspects in the sentence have the same sentiment polarities, even the attention is distracted to other opinion words of other aspects, it can still predict correctly to some extent. In such sentences, the impact of the attention distraction is not obvious and difficult to detect. However, when the aspects have different sentiment polarities, the attention distraction will lead to catastrophic error prediction, which will obviously decrease the classification accuracy. As shown in Table TABREF40, the accuracy of Diff is much worse than Same for all three methods. It means that the type of Diff is difficult to handle. Even though, the significant improvement proves that our hard-selection method can alleviate the attention distraction to a certain extent. For soft-selection methods, the attention distraction is inevitable due to their way in calculating the attention weights for every single word. The noisy or irrelevant words could seize more attention weights than the ground truth opinion words. Our method considers the opinion snippet as a consecutive whole, which is more resistant to attention distraction."
],
[
"In this section, we visualize the attention weights for BERT-Soft and opinion snippets for BERT-Hard. As demonstrated in Figure FIGREF39, the multi-aspect sentence “the appetizers are OK, but the service is slow” belongs to the category Diff. Firstly, the attention weights of BERT-Soft scatter among the whole sentence and could attend to irrelevant words. For the aspect service, BERT-Soft attends to the word “ok” with relatively high score though it does not describe the aspect service. This problem also exists for the aspect appetizers. Furthermore, the attention distraction could cause error prediction. For the aspect appetizers, “but” and “slow” gain high attention scores and cause the wrong sentiment prediction Negative.",
"Secondly, our proposed method BERT-Hard can detect the opinion snippet for a given aspect. As illustrated in Figure FIGREF39, the opinion snippets are selected by BERT-Hard accurately. In the sentence “the appetizers are ok, but the service is slow”, BERT-Hard can exactly locate the opinion snippets “ok” and “slow” for the aspect appetizers and service respectively.",
"At last, we enumerate some opinion snippets detected by BERT-Hard in Table TABREF42. Our method can precisely detect snippets even for latent opinion expression and alleviate the influence of noisy words. For instance, “cannot be beat for the quality” is hard to predict using soft attention because the sentiment polarity is transformed by the negative word “cannot”. Our method can select the whole snippet without bias to any word and in this way the attention distraction can be alleviated. We also list some inaccurate snippets in Table TABREF43. Some meaningless words around the true snippet are included, such as “are”, “and” and “at”. These words do not affect the final prediction. A possible explanation to these inaccurate words is that the true snippets are unlabeled and our method predicts them only by the supervisory signal from sentiment labels."
],
[
"In this paper, we propose a hard-selection approach for aspect-based sentiment analysis, which determines the start and end positions of the opinion snippet for a given input aspect. The deep associations between the sentence and aspect, and the long-term dependencies within the sentence are taken into consideration by leveraging the pre-trained BERT model. With the hard selection of the opinion snippet, our approach can alleviate the attention distraction problem of traditional attention-based soft-selection methods. Experimental results demonstrate the effectiveness of our method. Especially, our hard-selection approach outperforms soft-selection approaches significantly when handling multi-aspect sentences with different sentiment polarities."
],
[
"This work is supported by National Science and Technology Major Project, China (Grant No. 2018YFB0204304)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: Word-Aspect Fusion",
"Model ::: Soft-Selection Approach",
"Model ::: Hard-Selection Approach",
"Model ::: Hard-Selection Approach ::: Reward",
"Model ::: Hard-Selection Approach ::: Self-Critical Training",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Implementation Details",
"Experiments ::: Compared Models",
"Experiments ::: Our Models",
"Experiments ::: Experimental Results",
"Experiments ::: Experimental Results on Multi-Aspect Sentences",
"Experiments ::: Visualization",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"74dd33d80995be8867f75042a831f7cb1ca72cb0"
],
"answer": [
{
"evidence": [
"Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. This usually results in attention distraction BIBREF7, i.e., attending on noisy or misleading words, or opinion words from other aspects. Take Figure FIGREF1 as an example, for the aspect place in the sentence “the food is usually good but it certainly is not a relaxing place to go”, we visualize the attention weights from the model ATAE-LSTM BIBREF2. As we can see, the words “good” and “but” are dominant in attention weights. However, “good” is used to describe the aspect food rather than place, “but” is not so related to place either. The true opinion snippet “certainly is not a relaxing place” receives low attention weights, leading to the wrong prediction towards the aspect place.",
"FLOAT SELECTED: Table 2: Experimental results (accuracy %) on all the datasets. Models in the first part are baseline methods. The results in the first part (except BERT-Original) are obtained from the prior work (Tay et al., 2018). Avg column presents macro-averaged results across all the datasets."
],
"extractive_spans": [],
"free_form_answer": "LSTM and BERT ",
"highlighted_evidence": [
"Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. ",
"FLOAT SELECTED: Table 2: Experimental results (accuracy %) on all the datasets. Models in the first part are baseline methods. The results in the first part (except BERT-Original) are obtained from the prior work (Tay et al., 2018). Avg column presents macro-averaged results across all the datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"e62f778c2b034cfd4b047efdb0c64e57668d2cdd"
],
"answer": [
{
"evidence": [
"Secondly, we compare the performance of three BERT-related methods. The performance of BERT-Original and BERT-Soft are similar by comparing their average scores. The reason may be that the original BERT has already modeled the deep relationships between the sentence and the aspect. BERT-Original can be thought of as a kind of soft-selection approach as BERT-Soft. We also observe that the snippet selection by reinforcement learning improves the performance over soft-selection approaches in almost all settings. However, the improvement of BERT-Hard over BERT-Soft is marginal. The average score of BERT-Hard is better than BERT-Soft by 0.68%. The improvement percentages are between 0.36% and 1.49%, while on the Laptop dataset, the performance of BERT-Hard is slightly weaker than BERT-Soft. The main reason is that the datasets only contain a small portion of multi-aspect sentences with different polarities. The distraction of attention will not impact the sentiment prediction much in single-aspect sentences or multi-aspect sentences with the same polarities."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The main reason is that the datasets only contain a small portion of multi-aspect sentences with different polarities. The distraction of attention will not impact the sentiment prediction much in single-aspect sentences or multi-aspect sentences with the same polarities."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"5ca5dad60e963ff4e90581d1e170e75efd02ace0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which soft-selection approaches are evaluated?",
"Is the model evaluated against the baseline also on single-aspect sentences?",
"Is the accuracy of the opinion snippet detection subtask reported?"
],
"question_id": [
"e292676c8c75dd3711efd0e008423c11077938b1",
"1afd550cbee15b753db45d7db2c969fc3d12a7d9",
"2a7c40a72b6380e76511e722b4b02b3a1e5078fd"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Example of attention visualization. The attention weights of the aspect place are from the model ATAE-LSTM (Wang et al., 2016), a typical attention mechanism used for soft-selection.",
"Figure 2: Network Architecture. We leverage BERT to model the relationships between sentence words and a particular aspect. The sentence and aspect are packed together into a single sequence and fed into BERT, in which E represents the input embedding, and Ti represents the contextual representation of token i. With the contextual representations from BERT, the start and end positions are sequentially sampled and then the framed content is used for sentiment prediction. Reinforcement learning is adopted for solving the nondifferentiable problem of sampling.",
"Table 1: Dataset statistics. T and C denote the aspectterm and aspect-category tasks, respectively. P, N, and Nu represent the numbers of instances with positive, negative and neutral polarities, and All is the total number of instances.",
"Table 2: Experimental results (accuracy %) on all the datasets. Models in the first part are baseline methods. The results in the first part (except BERT-Original) are obtained from the prior work (Tay et al., 2018). Avg column presents macro-averaged results across all the datasets.",
"Table 5: Experimental results (accuracy %) on multiaspect sentences. The performance of the 3-way classification on the multi-aspect test set is reported.",
"Table 3: Distribution of the multi-aspect test set. Around 67% of the multi-aspect sentences belong to the Same category.",
"Table 4: Distribution of the multi-aspect training set. 2-asp and 3-asp indicate that the sentence contains two or three aspects respectively. Each multi-aspect sentence is categorized as Same or Diff.",
"Figure 3: Visualization. The attention weights are visualized for BERT-Soft, and the selected opinion snippets are marked for BERT-Hard. The correctness of the predicted results is also marked.",
"Table 6: Examples of accurate opinion snippets detected by BERT-Hard.",
"Table 7: Examples of inaccurate opinion snippets detected by BERT-Hard."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table5-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure3-1.png",
"8-Table6-1.png",
"8-Table7-1.png"
]
} | [
"Which soft-selection approaches are evaluated?"
] | [
[
"1909.11297-Introduction-1",
"1909.11297-6-Table2-1.png"
]
] | [
"LSTM and BERT "
] | 613 |
1911.01680 | Improving Slot Filling by Utilizing Contextual Information | Slot Filling is the task of extracting the semantic concept from a given natural language utterance. Recently it has been shown that using contextual information, either in word representations (e.g., BERT embeddings) or in the computation graph of the model, could improve the performance of the model. However, recent work uses the contextual information in a restricted manner, e.g., by concatenating the word representation and its context feature vector, limiting the model from learning any direct association between the context and the label of the word. We introduce a new deep model utilizing the contextual information for each word in the given sentence in a multi-task setting. Our model enforces consistency between the feature vectors of the context and the word while increasing the expressiveness of the context about the label of the word. Our empirical analysis on a slot filling dataset proves the superiority of the model over the baselines. | {
"paragraphs": [
[
"Slot Filling (SF) is the task of identifying the semantic concept expressed in natural language utterance. For instance, consider a request to edit an image expressed in natural language: “Remove the blue ball on the table and change the color of the wall to brown”. Here, the user asks for an \"Action\" (i.e., removing) on one “Object” (blue ball on the table) in the image and changing an “Attribute” (i.e., color) of the image to new “Value” (i.e., brown). Our goal in SF is to provide a sequence of labels for the given sentence to identify the semantic concept expressed in the given sentence.",
"Prior work have shown that contextual information could be useful for SF. They utilize contextual information either in word level representation (i.e., via contextualize embedding e.g., BERT BIBREF0) or in the model computation graph (e.g., concatenating the context feature to the word feature BIBREF1). However, such methods fail to capture the explicit dependence between the context of the word and its label. Moreover, such limited use of contextual information (i.e., concatenation of the feature vector and context vector) in the model cannot model the interaction between the word representation and its context. In order to alleviate these issues, in this work, we propose a novel model to explicitly increase the predictability of the word label using its context and increasing the interactivity between word representations and its context. More specifically, in our model we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and its context. In addition to these contributions, we also propose an auxiliary task to predict which labels are expressed in a given sentence. Our model is trained in a mutli-tasking framework. Our experiments on a SF dataset for identifying semantic concepts from natural language request to edit an image show the superiority of our model compared to previous baselines. Our model achieves the state-of-the-art results on the benchmark dataset by improving the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction."
],
[
"The task of Slot Filling is formulated as a sequence labeling problem. Deep learning has been extensively employed for this task (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11). The prior work has mainly utilized the recurrent neural network as the encoder to extract features per word and Conditional Random Field (CRF) BIBREF12 as the decoder to generate the labels per word. Recently the work BIBREF1 shows that the global context of the sentence could be useful to enhance the performance of neural sequence labeling. In their approach, they use a separate sequential model to extract word features. Afterwards, using max pooling over the representations of the words, they obtain the sentence representations and concatenate it to the word embedding as the input to the main task encoder (i.e. the RNN model to perform sequence labeling). The benefit of using the global context along the word representation is 2-fold: 1) it enhance the representations of the word by the semantics of the entire sentence thus the word representation are more contextualized 2) The global view of the sentence would increase the model performance as it contains information about the entire sentence and this information might not be encoded in word representations due to long decencies.",
"However, the simple concatenation of the global context and the word embeddings would not separately ensure these two benefits of the global context. In order to address this problem, we introduce a multi-task setting to separately ensure the aforementioned benefits of utilizing contextual information. In particular, to ensure the better contextualized representations of the words, the model is encourage to learn representations for the word which are consistent with its context. This is achieved via increasing the mutual information between the word representation and its context. To ensure the usefulness of the contextual information for the final task, we introduce two novel sub-tasks. The first one aims to employ the context of the word instead of the word representation to predict the label of the word. In the second sub-task, we use the global representation of the sentence to predict which labels exist in the given sentence in a multi-label classification setting. These two sub-tasks would encourage the contextual representations to be informative for both word level classification and sentence level classification."
],
[
"Our model is trained in a multi-task setting in which the main task is slot filling to identify the best possible sequence of labels for the given sentence. In the first auxiliary task we aim to increase consistency between the word representation and its context. The second auxiliary task is to enhance task specific information in contextual information. In this section, we explain each of these tasks in more details."
],
[
"The input to the model is a sequence of words $x_1,x_2,...,x_N$. The goal is to assign each word one of the labels action, object, attribute, value or other. Following other methods for sequence labelling, we use the BIO encoding schema. In addition to the sequence of words, the part-of-speech (POS) tags and the dependency parse tree of the input are given to the model.",
"The input word $x_i$ is represented by the concatenation of its pre-trained word embedding and its POS tag embedding, denoted by $e_i$. These representations are further abstracted using a 2-layer Bi-Directional Long Short-Term Memory (LSTM) to obtain feature vector $h_i$. We use the dependency tree of the sentence to utilize the syntactical information about the input text. This information could be useful to identify the important words and their dependents in the sentence. In order to model the syntactic tree, we utilize Graph Convolutional Network (GCN) BIBREF13 over the dependency tree. This model learns the contextualized representations of the words such that the representation of each word is contextualized by its neighbors. We employ 2-layer GCN with $h_i$ as the initial representation for the node (i.e., word) $i$th. The representations of the $i$th node is an aggregation of the representations of its neighbors. Formally the hidden representations of the $i$th word in $l$th layer of GCN is obtained by:",
"where $N(i)$ is the neighbors of the $i$th word in the dependency tree, $W_l$ is the weight matrix in $l$th layer and $deg(i)$ is the degree of the $i$th word in the dependency tree. The biases are omitted for brevity. The final representations of the GCN for $i$th word, $\\hat{h}_i$, represent the structural features for that word. Afterwards, we concatenate the structural features $\\hat{h}_i$ and sequential features $h_i$ to represent $i$th word by feature vector $h^{\\prime }_i$:",
"Finally in order to label each word in the sentence we employ a task specific 2-layer feed forward neural net followed by a logistic regression model to generate class scores $S_i$ for each word:",
"where $W_{LR}, W_1$ and $W_2$ are trainable parameters and $S_i$ is a vector of size number of classes in which each dimension of it is the score for the corresponding class. Since the main task is sequence labeling we exploit Conditional Random Field (CRF) as the final layer to predict the sequence of labels for the given sentence. More specifically, class scores $S_i$ are fed into the CRF layer as emission scores to obtain the final labeling score:",
"where $T$ is the trainable transition matrix and $\\theta $ is the parameters of the model to generate emission scores $S_i$. Viterbi loss $L_{VB}$ is used as the final loss function to be optimized during training. In the inference time, the Viterbi decoder is employed to find the sequence of labels with highest score."
],
[
"In this sub-task we aim to increase the consistency of the word representation and its context. To obtain the context of each word we perform max pooling over the all words of the sentence excluding the word itself:",
"where $h_i$ is the representation of the $i$th word from the Bi-LSTM. We aim to increase the consistency between vectors $h_i$ and $h^c_i$. One way to achieve this is by decreasing the distance between these two vectors. However, directly enforcing the word representation and its context to be close to each other would not be efficient as in long sentences the context might substantially differs from the word. So in order to make enough room for the model to represent the context of each word while it is consistent with the word representation, we employ an indirect method.",
"We propose to maximize the mutual information (MI) between the word representation and its context in the loss function. In information theory, MI evaluates how much information we know about one random variable if the value of another variable is revealed. Formally, the mutual information between two random variable $X_1$ and $X_2$ is obtained by:",
"Using this definition of MI, we can reformulate the MI equation as KL Divergence between the joint distribution $P_{X_1X_2}=P(X_1,X_2)$ and the product of marginal distributions $P_{X_1\\bigotimes X_2}=P(X_1)P(X_2)$:",
"Based on this understanding of MI, we can see that if the two random variables are dependent then the mutual information between them (i.e. the KL-Divergence in equation DISPLAY_FORM9) would be the highest. Consequently, if the representations $h_i$ and $h^c_i$ are encouraged to have large mutual information, we expect them to share more information. The mutual information would be introduced directly into the loss function for optimization.",
"One issue with this approach is that the computation of the MI for such high dimensional continuous vectors as $h_i$ and $h^c_i$ is prohibitively expensive. In this work, we propose to address this issue by employing the mutual information neural estimation (MINE) in BIBREF14 that seeks to estimate the lower bound of the mutual information between the high dimensional vectors via adversarial training. To this goal, MINE attempts to compute the lower bound of the KL divergence between the joint and marginal distributions of the given high dimensional vectors/variables. In particular, MINE computes the lower bound of the Donsker-Varadhan representation of KL-Divergence:",
"However, recently, it has been shown that other divergence metrics (i.e., the Jensen-Shannon divergence) could also be used for this purpose BIBREF15, BIBREF16, offering simpler methods to compute the lower bound for the MI. Consequently, following such methods, we apply the adversarial approach to obtain the MI lower bound via the binary cross entropy of a variable discriminator. This discriminator differentiates the variables that are sampled from the joint distribution from those that are sampled from product of the marginal distributions. In our case, the two variables are the word representation $h_i$ and context representation $h^c_i$. In order to sample from joint distributions, we simply concatenate $h_i$ and $h^c_i$ (i.e., the positive example). To sample from the product of the marginal distributions, we concatenate the representation $h_i$ with $h^c_j$ where $i\\ne j$ (i.e., the negative example). These samples are fed into a 2-layer feed forward neural network $D$ (i.e., the discriminator) to perform a binary classification (i.e., coming from the joint distribution or the product of the marginal distributions). Finally, we use the following binary cross entropy loss to estimate the mutual information between $h_i$ and $h^c_i$ to add into the overall loss function:",
"where $N$ is the length of the sentence and $[h,h^c_i]$ is the concatenation of the two vectors $h$ and $h^c_i$. This loss is added to the final loss function of the model."
],
[
"In addition to increasing consistency between the word representation and its context representation, we aim to increase the task specific information in contextual representations. This is desirable as the main task is utilizing the word representation to predict its label. Since our model enforce the consistency between the word representation and its context, increasing the task specific information in contextual representations would help the model's final performance.",
"In order to increase task-specific information in contextual representation, we train the model on two auxiliary tasks. The first one aims to use the context of each word to predict the label of that word and the goal of the second auxiliary task is to use the global context information to predict sentence level labels. We describe each of these tasks in more details in the following sections."
],
[
"In this sub-task we use the context representations of each word to predict its label. It will increase the information encoded in the context of the word about the label of the word. We use the same context vector $h^c_i$ for the $i$th word as described in the previous section. This vector is fed into a 2-layer feed forward neural network with a softmax layer at the end to output the probabilities for each class:",
"Where $W_2$ and $W_1$ are trainable parameters. Biases are omitted for brevity. Finally we use the following cross-entropy loss function to be optimized during training:",
"where $N$ is the length of the sentence and $l_i$ is the label of the $i$th word."
],
[
"The word label prediction enforces the context of each word to contain information about its label but it would not ensure the contextual information to capture the sentence level patterns for expressing intent. In other words, the word level prediction lacks a general view about the entire sentence. In order to increase the general information about the sentence in the representation of the words, we aim to predict the labels existing in a sentence from the representations of its words. More specifically, we introduce a new sub-task to predict which labels exit in the given sentence (Note that sentences might have only a subset of the labels; e.g. only action and object). We formulate this task as a multi-class classification problem. Formally, given the sentence $X=x_1,x_2,...,x_N$ and label set $S=\\lbrace action, attribute, object, value\\rbrace $ our goal is to predict the vector $L^s=l^s_1,l^s_2,...,l^s_{|S|}$ where $l^s_i$ is one if the sentence $X$ contains $i$th label from the label set $S$ otherwise it is zero.",
"First, we find representation of the sentence from the word representations. To this end, we use max pooling over all words of the sentence to obtain vector $H$:",
"Afterwards, the vector $H$ is further abstracted by a 2-layer feed forward neural net with a sigmoid function at the end:",
"where $W_2$ and $W_1$ are trainable parameters. Note that since this tasks is a multi-class classification the number of neurons at the final layer is equal to $|S|$. We optimize the following binary cross entropy loss function:",
"where $l_k$ is one if the sentence contains the $k$th label otherwise it is zero. Finally, to train the model we optimize the following loss function:",
"where $\\alpha $, $\\beta $ and $\\gamma $ are hyper parameters to be tuned using development set performance."
],
[
"In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset. We use the following hyper parameters in our model: We set the word embedding and POS embedding to 768 and 30 respectively; The pre-trained BERT BIBREF17 embedding are used to initialize word embeddings; The hidden dimension of the Bi-LSTM, GCN and feed forward networks are 200; the hyper parameters $\\alpha $, $\\beta $ and $\\gamma $ are all set to 0.1; We use Adam optimizer with learning rate 0.003 to train the model. We use micro-averaged F1 score on all labels as the evaluation metric.",
"We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools. Table TABREF22 shows the results on Test set. Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. This improvements proves the effectiveness of using contextual information for the task of slot filling.",
"In order to analyze the contribution of the proposed sub-tasks we also evaluate the model when we remove one of the sub-task and retrain the model. The results are reported in Table TABREF23. This table shows that all sub-tasks are required for the model to have its best performance. Among all sub-tasks the word level prediction using the contextual information has the major contribution to the model performance. This fact shows that contextual information trained to be informative about the final sub-task is necessary to obtain the representations which could boost the final model performance."
],
[
"In this work we introduce a new deep model for the task of Slot Filling. In a multi-task setting, our model increase the mutual information between word representations and its context, improve the label information in the context and predict which concepts are expressed in the given sentence. Our experiments on an image edit request corpus shows that our model achieves state-of-the-art results on this dataset."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: Slot Filling",
"Model ::: Consistency with Contextual Representation",
"Model ::: Prediction by Contextual Information",
"Model ::: Prediction by Contextual Information ::: Predicting Word Label",
"Model ::: Prediction by Contextual Information ::: Predicting Sentence Labels",
"Experiments",
"Conclusion & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"6375c2b7f5867691a863a814ceea183b4690467d"
],
"answer": [
{
"evidence": [
"Prior work have shown that contextual information could be useful for SF. They utilize contextual information either in word level representation (i.e., via contextualize embedding e.g., BERT BIBREF0) or in the model computation graph (e.g., concatenating the context feature to the word feature BIBREF1). However, such methods fail to capture the explicit dependence between the context of the word and its label. Moreover, such limited use of contextual information (i.e., concatenation of the feature vector and context vector) in the model cannot model the interaction between the word representation and its context. In order to alleviate these issues, in this work, we propose a novel model to explicitly increase the predictability of the word label using its context and increasing the interactivity between word representations and its context. More specifically, in our model we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and its context. In addition to these contributions, we also propose an auxiliary task to predict which labels are expressed in a given sentence. Our model is trained in a mutli-tasking framework. Our experiments on a SF dataset for identifying semantic concepts from natural language request to edit an image show the superiority of our model compared to previous baselines. Our model achieves the state-of-the-art results on the benchmark dataset by improving the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction."
],
"extractive_spans": [
"we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence"
],
"free_form_answer": "",
"highlighted_evidence": [
"More specifically, in our model we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and its context."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"650838d150c1ed5d34086a59c4b4a72ba1b51889"
],
"answer": [
{
"evidence": [
"In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset. We use the following hyper parameters in our model: We set the word embedding and POS embedding to 768 and 30 respectively; The pre-trained BERT BIBREF17 embedding are used to initialize word embeddings; The hidden dimension of the Bi-LSTM, GCN and feed forward networks are 200; the hyper parameters $\\alpha $, $\\beta $ and $\\gamma $ are all set to 0.1; We use Adam optimizer with learning rate 0.003 to train the model. We use micro-averaged F1 score on all labels as the evaluation metric."
],
"extractive_spans": [
"micro-averaged F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use micro-averaged F1 score on all labels as the evaluation metric."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7438654c7049be4498983277621d4de7f54963db"
],
"answer": [
{
"evidence": [
"We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools. Table TABREF22 shows the results on Test set. Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. This improvements proves the effectiveness of using contextual information for the task of slot filling."
],
"extractive_spans": [
" improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction"
],
"free_form_answer": "",
"highlighted_evidence": [
" Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5e5e4baeb86a02d50b62429afcac0d117f034504"
],
"answer": [
{
"evidence": [
"We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools. Table TABREF22 shows the results on Test set. Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. This improvements proves the effectiveness of using contextual information for the task of slot filling."
],
"extractive_spans": [
"Adobe internal NLU tool",
"Pytext",
"Rasa"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b8b0b19d6f47c7a41feeeeb7b00302f0839c6520"
],
"answer": [
{
"evidence": [
"In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset. We use the following hyper parameters in our model: We set the word embedding and POS embedding to 768 and 30 respectively; The pre-trained BERT BIBREF17 embedding are used to initialize word embeddings; The hidden dimension of the Bi-LSTM, GCN and feed forward networks are 200; the hyper parameters $\\alpha $, $\\beta $ and $\\gamma $ are all set to 0.1; We use Adam optimizer with learning rate 0.003 to train the model. We use micro-averaged F1 score on all labels as the evaluation metric.",
"FLOAT SELECTED: Table 1: Label Statistics"
],
"extractive_spans": [],
"free_form_answer": "Dataset has 1737 train, 497 dev and 559 test sentences.",
"highlighted_evidence": [
"In our experiments, we use Onsei Intent Slot dataset. Table TABREF21 shows the statics of this dataset.",
"FLOAT SELECTED: Table 1: Label Statistics"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How does their model utilize contextual information for each work in the given sentence in a multi-task setting? setting?",
"What metris are used for evaluation?",
"How better is proposed model compared to baselines?",
"What are the baselines?",
"How big is slot filing dataset?"
],
"question_id": [
"dcdcd977f18206da3ff8ad0ffb14f7bc5e126c7d",
"5efa19058f815494b72c44d746c157e9403f726e",
"71f135be79341e61c28c3150b1822d0c4d0ca8d6",
"cb8e2069218e30c643013c20e93ebe23525d9f55",
"2d47cdf2c1e0c64c73518aead1b94e0ee594b7a5"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Label Statistics",
"Table 3: Performance of the model when the loss function of each sub-task has been removed from the final loss function. MI, WP and SP stand for Mutual Information, Word Prediction and Sentence Prediction respectively."
],
"file": [
"5-Table1-1.png",
"5-Table3-1.png"
]
} | [
"How big is slot filing dataset?"
] | [
[
"1911.01680-5-Table1-1.png",
"1911.01680-Experiments-0"
]
] | [
"Dataset has 1737 train, 497 dev and 559 test sentences."
] | 617 |
1809.08298 | How do you correct run-on sentences it's not as easy as it seems | Run-on sentences are common grammatical mistakes but little research has tackled this problem to date. This work introduces two machine learning models to correct run-on sentences that outperform leading methods for related tasks, punctuation restoration and whole-sentence grammatical error correction. Due to the limited annotated data for this error, we experiment with artificially generating training data from clean newswire text. Our findings suggest artificial training data is viable for this task. We discuss implications for correcting run-ons and other types of mistakes that have low coverage in error-annotated corpora. | {
"paragraphs": [
[
"A run-on sentence is defined as having at least two main or independent clauses that lack either a conjunction to connect them or a punctuation mark to separate them. Run-ons are problematic because they not only make the sentence unfriendly to the reader but potentially also to the local discourse. Consider the example in Table TABREF1 .",
"In the field of grammatical error correction (GEC), most work has typically focused on determiner, preposition, verb and other errors which non-native writers make more frequently. Run-ons have received little to no attention even though they are common errors for both native and non-native speakers. Among college students in the United States, run-on sentences are the 18th most frequent error and the 8th most frequent error made by students who are not native English speakers BIBREF0 .",
"Correcting run-on sentences is challenging BIBREF1 for several reasons:",
"In this paper, we analyze the task of automatically correcting run-on sentences. We develop two methods: a conditional random field model (roCRF) and a Seq2Seq attention model (roS2S) and show that they outperform models from the sister tasks of punctuation restoration and whole-sentence grammatical error correction. We also experiment with artificially generating training examples in clean, otherwise grammatical text, and show that models trained on this data do nearly as well predicting artificial and naturally occurring run-on sentences."
],
[
"Early work in the field of GEC focused on correcting specific error types such as preposition and article errors BIBREF2 , BIBREF3 , BIBREF4 , but did not consider run-on sentences. The closest work to our own is BIBREF5 , who used Conditional Random Fields (CRFs) for correcting comma errors (excluding comma splices, a type of run-on sentence). BIBREF6 used a similar system based on CRFs but focused on comma splice correction. Recently, the field has focused on the task of whole-sentence correction, targeting all errors in a sentence in one pass. Whole-sentence correction methods borrow from advances in statistical machine translation BIBREF7 , BIBREF8 , BIBREF9 and, more recently, neural machine translation BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 .",
"To date, GEC systems have been evaluated on corpora of non-native student writing such as NUCLE BIBREF14 and the Cambridge Learner Corpus First Certificate of English BIBREF15 . The 2013 and 2014 CoNLL Shared Tasks in GEC used NUCLE as their train and test sets BIBREF16 , BIBREF17 . There are few instances of run-on sentences annotated in both test sets, making it hard to assess system performance on that error type.",
"A closely related task to run-on error correction is that of punctuation restoration in the automatic speech recognition (ASR) field. Here, a system takes as input a speech transcription and is tasked with inserting any type of punctuation where appropriate. Most work utilizes textual features with n-gram models BIBREF18 , CRFs BIBREF19 , convolutional neural networks or recurrent neural networks BIBREF20 , BIBREF21 . The Punctuator BIBREF22 is a leading punctuation restoration system based on a sequence-to-sequence model (Seq2Seq) trained on long slices of text which can span multiple sentences."
],
[
"We treat correcting run-ons as a sequence labeling task: given a sentence, the model reads each token and learns whether there is a SPACE or PERIOD following that token, as shown in Table TABREF5 . We apply two sequence models to this task, conditional random fields (roCRF) and Seq2Seq (roS2S)."
],
[
"Our CRF model, roCRF, represents a sentence as a sequence of spaces between tokens, labeled to indicate whether a period should be inserted in that space. Each space is represented by contextual features (sequences of tokens, part-of-speech tags, and capitalization flags around each space), parse features (the highest uncommon ancestor of the word before and after the space, and binary indicators of whether the highest uncommon ancestors are preterminals), and a flag indicating whether the mean per-word perplexity of the text decreases when a period is inserted at the space according to a 5-gram language model."
],
[
"Another approach is to treat it as a form of neural sequence generation. In this case, the input sentence is a single run-on sentence. During decoding we pass the binary label which determines if there is terminal punctuation following the token at the current position. We then combine the generated label and the input sequence to get the final output.",
"Our model, roS2S, is a Seq2Seq attention model based on the neural machine translation model BIBREF23 . The encoder is a bidirectional LSTM, where a recurrent layer processes the input sequence in both forward and backward direction. The decoder is a uni-directional LSTM. An attention mechanism is used to obtain the context vector."
],
[
"Results are shown in Table TABREF11 . A correct judgment is where a run-on sentence is detected and a PERIOD is inserted in the right place. Across all datasets, roCRF has the highest precision. We speculate that roCRF consistently has the highest precision because it is the only model to use POS and syntactic features, which may restrict the occurrence of false positives by identifying longer distance, structural dependencies. roS2S is able to generalize better than roCRF, resulting in higher recall with only a moderate impact on precision. On all datasets except RealESL, roS2S consistently has the highest overall INLINEFORM0 score. In general, Punctuator has the highest recall, probably because it is trained for a more general purpose task and tries to predict punctuation at each possible position, resulting in lower precision than the other models.",
"NUS18 predicts only a few false positives and no true positives, so INLINEFORM0 and we exclude it from the results table. Even though NUS18 is trained on NUCLE, which RealESL encompasses, its very poor performance is not too surprising given the infrequency of run-ons in NUCLE."
],
[
"Correcting run-on sentences is a challenging task that has not been individually targeted in earlier GEC models. We have developed two new models for run-on sentence correction: a syntax-aware CRF model, roCRF, and a Seq2Seq model, roS2S. Both of these outperform leading models for punctuation restoration and grammatical error correction on this task. In particular, roS2S has very strong performance, with INLINEFORM0 and INLINEFORM1 on run-ons generated from clean and noisy data, respectively. roCRF has very high precision ( INLINEFORM2 ) but low recall, meaning that it does not generalize as well as the leading system, roS2S.",
"Run-on sentences have low frequency in annotated GEC data, so we experimented with artificially generated training data. We chose clean newswire text as the source for training data to ensure there were no unlabeled naturally occurring run-ons in the training data. Using ungrammatical text as a source of artificial data is an area of future work. The results of this study are inconclusive in terms of how much harder the task is on clean versus noisy text. However, our findings suggest that artificial run-ons are similar to naturally occurring run-ons in ungrammatical text because models trained on artificial data do just as well predicting real run-ons as artificial ones.",
"In this work, we found that a leading GEC model BIBREF11 does not correct any run-on sentences, even though there was an overlap between the test and training data for that model. This supports the recent work of BIBREF29 , who found that GEC systems tend to ignore less frequent errors due to reference bias. Based on our work with run-on sentences, a common error type that is infrequent in annotated data, we strongly encourage future GEC work to address low-coverage errors."
],
[
"We thank the three anonymous reviewers for their helpful feedback."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Descriptions",
"Conditional Random Fields",
"Sequence to Sequence Model with Attention Mechanism",
"Results and Analysis",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"cade44031c6d067827fcc5c72a6734d07dfa0bb7"
],
"answer": [
{
"evidence": [
"In this paper, we analyze the task of automatically correcting run-on sentences. We develop two methods: a conditional random field model (roCRF) and a Seq2Seq attention model (roS2S) and show that they outperform models from the sister tasks of punctuation restoration and whole-sentence grammatical error correction. We also experiment with artificially generating training examples in clean, otherwise grammatical text, and show that models trained on this data do nearly as well predicting artificial and naturally occurring run-on sentences."
],
"extractive_spans": [
"conditional random field model",
"Seq2Seq attention model"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we analyze the task of automatically correcting run-on sentences. We develop two methods: a conditional random field model (roCRF) and a Seq2Seq attention model (roS2S) and show that they outperform models from the sister tasks of punctuation restoration and whole-sentence grammatical error correction."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5e704f85a8810d07bedff6e6a7f8b23acdd86e3b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Number of run-on (RO) and non-run-on (Non-RO) sentences in our datasets."
],
"extractive_spans": [],
"free_form_answer": "4.756 million sentences",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Number of run-on (RO) and non-run-on (Non-RO) sentences in our datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which machine learning models do they use to correct run-on sentences?",
"How large is the dataset they generate?"
],
"question_id": [
"7633be56ae46c163fb21cd1afd018f989eb6b524",
"dafa760e1466e9eaa73ad8cb39b229abd5babbda"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: A run-on sentence before and after correction.",
"Table 2: NUCLE sentence labeled to indicate what follows each token: a space (S) or period (P).",
"Table 3: Number of run-on (RO) and non-run-on (Non-RO) sentences in our datasets.",
"Table 4: Performance on clean v. noisy artificial data with 10% run-ons, and real v. artificial data with 1% run-ons."
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png"
]
} | [
"How large is the dataset they generate?"
] | [
[
"1809.08298-3-Table3-1.png"
]
] | [
"4.756 million sentences"
] | 618 |
1711.00331 | Semantic Structure and Interpretability of Word Embeddings | Dense word embeddings, which encode semantic meanings of words to low dimensional vector spaces have become very popular in natural language processing (NLP) research due to their state-of-the-art performances in many NLP tasks. Word embeddings are substantially successful in capturing semantic relations among words, so a meaningful semantic structure must be present in the respective vector spaces. However, in many cases, this semantic structure is broadly and heterogeneously distributed across the embedding dimensions, which makes interpretation a big challenge. In this study, we propose a statistical method to uncover the latent semantic structure in the dense word embeddings. To perform our analysis we introduce a new dataset (SEMCAT) that contains more than 6500 words semantically grouped under 110 categories. We further propose a method to quantify the interpretability of the word embeddings; the proposed method is a practical alternative to the classical word intrusion test that requires human intervention. | {
"paragraphs": [
[
"Words are the smallest elements of a language with a practical meaning. Researchers from diverse fields including linguistics BIBREF0 , computer science BIBREF1 and statistics BIBREF2 have developed models that seek to capture “word meaning\" so that these models can accomplish various NLP tasks such as parsing, word sense disambiguation and machine translation. Most of the effort in this field is based on the distributional hypothesis BIBREF3 which claims that a word is characterized by the company it keeps BIBREF4 . Building on this idea, several vector space models such as well known Latent Semantic Analysis (LSA) BIBREF5 and Latent Dirichlet Allocation (LDA) BIBREF6 that make use of word distribution statistics have been proposed in distributional semantics. Although these methods have been commonly used in NLP, more recent techniques that generate dense, continuous valued vectors, called embeddings, have been receiving increasing interest in NLP research. Approaches that learn embeddings include neural network based predictive methods BIBREF1 , BIBREF7 and count-based matrix-factorization methods BIBREF8 . Word embeddings brought about significant performance improvements in many intrinsic NLP tasks such as analogy or semantic textual similarity tasks, as well as downstream NLP tasks such as part-of-speech (POS) tagging BIBREF9 , named entity recognition BIBREF10 , word sense disambiguation BIBREF11 , sentiment analysis BIBREF12 and cross-lingual studies BIBREF13 .",
"Although high levels of success have been reported in many NLP tasks using word embeddings, the individual embedding dimensions are commonly considered to be uninterpretable BIBREF14 . Contrary to some earlier sparse vector space models such as Hyperspace Analogue to Language (HAL) BIBREF15 , what is represented in each dimension of word embeddings is often unclear, rendering them a black-box approach. In contrast, embedding models that yield dimensions that are more easily interpretable in terms of the captured information can be better suited for NLP tasks that require semantic interpretation, including named entity recognition and retrieval of semantically related words. Model interpretability is also becoming increasingly relevant from a regulatory standpoint, as evidenced by the recent EU regulation that grants people with a “right to explanation\" regarding automatic decision making algorithms BIBREF16 .",
"Although word embeddings are a dominant part of NLP research, most studies aim to maximize the task performance on standard benchmark tests such as MEN BIBREF17 or Simlex-999 BIBREF18 . While improved test performance is undoubtedly beneficial, an embedding with enhanced performance does not necessarily reveal any insight about the semantic structure that it captures. A systematic assessment of the semantic structure intrinsic to word embeddings would enable an improved understanding of this popular approach, would allow for comparisons among different embeddings in terms of interpretability and potentially motivate new research directions.",
"In this study, we aim to bring light to the semantic concepts implicitly represented by various dimensions of a word embedding. To explore these hidden semantic structures, we leverage the category theory BIBREF19 that defines a category as a grouping of concepts with similar properties. We use human-designed category labels to ensure that our results and interpretations closely reflect human judgements. Human interpretation can make use of any kind of semantic relation among words to form a semantic group (category). This does not only significantly increase the number of possible categories but also makes it difficult and subjective to define a category. Although several lexical databases such as WordNet BIBREF0 have a representation for relations among words, they do not provide categories as needed for this study. Since there is no gold standard for semantic word categories to the best of our knowledge, we introduce a new category dataset where more than 6,500 different words are grouped in 110 semantic categories. Then, we propose a method based on distribution statistics of category words within the embedding space in order to uncover the semantic structure of the dense word vectors. We apply quantitative and qualitative tests to substantiate our method. Finally, we claim that the semantic decomposition of the embedding space can be used to quantify the interpretability of the word embeddings without requiring any human effort unlike the word intrusion test BIBREF20 .",
"This paper is organized as follows: Following a discussion of related work in Section \"Related Work\" , we describe our methods in Section \"Methods\" . In this section we introduce our dataset and also describe methods we used to investigate the semantic decomposition of the embeddings, to validate our findings and to measure the interpretability. In Section \"Results\" , we present the results of our experiments and finally we conclude the paper in Section \"Discussion and Conclusion\" ."
],
[
"In the word embedding literature, the problem of interpretability has been approached via several different routes. For learning sparse, interpretable word representations from co-occurrence variant matrices, BIBREF21 suggested algorithms based on non-negative matrix factorization (NMF) and the resulting representations are called non-negative sparse embeddings (NNSE). To address memory and scale issues of the algorithms in BIBREF21 , BIBREF22 proposed an online method of learning interpretable word embeddings. In both studies, interpretability was evaluated using a word intrusion test introduced in BIBREF20 . The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. As an alternative method to incorporate human judgement, BIBREF23 proposed joint non-negative sparse embedding (JNNSE), where the aim is to combine text-based similarity information among words with brain activity based similarity information to improve interpretability. Yet, this approach still requires labor-intensive collection of neuroimaging data from multiple subjects.",
"Instead of learning interpretable word representations directly from co-occurrence matrices, BIBREF24 and BIBREF25 proposed to use sparse coding techniques on conventional dense word embeddings to obtain sparse, higher dimensional and more interpretable vector spaces. However, since the projection vectors that are used for the transformation are learned from the word embeddings in an unsupervised manner, they do not have labels describing the corresponding semantic categories. Moreover, these studies did not attempt to enlighten the dense word embedding dimensions, rather they learned new high dimensional sparse vectors that perform well on specific tests such as word similarity and polysemy detection. In BIBREF25 , interpretability of the obtained vector space was evaluated using the word intrusion test. An alternative approach was proposed in BIBREF26 , where interpretability was quantified by the degree of clustering around embedding dimensions and orthogonal transformations were examined to increase interpretability while preserving the performance of the embedding. Note, however, that it was shown in BIBREF26 that total interpretability of an embedding is constant under any orthogonal transformation and it can only be redistributed across the dimensions. With a similar motivation to BIBREF26 , BIBREF27 proposed rotation algorithms based on exploratory factor analysis (EFA) to preserve the expressive performance of the original word embeddings while improving their interpretability. In BIBREF27 , interpretability was calculated using a distance ratio (DR) metric that is effectively proportional to the metric used in BIBREF26 . Although interpretability evaluations used in BIBREF26 and BIBREF27 are free of human effort, they do not necessarily reflect human interpretations since they are directly calculated from the embeddings.",
"Taking a different perspective, a recent study, BIBREF28 , attempted to elucidate the semantic structure within NNSE space by using categorized words from the HyperLex dataset BIBREF29 . The interpretability levels of embedding dimensions were quantified based on the average values of word vectors within categories. However, HyperLex is constructed based on a single type of semantic relation (hypernym) and average number of words representing a category is significantly low ( $\\approx 2$ ) making it challenging to conduct a comprehensive analysis."
],
[
"To address the limitations of the approaches discussed in Section \"Related Work\" , in this study we introduce a new conceptual category dataset. Based on this dataset, we propose statistical methods to capture the hidden semantic concepts in word embeddings and to measure the interpretability of the embeddings."
],
[
"Understanding the hidden semantic structure in dense word embeddings and providing insights on interpretation of their dimensions are the main objectives of this study. Since embeddings are formed via unsupervised learning on unannotated large corpora, some conceptual relationships that humans anticipate may be missed and some that humans do not anticipate may be formed in the embedding space BIBREF30 . Thus, not all clusters obtained from a word embedding space will be interpretable. Therefore, using the clusters in the dense embedding space might not take us far towards interpretation. This observation is also rooted in the need for human judgement in evaluating interpretability.",
"To provide meaningful interpretations for embedding dimensions, we refer to the category theory BIBREF19 where concepts with similar semantic properties are grouped under a common category. As mentioned earlier, using clusters from the embedding space as categories may not reflect human expectations accurately, hence having a basis based on human judgements is essential for evaluating interpretability. In that sense, semantic categories as dictated by humans can be considered a gold standard for categorization tasks since they directly reflect human expectations. Therefore, using supervised categories can enable a proper investigation of the word embedding dimensions. In addition, by comparing the human-categorized semantic concepts with the unsupervised word embeddings, one can acquire an understanding of what kind of concepts can or cannot be captured by the current state-of-the-art embedding algorithms.",
"In the literature, the concept of category is commonly used to indicate super-subordinate (hyperonym-hyponym) relations where words within a category are types or examples of that category. For instance, the furniture category includes words for furniture names such as bed or table. The HyperLex category dataset BIBREF29 , which was used in BIBREF28 to investigate embedding dimensions, is constructed based on this type of relation that is also the most frequently encoded relation among sets of synonymous words in the WordNet database BIBREF0 . However, there are many other types of semantic relations such as meronymy (part-whole relations), antonymy (opposite meaning words), synonymy (words having the same sense) and cross-Part of Speech (POS) relations (i.e. lexical entailments). Although WordNet provides representations for a subset of these relations, there is no clear procedure for constructing unified categories based on multiple different types of relations. It remains unclear what should be considered as a category, how many categories there should be, how narrow or broad they should be, and which words they should contain. Furthermore, humans can group words by inference, based on various physical or numerical properties such as color, shape, material, size or speed, increasing the number of possible groups almost unboundedly. For instance, words that may not be related according to classical hypernym or synonym relations might still be grouped under a category due to shared physical properties: sun, lemon and honey are similar in terms of color; spaghetti, limousine and sky-scanner are considered as tall; snail, tractor and tortoise are slow.",
"In sum, diverse types of semantic relationships or properties can be leveraged by humans for semantic interpretation. Therefore, to investigate the semantic structure of the word embedding space using categorized words, we need categories that represent a broad variety of distinct concepts and distinct types of relations. To the best of our knowledge, there is no comprehensive word category dataset that captures the many diverse types of relations mentioned above. What we have found closest to the required dataset are the online categorized word-lists that were constructed for educational purposes. There are a total of 168 categories on these word-lists. To build a word-category dataset suited for assessing the semantic structure in word embeddings, we took these word-lists as a foundational basis. We filtered out words that are not semantically related but share a common nuisance property such as their POS tagging (verbs, adverbs, adjectives etc.) or being compound words. Several categories containing proper words or word phrases such as the chinese new year and good luck symbols categories, which we consider too specific, are also removed from the dataset. Vocabulary is limited to the most frequent 50,000 words, where frequencies are calculated from English Wikipedia, and words that are not contained in this vocabulary are removed from the dataset. We call the resulting semantically grouped word dataset “SEMCAT\" (SEMantic CATegories). Summary statistics of SEMCAT and HyperLex datasets are given in Table 1 . 10 sample words from each of 6 representative SEMCAT categories are given in Table 2 ."
],
[
"In this study, we use GloVe BIBREF8 as the source algorithm for learning dense word vectors. The entire content of English Wikipedia is utilized as the corpus. In the preprocessing step, all non-alphabetic characters (punctuations, digits, etc.) are removed from the corpus and all letters are converted to lowercase. Letters coming after apostrophes are taken as separate words (she'll becomes she ll). The resulting corpus is input to the GloVe algorithm. Window size is set to 15, vector length is chosen to be 300 and minimum occurrence count is set to 20 for the words in the corpus. Default values are used for the remaining parameters. The word embedding matrix, $\\mathcal {E}$ , is obtained from GloVe after limiting vocabulary to the most frequent 50,000 words in the corpus (i.e. $\\mathcal {E}$ is 50,000 $\\times $ 300). The GloVe algorithm is again used for the second time on the same corpus generating a second embedding space, $\\mathcal {E}^2$ , to examine the effects of different initializations of the word vectors prior to training.",
"To quantify the significance of word embedding dimensions for a given semantic category, one should first understand how a semantic concept can be captured by a dimension, and then find a suitable metric to measure it. BIBREF28 assumed that a dimension represents a semantic category if the average value of the category words for that dimension is above an empirical threshold, and therefore took that average value as the representational power of the dimension for the category. Although this approach may be convenient for NNSE, directly using the average values of category words is not suitable for well-known dense word embeddings due to several reasons. First, in dense embeddings it is possible to encode in both positive and negative directions of the dimensions making a single threshold insufficient. In addition, different embedding dimensions may have different statistical characteristics. For instance, average value of the words from the jobs category of SEMCAT is around 0.38 and 0.44 in 221st and 57th dimensions of $\\mathcal {E}$ respectively; and the average values across all vocabulary are around 0.37 and -0.05 respectively for the two dimensions. Therefore, the average value of 0.38 for the jobs category may not represent any encoding in the 221st dimension since it is very close to the average of any random set of words in that dimension. In contrast, an average of similar value 0.44 for the jobs category may be highly significant for the 57th dimension. Note that focusing solely on average values might be insufficient to measure the encoding strength of a dimension for a semantic category. For instance, words from the car category have an average of -0.08 that is close to the average across all vocabulary, -0.04, for the 133th embedding dimension. However, standard deviation of the words within the car category is 0.15 which is significantly lower than the standard deviation of all vocabulary, 0.35, for this particular dimension. In other words, although average of words from the car category is very close to the overall mean, category words are more tightly grouped compared to other vocabulary words in the 133th embedding dimension, potentially implying significant encoding.",
"From a statistical perspective, the question of “How strong a particular concept is encoded in an embedding dimension?\" can be interpreted as “How much information can be extracted from a word embedding dimension regarding a particular concept?\". If the words representing a concept (i.e. words in a SEMCAT category) are sampled from the same distribution with all vocabulary words, then the answer would be zero since the category would be statistically equivalent to a random selection of words. For dimension $i$ and category $j$ , if $\\mathcal {P}_{i,j}$ denotes the distribution from which words of that category are sampled and $\\mathcal {Q}_{i,j}$ denotes the distribution from which all other vocabulary words are sampled, then the distance between distributions $\\mathcal {P}_{i,j}$ and $\\mathcal {Q}_{i,j}$ will be proportional to the information that can be extracted from dimension $i$ regarding category $j$ . Based on this argument, Bhattacharya distance BIBREF31 with normal distribution assumption is a suitable metric, which is given in ( 10 ), to quantify the level of encoding in the word embedding dimensions. Normality of the embedding dimensions are tested using one-sample Kolmogorov-Smirnov test (KS test, Bonferroni corrected for multiple comparisons). ",
"$$ \n{\\mathcal {W}_B(i,j)} = \\frac{1}{4}\\ln \\left(\\frac{1}{4}\\left(\\frac{\\sigma ^2_{p_{i,j}}}{\\sigma ^2_{q_{i,j}}} + \\frac{\\sigma ^2_{q_{i,j}}}{\\sigma ^2_{p_{i,j}}} + 2\\right)\\right) \\\\ + \\frac{1}{4}\\left(\\frac{\\left(\\mu _{p_{i,j}} - \\mu _{q_{i,j}}\\right)^2}{\\sigma ^2_{p_{i,j}} + \\sigma ^2_{q_{i,j}}}\\right)$$ (Eq. 10) ",
"In ( 10 ), $\\mathcal {W}_B$ is a $300\\times 110$ Bhattacharya distance matrix, which can also be considered as a category weight matrix, $i$ is the dimension index ( $i \\in \\lbrace 1, 2, ..., 300\\rbrace $ ), $j$ is the category index ( $j \\in \\lbrace 1, 2, ..., 110\\rbrace $ ). $p_{i,j}$ is the vector of the $i^{th}$ dimension of each word in $j^{th}$ category and $q_{i,j}$ is the vector of the $300\\times 110$0 dimension of all other vocabulary words ( $300\\times 110$1 is of length $300\\times 110$2 and $300\\times 110$3 is of length ( $300\\times 110$4 ) where $300\\times 110$5 is the number of words in the $300\\times 110$6 category). $300\\times 110$7 and $300\\times 110$8 are the mean and the standard deviation operations, respectively. Values in $300\\times 110$9 can range from 0 (if $i$0 and $i$1 have the same means and variances) to $i$2 . In general, a better separation of category words from remaining vocabulary words in a dimension results in larger $i$3 elements for the corresponding dimension.",
"Based on SEMCAT categories, for the learned embedding matrices $\\mathcal {E}$ and $\\mathcal {E}^2$ , the category weight matrices ( $\\mathcal {W}_B$ and $\\mathcal {W}^2_B$ ) are calculated using Bhattacharya distance metric ( 10 ).",
"The KS test for normality reveals that 255 dimensions of $\\mathcal {E}$ are normally distributed ( $p > 0.05$ ). The average test statistic for these 255 dimensions is $0.0064 \\pm 0.0016$ (mean $\\pm $ standard deviation). While the normality hypothesis was rejected for the remaining 45 dimensions, a relatively small test statistic of $0.0156 \\pm 0.0168$ is measured, indicating that the distribution of these dimensions is approximately normal.",
"The semantic category weights calculated using the method introduced in Section \"Semantic Decomposition\" is displayed in Figure 2 . A close examination of the distribution of category weights indicates that the representation of semantic concepts are broadly distributed across many dimensions of the GloVe embedding space. This suggests that the raw space output by the GloVe algorithm has poor interpretability.",
"In addition, it can be observed that the total representation strength summed across dimensions varies significantly across categories, some columns in the category weight matrix contain much higher values than others. In fact, total representation strength of a category greatly depends on its word distribution. If a particular category reflects a highly specific semantic concept with relatively few words such as the metals category, category words tend to be well clustered in the embedding space. This tight grouping of category words results in large Bhattacharya distances in most dimensions indicating stronger representation of the category. On the other hand, if words from a semantic category are weakly related, it is more difficult for the word embedding to encode their relations. In this case, word vectors are relatively more widespread in the embedding space, and this leads to smaller Bhattacharya distances indicating that the semantic category does not have a strong representation across embedding dimensions. The total representation strengths of the 110 semantic categories in SEMCAT are shown in Figure 3 , along with the baseline strength level obtained for a category composed of 91 randomly selected words where 91 is the average word count across categories in SEMCAT. The metals category has the strongest total representation among SEMCAT categories due to relatively few and well clustered words it contains, whereas the pirate category has the lowest total representation due to widespread words it contains.",
"To closely inspect the semantic structure of dimensions and categories, let us investigate the decompositions of three sample dimensions and three specific semantic categories (math, animal and tools). The left column of Figure 4 displays the categorical decomposition of the 2nd, 6th and 45th dimensions of the word embedding. While the 2nd dimension selectively represents a particular category (sciences), the 45th dimension focuses on 3 different categories (housing, rooms and sciences) and the 6th dimension has a distributed and relatively uniform representation of many different categories. These distinct distributional properties can also be observed in terms of categories as shown in the right column of Figure 4 . While only few dimensions are dominant for representing the math category, semantic encodings of the tools and animals categories are distributed across many embedding dimensions.",
"Note that these results are valid regardless of the random initialization of the GloVe algorithm while learning the embedding space. For the weights calculated for our second GloVe embedding space $\\mathcal {E}^2$ , where the only difference between $\\mathcal {E}$ and $\\mathcal {E}^2$ is the independent random initializations of the word vectors before training, we observe nearly identical decompositions for the categories ignoring the order of the dimensions (similar number of peaks and similar total representation strength; not shown)."
],
[
"If the weights in $\\mathcal {W}_B$ truly correspond to the categorical decomposition of the semantic concepts in the dense embedding space, then $\\mathcal {W}_B$ can also be considered as a transformation matrix that can be used to map word embeddings to a semantic space where each dimension is a semantic category. However, it would be erroneous to directly multiply the word embeddings with category weights. The following steps should be performed in order to map word embeddings to a semantic space where dimensions are interpretable:",
"To make word embeddings compatible in scale with the category weights, word embedding dimensions are standardized ( $\\mathcal {E}_S$ ) such that each dimension has zero mean and unit variance since category weights have been calculated based on the deviations from the general mean (second term in ( 10 )) and standard deviations (first term in ( 10 )).",
"Category weights are normalized across dimensions such that each category has a total weight of 1 ( $\\mathcal {W}_{NB}$ ). This is necessary since some columns of $\\mathcal {W}_B$ dominate others in terms of representation strength (will be discussed in Section \"Results\" in more detail). This inequality across semantic categories can cause an undesired bias towards categories with larger total weights in the new vector space. $\\ell _1$ normalization of the category weights across dimensions is performed to prevent bias.",
"Word embedding dimensions can encode semantic categories in both positive and negative directions ( $\\mu _{p_{i,j}} - \\mu _{q_{i,j}}$ can be positive or negative) that contribute equally to the Bhattacharya distance. However, since encoding directions are important for the mapping of the word embeddings, $\\mathcal {W}_{NB}$ is replaced with its signed version $\\mathcal {W}_{NSB}$ (if $\\mu _{p_{i,j}} - \\mu _{q_{i,j}}$ is negative, then $\\mathcal {W}_{NSB}(i,j) = -\\mathcal {W}_{NB}(i,j)$ , otherwise $\\mathcal {W}_{NSB}(i,j) = \\mathcal {W}_{NB}(i,j)$ ) where negative weights correspond to encoding in the negative direction.",
"Then, interpretable semantic vectors ( $\\mathcal {I}_{50000\\times 110}$ ) are obtained by multiplying $\\mathcal {E}_S$ with $\\mathcal {W}_{NSB}$ .",
"One can reasonably suggest to alternatively use the centers of the vectors of the category words as the weights for the corresponding category as given in (2). ",
"$$ \n\\mathcal {W}_C(i,j)=\\mu _{p_{i,j}}$$ (Eq. 16) ",
"A second interpretable embedding space, $\\mathcal {I}^*$ , is then obtained by simply projecting the word vectors in $\\mathcal {E}$ to the category centers. (3) and (4) show the calculation of $\\mathcal {I}$ and $\\mathcal {I}^*$ respectively. Figure 1 shows the procedure for generation of interpretable embedding spaces $\\mathcal {I}$ and $\\mathcal {I}^*$ . ",
"$$\\mathcal {I} = \\mathcal {E}_S\\mathcal {W}_{NSB} \\\\\n\\mathcal {I}^* = \\mathcal {E}\\mathcal {W}_C$$ (Eq. 17) "
],
[
" $\\mathcal {I}$ and $\\mathcal {I}^*$ are further investigated via qualitative and quantitative approaches in order to confirm that $\\mathcal {W}_B$ is a reasonable semantic decomposition of the dense word embedding dimensions, that $\\mathcal {I}$ is indeed an interpretable semantic space and that our proposed method produces better representations for the categories than their center vectors.",
"If $\\mathcal {W}_B$ and $\\mathcal {W}_C$ represent the semantic distribution of the word embedding dimensions, then columns of $\\mathcal {I}$ and $\\mathcal {I}^*$ should correspond to semantic categories. Therefore, each word vector in $\\mathcal {I}$ and $\\mathcal {I}^*$ should represent the semantic decomposition of the respective word in terms of the SEMCAT categories. To test this prediction, word vectors from the two semantic spaces ( $\\mathcal {I}$ and $\\mathcal {I}^*$ ) are qualitatively investigated.",
"To compare $\\mathcal {I}$ and $\\mathcal {I}^*$ , we also define a quantitative test that aims to measure how well the category weights represent the corresponding categories. Since weights are calculated directly using word vectors, it is natural to expect that words should have high values in dimensions that correspond to the categories they belong to. However, using words that are included in the categories for investigating the performance of the calculated weights is similar to using training accuracy to evaluate model performance in machine learning. Using validation accuracy is more adequate to see how well the model generalizes to new, unseen data that, in our case, correspond to words that do not belong to any category. During validation, we randomly select 60% of the words for training and use the remaining 40% for testing for each category. From the training words we obtain the weight matrix $\\mathcal {W}_B$ using Bhattacharya distance and the weight matrix $\\mathcal {W}_C$ using the category centers. We select the largest $k$ weights ( $k \\in \\lbrace 5,7,10,15,25,50,100,200,300\\rbrace $ ) for each category (i.e. largest $k$ elements for each column of $\\mathcal {W}_B$ and $\\mathcal {W}_C$ ) and replace the other weights with 0 that results in sparse category weight matrices ( $\\mathcal {W}_B^s$ and $\\mathcal {I}^*$0 ). Then projecting dense word vectors onto the sparse weights from $\\mathcal {I}^*$1 and $\\mathcal {I}^*$2 , we obtain interpretable semantic spaces $\\mathcal {I}^*$3 and $\\mathcal {I}^*$4 . Afterwards, for each category, we calculate the percentages of the unseen test words that are among the top $\\mathcal {I}^*$5 , $\\mathcal {I}^*$6 and $\\mathcal {I}^*$7 words (excluding the training words) in their corresponding dimensions in the new spaces, where $\\mathcal {I}^*$8 is the number of test words that varies across categories. We calculate the final accuracy as the weighted average of the accuracies across the dimensions in the new spaces, where the weighting is proportional to the number of test words within the categories. We repeat the same procedure for 10 independent random selections of the training words.",
"A representative investigation of the semantic space $\\mathcal {I}$ is presented in Figure 5 , where semantic decompositions of 4 different words, window, bus, soldier and article, are displayed using 20 dimensions of $\\mathcal {I}$ with largest values for each word. These words are expected to have high values in the dimensions that encode the categories to which they belong. However, we can clearly see from Figure 5 that additional categories such as jobs, people, pirate and weapons that are semantically related to soldier but that do not contain the word also have high values. Similar observations can be made for window, bus, and article supporting the conclusion that the category weight spread broadly to many non-category words.",
"Figure 6 presents the semantic decompositions of the words window, bus, soldier and article obtained form $\\mathcal {I}^*$ that is calculated using the category centers. Similar to the distributions obtained in $\\mathcal {I}$ , words have high values for semantically-related categories even when these categories do not contain the words. In contrast to $\\mathcal {I}$ , however, scores for words are much more uniformly distributed across categories, implying that this alternative approach is less discriminative for categories than the proposed method.",
"To quantitatively compare $\\mathcal {I}$ and $\\mathcal {I}^*$ , category word retrieval test is applied and the results are presented in Figure 7 . As depicted in Figure 7 , the weights calculated using our method ( $\\mathcal {W}_B$ ) significantly outperform the weights from the category centers ( $\\mathcal {W}_C$ ). It can be noticed that, using only 25 largest weights from $\\mathcal {W}_B$ for each category ( $k = 25$ ) yields higher accuracy in word retrieval compared to the alternative $\\mathcal {W}_C$ with any $k$ . This result confirms the prediction that the vectors that we obtain for each category (i.e. columns of $\\mathcal {W}_B$ ) distinguish categories better than their average vectors (i.e. columns of $\\mathcal {W}_C$ )."
],
[
"In addition to investigating the semantic distribution in the embedding space, a word category dataset can be also used to quantify the interpretability of the word embeddings. In several studies, BIBREF21 , BIBREF22 , BIBREF20 , interpretability is evaluated using the word intrusion test. In the word intrusion test, for each embedding dimension, a word set is generated including the top 5 words in the top ranks and a noisy word (intruder) in the bottom ranks of that dimension. The intruder is selected such that it is in the top ranks of a separate dimension. Then, human editors are asked to determine the intruder word within the generated set. The editors' performances are used to quantify the interpretability of the embedding. Although evaluating interpretability based on human judgements is an effective approach, word intrusion is an expensive method since it requires human effort for each evaluation. Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions, instead it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions.",
"We propose a framework that addresses both of these issues by providing automated, continuous valued evaluations of interpretability while keeping the basis of the evaluations as human judgements. The basic idea behind our framework is that humans interpret dimensions by trying to group the most distinctive words in the dimensions (i.e. top or bottom rank words), an idea also leveraged by the word intrusion test. Based on this key idea, it can be noted that if a dataset represents all the possible groups humans can form, instead of relying on human evaluations, one can simply check whether the distinctive words of the embedding dimensions are present together in any of these groups. As discussed earlier, the number of groups humans can form is theoretically unbounded, therefore it is not possible to compile an all-comprehensive dataset for all potential groups. However, we claim that a dataset with a sufficiently large number of categories can still provide a good approximation to human judgements. Based on this argument, we propose a simple method to quantify the interpretability of the embedding dimensions.",
"We define two interpretability scores for an embedding dimension-category pair as: ",
"$$ \n\\begin{split}\nIS^+_{i,j}=\\frac{|S_j \\cap V^+_i(\\lambda \\times n_j)|}{n_j} \\times 100 \\\\\nIS^-_{i,j}=\\frac{|S_j \\cap V^-_i(\\lambda \\times n_j)|}{n_j} \\times 100\n\\end{split}$$ (Eq. 23) ",
"where $IS^+_{i,j}$ is the interpretability score for the positive direction and $IS^-_{i,j}$ is the interpretability score for the negative direction for the $i^{th}$ dimension ( $i \\in \\lbrace 1,2,...,D\\rbrace $ where $D$ is the dimensionality of the embedding) and $j^{th}$ category ( $j \\in \\lbrace 1,2,...,K\\rbrace $ where $K$ is the number of categories in the dataset). $S_j$ is the set representing the words in the $j^{th}$ category, $IS^-_{i,j}$0 is the number of the words in the $IS^-_{i,j}$1 category and $IS^-_{i,j}$2 , $IS^-_{i,j}$3 refer to the distinctive words located at the top and bottom ranks of the $IS^-_{i,j}$4 embedding dimension, respectively. $IS^-_{i,j}$5 is the number of words taken from the upper and bottom ranks where $IS^-_{i,j}$6 is the parameter determining how strict the interpretability definition is. The smallest value for $IS^-_{i,j}$7 is 1 that corresponds to the most strict definition and larger $IS^-_{i,j}$8 values relax the definition by increasing the range for selected category words. $IS^-_{i,j}$9 is the intersection operator between category words and top and bottom ranks words, $i^{th}$0 is the cardinality operator (number of elements) for the intersecting set.",
"We take the maximum of scores in the positive and negative directions as the overall interpretability score for a category ( $IS_{i,j}$ ). The interpretability score of a dimension is then taken as the maximum of individual category interpretability scores across that dimension ( $IS_{i}$ ). Finally, we calculate the overall interpretability score of the embedding ( $IS$ ) as the average of the dimension interpretability scores: ",
"$$ \n\\begin{split}\nIS_{i,j} &= \\max (IS^+_{i,j}, IS^-_{i,j}) \\\\\nIS_{i} &= \\max _{j} IS_{i,j} \\\\\nIS &= \\frac{1}{D}\\sum \\limits _{i=1}^D IS_{i}\n\\end{split}$$ (Eq. 24) ",
"We test our method on the GloVe embedding space, on the semantic spaces $\\mathcal {I}$ and $\\mathcal {I}^*$ , and on a random space where word vectors are generated by randomly sampling from a zero mean, unit variance normal distribution. Interpretability scores for the random space are taken as our baseline. We measure the interpretability scores as $\\lambda $ values are varied from 1 (strict interpretability) to 10 (relaxed interpretability).",
"Our interpretability measurements are based on our proposed dataset SEMCAT, which was designed to be a comprehensive dataset that contains a diverse set of word categories. Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used. In general, two main properties of the dataset can affect the results: category selection and within-category word selection. To examine the effects of these properties on interpretability evaluations, we create alternative datasets by varying both category selection and word selection for SEMCAT. Since SEMCAT is comprehensive in terms of the words it contains for the categories, these datasets are created by subsampling the categories and words included in SEMCAT. Since random sampling of words within a category may perturb the capacity of the dataset in reflecting human judgement, we subsample r% of the words that are closest to category centers within each category, where $r \\in \\lbrace 40,60,80,100\\rbrace $ . To examine the importance of number of categories in the dataset we randomly select $m$ categories from SEMCAT where $m \\in \\lbrace 30,50,70,90,110\\rbrace $ . We repeat the selection 10 times independently for each $m$ .",
"Figure 8 displays the interpretability scores of the GloVe embedding, $\\mathcal {I}$ , $\\mathcal {I}^*$ and the random embedding for varying $\\lambda $ values. $\\lambda $ can be considered as a design parameter adjusted according to the interpretability definition. Increasing $\\lambda $ relaxes the interpretability definition by allowing category words to be distributed on a wider range around the top ranks of a dimension. We propose that $\\lambda = 5$ is an adequate choice that yields a similar evaluation to measuring the top-5 error in category word retrieval tests. As clearly depicted, semantic space $\\mathcal {I}$ is significantly more interpretable than the GloVe embedding as justified in Section \"Validation\" . We can also see that interpretability score of the GloVe embedding is close to the random embedding representing the baseline interpretability level.",
"Interpretability scores for datasets constructed by sub-sampling SEMCAT are given in Table 3 for the GloVe, $\\mathcal {I}$ , $\\mathcal {I}^*$ and random embedding spaces for $\\lambda = 5$ . Interpretability scores for all embeddings increase as the number of categories in the dataset increase (30, 50, 70, 90, 110) for each category coverage (40%, 60%, 80%, 100%). This is expected since increasing the number of categories corresponds to taking into account human interpretations more substantially during evaluation. One can further argue that true interpretability scores of the embeddings (i.e. scores from an all-comprehensive dataset) should be even larger than those presented in Table 3 . However, it can also be noticed that the increase in the interpretability scores of the GloVe and random embedding spaces gets smaller for larger number of categories. Thus, there is diminishing returns to increasing number of categories in terms of interpretability. Another important observation is that the interpretability scores of $\\mathcal {I}$ and $\\mathcal {I}^*$ are more sensitive to number of categories in the dataset than the GloVe or random embeddings. This can be attributed to the fact that $\\mathcal {I}$ and $\\mathcal {I}^*$ comprise dimensions that correspond to SEMCAT categories, and that inclusion or exclusion of these categories more directly affects interpretability.",
"In contrast to the category coverage, the effects of within-category word coverage on interpretability scores can be more complex. Starting with few words within each category, increasing the number of words is expected to more uniformly sample from the word distribution, more accurately reflect the semantic relations within each category and thereby enhance interpretability scores. However, having categories over-abundant in words might inevitably weaken semantic correlations among them, reducing the discriminability of the categories and interpretability of the embedding. Table 3 shows that, interestingly, changing the category coverage has different effects on the interpretability scores of different types of embeddings. As category word coverage increases, interpretability scores for random embedding gradually decrease while they monotonically increase for the GloVe embedding. For semantic spaces $\\mathcal {I}$ and $\\mathcal {I}^*$ , interpretability scores increase as the category coverage increases up to 80 $\\%$ of that of SEMCAT, then the scores decrease. This may be a result of having too comprehensive categories as argued earlier, implying that categories with coverage of around 80 $\\%$ of SEMCAT are better suited for measuring interpretability. However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories."
],
[
"In this paper, we propose a statistical method to uncover the latent semantic structure in dense word embeddings. Based on a new dataset (SEMCAT) we introduce that contains more than 6,500 words semantically grouped under 110 categories, we provide a semantic decomposition of the word embedding dimensions and verify our findings using qualitative and quantitative tests. We also introduce a method to quantify the interpretability of word embeddings based on SEMCAT that can replace the word intrusion test that relies heavily on human effort while keeping the basis of the interpretations as human judgements.",
"Our proposed method to investigate the hidden semantic structure in the embedding space is based on calculation of category weights using a Bhattacharya distance metric. This metric implicitly assumes that the distribution of words within each embedding dimension is normal. Our statistical assessments indicate that the GloVe embedding space considered here closely follows this assumption. In applications where the embedding method yields distributions that significantly deviate from a normal distribution, nonparametric distribution metrics such as Spearman's correlation could be leveraged as an alternative. The resulting category weights can seamlessly be input to the remaining components of our framework.",
"Since our proposed framework for measuring interpretability depends solely on the selection of the category words dataset, it can be used to directly compare different word embedding methods (e.g., GloVe, word2vec, fasttext) in terms of the interpretability of the resulting embedding spaces. A straightforward way to do this is to compare the category weights calculated for embedding dimensions across various embedding spaces. Note, however, that the Bhattacharya distance metric for measuring the category weights does not follow a linear scale and is unbounded. For instance, consider a pair of embeddings with category weights 10 and 30 versus another pair with weights 30 and 50. For both pairs, the latter embedding can be deemed more interpretable than the former. Yet, due to the gross nonlinearity of the distance metric, it is challenging to infer whether a 20-unit improvement in the category weights corresponds to similar levels of improvement in interpretability across the two pairs. To alleviate these issues, here we propose an improved method that assigns normalized interpretability scores with an upper bound of 100%. This method facilitates interpretability assessments and comparisons among separate embedding spaces.",
"The results reported in this study for semantic analysis and interpretability assessment of embeddings are based on SEMCAT. SEMCAT contains 110 different semantic categories where average number of words per category is 91 rendering SEMCAT categories quite comprehensive. Although the HyperLex dataset contains a relatively larger number of categories (1399), the average number of words per category is only 2, insufficient to accurately represent semantic categories. Furthermore, while HyperLex categories are constructed based on a single type of relation among words (hyperonym-hyponym), SEMCAT is significantly more comprehensive since many categories include words that are grouped based on diverse types of relationships that go beyond hypernym-hyponym relations. Meanwhile, the relatively smaller number of categories in SEMCAT is not considered a strong limitation, as our analyses indicate that the interpretability levels exhibit diminishing returns when the number of categories in the dataset are increased and SEMCAT is readily yielding near optimal performance. That said, extended datasets with improved coverage and expert labeling by multiple observers would further improve the reliability of the proposed approach. To do this, a synergistic merge with existing lexical databases such as WordNet might prove useful.",
"Methods for learning dense word embeddings remain an active area of NLP research. The framework proposed in this study enables quantitative assessments on the intrinsic semantic structure and interpretability of word embeddings. Providing performance improvements in other common NLP tasks might be a future study. Therefore, the proposed framework can be a valuable tool in guiding future research on obtaining interpretable yet effective embedding spaces for many NLP tasks that critically rely on semantic information. For instance, performance evaluation of more interpretable word embeddings on higher level NLP tasks (i.e. sentiment analysis, named entity recognition, question answering) and the relation between interpretability and NLP performance can be worthwhile."
],
[
"We thank the anonymous reviewers for their constructive and helpful comments that have significantly improved our paper.",
"This work was supported in part by a European Molecular Biology Organization Installation Grant (IG 3028), by a TUBA GEBIP fellowship, and by a BAGEP 2017 award of the Science Academy."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methods",
"Dataset",
"Semantic Decomposition",
"Interpretable Word Vector Generation",
"Validation",
"Measuring Interpretability",
"Discussion and Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"e2af3b4e9614ebdec03f37f1ef4e059685d1e2f8"
],
"answer": [
{
"evidence": [
"Our interpretability measurements are based on our proposed dataset SEMCAT, which was designed to be a comprehensive dataset that contains a diverse set of word categories. Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used. In general, two main properties of the dataset can affect the results: category selection and within-category word selection. To examine the effects of these properties on interpretability evaluations, we create alternative datasets by varying both category selection and word selection for SEMCAT. Since SEMCAT is comprehensive in terms of the words it contains for the categories, these datasets are created by subsampling the categories and words included in SEMCAT. Since random sampling of words within a category may perturb the capacity of the dataset in reflecting human judgement, we subsample r% of the words that are closest to category centers within each category, where $r \\in \\lbrace 40,60,80,100\\rbrace $ . To examine the importance of number of categories in the dataset we randomly select $m$ categories from SEMCAT where $m \\in \\lbrace 30,50,70,90,110\\rbrace $ . We repeat the selection 10 times independently for each $m$ .",
"In contrast to the category coverage, the effects of within-category word coverage on interpretability scores can be more complex. Starting with few words within each category, increasing the number of words is expected to more uniformly sample from the word distribution, more accurately reflect the semantic relations within each category and thereby enhance interpretability scores. However, having categories over-abundant in words might inevitably weaken semantic correlations among them, reducing the discriminability of the categories and interpretability of the embedding. Table 3 shows that, interestingly, changing the category coverage has different effects on the interpretability scores of different types of embeddings. As category word coverage increases, interpretability scores for random embedding gradually decrease while they monotonically increase for the GloVe embedding. For semantic spaces $\\mathcal {I}$ and $\\mathcal {I}^*$ , interpretability scores increase as the category coverage increases up to 80 $\\%$ of that of SEMCAT, then the scores decrease. This may be a result of having too comprehensive categories as argued earlier, implying that categories with coverage of around 80 $\\%$ of SEMCAT are better suited for measuring interpretability. However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories."
],
"extractive_spans": [],
"free_form_answer": "can be biased by dataset used and may generate categories which are suboptimal compared to human designed categories",
"highlighted_evidence": [
"Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used.",
"However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"61b5d15624527c8371636f185dad980c601296cf"
],
"answer": [
{
"evidence": [
"In the word embedding literature, the problem of interpretability has been approached via several different routes. For learning sparse, interpretable word representations from co-occurrence variant matrices, BIBREF21 suggested algorithms based on non-negative matrix factorization (NMF) and the resulting representations are called non-negative sparse embeddings (NNSE). To address memory and scale issues of the algorithms in BIBREF21 , BIBREF22 proposed an online method of learning interpretable word embeddings. In both studies, interpretability was evaluated using a word intrusion test introduced in BIBREF20 . The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. As an alternative method to incorporate human judgement, BIBREF23 proposed joint non-negative sparse embedding (JNNSE), where the aim is to combine text-based similarity information among words with brain activity based similarity information to improve interpretability. Yet, this approach still requires labor-intensive collection of neuroimaging data from multiple subjects.",
"In addition to investigating the semantic distribution in the embedding space, a word category dataset can be also used to quantify the interpretability of the word embeddings. In several studies, BIBREF21 , BIBREF22 , BIBREF20 , interpretability is evaluated using the word intrusion test. In the word intrusion test, for each embedding dimension, a word set is generated including the top 5 words in the top ranks and a noisy word (intruder) in the bottom ranks of that dimension. The intruder is selected such that it is in the top ranks of a separate dimension. Then, human editors are asked to determine the intruder word within the generated set. The editors' performances are used to quantify the interpretability of the embedding. Although evaluating interpretability based on human judgements is an effective approach, word intrusion is an expensive method since it requires human effort for each evaluation. Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions, instead it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions."
],
"extractive_spans": [],
"free_form_answer": "it is less expensive and quantifies interpretability using continuous values rather than binary evaluations",
"highlighted_evidence": [
"The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension.",
"Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions, instead it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are the weaknesses of their proposed interpretability quantification method?",
"What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?"
],
"question_id": [
"551f77b58c48ee826d78b4bf622bb42b039eca8c",
"74cd51a5528c6c8e0b634f3ad7a9ce366dfa5706"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"interpretability",
"interpretability"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"TABLE I SUMMARY STATISTICS OF SEMCAT AND HYPERLEX",
"TABLE II TEN SAMPLE WORDS FROM EACH OF THE SIX REPRESENTATIVE SEMCAT CATEGORIES",
"Fig. 1. Flow chart for the generation of the interpretable embedding spaces I and I∗. First, word vectors are obtained using the GloVe algorithm on Wikipedia corpus. To obtain I∗, weight matrix WC is generated by calculating the means of the words from each category for each embedding dimension and then WC is multiplied by the embedding matrix (see Section III-C). To obtain I, weight matrix WB is generated by calculating the Bhattacharya distance between category words and remaining vocabulary for each category and dimension. Then, WB is normalized (see Section III-C item 2), sign corrected (see Section III-C item 3), and finally multiplied with standardized word embedding (Es , see Section III-C item 1).",
"Fig. 3. Total representation strengths of 110 semantic categories from SEMCAT. Bhattacharya distance scores are summed across dimensions and then sorted. Red horizontal line represents the baseline strength level obtained for a category composed of 91 randomly selected words from the vocabulary (where 91 is the average word count across categories in SEMCAT). The metals category has the strongest total representation among SEMCAT categories due to relatively few and well clustered words it contains, while the pirate category has the lowest total representation due to widespread words it contains.",
"Fig. 2. Semantic category weights (WB 300×110) for 110 categories and 300 embedding dimensions obtained using Bhattacharya distance. Weights vary between 0 (represented by black) and 0.63 (represented by white). It can be noticed that some dimensions represent larger number of categories than others do and also some categories are represented strongly by more dimensions than others.",
"Fig. 4. Categorical decompositions of the 2nd, 6th, and 45th word embedding dimensions are given in the left column. A dense word embedding dimension may focus on a single category (top row), may represent a few different categories (bottom row) or may represent many different categories with low strength (middle row). Dimensional decompositions of the math, animal, and tools categories are shown in the right column. Semantic information about a category may be encoded in a few word embedding dimensions (top row) or it can be distributed across many of the dimensions (bottom row).",
"Fig. 5. Semantic decompositions of thewordswindow, bus, soldier, and article for 20 highest scoring SEMCAT categories obtained from vectors in I. Red bars indicate the categories that contain the word, blue bars indicate the categories that do not contain the word.",
"Fig. 6. Categorical decompositions of the words window, bus, soldier, and article for 20 highest scoring categories obtained from vectors in I∗. Red bars indicate the categories that contain the word, blue bars indicate the categories that do not contain the word.",
"Fig. 7. Category word retrieval performances for top n, 3n, and 5n words where n is the number of test words varying across categories. Category weights obtained using Bhattacharya distance represent categories better than the center of the category words. Using only 25 largest weights fromWB for each category (k = 25) gives better performance than using category centers with any k (shown with dashed line).",
"Fig. 8. Interpretability scores for GloVe, I, I∗ and random embeddings for varying λ values where λ is the parameter determining how strict the interpretability definition is (λ = 1 is the most strict definition, λ = 10 is a relaxed definition). Semantic spaces I and I∗ are significantly more interpretable than GloVe as expected. I outperforms I∗ suggesting that weights calculated with our proposedmethodmore distinctively represent categories as opposedweights calculated as the category centers. Interpretability scores of Glove are close to the baseline (Random) implying that the dense word embedding has poor interpretability.",
"TABLE III AVERAGE INTERPRETABILITY SCORES (%) FOR λ = 5"
],
"file": [
"3-TableI-1.png",
"3-TableII-1.png",
"6-Figure1-1.png",
"7-Figure3-1.png",
"7-Figure2-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png",
"8-Figure6-1.png",
"9-Figure7-1.png",
"9-Figure8-1.png",
"10-TableIII-1.png"
]
} | [
"What are the weaknesses of their proposed interpretability quantification method?",
"What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?"
] | [
[
"1711.00331-Measuring Interpretability-8",
"1711.00331-Measuring Interpretability-11"
],
[
"1711.00331-Related Work-0",
"1711.00331-Measuring Interpretability-0"
]
] | [
"can be biased by dataset used and may generate categories which are suboptimal compared to human designed categories",
"it is less expensive and quantifies interpretability using continuous values rather than binary evaluations"
] | 626 |
1707.06939 | Autocompletion interfaces make crowd workers slower, but their use promotes response diversity | Creative tasks such as ideation or question proposal are powerful applications of crowdsourcing, yet the quantity of workers available for addressing practical problems is often insufficient. To enable scalable crowdsourcing thus requires gaining all possible efficiency and information from available workers. One option for text-focused tasks is to allow assistive technology, such as an autocompletion user interface (AUI), to help workers input text responses. But support for the efficacy of AUIs is mixed. Here we designed and conducted a randomized experiment where workers were asked to provide short text responses to given questions. Our experimental goal was to determine if an AUI helps workers respond more quickly and with improved consistency by mitigating typos and misspellings. Surprisingly, we found that neither occurred: workers assigned to the AUI treatment were slower than those assigned to the non-AUI control and their responses were more diverse, not less, than those of the control. Both the lexical and semantic diversities of responses were higher, with the latter measured using word2vec. A crowdsourcer interested in worker speed may want to avoid using an AUI, but using an AUI to boost response diversity may be valuable to crowdsourcers interested in receiving as much novel information from workers as possible. | {
"paragraphs": [
[
"Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.",
"Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.",
"One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what are the best approaches or general strategies to achieve efficiency gains for creative crowdsourcing tasks.",
"In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing is completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group."
],
[
"An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.",
"One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find workers given smaller microtasks were slower but achieve higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks, and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.",
"The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.",
"It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12 . However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13 . BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15 , but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks."
],
[
"Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis."
],
[
"We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.",
"After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair ( INLINEFORM0 ) was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings."
],
[
"We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay may be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower.",
"We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.",
"To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers as learning occurred for both groups: Among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to faster responses those of the control."
],
[
"We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.",
"To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.",
"Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).",
"Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporate the semantic meanings of words and phrases by learning vector representations that are embedded into a high-dimensional vector space BIBREF18 , BIBREF19 . Vector operations within this space such as addition and subtraction are capable of representing meaning and interrelationships between words BIBREF19 . For example, the vector INLINEFORM0 is very close to the vector INLINEFORM1 , indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question—a lower similarity implies more semantically diverse answers. Specifically, for a given question INLINEFORM2 , we concatenated all responses to that question into a single document INLINEFORM3 , and averaged the vector similarities INLINEFORM4 of all pairs of words INLINEFORM5 in INLINEFORM6 , where INLINEFORM7 is the word vector corresponding to word INLINEFORM8 : DISPLAYFORM0 ",
"where INLINEFORM0 if INLINEFORM1 and zero otherwise. We also excluded from EQREF21 any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity INLINEFORM2 we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity INLINEFORM3 (and are thus collectively more semantically diverse) when considering AUI responses as the document INLINEFORM4 than when INLINEFORM5 came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ).",
"Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave."
],
[
"Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality."
],
[
"We have showed via a randomized control trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI was not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.",
"A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.",
"One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.",
"We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text inputed by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most owere found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI which is most quickly accessible to the worker.",
"Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces."
],
[
"We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634."
]
],
"section_name": [
"Introduction",
"Related Work",
"Experimental design",
"Data collection",
"Differences in response time",
"Differences in response diversity",
"No difference in response quality",
"Discussion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"a714d3333794add2cd7a3694afd8b49abd4c681f"
],
"answer": [
{
"evidence": [
"We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017."
],
"extractive_spans": [
"conceptualization task"
],
"free_form_answer": "",
"highlighted_evidence": [
"We recruited 176 AMT workers to participate in our conceptualization task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9210f4c7f6b9c40ddd451006f373acbeccc40d8d"
],
"answer": [
{
"evidence": [
"To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.",
"Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B)."
],
"extractive_spans": [],
"free_form_answer": "By computing number of unique responses and number of responses divided by the number of unique responses to that question for each of the questions",
"highlighted_evidence": [
"To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question.",
"Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"623413efd9bb6d12635f0668cd86c32e8e83c4d3"
],
"answer": [
{
"evidence": [
"We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017."
],
"extractive_spans": [
"1001"
],
"free_form_answer": "",
"highlighted_evidence": [
"These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a2e35b02bd9ed0bea48e7924424093b55db25dbf"
],
"answer": [
{
"evidence": [
"We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017."
],
"extractive_spans": [
"AMT"
],
"free_form_answer": "",
"highlighted_evidence": [
"We recruited 176 AMT workers to participate in our conceptualization task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What was the task given to workers?",
"How was lexical diversity measured?",
"How many responses did they obtain?",
"What crowdsourcing platform was used?"
],
"question_id": [
"84d36bca06786070e49d3db784e42a51dd573d36",
"7af01e2580c332e2b5e8094908df4e43a29c8792",
"c78f18606524539e4c573481e5bf1e0a242cc33c",
"0cf6d52d7eafd43ff961377572bccefc29caf612"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1. Screenshots of our conceptualization task interface. The presence of the AUI is the only difference between the task interfaces.",
"Table 1. Question terms used in our conceptualization task. Workers were shown these questions in random order.",
"Figure 2. Distributions of time delays. Workers in the AUI treatment were significantly slower than in the control, and this was primarily due to the submission delay between when they finished entering text and when they submitted their response.",
"Figure 3. Workers became faster as they gained experience by answering more questions, but this improvement occurred in both Control and AUI groups.",
"Figure 4. AUI workers had more lexically (A, B) and semantically (C) diverse responses than Control workers.",
"Figure 5. Quality of responses. All question-response pairs were rated independently by workers on a 1-5 scale of perceived quality (1–lowest quality, 5–highest quality).",
"Figure 6. Inferred positions of AUI selections based on the last text workers in the AUI group typed before choosing from the AUI. (A) Most submitted AUI responses appeared in the AUI. (B) Among the responses appearing in the AUI, the reconstructed positions of those responses tended to be at the top of the AUI, in the most prominent, accessible area."
],
"file": [
"4-Figure1-1.png",
"4-Table1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png",
"11-Figure5-1.png",
"13-Figure6-1.png"
]
} | [
"How was lexical diversity measured?"
] | [
[
"1707.06939-Differences in response diversity-1",
"1707.06939-Differences in response diversity-2"
]
] | [
"By computing number of unique responses and number of responses divided by the number of unique responses to that question for each of the questions"
] | 628 |
1805.04033 | Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency | Abstractive text summarization is a highly difficult problem, and the sequence-to-sequence model has shown success in improving the performance on the task. However, the generated summaries are often inconsistent with the source content in semantics. In such cases, when generating summaries, the model selects semantically unrelated words with respect to the source content as the most probable output. The problem can be attributed to heuristically constructed training data, where summaries can be unrelated to the source content, thus containing semantically unrelated words and spurious word correspondence. In this paper, we propose a regularization approach for the sequence-to-sequence model and make use of what the model has learned to regularize the learning objective to alleviate the effect of the problem. In addition, we propose a practical human evaluation method to address the problem that the existing automatic evaluation method does not evaluate the semantic consistency with the source content properly. Experimental results demonstrate the effectiveness of the proposed approach, which outperforms almost all the existing models. Especially, the proposed approach improves the semantic consistency by 4\% in terms of human evaluation. | {
"paragraphs": [
[
"Abstractive test summarization is an important text generation task. With the applying of the sequence-to-sequence model and the publication of large-scale datasets, the quality of the automatic generated summarization has been greatly improved BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . However, the semantic consistency of the automatically generated summaries is still far from satisfactory.",
"The commonly-used large-scale datasets for deep learning models are constructed based on naturally-annotated data with heuristic rules BIBREF1 , BIBREF3 , BIBREF4 . The summaries are not written for the source content specifically. It suggests that the provided summary may not be semantically consistent with the source content. For example, the dataset for Chinese social media text summarization, namely LCSTS, contains more than 20% text-summary pairs that are not related, according to the statistics of the manually checked data BIBREF1 .",
"Table TABREF1 shows an example of semantic inconsistency. Typically, the reference summary contains extra information that cannot be understood from the source content. It is hard to conclude the summary even for a human. Due to the inconsistency, the system cannot extract enough information in the source text, and it would be hard for the model to learn to generate the summary accordingly. The model has to encode spurious correspondence of the summary and the source content by memorization. However, this kind of correspondence is superficial and is not actually needed for generating reasonable summaries. Moreover, the information is harmful to generating semantically consistent summaries, because unrelated information is modeled. For example, the word UTF8gbsn“利益” (benefits) in the summary is not related to the source content. Thus, it has to be remembered by the model, together with the source content. However, this correspondence is spurious, because the word UTF8gbsn“利益” is not related to any word in the source content. In the following, we refer to this problem as Spurious Correspondence caused by the semantically inconsistent data. In this work, we aim to alleviate the impact of the semantic inconsistency of the current dataset. Based on the sequence-to-sequence model, we propose a regularization method to heuristically show down the learning of the spurious correspondence, so that the unrelated information in the dataset is less represented by the model. We incorporate a new soft training target to achieve this goal. For each output time in training, in addition to the gold reference word, the current output also targets at a softened output word distribution that regularizes the current output word distribution. In this way, a more robust correspondence of the source content and the output words can be learned, and potentially, the output summary will be more semantically consistent. To obtain the softened output word distribution, we propose two methods based on the sequence-to-sequence model.",
"More detailed explanation is introduced in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation task, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. The contributions of this work are summarized as follows:"
],
[
"Base on the fact that the spurious correspondence is not stable and its realization in the model is prone to change, we propose to alleviate the issue heuristically by regularization. We use the cross-entropy with an annealed output distribution as the regularization term in the loss so that the little fluctuation in the distribution will be depressed and more robust and stable correspondence will be learned. By correspondence, we mean the relation between (a) the current output, and (b) the source content and the partially generated output. Furthermore, we propose to use an additional output layer to generate the annealed output distribution. Due to the same fact, the two output layers will differ more in the words that superficially co-occur, so that the output distribution can be better regularized."
],
[
"Typically, in the training of the sequence-to-sequence model, only the one-hot hard target is used in the cross-entropy based loss function. For an example in the training set, the loss of an output vector is DISPLAYFORM0 ",
"where INLINEFORM0 is the output vector, INLINEFORM1 is the one-hot hard target vector, and INLINEFORM2 is the number of labels. However, as INLINEFORM3 is the one-hot vector, all the elements are zero except the one representing the correct label. Hence, the loss becomes DISPLAYFORM0 ",
"where INLINEFORM0 is the index of the correct label. The loss is then summed over the output sentences and across the minibatch and used as the source error signal in the backpropagation. The hard target could cause several problems in the training. Soft training methods try to use a soft target distribution to provide a generalized error signal to the training. For the summarization task, a straight-forward way would be to use the current output vector as the soft target, which contains the knowledge learned by the current model, i.e., the correspondence of the source content and the current output word: DISPLAYFORM0 ",
"Then, the two losses are combined as the new loss function: DISPLAYFORM0 ",
"where INLINEFORM0 is the index of the true label and INLINEFORM1 is the strength of the soft training loss. We refer to this approach as Self-Train (The left part of Figure FIGREF6 ). The output of the model can be seen as a refined supervisory signal for the learning of the model. The added loss promotes the learning of more stable correspondence. The output not only learns from the one-hot distribution but also the distribution generated by the model itself. However, during the training, the output of the neural network can become too close to the one-hot distribution. To solve this, we make the soft target the soften output distribution. We apply the softmax with temperature INLINEFORM2 , which is computed by DISPLAYFORM0 ",
"This transformation keeps the relative order of the labels, and a higher temperature will make the output distributed more evenly. The key motivation is that if the model is still not confident how to generate the current output word under the supervision of the reference summary, it means the correspondence can be spurious and the reference output is unlikely to be concluded from the source content. It makes no sense to force the model to learn such correspondence. The regularization follows that motivation, and in such case, the error signal will be less significant compared to the one-hot target. In the case where the model is extremely confident how to generate the current output, the annealed distribution will resemble the one-hot target. Thus, the regularization is not effective. In all, we make use of the model itself to identify the spurious correspondence and then regularize the output distribution accordingly."
],
[
"However, the aforementioned method tries to regularize the output word distribution based on what it has already learned. The relative order of the output words is kept. The self-dependency may not be desirable for regularization. It may be better if more correspondence that is spurious can be identified. In this paper, we further propose to obtain the soft target from a different view of the model, so that different knowledge of the dataset can be used to mitigate the overfitting problem. An additional output layer is introduced to generate the soft target. The two output layers share the same hidden representation but have independent parameters. They could learn different knowledge of the data. We refer to this approach as Dual-Train. For clarity, the original output layer is denoted by INLINEFORM0 and the new output layer INLINEFORM1 . Their outputs are denoted by INLINEFORM2 and INLINEFORM3 , respectively. The output layer INLINEFORM4 acts as the original output layer. We apply soft training using the output from INLINEFORM5 to this output layer to increase its ability of generalization. Suppose the correct label is INLINEFORM6 . The target of the output INLINEFORM7 includes both the one-hot distribution and the distribution generated from INLINEFORM8 : DISPLAYFORM0 ",
"The new output layer INLINEFORM0 is trained normally using the originally hard target. This output layer is not used in the prediction, and its only purpose is to generate the soft target to facilitate the soft training of INLINEFORM1 . Suppose the correct label is INLINEFORM2 . The target of the output INLINEFORM3 includes only the one-hot distribution: DISPLAYFORM0 ",
"Because of the random initialization of the parameters in the output layers, INLINEFORM0 and INLINEFORM1 could learn different things. The diversified knowledge is helpful when dealing with the spurious correspondence in the data. It can also be seen as an online kind of ensemble methods. Several different instances of the same model are softly aggregated into one to make classification. The right part of Figure FIGREF6 shows the architecture of the proposed Dual-Train method."
],
[
"We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt."
],
[
"Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs."
],
[
"We use the sequence-to-sequence model BIBREF10 with attention BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 as the Baseline. Both the encoder and decoder are based on the single layer LSTM BIBREF15 . The word embedding size is 400, and the hidden state size of the LSTM unit is 500. We conduct experiments on the word level. To convert the character sequences into word sequences, we use Jieba to segment the words, the same with the existing work BIBREF1 , BIBREF6 . Self-Train and Dual-Train are implemented based on the baseline model, with two more hyper-parameters, the temperature INLINEFORM0 and the soft training strength INLINEFORM1 . We use a very simple setting for all tasks, and set INLINEFORM2 , INLINEFORM3 . We pre-train the model without applying the soft training objective for 5 epochs out of total 10 epochs. We use the Adam optimizer BIBREF16 for all the tasks, using the default settings with INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . In testing, we use beam search to generate the summaries, and the beam size is set to 5. We report the test results at the epoch that achieves the best score on the development set."
],
[
"For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, the ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word match cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely intelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the reference, the number of the pairs needs to be manually evaluated is four times the number of the pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators."
],
[
"First, we show the results for human evaluation, which focuses on the semantic consistency of the summary with its source content. We evaluate the systems implemented by us as well as the reference. We cannot conduct human evaluations for the existing systems from other work, because the output summaries needed are not available for us. Besides, the baseline system we implemented is very competitive in terms of ROUGE and achieves better performance than almost all the existing systems. The results are listed in Table TABREF24 . It is surprising to see that the accuracy of the reference summaries does not reach 100%. It means that the test set still contains text-summary pairs of poor quality even after removing the pairs with relevance scores lower than 3 as suggested by the authors of the dataset. As we can see, Dual-Train improves the accuracy by 4%. Due to the rigorous definition of being good, the results mean that 4% more of the summaries are semantically consistent with their source content. However, Self-Train has a performance drop compared to the baseline. After investigating its generated summaries, we find that the major reason is that the generated summaries are not grammatically complete and often stop too early, although the generated part is indeed more related to the source content. Because the definition of being good, the improved relevance does not make up the loss on intelligibility.",
"Then, we compare the automatic evaluation results in Table TABREF25 . As we can see, only applying soft training without adaptation (Self-Train) hurts the performance. With the additional output layer (Dual-Train), the performance can be greatly improved over the baseline. Moreover, with the proposed method the simple baseline model is second to the best compared with the state-of-the-art models and even surpasses in ROUGE-2. It is promising that applying the proposed method to the state-of-the-art model could also improve its performance. The automatic evaluation is done on the original test set to facilitate comparison with existing work. However, a more reasonable setting would be to exclude the 52 test instances that are found bad in the human evaluation, because the quality of the automatic evaluation depends on the reference summary. As the existing methods do not provide their test output, it is a non-trivial task to reproduce all their results of the same reported performance. Nonetheless, it does not change the fact that ROUGE cannot handle the issues in abstractive text summarization properly."
],
[
"To examine the effect of the proposed method and reveal how the proposed method improves the consistency, we compare the output of the baseline with Dual-Train, based on both the output text and the output label distribution. We also conduct error analysis to discover room for improvements.",
"To gain a better understanding of the results, we analyze the summaries generated by the baseline model and our proposed model. Some of the summaries are listed in Table TABREF28 . As shown in the table, the summaries generated by the proposed method are much better than the baseline, and we believe they are more precise and informative than the references. In the first one, the baseline system generates a grammatical but unrelated summary, while the proposed method generates a more informative summary. In the second one, the baseline system generates a related but ungrammatical summary, while the proposed method generates a summary related to the source content but different from the reference. We believe the generated summary is actually better than the reference because the focus of the visit is not the event itself but its purpose. In the third one, the baseline system generates a related and grammatical summary, but the facts stated are completely incorrect. The summary generated by the proposed method is more comprehensive than the reference, while the reference only includes the facts in the last sentence of the source content. In short, the generated summary of the proposed method is more consistent with the source content. It also exhibits the necessity of the proposed human evaluation. Because when the generated summary is evaluated against the reference, it may seem redundant or wrong, but it is actually true to the source content. While it is arguable that the generated summary is better than the reference, there is no doubt that the generated summary of the proposed method is better than the baseline. However, the improvement cannot be properly shown by the existing evaluation methods. Furthermore, the examples suggest that the proposed method does learn better correspondence. The highlighted words in each example in Table TABREF28 share almost the same previous words. However, in the first one, the baseline considers “UTF8gbsn停” (stop) as the most related words, which is a sign of noisy word relations learned from other training examples, while the proposed method generates “UTF8gbsn进站” (to the platform), which is more related to what a human thinks. It is the same with the second example, where a human selects “UTF8gbsn专家” (expert) and Dual-Train selects “UTF8gbsn工作者” (worker), while the baseline selects “UTF8gbsn钻研” (research) and fails to generate a grammatical sentence later. In the third one, the reference and the baseline use the same word, while Dual-Train chooses a word of the same meaning. It can be concluded that Dual-Train indeed learns better word relations that could generalize to the test set, and good word relations can guide the decoder to generate semantically consistent summaries.",
"To show why the generated text of the proposed method is more related to the source content, we further analyze the label distribution, i.e., the word distribution, generated by the (first) output layer, from which the output word is selected. To illustrate the relationship, we calculate a representation for each word based on the label distributions. Each representation is associated with a specific label (word), denoted by INLINEFORM0 , and each dimension INLINEFORM1 shows how likely the label indexed by INLINEFORM2 will be generated instead of the label INLINEFORM3 . To get such representation, we run the model on the training set and get the output vectors in the decoder, which are then averaged with respect to their corresponding labels to form a representation. We can obtain the most related words of a word by simply selecting the highest values from its representation. Table TABREF30 lists some of the labels and the top 4 labels that are most likely to replace each of the labels. It is a hint about the correspondence learned by the model. From the results, it can be observed that Dual-Train learns the better semantic relevance of a word compared to the baseline because the spurious word correspondence is alleviated by regularization. For example, the possible substitutes of the word “UTF8gbsn多长时间” (how long) considered by Dual-Train include “UTF8gbsn多少” (how many), “UTF8gbsn多久” (how long) and “UTF8gbsn时间” (time). However, the relatedness is learned poorly in the baseline, as there is “UTF8gbsn知道” (know), a number, and two particles in the possible substitutes considered by the baseline. Another representative example is the word “UTF8gbsn图像” (image), where the baseline also includes two particles in its most related words. The phenomenon shows that the baseline suffers from spurious correspondence in the data, and learns noisy and harmful relations, which rely too much on the co-occurrence. In contrast, the proposed method can capture more stable semantic relatedness of the words. For text summarization, grouping the words that are in the same topic together can help the model to generate sentences that are more coherent and can improve the quality of the summarization and the relevance to the source content. Although the proposed method resolves a large number of the noisy word relations, there are still cases that the less related words are not eliminated. For example, the top 4 most similar words of “UTF8gbsn期货业” (futures industry) from the proposed method include “UTF8gbsn改革” (reform). It is more related than “2013” from the baseline, but it can still be harmful to text summarization. The problem could arise from the fact that words as “UTF8gbsn期货业” rarely occur in the training data, and their relatedness is not reflected in the data. Another issue is that there are some particles, e.g., “UTF8gbsn的” (DE) in the most related words. A possible explanation is that particles show up too often in the contexts of the word, and it is hard for the models to distinguish them from the real semantically-related words. As our proposed approach is based on regularization of the less common correspondence, it is reasonable that such kind of relation cannot be eliminated. The first case can be categorized into data sparsity, which usually needs the aid of knowledge bases to solve. The second case is due to the characteristics of natural language. However, as such words are often closed class words, the case can be resolved by manually restricting the relatedness of these words."
],
[
"Related work includes efforts on designing models for the Chinese social media text summarization task and the efforts on obtaining soft training target for supervised learning."
],
[
"The Large-Scale Chinese Short Text Summarization dataset was proposed by BIBREF1 . Along with the datasets, BIBREF1 also proposed two systems to solve the task, namely RNN and RNN-context. They were two sequence-to-sequence based models with GRU as the encoder and the decoder. The difference between them was that RNN-context had attention mechanism while RNN did not. They conducted experiments both on the character level and on the word level. RNN-distract BIBREF5 was a distraction-based neural model, where the attention mechanism focused on different parts of the source content. CopyNet BIBREF6 incorporated a copy mechanism to allow part of the generated summary to be copied from the source content. The copy mechanism also explained that the results of their word-level model were better than the results of their character-level model. SRB BIBREF17 was a sequence-to-sequence based neural model to improve the semantic relevance between the input text and the output summary. DRGD BIBREF8 was a deep recurrent generative decoder model, combining the decoder with a variational autoencoder."
],
[
"Soft target aims to refine the supervisory signal in supervised learning. Related work includes soft target for traditional learning algorithms and model distillation for deep learning algorithms. The soft label methods are typically for binary classification BIBREF18 , where the human annotators not only assign a label for an example but also give information on how confident they are regarding the annotation. The main difference from our method is that the soft label methods require additional annotation information (e.g., the confidence information of the annotated labels) of the training data, which is costly in the text summarization task. There have also been prior studies on model distillation in deep learning that distills big models into a smaller one. Model distillation BIBREF19 combined different instances of the same model into a single one. It used the output distributions of the previously trained models as the soft target distribution to train a new model. A similar work to model distillation is the soft-target regularization method BIBREF20 for image classification. Instead of using the outputs of other instances, it used an exponential average of the past label distributions of the current instance as the soft target distribution. The proposed method is different compared with the existing model distillation methods, in that the proposed method does not require additional models or additional space to record the past soft label distributions. The existing methods are not suitable for text summarization tasks, because the training of an additional model is costly, and the additional space is huge due to the massive number of data. The proposed method uses its current state as the soft target distribution and eliminates the need to train additional models or to store the history information."
],
[
"We propose a regularization approach for the sequence-to-sequence model on the Chinese social media summarization task. In the proposed approach, we use a cross-entropy based regularization term to make the model neglect the possible unrelated words. We propose two methods for obtaining the soft output word distribution used in the regularization, of which Dual-Train proves to be more effective. Experimental results show that the proposed method can improve the semantic consistency by 4% in terms of human evaluation. As shown by the analysis, the proposed method achieves the improvements by eliminating the less semantically-related word correspondence. The proposed human evaluation method is effective and efficient in judging the semantic consistency, which is absent in previous work but is crucial in the accurate evaluation of the text summarization systems. The proposed metric is simple to conduct and easy to interpret. It also provides an insight on how practicable the existing systems are in the real-world scenario."
],
[
"For human evaluation, the annotators are asked to evaluate the summary against the source content based on the goodness of the summary. If the summary is not understandable, relevant or correct according to the source content, the summary is considered bad. More concretely, the annotators are asked to examine the following aspects to determine whether the summary is good:",
"If a rule is not met, the summary is labeled bad, and the following rules do not need to be checked. In Table TABREF33 , we give examples for cases of each rule. In the first one, the summary is not fluent, because the patient of the predicate UTF8gbsn“找” (seek for) is missing. The second summary is fluent, but the content is not related to the source, in that we cannot determine if Lei Jun is actually fighting the scalpers based on the source content. In the third one, the summary is fluent and related to the source content, but the facts are wrong, as the summary is made up by facts of different people. The last one met all the three rules, and thus it is considered good. This work is supported in part by the GS501100001809National Natural Science Foundation of Chinahttp://dx.doi.org/10.13039/501100001809 under Grant No. GS50110000180961673028. "
]
],
"section_name": [
"Introduction",
"Proposed Method",
"Regularizing the Neural Network with Annealed Distribution",
"Dual Output Layers",
"Experiments",
"Dataset",
"Experimental Settings",
"Evaluation Protocol",
"Experimental Results",
"Experimental Analysis",
"Related Work",
"Systems for Chinese Social Media Text Summarization",
"Methods for Obtaining Soft Training Target",
"Conclusions",
"Standard for Human Evaluation"
]
} | {
"answers": [
{
"annotation_id": [
"ef018f67e48219b1f4fc4590d2a1400f87e49645"
],
"answer": [
{
"evidence": [
"We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt.",
"Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model.",
"Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"625346e5a184bef42b0c6f2029d5be4697001a72"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics"
],
"extractive_spans": [],
"free_form_answer": "RNN-context, SRB, CopyNet, RNN-distract, DRGD",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d76d0cd894e1434896b72b8211bf1fa0e2db7703"
],
"answer": [
{
"evidence": [
"More detailed explanation is introduced in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation task, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. The contributions of this work are summarized as follows:",
"For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, the ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word match cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely intelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the reference, the number of the pairs needs to be manually evaluated is four times the number of the pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators."
],
"extractive_spans": [],
"free_form_answer": "comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant",
"highlighted_evidence": [
"We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. ",
"To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Are results reported only for English data?",
"Which existing models does this approach outperform?",
"What human evaluation method is proposed?"
],
"question_id": [
"ddd6ba43c4e1138156dd2ef03c25a4c4a47adad0",
"bd99aba3309da96e96eab3e0f4c4c8c70b51980a",
"73bb8b7d7e98ccb88bb19ecd2215d91dd212f50d"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Illustration of the proposed methods. Left: Self-Train. Right: Dual-Train.",
"Table 2. Results of the Human Evaluation on the Test Set, Showing How Many Summaries are Semantically Consistent with Their Source Content",
"Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics",
"Table 5. Examples of the Labels and Their Top Four Most Related Labels",
"Table 6. Examples for Each Case in the Human Evaluation",
"Table 7. Results of the Inter-Annotator Agreement"
],
"file": [
"5-Figure1-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"10-Table5-1.png",
"13-Table6-1.png",
"14-Table7-1.png"
]
} | [
"Which existing models does this approach outperform?",
"What human evaluation method is proposed?"
] | [
[
"1805.04033-8-Table3-1.png"
],
[
"1805.04033-Introduction-3",
"1805.04033-Evaluation Protocol-0"
]
] | [
"RNN-context, SRB, CopyNet, RNN-distract, DRGD",
"comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant"
] | 629 |
1910.06748 | Language Identification on Massive Datasets of Short Message using an Attention Mechanism CNN | Language Identification (LID) is a challenging task, especially when the input texts are short and noisy such as posts and statuses on social media or chat logs on gaming forums. The task has been tackled by either designing a feature set for a traditional classifier (e.g. Naive Bayes) or applying a deep neural network classifier (e.g. Bi-directional Gated Recurrent Unit, Encoder-Decoder). These methods are usually trained and tested on a huge amount of private data, then used and evaluated as off-the-shelf packages by other researchers using their own datasets, and consequently the various results published are not directly comparable. In this paper, we first create a new massive labelled dataset based on one year of Twitter data. We use this dataset to test several existing language identification systems, in order to obtain a set of coherent benchmarks, and we make our dataset publicly available so that others can add to this set of benchmarks. Finally, we propose a shallow but efficient neural LID system, which is a ngram-regional convolution neural network enhanced with an attention mechanism. Experimental results show that our architecture is able to predict tens of thousands of samples per second and surpasses all state-of-the-art systems with an improvement of 5%. | {
"paragraphs": [
[
"Language Identification (LID) is the Natural Language Processing (NLP) task of automatically recognizing the language that a document is written in. While this task was called \"solved\" by some authors over a decade ago, it has seen a resurgence in recent years thanks to the rise in popularity of social media BIBREF0, BIBREF1, and the corresponding daily creation of millions of new messages in dozens of different languages including rare ones that are not often included in language identification systems. Moreover, these messages are typically very short (Twitter messages were until recently limited to 140 characters) and very noisy (including an abundance of spelling mistakes, non-word tokens like URLs, emoticons, or hashtags, as well as foreign-language words in messages of another language), whereas LID was solved using long and clean documents. Indeed, several studies have shown that LID systems trained to a high accuracy on traditional documents suffer significant drops in accuracy when applied to short social-media texts BIBREF2, BIBREF3.",
"Given its massive scale, multilingual nature, and popularity, Twitter has naturally attracted the attention of the LID research community. Several attempts have been made to construct LID datasets from that resource. However, a major challenge is to assign each tweet in the dataset to the correct language among the more than 70 languages used on the platform. The three commonly-used approaches are to rely on human labeling BIBREF4, BIBREF5, machine detection BIBREF5, BIBREF6, or user geolocation BIBREF3, BIBREF7, BIBREF8. Human labeling is an expensive process in terms of workload, and it is thus infeasible to apply it to create a massive dataset and get the full benefit of Twitter's scale. Automated LID labeling of this data creates a noisy and imperfect dataset, which is to be expected since the purpose of these datasets is to create new and better LID algorithms. And user geolocation is based on the assumption that users in a geographic region use the language of that region; an assumption that is not always correct, which is why this technique is usually paired with one of the other two. Our first contribution in this paper is to propose a new approach to build and automatically label a Twitter LID dataset, and to show that it scales up well by building a dataset of over 18 million labeled tweets. Our hope is that our new Twitter dataset will become a benchmarking standard in the LID literature.",
"Traditional LID models BIBREF2, BIBREF3, BIBREF9 proposed different ideas to design a set of useful features. This set of features is then passed to traditional machine learning algorithms such as Naive Bayes (NB). The resulting systems are capable of labeling thousands of inputs per second with moderate accuracy. Meanwhile, neural network models BIBREF10, BIBREF6 approach the problem by designing a deep and complex architecture like gated recurrent unit (GRU) or encoder-decoder net. These models use the message text itself as input using a sequence of character embeddings, and automatically learn its hidden structure via a deep neural network. Consequently, they obtain better results in the task but with an efficiency trade-off. To alleviate these drawbacks, our second contribution in this paper is to propose a shallow but efficient neural LID algorithm. We followed previous neural LID BIBREF10, BIBREF6 in using character embeddings as inputs. However, instead of using a deep neural net, we propose to use a shallow ngram-regional convolution neural network (CNN) with an attention mechanism to learn input representation. We experimentally prove that the ngram-regional CNN is the best choice to tackle the bottleneck problem in neural LID. We also illustrate the behaviour of the attention structure in focusing on the most important features in the text for the task. Compared with other benchmarks on our Twitter datasets, our proposed model consistently achieves new state-of-the-art results with an improvement of 5% in accuracy and F1 score and a competitive inference time.",
"The rest of this paper is structured as follows. After a background review in the next section, we will present our Twitter dataset in Section SECREF3. Our novel LID algorithm will be the topic of Section SECREF4. We will then present and analyze some experiments we conducted with our algorithm in Section SECREF5, along with benchmarking tests of popular and literature LID systems, before drawing some concluding remarks in Section SECREF6. Our Twitter dataset and our LID algorithm's source code are publicly available."
],
[
"In this section, we will consider recent advances on the specific challenge of language identification in short text messages. Readers interested in a general overview of the area of LID, including older work and other challenges in the area, are encouraged to read the thorough survey of BIBREF0."
],
[
"One of the first, if not the first, systems for LID specialized for short text messages is the graph-based method of BIBREF5. Their graph is composed of vertices, or character n-grams (n = 3) observed in messages in all languages, and of edges, or connections between successive n-grams weighted by the observed frequency of that connection in each language. Identifying the language of a new message is then done by identifying the most probable path in the graph that generates that message. Their method achieves an accuracy of 0.975 on their own Twitter corpus.",
"Carter, Weerkamp, and Tsagkias proposed an approach for LID that exploits the very nature of social media text BIBREF3. Their approach computes the prior probability of the message being in a given language independently of the content of the message itself, in five different ways: by identifying the language of external content linked to by the message, the language of previous messages by the same user, the language used by other users explicitly mentioned in the message, the language of previous messages in the on-going conversation, and the language of other messages that share the same hashtags. They achieve a top accuracy of 0.972 when combining these five priors with a linear interpolation.",
"One of the most popular language identification packages is the langid.py library proposed in BIBREF2, thanks to the fact it is an open-source, ready-to-use library written in the Python programming language. It is a multinomial Naïve Bayes classifier trained on character n-grams (1 $\\le $ n $\\le $ 4) from 97 different languages. The training data comes from longer document sources, both formal ones (government publications, software documentation, and newswire) and informal ones (online encyclopedia articles and websites). While their system is not specialized for short messages, the authors claim their algorithm can generalize across domains off-the-shelf, and they conducted experiments using the Twitter datasets of BIBREF5 and BIBREF3 that achieved accuracies of 0.941 and 0.886 respectively, which is weaker than the specialized short-message LID systems of BIBREF5 and BIBREF3.",
"Starting from the basic observation of Zipf's Law, that each language has a small number of words that occur very frequently in most documents, the authors of BIBREF9 created a dictionary-based algorithm they called Quelingua. This algorithm includes ranked dictionaries of the 1,000 most popular words of each language it is trained to recognize. Given a new message, recognized words are given a weight based on their rank in each language, and the identified language is the one with the highest sum of word weights. Quelingua achieves an F1-score of 0.733 on the TweetLID competition corpus BIBREF11, a narrow improvement over a trigram Naïve Bayes classifier which achieves an F1-Score of 0.727 on the same corpus, but below the best results achieved in the competition."
],
[
"Neural network models have been applied on many NLP problems in recent years with great success, achieving excellent performance on challenges ranging from text classification BIBREF12 to sequence labeling BIBREF13. In LID, the authors of BIBREF1 built a hierarchical system of two neural networks. The first level is a Convolutional Neural Network (CNN) that converts white-space-delimited words into a word vector. The second level is a Long-Short-Term Memory (LSTM) network (a type of recurrent neural network (RNN)) that takes in sequences of word vectors outputted by the first level and maps them to language labels. They trained and tested their network on Twitter's official Twitter70 dataset, and achieved an F-score of 0.912, compared to langid.py's performance of 0.879 on the same dataset. They also trained and tested their system using the TweetLID corpus and achieved an F1-score of 0.762, above the system of BIBREF9 presented earlier, and above the top system of the TweetLID competition, the SVM LID system of BIBREF14 which achieved an F1-score of 0.752.",
"The authors of BIBREF10 also used a RNN system, but preferred the Gated Recurrent Unit (GRU) architecture to the LSTM, indicating it performed slightly better in their experiments. Their system breaks the text into non-overlapping 200-character segments, and feeds character n-grams (n = 8) into the GRU network to classify each letter into a probable language. The segment's language is simply the most probable language over all letters, and the text's language is the most probable language over all segments. The authors tested their system on short messages, but not on tweets; they built their own corpus of short messages by dividing their data into 200-character segments. On that corpus, they achieve an accuracy of 0.955, while langid.py achieves 0.912.",
"The authors of BIBREF6 also created a character-level LID network using a GRU architecture, in the form of a three-layer encoder-decoder RNN. They trained and tested their system using their own Twitter dataset, and achieved an F1-score of 0.982, while langid.py achieved 0.960 on the same dataset.",
"To summarize, we present the key results of the papers reviewed in this section in Table TABREF1, along with the results langid.py obtained on the same datasets as benchmark."
],
[
"Unlike other authors who built Twitter datasets, we chose not to mine tweets from Twitter directly through their API, but instead use tweets that have already been downloaded and archived on the Internet Archive. This has two important benefits: this site makes its content freely available for research purposes, unlike Twitter which comes with restrictions (especially on distribution), and the tweets are backed-up permanently, as opposed to Twitter where tweets may be deleted at any time and become unavailable for future research or replication of past studies. The Internet Archive has made available a set of 1.7 billion tweets collected over the year of 2017 in a 600GB JSON file which includes all tweet metadata attributes. Five of these attributes are of particular importance to us. They are $\\it {tweet.id}$, $\\it {tweet.user.id}$, $\\it {tweet.text}$, $\\it {tweet.lang}$, and $\\it {tweet.user.lang}$, corresponding respectively to the unique tweet ID number, the unique user ID number, the text content of the tweet in UTF-8 characters, the tweet's language as determined by Twitter's automated LID software, and the user's self-declared language.",
"We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset."
],
[
"When creating a balanced Twitter LID dataset, we face a design question: should our dataset seek to maximize the number of languages present, to make it more interesting and challenging for the task of LID, but at the cost of having fewer tweets per language to include seldom-used languages. Or should we maximize the number of tweets per language to make the dataset more useful for training deep neural networks, but at the cost of having fewer languages present and eliminating the seldom-used languages. To circumvent this issue, we propose to build three datasets: a small-scale one with more languages but fewer tweets, a large-scale one with more tweets but fewer languages, and a medium-scale one that is a compromise between the two extremes. Moreover, since we plan for our datasets to become standard benchmarking tools, we have subdivided the tweets of each language in each dataset into training, validation, and testing sets.",
"Small-scale dataset: This dataset is composed of 28 languages with 13,000 tweets per language, subdivided into 7,000 training set tweets, 3,000 validation set tweets, and 3,000 testing set tweets. There is thus a total of 364,000 tweets in this dataset. Referring to Table TABREF6, this dataset includes every language that represents 0.002% or more of the Twitter corpus. To be sure, it is possible to create a smaller dataset with all 54 languages but much fewer tweets per language, but we feel that this is the lower limit to be useful for training LID deep neural systems.",
"Medium scale dataset: This dataset keeps 22 of the 28 languages of the small-scale dataset, but has 10 times as many tweets per language. In other words, each language has a 70,000-tweet training set, a 30,000-tweet validation set, and a 30,000-tweet testing set, for a total of 2,860,000 tweets.",
"Large-scale dataset: Once again, we increased tenfold the number of tweets per language, and kept the 14 languages that had sufficient tweets in our initial 900 million tweet corpus. This gives us a dataset where each language has 700,000 tweets in its training set, 300,000 tweets in its validation set, and 300,000 tweets in its testing set, for a total 18,200,000 tweets. Referring to Table TABREF6, this dataset includes every language that represents 0.1% or more of the Twitter corpus."
],
[
"Since many languages have unclear word boundaries, character n-grams, rather than words, have become widely used as input in LID systems BIBREF2, BIBREF10, BIBREF5, BIBREF6. With this in mind, the LID problem can be defined as such: given a tweet $\\mathit {tw}$ consisting of $n$ ordered characters ($\\mathit {tw}=[ch_1, ch_2, ..., ch_n]$) selected within the vocabulary set $\\mathit {char}$ of $V$ unique characters ($\\mathit {char}=\\lbrace ch_1,ch_2, ..., ch_V\\rbrace $) and a set $\\mathit {l}$ of $L$ languages ($\\mathit {l}=\\lbrace l_1,l_2, ..., l_L\\rbrace $) , the aim is to predict the language $\\mathit {\\hat{l}}$ present in $tw$ using a classifier:",
"where $Score(l_i|tw)$ is a scoring function quantifying how likely language $l_i$ was used given the observed message $tw$.",
"Most statistical LID systems follow the model of BIBREF2. They start off by using what is called a one-hot encoding technique, which represents each character $ch_i$ as a one-hot vector $\\mathbf {x}_i^{oh} \\in \\mathbb {Z}_2^V$ according to the index of this character in $\\mathit {char}$. This transforms $tw$ into a matrix $\\mathbf {X}^{oh}$:",
"The vector $\\mathbf {X}^{oh}$ is passed to a feature extraction function, for example row-wise sum or tf-idf weighting, to obtain a feature vector $\\mathbf {h}$. $\\mathbf {h}$ is finally fed to a classifier model for either discriminative scoring (e.g. Support Vector Machine) or generative scoring (e.g. Naïve Bayes).",
"Unlike statistical methods, a typical neural network LID system, as illustrated in Figure FIGREF15, first pass this input through an embedding layer to map each character $ch_i \\in tw$ to a low-dense vector $\\mathbf {x}_i \\in \\mathbb {R}^d$, where $d$ denotes the dimension of character embedding. Given an input tweet $tw$, after passing through the embedding layer, we obtain an embedded matrix:",
"The embedded matrix $\\mathbf {X}$ is then fed through a neural network architecture, which transforms it into an output vector $\\mathbf {h}=f(\\mathbf {X})$ of length L that represents the likelihood of each language, and which is passed through a $\\mathit {Softmax}$ function. This updates equation DISPLAY_FORM18 as:",
"Tweets in particular are noisy messages which can contain a mix of multiple languages. To deal with this challenge, most previous neural network LID systems used deep sequence neural layers, such as an encoder-decoder BIBREF6 or a GRU BIBREF10, to extract global representations at a high computational cost. By contrast, we propose to employ a shallow (single-layer) convolution neural network (CNN) to locally learn region-based features. In addition, we propose to use an attention mechanism to proportionally merge together these local features for an entire tweet $tw$. We hypothesize that the attention mechanism will effectively capture which local features of a particular language are the dominant features of the tweet. There are two major advantages of our proposed architecture: first, the use of the CNN, which has the least number of parameters among other neural networks, simplifies the neural network model and decreases the inference latency; and second, the use of the attention mechanism makes it possible to model the mix of languages while maintaining a competitive performance."
],
[
"To begin, we present a traditional CNN with an ngam-regional constrain as our baseline. CNNs have been widely used in both image processing BIBREF15 and NLP BIBREF16. The convolution operation of a filter with a region size $m$ is parameterized by a weight matrix $\\mathbf {W}_{cnn} \\in \\mathbb {R}^{d_{cnn}\\times md}$ and a bias vector $\\mathbf {b}_{cnn} \\in \\mathbb {R}^{d_{cnn}}$, where $d_{cnn}$ is the dimension of the CNN. The inputs are a sequence of $m$ consecutive input columns in $\\mathbf {X}$, represented by a concatenated vector $\\mathbf {X}[i:i+m-1] \\in \\mathbb {R}^{md}$. The region-based feature vector $\\mathbf {c}_i$ is computed as follows:",
"where $\\oplus $ denotes a concatenation operation and $g$ is a non-linear function. The region filter is slid from the beginning to the end of $\\mathbf {X}$ to obtain a convolution matrix $\\mathbf {C}$:",
"The first novelty of our CNN is that we add a zero-padding constrain at both sides of $\\mathbf {X}$ to ensure that the number of columns in $\\mathbf {C}$ is equal to the number of columns in $\\mathbf {X}$. Consequently, each $\\mathbf {c}_i$ feature vector corresponds to an $\\mathbf {x}_i$ input vector at the same index position $i$, and is learned from concatenating the surrounding $m$-gram embeddings. Particularly:",
"where $p$ is the number of zero-padding columns. Finally, in a normal CNN, a row-wise max-pooling function is applied on $\\mathbf {C}$ to extract the $d_{cnn}$ most salient features, as shown in Equation DISPLAY_FORM26. However, one weakness of this approach is that it extracts the most salient features out of sequence."
],
[
"Instead of the traditional pooling function of Equation DISPLAY_FORM26, a second important innovation of our CNN model is to use an attention mechanism to model the interaction between region-based features from the beginning to the end of an input. Figure FIGREF15 illustrates our proposed model. Given a sequence of regional feature vectors $\\mathbf {C}=[\\mathbf {c}_1,\\mathbf {c}_2,...,\\mathbf {c}_n]$ as computed in Equation DISPLAY_FORM24, we pass it through a fully-connected hidden layer to learn a sequence of regional hidden vectors $\\mathbf {H}=[\\mathbf {h}_1,\\mathbf {h}_2,...,\\mathbf {h}_n] \\in \\mathbb {R}^{d_{hd} \\times n}$ using Equation DISPLAY_FORM28.",
"where $g_2$ is a non-linear activation function, $\\mathbf {W}_{hd}$ and $\\mathbf {b}_{hd}$ denote model parameters, and $d_{hd}$ is the dimension of the hidden layer. We followed Yang et al. BIBREF17 in employing a regional context vector $\\mathbf {u} \\in \\mathbb {R}^{d_{hd}}$ to measure the importance of each window-based hidden vector. The regional importance factors are computed by:",
"The importance factors are then fed to a $\\mathit {Softmax}$ layer to obtain the normalized weight:",
"The final representation of a given input is computed by a weighted sum of its regional feature vectors:"
],
[
"For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms, and test them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also included precision and recall values, which are necessary for computing F1 score. And finally we included the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter.",
"We compare these benchmarks to our two models: the improved CNN as described in Section SECREF22 and our proposed CNN model with an attention mechanism of Section SECREF27. These are labelled CNN and Attention CNN in Table TABREF33. In both models, we filter out characters that appear less than 5 times and apply a dropout approach with a dropout rate of $0.5$. ADAM optimization algorithm and early stopping techniques are employed during training. The full list of parameters and settings is given in Table TABREF32. It is worth noting that we randomly select this configuration without any tuning process."
],
[
"The first thing that appears from these results is the speed difference between algorithms. CLD3 and langid.py both can process several thousands of messages per second, and CLD2 is even an order of magnitude better, but the two neural network software have considerably worse performances, at less than a dozen messages per second. This is the efficiency trade-off of neural-network LID systems we mentioned in Section SECREF1; although to be fair, we should also point out that those two systems are research prototypes and thus may not have been fully optimized.",
"In terms of accuracy and F1 score, langid.py, LanideNN, and EquiLID have very similar performances. All three consistently score above 0.90, and each achieves the best accuracy or the best F1 score at some point, if only by 0.002. By contrast, CLD2 and CLD3 have weaker performances; significantly so in the case of CLD3. In all cases, using our small-, medium-, or large-scale test set does not significantly affect the results.",
"All the benchmark systems were tested using the pre-trained models they come with. For comparison purposes, we retrained langid.py from scratch using the training and validation portion of our datasets, and ran the tests again. Surprisingly, we find that the results are worse for all metrics compared to using their pre-trained model, and moreover that using the medium- and large-scale datasets give significantly worse results than using the small-scale dataset. This may be a result of the fact the corpus the langid.py software was trained with and optimized for originally is drastically different from ours: it is a imbalanced dataset of 18,269 tweets in 9 languages. Our larger corpora, being more drastically different from the original, give increasingly worse performances. This observation may also explain the almost 10% variation in performance of langid.py reported in the literature and reproduced in Table TABREF1. The fact that the message handling performance of the library drops massively compared to its pre-trained results further indicates how the software was optimized to use its corpus. Based on this initial result, we decided not to retrain the other benchmark systems.",
"The last two lines of Table TABREF33 report the results of our basic CNN and our attention CNN LID systems. It can be seen that both of them outperform the benchmark systems in accuracy, precision, recall, and F1 score in all experiments. Moreover, the attention CNN outperforms the basic CNN in every metric (we will explore the benefit of the attention mechanism in the next subsection). In terms of processing speed, only the CLD2 system surpasses ours, but it does so at the cost of a 10% drop in accuracy and F1 score. Looking at the choice of datasets, it can be seen that training with the large-scale dataset leads to a nearly 1% improvement compared to the medium-sized dataset, which also gives a 1% improvement compared to the small-scale dataset. While it is expected that using more training data will lead to a better system and better results, the small improvement indicates that even our small-scale dataset has sufficient messages to allow the network training to converge."
],
[
"We can further illustrate the impact of our attention mechanism by displaying the importance factor $\\alpha _i$ corresponding to each character $ch_i$ in selected tweets. Table TABREF41 shows a set of tweets that were correctly identified by the attention CNN but misclassified by the regular CNN in three different languages: English, French, and Vietnamese. The color intensity of a letter's cell is proportional to the attention mechanism's normalized weight $\\alpha _i$, or on the focus the network puts on that character. In order words, the attention CNN puts more importance on the features that have the darkest color. The case studies of Table TABREF41 show the noise-tolerance that comes from the attention mechanism. It can be seen that the system puts virtually no weight on URL links (e.g. $tw_{en_1}$, $tw_{fr_2}$, $tw_{vi_2}$), on hashtags (e.g. $tw_{en_3}$), or on usernames (e.g. $tw_{en_2}$, $tw_{fr_1}$, $tw_{vi_1}$). We should emphasize that our system does not implement any text preprocessing steps; the input tweets are kept as-is. Despite that, the network learned to distinguish between words and non-words, and to focus mainly on the former. In fact, when the network does put attention on these elements, it is when they appear to use real words (e.g. “star\" and “seed\" in the username of $tw_{en_2}$, “mother\" and “none\" in the hashtag of $tw_{en_3}$). This also illustrates how the attention mechanism can pick out fine-grained features within noisy text: in those examples, it was able to focus on real-word components of longer non-word strings.",
"The examples of Table TABREF41 also show that the attention CNN learns to focus on common words to recognize languages. Some of the highest-weighted characters in the example tweets are found in common determiners, adverbs, and verbs of each language. These include “in\" ($tw_{en_1}$), “des\" ($tw_{fr_1}$), “le\" ($tw_{fr_2}$), “est\" ($tw_{fr_3}$), vietnamese“quá\" ($tw_{vi_2}$), and vietnamese“nhất\" ($tw_{vi_3}$). These letters and words significantly contribute in identifying the language of a given input.",
"Finally, when multiple languages are found within a tweet, the network successfully captures all of them. For example, $tw_{fr_3}$ switches from French to Spanish and $tw_{vi_2}$ mixes both English and Vietnamese. In both cases, the network identifies features of both languages; it focuses strongly on “est\" and “y\" in $tw_{fr_3}$, and on “Don't\" and vietnamese“bài\" in $tw_{vi_2}$. The message of $tw_{vi_3}$ mixes three languages, Vietnamese, English, and Korean, and the network focuses on all three parts, by picking out vietnamese“nhật\" and vietnamese“mừng\" in Vietnamese, “#생일축하해\" and “#태형생일\" in Korean, and “$\\textbf {h}ave$\" in English. Since our system is setup to classify each tweet into a single language, the strongest feature of each tweet wins out and the message is classified in the corresponding language. Nonetheless, it is significant to see that features of all languages present in the tweet are picked out, and a future version of our system could successfully decompose the tweets into portions of each language."
],
[
"In this paper, we first demonstrated how to build balanced, automatically-labelled, and massive LID datasets. These datasets are taken from Twitter, and are thus composed of real-world and noisy messages. We applied our technique to build three datasets ranging from hundreds of thousands to tens of millions of short texts. Next, we proposed our new neural LID system, a CNN-based network with an attention mechanism to mitigate the performance bottleneck issue while still maintaining a state-of-the-art performance. The results obtained by our system surpassed five benchmark LID systems by 5% to 10%. Moreover, our analysis of the attention mechanism shed some light on the inner workings of the typically-black-box neural network, and demonstrated how it helps pick out the most important linguistic features while ignoring noise. All of our datasets and source code are publicly available at https://github.com/duytinvo/LID_NN."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Probabilistic LID",
"Related Work ::: Neural Network LID",
"Our Twitter LID Datasets ::: Source Data and Language Labeling",
"Our Twitter LID Datasets ::: Our Balanced Datasets",
"Proposed Model",
"Proposed Model ::: ngam-regional CNN Model",
"Proposed Model ::: Attention Mechanism",
"Experimental Results ::: Benchmarks",
"Experimental Results ::: Analysis",
"Experimental Results ::: Impact of Attention Mechanism",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ed09fbfc79672e3ea94b03c2419946332fd33b6b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Twitter corpus distribution by language label.",
"We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset."
],
"extractive_spans": [],
"free_form_answer": "EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Twitter corpus distribution by language label.",
"We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"635e645dd24ad753389ea15c6709be9741f600f6"
],
"answer": [
{
"evidence": [
"For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms, and test them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also included precision and recall values, which are necessary for computing F1 score. And finally we included the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter."
],
"extractive_spans": [
"langid.py library",
"encoder-decoder EquiLID system",
"GRU neural network LanideNN system",
"CLD2",
"CLD3"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"f55899cfbb232179a901567c9dc14f91a62409be"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What languages are represented in the dataset?",
"Which existing language ID systems are tested?",
"How was the one year worth of data collected?"
],
"question_id": [
"8ad815b29cc32c1861b77de938c7269c9259a064",
"3f9ef59ac06db3f99b8b6f082308610eb2d3626a",
"203d322743353aac8a3369220e1d023a78c2cae3"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Summary of literature results",
"Table 2. Twitter corpus distribution by language label.",
"Figure 1. Neural network classifier architectures.",
"Table 3. Parameter settings",
"Table 4. Benchmarking results.",
"Table 5. Tweets misclassified by the CNN but recognized by the Attention CNN"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"5-Figure1-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png"
]
} | [
"What languages are represented in the dataset?"
] | [
[
"1910.06748-Our Twitter LID Datasets ::: Source Data and Language Labeling-1",
"1910.06748-3-Table2-1.png"
]
] | [
"EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO"
] | 632 |
1911.08673 | Global Greedy Dependency Parsing | Most syntactic dependency parsing models may fall into one of two categories: transition- and graph-based models. The former models enjoy high inference efficiency with linear time complexity, but they rely on the stacking or re-ranking of partially-built parse trees to build a complete parse tree and are stuck with slower training for the necessity of dynamic oracle training. The latter, graph-based models, may boast better performance but are unfortunately marred by polynomial time inference. In this paper, we propose a novel parsing order objective, resulting in a novel dependency parsing model capable of both global (in sentence scope) feature extraction as in graph models and linear time inference as in transitional models. The proposed global greedy parser only uses two arc-building actions, left and right arcs, for projective parsing. When equipped with two extra non-projective arc-building actions, the proposed parser may also smoothly support non-projective parsing. Using multiple benchmark treebanks, including the Penn Treebank (PTB), the CoNLL-X treebanks, and the Universal Dependency Treebanks, we evaluate our parser and demonstrate that the proposed novel parser achieves good performance with faster training and decoding. | {
"paragraphs": [
[
"Dependency parsing predicts the existence and type of linguistic dependency relations between words (as shown in Figure FIGREF1), which is a critical step in accomplishing deep natural language processing. Dependency parsing has been well developed BIBREF0, BIBREF1, and it generally relies on two types of parsing models: transition-based models and graph-based models. The former BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF4 traditionally apply local and greedy transition-based algorithms, while the latter BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 apply globally optimized graph-based algorithms.",
"A transition-based dependency parser processes the sentence word-by-word, commonly from left to right, and forms a dependency tree incrementally from the operations predicted. This method is advantageous in that inference on the projective dependency tree is linear in time complexity with respect to sentence length; however, it has several obvious disadvantages. Because the decision-making of each step is based on partially-built parse trees, special training methods are required, which results in slow training and error propagation, as well as weak long-distance dependence processing BIBREF13.",
"Graph-based parsers learn scoring functions in one-shot and then perform an exhaustive search over the entire tree space for the highest-scoring tree. This improves the performances of the parsers, particularly the long-distance dependency processing, but these models usually have slow inference speed to encourage higher accuracy.",
"The easy-first parsing approach BIBREF14, BIBREF15 was designed to integrate the advantages of graph-based parsers’ better-performing trees and transition-based parsers’ linear decoding complexity. By processing the input tokens in a stepwise easy-to-hard order, the algorithm makes use of structured information on partially-built parse trees. Because of the presence of rich, structured information, exhaustive inference is not an optimal solution - we can leverage this information to conduct inference much more quickly. As an alternative to exhaustive inference, easy-first chooses to use an approximated greedy search that only explores a tiny fraction of the search space. Compared to graph-based parsers, however, easy-first parsers have two apparent weaknesses: slower training and worse performance. According to our preliminary studies, with the current state-of-the-art systems, we must either sacrifice training complexity for decoding speed, or sacrifice decoding speed for higher accuracy.",
"In this paper, we propose a novel Global (featuring) Greedy (inference) parsing architecture that achieves fast training, high decoding speed and good performance. With our approach, we use the one-shot arc scoring scheme as in the graph-based parser instead of the stepwise local scoring in transition-based. This is essential for achieving competitive performance, efficient training, and fast decoding. Since, to preserve linear time decoding, we chose a greedy algorithm, we introduce a parsing order scoring scheme to retain the decoding order in inference to achieve the highest accuracy possible. Just as with one-shot scoring in graph-based parsers, our proposed parser will perform arc-attachment scoring, parsing order scoring, and decoding simultaneously in an incremental, deterministic fashion just as transition-based parsers do.",
"We evaluated our models on the common benchmark treebanks PTB and CTB, as well as on the multilingual CoNLL and the Universal Dependency treebanks. From the evaluation results on the benchmark treebanks, our proposed model gives significant improvements when compared to the baseline parser. In summary, our contributions are thus:",
"$\\bullet $ We integrate the arc scoring mechanism of graph-based parsers and the linear time complexity inference approach of transition parsing models, which, by replacing stepwise local feature scoring, significantly alleviates the drawbacks of these models, improving their moderate performance caused by error propagation and increasing their training speeds resulting from their lack of parallelism.",
"$\\bullet $ Empirical evaluations on benchmark and multilingual treebanks show that our method achieves state-of-the-art or comparable performance, indicating that our novel neural network architecture for dependency parsing is simple, effective, and efficient.",
"$\\bullet $ Our work shows that using neural networks’ excellent learning ability, we can simultaneously achieve both improved accuracy and speed."
],
[
"The global greedy parser will build its dependency trees in a stepwise manner without backtracking, which takes a general greedy decoding algorithm as in easy-first parsers.",
"Using easy-first parsing's notation, we describe the decoding in our global greedy parsing. As both easy-first and global greedy parsing rely on a series of deterministic parsing actions in a general parsing order (unlike the fixed left-to-right order of standard transitional parsers), they need a specific data structure which consists of a list of unattached nodes (including their partial structures) referred to as “pending\". At each step, the parser chooses a specific action $\\hat{a}$ on position $i$ with the given arc score score($\\cdot $), which is generated by an arc scorer in the parser. Given an intermediate state of parsing with pending $P=\\lbrace p_0, p_1, p_2, \\cdots , p_N\\rbrace $, the attachment action is determined as follows:",
"where $\\mathcal {A}$ denotes the set of the allowed actions, and $i$ is the index of the node in pending. In addition to distinguishing the correct attachments from the incorrect ones, the arc scorer also assigns the highest scores to the easiest attachment decisions and lower scores to the harder decisions, thus determining the parsing order of an input sentence.",
"For projective parsing, there are exactly two types of actions in the allowed action set: ATTACHLEFT($i$) and ATTACHRIGHT($i$). Let $p_i$ refer to $i$-th element in pending, then the allowed actions can be formally defined as follows:",
"$\\bullet $ ATTACHLEFT($i$): attaches $p_{i+1}$ to $p_i$ , which results in an arc ($p_i$, $p_{i+1}$) headed by $p_i$, and removes $p_{i+1}$ from pending.",
"$\\bullet $ ATTACHRIGHT($i$): attaches $p_i$ to $p_{i+1}$ , which results in an arc ($p_{i+1}$, $p_i$) headed by $p_{i+1}$, and removes $p_i$ from pending."
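A compact sketch of this greedy projective decoding loop over the pending list is shown below. Here `score(head, dep)` is assumed to return the precomputed score of attaching `dep` under `head`, the word list is assumed to include an artificial ROOT token, and all names are illustrative.

```python
def greedy_parse(words, score):
    """Deterministic greedy decoding with the two projective actions above:
    at each step, pick the highest-scoring attachment between adjacent
    pending nodes and remove the newly attached dependent."""
    pending = list(range(len(words)))      # indices of unattached nodes
    heads = [-1] * len(words)              # heads[dep] = head index
    while len(pending) > 1:
        best, best_i, best_left = float("-inf"), None, None
        for i in range(len(pending) - 1):
            left, right = pending[i], pending[i + 1]
            s_left = score(left, right)    # ATTACHLEFT(i):  arc (left, right), headed by left
            s_right = score(right, left)   # ATTACHRIGHT(i): arc (right, left), headed by right
            if s_left > best:
                best, best_i, best_left = s_left, i, True
            if s_right > best:
                best, best_i, best_left = s_right, i, False
        i = best_i
        if best_left:                      # attach pending[i+1] under pending[i]
            heads[pending[i + 1]] = pending[i]
            del pending[i + 1]
        else:                              # attach pending[i] under pending[i+1]
            heads[pending[i]] = pending[i + 1]
            del pending[i]
    return heads                           # the remaining node is the root
```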
],
[
"Our proposed global greedy model contains three components: (1) an encoder that processes the input sentence and maps it into hidden states that lie in a low dimensional vector space $h_i$ and feeds it into a specific representation layer to strip away irrelevant information, (2) a modified scorer with a parsing order objective, and (3) a greedy inference module that generates the dependency tree."
],
[
"We employ a bi-directional LSTM-CNN architecture (BiLSTM-CNN) to encode the context in which convolutional neural networks (CNNs) learn character-level information $e_{char}$ to better handle out-of-vocabulary words. We then combine these words' character level embeddings with their word embedding $e_{word}$ and POS embedding $e_{pos}$ to create a context-independent representation, which we then feed into the BiLSTM to create word-level context-dependent representations. To further enhance the word-level representation, we leverage an external fixed representation $e_{lm}$ from pre-trained ELMo BIBREF16 or BERT BIBREF17 layer features. Finally, the encoder outputs a sequence of contextualized representations $h_i$.",
"Because the contextualized representations will be used for several different purposes in the following scorers, it is necessary to specify a representation for each purpose. As shown in BIBREF18, applying a multi-layer perceptron (MLP) to the recurrent output states before the classifier strips away irrelevant information for the current decision, reducing both the dimensionality and the risk of model overfitting. Therefore, in order to distinguish the biaffine scorer's head and dependent representations and the parsing order scorer's representations, we add a separate contextualized representation layer with ReLU as its activation function for each syntax head $h^{head}_i \\in H_{head}$ specific representations, dependent $h^{dep}_i \\in H_{dep}$ specific representations, and parsing order $h^{order}_i \\in H_{order}$:"
],
[
"The traditional easy-first model relies on an incremental tree scoring process with stepwise loss backpropagation and sub-tree removal facilitated by local scoring, relying on the scorer and loss backpropagation to hopefully obtain the parsing order. Communicating the information from the scorer and the loss requires training a dynamic oracle, which exposes the model to the configurations resulting from erroneous decisions. This training process is done at the token level, not the sentence level, which unfortunately means incremental scoring prevents parallelized training and causes error propagation. We thus forego incremental local scoring, and, inspired by the design of graph-based parsing models, we instead choose to score all of the syntactic arc candidates in one-shot, which allows for global featuring at a sentence level; however, the introduction of one-shot scoring brings new problems. Since the graph-based method relies on a tree space search algorithm to find the tree with the highest score, the parsing order is not important at all. If we apply one-shot scoring to greedy parsing, we need a mechanism like a stack (as is used in transition-based parsing) to preserve the parsing order.",
"Both transition-based and easy-first parsers build parse trees in an incremental style, which forces tree formation to follow an order starting from either the root and working towards the leaf nodes or vice versa. When a parser builds an arc that skips any layer, certain errors will exist that it will be impossible for the parent node to find. We thus implement a parsing order prediction module to learn a parsing order objective that outputs a parsing order score addition to the arc score to ensure that each pending node is attached to its parent only after all (or at least as many as possible) of its children have been collected.",
"Our scorer consists of two parts: a biaffine scorer for one-shot scoring and a parsing order scorer for parsing order guiding. For the biaffine scorer, we adopt the biaffine attention mechanism BIBREF18 to score all possible head-dependent pairs:",
"where $\\textbf {W}_{arc}$, $\\textbf {U}_{arc}$, $\\textbf {V}_{arc}$, $\\textbf {b}_{arc}$ are the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector, respectively.",
"If we perform greedy inference only on the $s_{arc}$ directly, as in Figure FIGREF6, at step $i$, the decoder tests every pair in the pending list, and although the current score fits the correct tree structure for this example, because backtracking is not allowed in the deterministic greedy inference, according to the maximum score $s_{arc}$, the edge selected in step $i$+1 is “root\"$\\rightarrow $“come\". This prevents the child nodes (“today\" and “.\") from finding the correct parent node in the subsequent step. Thus, the decoder is stuck with this error. This problem can be solved or mitigated by using a max spanning tree (MST) decoder or by adding beam search method to the inference, but neither guarantees maintaining linear time decoding. Therefore, we propose a new scorer for parsing order $s_{order}$. In the scoring stage, the parsing order score is passed to the decoder to guide it and prevent (as much as possible) resorting to erroneous choices.",
"We formally define the parsing order score for decoding. To decode the nodes at the bottom of the syntax tree first, we define the the parsing order priority as the layer “level\" or “position\" in the tree. The biaffine output score is the probability of edge (dependency) existence, between 0 and 1, so the greater the probability, the more likely an edge is to exist. Thus, our parsing order scorer gives a layer score for a node, and then, we add this layer score to the biaffine score. Consequently, the relative score of the same layer can be kept unchanged, and the higher the score of a node in the bottom layer, the higher its decoding priority will be. We therefore define $s_{order}$ as:",
"where $\\textbf {W}_{order}$ and $\\textbf {b}_{order} $ are parameters for the parsing order scorer. Finally, the one-shot arc score is:",
"Similarly, we use the biaffine scorer for dependency label classification. We apply MLPs to the contextualized representations before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time has each word as a dependent of its highest-scoring head (although at test time we ensure that the parse is a well-formed tree via the greedy parsing algorithm)."
],
[
"To parse the syntax tree $y$ for a sentence $x$ with length $l$, the easy-first model relies on an action-by-action process performed on pending. In the training stage, the loss is accumulated once per step (action), and the model is updated by gradient backpropagation according to a preset frequency. This prohibits parallelism during model training lack between and within sentences. Therefore, the traditional easy-first model was trained to maximize following probability:",
"where $\\emph {pending}_i$ is the pending list state at step $i$.",
"While for our proposed model, it uses the training method similar to that of graph-based models, in which the arc scores are all obtained in one-shot. Consequently, it does not rely on the pending list in the training phase and only uses the pending list to promote the process of linear parsing in the inference stage. Our model is trained to optimize the probability of the dependency tree $y$ when given a sentence $x$: $P_\\theta (y|x)$, which can be factorized as:",
"where $\\theta $ represents learnable parameters, $l$ denotes the length of the processing sentence, and $y^{arc}_i$, $y^{rel}_i$ denote the highest-scoring head and dependency relation for node $x_i$. Thus, our model factors the distribution according to a bottom-up tree structure.",
"Corresponding to multiple objectives, several parts compose the loss of our model. The overall training loss is the sum of three objectives:",
"where the loss for arc prediction $\\mathcal {L}^{arc}$ is the negative log-likelihood loss of the golden structure $y^{arc}$:",
"the loss for relation prediction $\\mathcal {L}^{rel}$ is implemented as the negative log-likelihood loss of the golden relation $y^{rel}$ with the golden structure $y^{arc}$,",
"and the loss for parsing order prediction $\\mathcal {L}^{order}$:",
"Because the parsing order score of each layer in the tree increases by 1, we frame it as a classification problem and therefore add a multi-class classifier module as the order scorer."
],
[
"For non-projective inference, we introduce two additional arc-building actions as follows.",
"$\\bullet $ NP-ATTACHLEFT($i$): attaches $p_{j}$ to $p_i$ where $j > i$, which builds an arc ($p_i$, $p_{j}$) headed by $p_i$, and removes $p_{j}$ from pending.",
"$\\bullet $ NP-ATTACHRIGHT($i$): attaches $p_{j}$ to $p_i$ where $j < i$ which builds an arc ($p_i$, $p_j$) headed by $p_i$, and removes $p_j$ from pending.",
"If we use the two arc-building actions for non-projective dependency trees directly on $s_{final}$, the time complexity will become $O(n^3)$, so we need to modify this algorithm to accommodate the non-projective dependency trees. Specifically, we no longer use $s_{final}$ directly for greedy search but instead divide each decision into two steps. The first step is to use the order score $s_{order}$ to sort the pending list in descending order. Then, the second step is to find the edge with the largest arc score $s_{arc}$ for this node in the first position of the pending list."
],
[
"The number of decoding steps to build a parse tree for a sentence is the same as its length, $n$. Combining this with the searching in the pending list (at each step, we need to find the highest-scoring pair in the pending list to attach. This has a runtime of $O(n)$. The time complexity of a full decoding is $O(n^2)$, which is equal to 1st-order non-projective graph-based parsing but more efficient than 1st-order projective parsing with $O(n^3)$ and other higher order graph parsing models. Compared with the current state-of-the-art transition-based parser STACKPTR BIBREF23, with the same decoding time complexity as ours, since our number of decoding takes $n$ steps while STACKPTR takes $2n-1$ steps for decoding and needs to compute the attention vector at each step, our model actually would be much faster than STACKPTR in decoding.",
"For the non-projective inference in our model, the complexity is still $O(n^2)$. Since the order score and the arc score are two parts that do not affect each other, we can sort the order scores with time complexity of $O$($n$log$n$) and then iterate in this descending order. The iteration time complexity is $O(n)$ and determining the arc is also $O(n)$, so the overall time complexity is $O$($n$log$n$) $+$ $O(n^2)$, simplifying to $O(n^2)$."
],
[
"We evaluate our parsing model on the English Penn Treebank (PTB), the Chinese Penn Treebank (CTB), treebanks from two CoNLL shared tasks and the Universal Dependency (UD) Treebanks, using unlabeled attachment scores (UAS) and labeled attachment scores (LAS) as the metrics. Punctuation is ignored as in previous work BIBREF18. For English and Chinese, we use the projective inference, while for other languages, we use the non-projective one."
],
[
"For English, we use the Stanford Dependency (SD 3.3.0) BIBREF37 conversion of the Penn Treebank BIBREF38, and follow the standard splitting convention for PTB, using sections 2-21 for training, section 22 as a development set and section 23 as a test set. We use the Stanford POS tagger BIBREF39 generate predicted POS tags.",
"For Chinese, we adopt the splitting convention for CTB BIBREF40 described in BIBREF19. The dependencies are converted with the Penn2Malt converter. Gold segmentation and POS tags are used as in previous work BIBREF19.",
"For the CoNLL Treebanks, we use the English treebank from the CoNLL-2008 shared task BIBREF41 and all 13 treebanks from the CoNLL-X shared task BIBREF42. The experimental settings are the same as BIBREF43.",
"For UD Treebanks, following the selection of BIBREF23, we take 12 treebanks from UD version 2.1 (Nivre et al. 2017): Bulgarian (bg), Catalan (ca), Czech (cs), Dutch (nl), English (en), French (fr), German (de), Italian (it), Norwegian (no), Romanian (ro), Russian (ru) and Spanish (es). We adopt the standard training/dev/test splits and use the universal POS tags provided in each treebank for all the languages."
],
[
"We use the GloVe BIBREF44 trained on Wikipedia and Gigaword as external embeddings for English parsing. For other languages, we use the word vectors from 157 languages trained on Wikipedia and Crawl using fastText BIBREF45. We use the extracted BERT layer features to enhance the performance on CoNLL-X and UD treebanks."
],
[
"The character embeddings are 8-dimensional and randomly initialized. In the character CNN, the convolutions have a window size of 3 and consist of 50 filters. We use 3 stacked bidirectional LSTMs with 512-dimensional hidden states each. The outputs of the BiLSTM employ a 512-dimensional MLP layer for the arc scorer, a 128-dimensional MLP layer for the relation scorer, and a 128-dimensional MLP layer for the parsing order scorer, with all using ReLU as the activation function. Additionally, for parsing the order score, since considering it a classification problem over parse tree layers, we set its range to $[0, 1, ..., 32]$."
],
[
"Parameter optimization is performed with the Adam optimizer with $\\beta _1$ = $\\beta _2$ = 0.9. We choose an initial learning rate of $\\eta _0$ = 0.001. The learning rate $\\eta $ is annealed by multiplying a fixed decay rate $\\rho $ = 0.75 when parsing performance stops increasing on validation sets. To reduce the effects of an exploding gradient, we use a gradient clipping of 5.0. For the BiLSTM, we use recurrent dropout with a drop rate of 0.33 between hidden states and 0.33 between layers. Following BIBREF18, we also use embedding dropout with a rate of 0.33 on all word, character, and POS tag embeddings."
],
[
"We now compare our model with several other recently proposed parsers as shown in Table TABREF9. Our global greedy parser significantly outperforms the easy-first parser in BIBREF14 (HT-LSTM) on both PTB and CTB. Compared with other graph- and transition-based parsers, our model is also competitive with the state-of-the-art on PTB when considering the UAS metric. Compared to state-of-the-art parsers in transition and graph types, BIAF and STACKPTR, respectively, our model gives better or comparable results but with much faster training and decoding. Additionally, with the help of pre-trained language models, ELMo or BERT, our model can achieve even greater results.",
"In order to explore the impact of the parsing order objective on the parsing performance, we replace the greedy inference with the traditional MST parsing algorithm (i.e., BIAF + parsing order objective), and the result is shown as “This work (MST)\", giving slight performance improvement compared to the greedy inference, which shows globally optimized decoding of graph model still takes its advantage. Besides, compared to the standard training objective for graph model based parser, the performance improvement is slight but still shows the proposed parsing order objective is indeed helpful."
],
[
"Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF."
],
[
"Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF."
],
[
"In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime."
],
[
"This paper presents a new global greedy parser in which we enable greedy parsing inference compatible with the global arc scoring of graph-based parsing models instead of the local feature scoring of transitional parsing models. The proposed parser can perform projective parsing when only using two arc-building actions, and it also supports non-projective parsing when introducing two extra non-projective arc-building actions. Compared to graph-based and transition-based parsers, our parser achieves a better tradeoff between parsing accuracy and efficiency by taking advantages of both graph-based models' training methods and transition-based models' linear time decoding strategies. Experimental results on 28 treebanks show the effectiveness of our parser by achieving good performance on 27 treebanks, including the PTB and CTB benchmarks."
]
],
"section_name": [
"Introduction",
"The General Greedy Parsing",
"Global Greedy Parsing Model",
"Global Greedy Parsing Model ::: Encoder",
"Global Greedy Parsing Model ::: Scorers",
"Global Greedy Parsing Model ::: Training Objectives",
"Global Greedy Parsing Model ::: Non-Projective Inference",
"Global Greedy Parsing Model ::: Time Complexity",
"Experiments",
"Experiments ::: Treebanks",
"Experiments ::: Implementation Details ::: Pre-trained Embeddings",
"Experiments ::: Implementation Details ::: Hyperparameters",
"Experiments ::: Implementation Details ::: Training",
"Experiments ::: Main Results",
"Experiments ::: CoNLL Results",
"Experiments ::: UD Results",
"Experiments ::: Runtime Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"64097394f3ee6d32daff2f82bf42ae5ca60ca497"
],
"answer": [
{
"evidence": [
"Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF.",
"Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF."
],
"extractive_spans": [
"model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF",
"our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF.",
"Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ecc13a2ba415c6d0c5828dae0fa2ae8d9bf872ce"
],
"answer": [
{
"evidence": [
"In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime.",
"FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU."
],
"extractive_spans": [],
"free_form_answer": "Proposed vs best baseline:\nDecoding: 8541 vs 8532 tokens/sec\nTraining: 8h vs 8h",
"highlighted_evidence": [
"The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest.",
"FLOAT SELECTED: Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are performance compared to former models?",
"How faster is training and decoding compared to former models?"
],
"question_id": [
"24014a040447013a8cf0c0f196274667320db79f",
"9aa52b898d029af615b95b18b79078e9bed3d766"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A fully built dependency tree for “The test may come today.” including part-of-speech (POS) tags and root token.",
"Figure 2: Overview for our global greedy parser.",
"Figure 3: Illustration of the reason of errors generated when only using biaffine arc score for easy-first parsing.",
"Table 1: Comparison of results on the test sets. “T”, “G” and “E” indicate transition-based, graph-based and easy-first models, respectively. The “G + E” represents the graph-based training while the easy-first algorithm is used for inference. Acronyms used: (g) – greedy, (b) – beam search, (re) – re-ranking, (3rd) – 3rd-order, (1st) – 1st-order. The “*” in the upper right corner of the results is because the original ELMo (Peters et al. 2018) has no Chinese version. We instead used the multilingual version “HIT-ELMo” pre-trained by (Che et al. 2018).",
"Table 2: UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers. Bi-Att is the bi-directional attention based parser (Cheng et al. 2016), and NeuroMST is the neural MST parser (Ma and Hovy 2017). “Best Published” includes the best results in recent years among (Koo et al. 2010), (Martins et al. 2011), (Martins, Almeida, and Smith 2013), (Lei et al. 2014), (Zhang et al. 2014), (Zhang and McDonald 2014), (Pitler and McDonald 2015), and (Cheng et al. 2016) in addition to the ones we listed above.",
"Table 4: Training time and decoding speed. The experimental environment is on the same machine with Intel i9 9900k CPU and NVIDIA 1080Ti GPU.",
"Table 3: UAS and LAS on test datasets of 12 treebanks from UD Treebanks, together with BIAF and STACKPTR for comparison."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table4-1.png",
"7-Table3-1.png"
]
} | [
"How faster is training and decoding compared to former models?"
] | [
[
"1911.08673-7-Table4-1.png",
"1911.08673-Experiments ::: Runtime Analysis-0"
]
] | [
"Proposed vs best baseline:\nDecoding: 8541 vs 8532 tokens/sec\nTraining: 8h vs 8h"
] | 634 |
1611.04642 | Link Prediction using Embedded Knowledge Graphs | Recent studies on knowledge base completion, the task of recovering missing facts based on observed facts, demonstrate the importance of learning embeddings from multi-step relations. Due to the size of knowledge bases, previous works manually design relation paths of observed triplets in symbolic space (e.g. random walk) to learn multi-step relations during training. However, these approaches suffer some limitations as most paths are not informative, and it is prohibitively expensive to consider all possible paths. To address the limitations, we propose learning to traverse in vector space directly without the need of symbolic space guidance. To remember the connections between related observed triplets and be able to adaptively change relation paths in vector space, we propose Implicit ReasoNets (IRNs), that is composed of a global memory and a controller module to learn multi-step relation paths in vector space and infer missing facts jointly without any human-designed procedure. Without using any axillary information, our proposed model achieves state-of-the-art results on popular knowledge base completion benchmarks. | {
"paragraphs": [
[],
[],
[],
[],
[],
[],
[
"We thank Scott Wen-Tau Yih, Kristina Toutanova, Jian Tang, Greg Yang, Adith Swaminathan, Xiaodong He, and Zachary Lipton for their thoughtful feedback and discussions.",
" Inference Steps in KBC Analysis: Applying IRNs to a Shortest Path Synthesis Task "
]
],
"section_name": [
"Introduction",
"Knowledge Base Completion Task",
"Proposed Model",
"Experimental Results",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"65a417a86f66a9b02aa9dc3a8f449db6f313013b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k."
],
"extractive_spans": [],
"free_form_answer": "WN18 and FB15k",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"What datasets are used to evaluate the model?"
],
"question_id": [
"b13d0e463d5eb6028cdaa0c36ac7de3b76b5e933"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: An overview of the IRN for KBC tasks.",
"Figure 2: A running example of the IRN architecture. Given the input (Obama, CITIZENSHIP,?), the model iteratively reformulates the input vector via the current input vector and the attention vector over the shared memory, and determines to stop when an answer is found.",
"Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k.",
"Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k, where |M | and Tmax represent the number of memory vectors and the maximum inference step, respectively.",
"Table 3: Hits@10 (%) in the relation category on FB15k. (M stands for Many)",
"Table 4: Interpret the state st in each step via finding the closest (entity, relation) tuple, and the corresponding the top-3 predictions and termination probability. “Rank” stands for the rank of the target entity and “Term. Prob.” stands for termination probability.",
"Table 5: Shared memory visualization in an IRN trained on FB15k, where we show the top 8 relations, ranked by the average attention scores, of some memory cells. The first row in each column represents the interpreted relation.",
"Figure 3: An example of the shortest path synthesis dataset, given an input “215 ; 493” (Answer: 215→ 101→ 493). Note that we only show the nodes that are related to this example here. The corresponding termination probability and prediction results are shown in the table. The model terminates at step 5."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"12-Figure3-1.png"
]
} | [
"What datasets are used to evaluate the model?"
] | [
[
"1611.04642-5-Table1-1.png"
]
] | [
"WN18 and FB15k"
] | 638 |
1910.09295 | Localization of Fake News Detection via Multitask Transfer Learning | The use of the internet as a fast medium of spreading fake news reinforces the need for computational tools that combat it. Techniques that train fake news classifiers exist, but they all assume an abundance of resources including large labeled datasets and expert-curated corpora, which low-resource languages may not have. In this paper, we show that Transfer Learning (TL) can be used to train robust fake news classifiers from little data, achieving 91% accuracy on a fake news dataset in the low-resourced Filipino language, reducing the error by 14% compared to established few-shot baselines. Furthermore, lifting ideas from multitask learning, we show that augmenting transformer-based transfer techniques with auxiliary language modeling losses improves their performance by adapting to stylometry. Using this, we improve TL performance by 4-6%, achieving an accuracy of 96% on our best model. We perform ablations that establish the causality of attention-based TL techniques to state-of-the-art results, as well as the model's capability to learn and predict via stylometry. Lastly, we show that our method generalizes well to different types of news articles, including political news, entertainment news, and opinion articles. | {
"paragraphs": [
[
"There is a growing interest in research revolving around automated fake news detection and fact checking as its need increases due to the dangerous speed fake news spreads on social media BIBREF0. With as much as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake is a pressing need.",
"Numerous recent studies have tackled fake news detection with various techniques. The work of BIBREF1 identifies and verifies the stance of a headline with respect to its content as a first step in identifying potential fake news, achieving an accuracy of 89.59% on a publicly available article stance dataset. The work of BIBREF2 uses a deep learning approach and integrates multiple sources to assign a degree of “fakeness” to an article, beating representative baselines on a publicly-available fake news dataset.",
"More recent approaches also incorporate newer, novel methods to aid in detection. The work of BIBREF3 handles fake news detection as a specific case of cross-level stance detection. In addition, their work also uses the presence of an “inverted pyramid” structure as an indicator of real news, using a neural network to encode a given article's structure.",
"While these approaches are valid and robust, most, if not all, modern fake news detection techniques assume the existence of large, expertly-annotated corpora to train models from scratch. Both BIBREF1 and BIBREF3 use the Fake News Challenge dataset, with 49,972 labeled stances for each headline-body pairs. BIBREF2, on the other hand, uses the LIAR dataset BIBREF4, which contains 12,836 labeled short statements as well as sources to support the labels.",
"This requirement for large datasets to effectively train fake news detection models from scratch makes it difficult to adapt these techniques into low-resource languages. Our work focuses on the use of Transfer Learning (TL) to evade this data scarcity problem.",
"We make three contributions.",
"First, we construct the first fake news dataset in the low-resourced Filipino language, alleviating data scarcity for research in this domain.",
"Second, we show that TL techniques such as ULMFiT BIBREF5, BERT BIBREF6, and GPT-2 BIBREF7, BIBREF8 perform better compared to few-shot techniques by a considerable margin.",
"Third, we show that auxiliary language modeling losses BIBREF9, BIBREF10 allows transformers to adapt to the stylometry of downstream tasks, which produces more robust fake news classifiers."
],
[
"We provide a baseline model as a comparison point, using a few-shot learning-based technique to benchmark transfer learning against methods designed with low resource settings in mind. After which, we show three TL techniques that we studied and adapted to the task of fake news detection."
],
[
"We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.",
"A siamese network is composed of weight-tied twin networks that accept distinct inputs, joined by an energy function, which computes a distance metric between the representations given by both twins. The network could then be trained to differentiate between classes in order to perform classification BIBREF11.",
"We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long-Short Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations.",
"Each twin embeds and computes representations for a pair of sequences, with the prediction vector $p$ computed as:",
"where $o_i$ denotes the output representation of each siamese twin $i$ , $W_{\\textnormal {out}}$ and $b_{\\textnormal {out}}$ denote the weight matrix and bias of the output layer, and $\\sigma $ denotes the sigmoid activation function."
],
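A minimal sketch of such a siamese twin and its prediction vector (PyTorch). The paper does not spell out how the two twin outputs are combined before the sigmoid output layer, so the element-wise absolute difference below, like all class and variable names, is our illustrative assumption; the embedding and hidden sizes follow the experimental setup reported later (300 and 512).

import torch
import torch.nn as nn

class SiameseTwin(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(tokens))
        return self.ff(h[-1])                  # o_i: (batch, hidden)

class SiameseClassifier(nn.Module):
    def __init__(self, twin, hidden=512):
        super().__init__()
        self.twin = twin                       # weight-tied: the same module encodes both inputs
        self.out = nn.Linear(hidden, 1)

    def forward(self, x1, x2):
        o1, o2 = self.twin(x1), self.twin(x2)
        # Combine the twin outputs (absolute difference is an assumption here).
        p = torch.sigmoid(self.out(torch.abs(o1 - o2)))
        return p.squeeze(-1)                   # prediction vector p in (0, 1)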
[
"ULMFiT BIBREF5 was introduced as a TL method for Natural Language Processing (NLP) that works akin to ImageNet BIBREF13 pretraining in Computer Vision.",
"It uses an AWD-LSTM BIBREF14 pretrained on a language modeling objective as a base model, which is then finetuned to a downstream task in two steps.",
"First, the language model is finetuned to the text of the target task to adapt to the task syntactically. Second, a classification layer is appended to the model and is finetuned to the classification task conservatively. During finetuning, multiple different techniques are introduced to prevent catastrophic forgetting.",
"ULMFiT delivers state-of-the-art performance for text classification, and is notable for being able to set comparable scores with as little as 1000 samples of data, making it attractive for use in low-resource settings BIBREF5."
],
[
"BERT is a Transformer-based BIBREF15 language model designed to pretrain “deep bidirectional representations” that can be finetuned to different tasks, with state-of-the-art results achieved in multiple language understanding benchmarks BIBREF6.",
"As with all Transformers, it draws power from a mechanism called “Attention” BIBREF16, which allows the model to compute weighted importance for each token in a sequence, effectively pinpointing context reference BIBREF15. Precisely, we compute attention on a set of queries packed as a matrix $Q$ on key and value matrices $K$ and $V$, respectively, as:",
"where $d_{k}$ is the dimensions of the key matrix $K$. Attention allows the Transformer to refer to multiple positions in a sequence for context at any given time regardless of distance, which is an advantage over Recurrent Neural Networks (RNN).",
"BERT's advantage over ULMFiT is its bidirectionality, leveraging both left and right context using a pretraining method called “Masked Language Modeling.” In addition, BERT also benefits from being deep, allowing it to capture more context and information. BERT-Base, the smallest BERT model, has 12 layers (768 units in each hidden layer) and 12 attention heads for a total of 110M parameters. Its larger sibling, BERT-Large, has 24 layers (1024 units in each hidden layer) and 16 attention heads for a total of 340M parameters."
],
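For concreteness, a small sketch of the scaled dot-product attention just described (PyTorch, a single head with no masking): a simplified view of what BERT's multi-head attention computes.

import math
import torch

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (batch, seq_len, d_k); each query attends to every position.
    d_k = K.size(-1)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)    # weighted importance per query token
    return torch.matmul(weights, V)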
[
"The GPT-2 BIBREF8 technique builds up from the original GPT BIBREF7. Its main contribution is the way it is trained. With an improved architecture, it learns to do multiple tasks by just training on vanilla language modeling.",
"Architecture-wise, it is a Transformer-based model similar to BERT, with a few differences. It uses two feed-forward layers per transformer “block,” in addition to using “delayed residuals” which allows the model to choose which transformed representations to output.",
"GPT-2 is notable for being extremely deep, with 1.5B parameters, 10x more than the original GPT architecture. This gives it more flexibility in learning tasks unsupervised from language modeling, especially when trained on a very large unlabeled corpus."
],
[
"BERT and GPT-2 both lack an explicit “language model finetuning step,” which gives ULMFiT an advantage where it learns to adapt to the stylometry and linguistic features of the text used by its target task. Motivated by this, we propose to augment Transformer-based TL techniques with a language model finetuning step.",
"Motivated by recent advancements in multitask learning, we finetune the model to the stylometry of the target task at the same time as we finetune the classifier, instead of setting it as a separate step. This produces two losses to be optimized together during training, and ensures that no task (stylometric adaptation or classification) will be prioritized over the other. This concept has been proposed and explored to improve the performance of transfer learning in multiple language tasks BIBREF9, BIBREF10.",
"We show that this method improves performance on both BERT and GPT-2, given that it learns to adapt to the idiosyncracies of its target task in a similar way that ULMFiT also does."
],
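A hedged sketch of the joint objective described above: one forward pass yields both language-modeling logits and classification logits, and the two cross-entropy losses are optimized together. The model interface and the equal 1:1 weighting of the losses are our assumptions for illustration, not the authors' exact setup.

import torch.nn.functional as F

def multitask_loss(model, input_ids, labels):
    # Hypothetical model returning LM logits and veracity-classification logits.
    lm_logits, clf_logits = model(input_ids)

    # Language modeling: predict each next token (inputs shifted by one).
    lm_loss = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        input_ids[:, 1:].reshape(-1))

    # Classification: real vs. fake.
    clf_loss = F.cross_entropy(clf_logits, labels)

    # Optimize both together so neither task is prioritized over the other.
    return lm_loss + clf_loss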
[
"We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.",
"For preprocessing, we only perform tokenization on our dataset, specifically “Byte-Pair Encoding” (BPE) BIBREF17. BPE is a form of fixed-vocabulary subword tokenization that considers subword units as the most primitive form of entity (i.e. a token) instead of canonical words (i.e. “I am walking today” $\\rightarrow $ “I am walk ##ing to ##day”). BPE is useful as it allows our model to represent out-of-vocabulary (OOV) words unlike standard tokenization. In addition, it helps language models in learning morphologically-rich languages as it now treats morphemes as primary enitites instead of canonical word tokens.",
"For training/finetuning the classifiers, we use a 70%-30% train-test split of the dataset."
],
[
"To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpora is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpora.",
"Preprocessing is similar to the fake news dataset, with the corpus only being lightly preprocessed and tokenized using Byte-Pair Encoding.",
"Corpus statistics for the pretraining corpora are shown on table TABREF17."
],
[
"We train a siamese recurrent neural network as our baseline. For each twin, we use 300 dimensions for the embedding layer and a hidden size of 512 for all hidden state vectors.",
"To optimize the network, we use a regularized cross-entropy objective of the following form:",
"where y$(x_1, x_2)$ = 1 when $x_1$ and $x_2$ are from the same class and 0 otherwise. We use the Adam optimizer BIBREF19 with an initial learning rate of 1e-4 to train the network for a maximum of 500 epochs."
],
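The exact form of the regularizer is not reproduced above; a minimal sketch of a pairwise binary cross-entropy with L2 weight decay standing in for the regularization term (our assumption, with a placeholder decay value), applied to a siamese classifier like the one sketched earlier, where model denotes that classifier:

import torch
import torch.nn.functional as F

def pair_loss(p, y):
    # p: predicted probability that the pair shares a class; y in {0, 1}.
    return F.binary_cross_entropy(p, y.float())

# Weight decay stands in for the regularization term (the value is a placeholder).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)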
[
"We pretrain a cased BERT-Base model using our prepared unlabeled text corpora using Google's provided pretraining scripts. For the masked language model pretraining objective, we use a 0.15 probability of a word being masked. We also set the maximum number of masked language model predictions to 20, and a maximum sequence length of 512. For training, we use a learning rate of 1e-4 and a batch size of 256. We train the model for 1,000,000 steps with 10,000 steps of learning rate warmup for 157 hours on a Google Cloud Tensor processing Unit (TPU) v3-8.",
"For GPT-2, we pretrain a GPT-2 Transformer model on our prepared text corpora using language modeling as its sole pretraining task, according to the specifications of BIBREF8. We use an embedding dimension of 410, a hidden dimension of 2100, and a maximum sequence length of 256. We use 10 attention heads per multihead attention block, with 16 blocks composing the encoder of the transformer. We use dropout on all linear layers to a probability of 0.1. We initialize all parameters to a standard deviation of 0.02. For training, we use a learning rate of 2.5e-4, and a batch size of 32, much smaller than BERT considering the large size of the model. We train the model for 200 epochs with 1,000 steps of learning rate warmup using the Adam optimizer. The model was pretrained for 178 hours on a machine with one NVIDIA Tesla V100 GPU.",
"For ULMFiT, we pretrain a 3-layer AWD-LSTM model with an embedding size of 400 and a hidden size of 1150. We set the dropout values for the embedding, the RNN input, the hidden-to-hidden transition, and the RNN output to (0.1, 0.3, 0.3, 0.4) respectively. We use a weight dropout of 0.5 on the LSTM’s recurrent weight matrices. The model was trained for 30 epochs with a learning rate of 1e-3, a batch size of 128, and a weight decay of 0.1. We use the Adam optimizer and use slanted triangular learning rate schedules BIBREF5. We train the model on a machine with one NVIDIA Tesla V100 GPU for a total of 11 hours.",
"For each pretraining scheme, we checkpoint models every epoch to preserve a copy of the weights such that we may restore them once the model starts overfitting. This is done as an extra regularization technique."
],
[
"We finetune our models to the target fake news classification task using the pretrained weights with an appended classification layer or head.",
"For BERT, we append a classification head composed of a single linear layer followed by a softmax transformation to the transformer model. We then finetune our BERT-Base model on the fake news classification task for 3 epochs, using a batch size of 32, and a learning rate of 2e-5.",
"For GPT-2, our classification head is first comprised of a layer normalization transform, followed by a linear layer, then a softmax transform. We finetune the pretrained GPT-2 transformer for 3 epochs, using a batch size of 32, and a learning rate of 3e-5.",
"For ULMFiT, we perform language model finetuning on the fake news dataset (appending no extra classification heads yet) for a total of 10 epochs, using a learning rate of 1e-2, a batch size of 80, and weight decay of 0.3. For the final ULMFiT finetuning stage, we append a compound classification head (linear $\\rightarrow $ batch normalization $\\rightarrow $ ReLU $\\rightarrow $ linear $\\rightarrow $ batch normalization $\\rightarrow $ softmax). We then finetune for 5 epochs, gradually unfreezing layers from the last to the first until all layers are unfrozen on the fourth epoch. We use a learning rate of 1e-2 and set Adam's $\\alpha $ and $\\beta $ parameters to 0.8 and 0.7, respectively.",
"To show the efficacy of Multitask Finetuning, we augment BERT and GPT-2 to use this finetuning setup with their classification heads. We finetune both models to the target task for 3 epochs, using a batch size of 32, and a learning rate of 3e-5. For optimization, we use Adam with a warmup steps of 10% the number of steps, comprising 3 epochs."
],
[
"To study the generalizability of the model to different news domains, we test our models against test cases not found in the training dataset. We mainly focus on three domains: political news, opinion articles, and entertainment/gossip articles. Articles used for testing are sourced from the same websites that the training dataset was taken from."
],
[
"Our baseline model, the siamese recurrent network, achieved an accuracy of 77.42% on the test set of the fake news classification task.",
"The transfer learning methods gave comparable scores. BERT finetuned to a final 87.47% accuracy, a 10.05% improvement over the siamese network's performance. GPT-2 finetuned to a final accuracy of 90.99%, a 13.57% improvement from the baseline performance. ULMFiT finetuning gave a final accuracy of 91.59%, an improvement of 14.17% over the baseline Siamese Network.",
"We could see that TL techniques outperformed the siamese network baseline, which we hypothesize is due to the intact pretrained knowledge in the language models used to finetune the classifiers. The pretraining step aided the model in forming relationships between text, and thus, performed better at stylometric based tasks with little finetuning.",
"The model results are all summarized in table TABREF26."
],
[
"One of the most surprising results is that BERT and GPT-2 performed worse than ULMFiT in the fake news classification task despite being deeper models capable of more complex relationships between data.",
"We hypothesize that ULMFiT achieved better accuracy because of its additional language model finetuning step. We provide evidence for this assumption with an additional experiment that shows a decrease in performance when the language model finetuning step is removed, droppping ULMFiT's accuracy to 78.11%, making it only perform marginally better than the baseline model. Results for this experiment are outlined in Table TABREF28",
"In this finetuning stage, the model is said to “adapt to the idiosyncracies of the task it is solving” BIBREF5. Given that our techniques rely on linguistic cues and features to make accurate predictions, having the model adapt to the stylometry or “writing style” of an article will therefore improve performance."
],
[
"We used a multitask finetuning technique over the standard finetuning steps for BERT and GPT-2, motivated by the advantage that language model finetuning provides to ULMFiT, and found that it greatly improves the performance of our models.",
"BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. This provides evidence towards our hypothesis that a language model finetuning step will allow transformer-based TL techniques to perform better, given their inherent advantage in modeling complexity over more shallow models such as the AWD-LSTM used by ULMFiT. Rersults for this experiment are outlined in Table TABREF30."
],
[
"Several ablation studies are performed to establish causation between the model architectures and the performance boosts in the study."
],
[
"An ablation on pretraining was done to establish evidence that pretraining before finetuning accounts for a significant boost in performance over the baseline model. Using non-pretrained models, we finetune for the fake news classification task using the same settings as in the prior experiments.",
"In Table TABREF32, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the standard finetuning setup.",
"This provides evidence that the pretraining step is necessary in achieving state-of-the-art performance."
],
[
"An ablation study was done to establish causality between the multiheaded nature of the attention mechanisms and state-of-the-art performance. We posit that since the model can refer to multiple context points at once, it improves in performance.",
"For this experiment, we performed several pretraining-finetuning setups with varied numbers of attention heads using the multitask-based finetuning scheme. Using a pretrained GPT-2 model, attention heads were masked with zero-tensors to downsample the number of positions the model could attend to at one time.",
"As shown in Table TABREF34, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at once, degrades the performance to less than the performance of 10 heads using the standard finetuning scheme. This shows that more attention heads, thereby attending to multiple different contexts at once, is important to boosting performance to state-of-the-art results.",
"While increasing the number of attention heads improves performance, keeping on adding extra heads will not result to an equivalent boost as the performance plateaus after a number of heads.",
"As shown in Figure FIGREF35, the performance boost of the model plateaus after 10 attention heads, which was the default used in the study. While the performance of 16 heads is greater than 10, it is only a marginal improvement, and does not justify the added costs to training with more attention heads."
],
[
"To supplement our understanding of the features our models learn and establish empirical difference in their stylometries, we use two stylometric tests traditionally used for authorship attribution: Mendenhall's Characteristic Curves BIBREF20 and John Burrow's Delta Method BIBREF21.",
"We provide a characteristic curve comparison to establish differences between real and fake news. For the rest of this section, we refer to the characteristic curves on Figure FIGREF36.",
"When looking at the y-axis, there is a big difference in word count. The fake news corpora has twice the amount of words as the real news corpora. This means that fake news articles are at average lengthier than real news articles. The only differences seen in the x-axis is the order of appearance of word lengths 6, 7, and 1. The characteristic curves also exhibit differences in trend. While the head and tail look similar, the body show different trends. When graphing the corpora by news category, the heads and tails look similar to the general real and fake news characteristic curve but the body exhibits a trend different from the general corpora. This difference in trend may be attributed to either a lack of text data to properly represent real and fake news or the existence of a stylistic difference between real and fake news.",
"We also use Burrow’s Delta method to see a numeric distance between text samples. Using the labeled news article corpora, we compare samples outside of the corpora towards real and fake news to see how similar they are in terms of vocabulary distance. The test produces smaller distance for the correct label, which further reaffirms our hypothesis that there is a stylistic difference between the labels. However, the difference in distance between real and fake news against the sample is not significantly large. For articles on politics, business, entertainment, and viral events, the test generates distances that are significant. Meanwhile news in the safety, sports, technology, infrastructure, educational, and health categories have negligible differences in distance. This suggests that some categories are written similarly despite veracity."
],
[
"All the TL techniques were pretrained with a language modeling-based task. While language modeling has been empirically proven as a good pretraining task, we surmise that other pretraining tasks could replace or support it.",
"Since automatic fake news detection uses stylometric information (i.e. writing style, language cues), we predict that the task could benefit from pretraining objectives that also learn stylometric information such as authorship attribution."
],
[
"When testing on three different types of articles (Political News, Opinion, Entertainment/Gossip), we find that writing style is a prominent indicator for fake articles, supporting previous findings regarding writing style in fake news detection BIBREF22.",
"Supported by our findings on the stylometric differences of fake and real news, we show that the model predicts a label based on the test article's stylometry. It produces correct labels when tested on real and fake news.",
"We provide further evidence that the models learn stylometry by testing on out-of-domain articles, particularly opinion and gossip articles. While these articles aren't necessarily real or fake, their stylometries are akin to real and fake articles respectively, and so are classified as such."
],
[
"In this paper, we show that TL techniques can be used to train robust fake news classifiers in low-resource settings, with TL methods performing better than few-shot techniques, despite being a setting they are designed in mind with.",
"We also show the significance of language model finetuning for tasks that involve stylometric cues, with ULMFiT performing better than transformer-based techniques with deeper language model backbones. Motivated by this, we augment the methodology with a multitask learning-inspired finetuning technique that allowed transformer-based transfer learning techniques to adapt to the stylometry of a target task, much like ULMFiT, resulting in better performance.",
"For future work, we propose that more pretraining tasks be explored, particularly ones that learn stylometric information inherently (such as authorship attribution)."
],
[
"The authors would like to acknowledge the efforts of VeraFiles and the National Union of Journalists in the Philippines (NUJP) for their work covering and combating the spread of fake news.",
"We are partially supported by Google's Tensoflow Research Cloud (TFRC) program. Access to the TPU units provided by the program allowed the BERT models in this paper, as well as the countless experiments that brought it to fruition, possible."
]
],
"section_name": [
"Introduction",
"Methods",
"Methods ::: Baseline",
"Methods ::: ULMFiT",
"Methods ::: BERT",
"Methods ::: GPT-2",
"Methods ::: Multitask Finetuning",
"Experimental Setup ::: Fake News Dataset",
"Experimental Setup ::: Pretraining Corpora",
"Experimental Setup ::: Siamese Network Training",
"Experimental Setup ::: Transfer Pretraining",
"Experimental Setup ::: Finetuning",
"Experimental Setup ::: Generalizability Across Domains",
"Results and Discussion ::: Classification Results",
"Results and Discussion ::: Language Model Finetuning Significance",
"Results and Discussion ::: Multitask-based Finetuning",
"Ablation Studies",
"Ablation Studies ::: Pretraining Effects",
"Ablation Studies ::: Attention Head Effects",
"Stylometric Tests",
"Further Discussions ::: Pretraining Tasks",
"Further Discussions ::: Generalizability Across Domains",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f32e077bbd25d52fe1df2ba0860eb19e75b996dd"
],
"answer": [
{
"evidence": [
"To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpora is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpora."
],
"extractive_spans": [
"WikiText-TL-39"
],
"free_form_answer": "",
"highlighted_evidence": [
"For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b3fc80a4007c648b967be0a2737754e91ecc4cd6"
],
"answer": [
{
"evidence": [
"We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera."
],
"extractive_spans": [
"3,206"
],
"free_form_answer": "",
"highlighted_evidence": [
"We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b1965be20f1363cfeb9c08e0b0a5e24454012003"
],
"answer": [
{
"evidence": [
"We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera."
],
"extractive_spans": [],
"free_form_answer": "Online sites tagged as fake news site by Verafiles and NUJP and news website in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera",
"highlighted_evidence": [
"We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a3be83d56fc38b0af25827a52676a19791f15cb1"
],
"answer": [
{
"evidence": [
"We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.",
"We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long-Short Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations."
],
"extractive_spans": [],
"free_form_answer": "Siamese neural network consisting of an embedding layer, a LSTM layer and a feed-forward layer with ReLU activations",
"highlighted_evidence": [
"We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.",
"We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long-Short Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What other datasets are used?",
"What is the size of the dataset?",
"What is the source of the dataset?",
"What were the baselines?"
],
"question_id": [
"50e3fd6778dadf8ec0ff589aa8b18c61bdcacd41",
"c5980fe1a0c53bce1502cc674c8a2ed8c311f936",
"7d3c036ec514d9c09c612a214498fc99bf163752",
"ef7b62a705f887326b7ebacbd62567ee1f2129b3"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics for the WikiText-TL-39 Dataset.",
"Table 2: Final model results. Pretraining time refers to the number of hours the model took to finish the pretraining objective (masked-language modeling and next-sentence prediction for BERT, and language modeling for GPT-2 and ULMFiT (AWD-LSTM), respectively. Finetuning time refers to minutes per epoch. BERT and GPT-2 were finetuned for 3 epochs, while ULMFiT was finetuned for 5.",
"Table 3: ULMFiT results with and without language model finetuning. Removing the language model finetuning step shows a significant drop in performance, giving evidence to the hypothesis that such a step improves the model by adapting to its stylometry.",
"Table 4: ULMFiT compared to transfer learning techniques augmented with multitask finetuning. Including a language modeling finetuning task to the transformerbased transfer learning techniques improved their performance, with GPT-2 outperforming ULMFiT by 4.69%. “Val. Accuracy” in this table refers to validation accuracy at test time.",
"Table 5: An ablation study on the effects of pretraining for multitasking-based and standard GPT-2 finetuning. Results show that pretraining greatly accounts for almost half of performance on both finetuning techniques. “Acc. Inc.” refers to the boost in performance contributed by the pretraining step. “% of Perf.” refers to the percentage of the total performance that the pretraining step contributes.",
"Figure 1: Ablation showing accuracy and loss curves with respect to attention heads.",
"Table 6: An ablation study on the effect of multiple heads in the attention mechanisms. The results show that increasing the number of heads improves performance, though this plateaus at 10 attention heads. All ablations use the multitask-based finetuning method. “Effect” refers to the increase or decrease of accuracy as the heads are removed. Note that 10 heads is the default used throughout the study.",
"Figure 2: Comparison of the characteristic curves of fake news and real news."
],
"file": [
"4-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Figure1-1.png",
"7-Table6-1.png",
"8-Figure2-1.png"
]
} | [
"What is the source of the dataset?",
"What were the baselines?"
] | [
[
"1910.09295-Experimental Setup ::: Fake News Dataset-0"
],
[
"1910.09295-Methods ::: Baseline-2",
"1910.09295-Methods ::: Baseline-0"
]
] | [
"Online sites tagged as fake news site by Verafiles and NUJP and news website in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera",
"Siamese neural network consisting of an embedding layer, a LSTM layer and a feed-forward layer with ReLU activations"
] | 639 |
1909.03242 | MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims | We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction. | {
"paragraphs": [
[
"Misinformation and disinformation are two of the most pertinent and difficult challenges of the information age, exacerbated by the popularity of social media. In an effort to counter this, a significant amount of manual labour has been invested in fact checking claims, often collecting the results of these manual checks on fact checking portals or websites such as politifact.com or snopes.com. In a parallel development, researchers have recently started to view fact checking as a task that can be partially automated, using machine learning and NLP to automatically predict the veracity of claims. However, existing efforts either use small datasets consisting of naturally occurring claims (e.g. BIBREF0 , BIBREF1 ), or datasets consisting of artificially constructed claims such as FEVER BIBREF2 . While the latter offer valuable contributions to further automatic claim verification work, they cannot replace real-world datasets."
],
[
"Over the past few years, a variety of mostly small datasets related to fact checking have been released. An overview over core datasets is given in Table TABREF4 , and a version of this table extended with the number of documents, source of annotations and SoA performances can be found in the appendix (Table TABREF1 ). The datasets can be grouped into four categories (I–IV). Category I contains datasets aimed at testing how well the veracity of a claim can be predicted using the claim alone, without context or evidence documents. Category II contains datasets bundled with documents related to each claim – either topically related to provide context, or serving as evidence. Those documents are, however, not annotated. Category III is for predicting veracity; they encourage retrieving evidence documents as part of their task description, but do not distribute them. Finally, category IV comprises datasets annotated for both veracity and stance. Thus, every document is annotated with a label indicating whether the document supports or denies the claim, or is unrelated to it. Additional labels can then be added to the datasets to better predict veracity, for instance by jointly training stance and veracity prediction models.",
"Methods not shown in the table, but related to fact checking, are stance detection for claims BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , satire detection BIBREF21 , clickbait detection BIBREF22 , conspiracy news detection BIBREF23 , rumour cascade detection BIBREF24 and claim perspectives detection BIBREF25 .",
"Claims are obtained from a variety of sources, including Wikipedia, Twitter, criminal reports and fact checking websites such as politifact.com and snopes.com. The same goes for documents – these are often websites obtained through Web search queries, or Wikipedia documents, tweets or Facebook posts. Most datasets contain a fairly small number of claims, and those that do not, often lack evidence documents. An exception is BIBREF2 , who create a Wikipedia-based fact checking dataset. While a good testbed for developing deep neural architectures, their dataset is artificially constructed and can thus not take metadata about claims into account.",
"Contributions: We provide a dataset that, uniquely among extant datasets, contains a large number of naturally occurring claims and rich additional meta-information."
],
[
"Fact checking methods partly depend on the type of dataset used. Methods only taking into account claims typically encode those with CNNs or RNNs BIBREF3 , BIBREF4 , and potentially encode metadata BIBREF3 in a similar way. Methods for small datasets often use hand-crafted features that are a mix of bag of word and other lexical features, e.g. LIWC, and then use those as input to a SVM or MLP BIBREF0 , BIBREF4 , BIBREF13 . Some use additional Twitter-specific features BIBREF26 . More involved methods taking into account evidence documents, often trained on larger datasets, consist of evidence identification and ranking following a neural model that measures the compatibility between claim and evidence BIBREF2 , BIBREF27 , BIBREF28 .",
"Contributions: The latter category above is the most related to our paper as we consider evidence documents. However, existing models are not trained jointly for evidence identification, or for stance and veracity prediction, but rather employ a pipeline approach. Here, we show that a joint approach that learns to weigh evidence pages by their importance for veracity prediction can improve downstream veracity prediction performance."
],
[
"We crawled a total of 43,837 claims with their metadata (see details in Table TABREF1 ). We present the data collection in terms of selecting sources, crawling claims and associated metadata (Section SECREF9 ); retrieving evidence pages; and linking entities in the crawled claims (Section SECREF13 )."
],
[
"We crawled all active fact checking websites in English listed by Duke Reporters' Lab and on the Fact Checking Wikipedia page. This resulted in 38 websites in total (shown in Table TABREF1 ). Ten websites could not be crawled, as further detailed in Table TABREF40 . In the later experimental descriptions, we refer to the part of the dataset crawled from a specific fact checking website as a domain, and we refer to each website as source.",
"From each source, we crawled the ID, claim, label, URL, reason for label, categories, person making the claim (speaker), person fact checking the claim (checker), tags, article title, publication date, claim date, as well as the full text that appears when the claim is clicked. Lastly, the above full text contains hyperlinks, so we further crawled the full text that appears when each of those hyperlinks are clicked (outlinks).",
"There were a number of crawling issues, e.g. security protection of websites with SSL/TLS protocols, time out, URLs that pointed to pdf files instead of HTML content, or unresolvable encoding. In all of these cases, the content could not be retrieved. For some websites, no veracity labels were available, in which case, they were not selected as domains for training a veracity prediction model. Moreover, not all types of metadata (category, speaker, checker, tags, claim date, publish date) were available for all websites; and availability of articles and full texts differs as well.",
"We performed semi-automatic cleansing of the dataset as follows. First, we double-checked that the veracity labels would not appear in claims. For some domains, the first or last sentence of the claim would sometimes contain the veracity label, in which case we would discard either the full sentence or part of the sentence. Next, we checked the dataset for duplicate claims. We found 202 such instances, 69 of them with different labels. Upon manual inspection, this was mainly due to them appearing on different websites, with labels not differing much in practice (e.g. `Not true', vs. `Mostly False'). We made sure that all such duplicate claims would be in the training split of the dataset, so that the models would not have an unfair advantage. Finally, we performed some minor manual merging of label types for the same domain where it was clear that they were supposed to denote the same level of veracity (e.g. `distorts', `distorts the facts').",
"This resulted in a total of 36,534 claims with their metadata. For the purposes of fact verification, we discarded instances with labels that occur fewer than 5 times, resulting in 34,918 claims. The number of instances, as well as labels per domain, are shown in Table TABREF34 and label names in Table TABREF43 in the appendix. The dataset is split into a training part (80%) and a development and testing part (10% each) in a label-stratified manner. Note that the domains vary in the number of labels, ranging from 2 to 27. Labels include both straight-forward ratings of veracity (`correct', `incorrect'), but also labels that would be more difficult to map onto a veracity scale (e.g. `grass roots movement!', `misattributed', `not the whole story'). We therefore do not postprocess label types across domains to map them onto the same scale, and rather treat them as is. In the methodology section (Section SECREF4 ), we show how a model can be trained on this dataset regardless by framing this multi-domain veracity prediction task as a multi-task learning (MTL) one."
],
[
"The text of each claim is submitted verbatim as a query to the Google Search API (without quotes). The 10 most highly ranked search results are retrieved, for each of which we save the title; Google search rank; URL; time stamp of last update; search snippet; as well as the full Web page. We acknowledge that search results change over time, which might have an effect on veracity prediction. However, studying such temporal effects is outside the scope of this paper. Similar to Web crawling claims, as described in Section SECREF9 , the corresponding Web pages can in some cases not be retrieved, in which case fewer than 10 evidence pages are available. The resulting evidence pages are from a wide variety of URL domains, though with a predictable skew towards popular websites, such as Wikipedia or The Guardian (see Table TABREF42 in the appendix for detailed statistics)."
],
[
"To better understand what claims are about, we conduct entity linking for all claims. Specifically, mentions of people, places, organisations, and other named entities within a claim are recognised and linked to their respective Wikipedia pages, if available. Where there are different entities with the same name, they are disambiguated. For this, we apply the state-of-the-art neural entity linking model by BIBREF29 . This results in a total of 25,763 entities detected and linked to Wikipedia, with a total of 15,351 claims involved, meaning that 42% of all claims contain entities that can be linked to Wikipedia. Later on, we use entities as additional metadata (see Section SECREF31 ). The distribution of claim numbers according to the number of entities they contain is shown in Figure FIGREF15 . We observe that the majority of claims have one to four entities, and the maximum number of 35 entities occurs in one claim only. Out of the 25,763 entities, 2,767 are unique entities. The top 30 most frequent entities are listed in Table TABREF14 . This clearly shows that most of the claims involve entities related to the United States, which is to be expected, as most of the fact checking websites are US-based."
],
[
"We train several models to predict the veracity of claims. Those fall into two categories: those that only consider the claims themselves, and those that encode evidence pages as well. In addition, claim metadata (speaker, checker, linked entities) is optionally encoded for both categories of models, and ablation studies with and without that metadata are shown. We first describe the base model used in Section SECREF16 , followed by introducing our novel evidence ranking and veracity prediction model in Section SECREF22 , and lastly the metadata encoding model in Section SECREF31 ."
],
[
"Since not all fact checking websites use the same claim labels (see Table TABREF34 , and Table TABREF43 in the appendix), training a claim veracity prediction model is not entirely straight-forward. One option would be to manually map those labels onto one another. However, since the sheer number of labels is rather large (165), and it is not always clear from the guidelines on fact checking websites how they can be mapped onto one another, we opt to learn how these labels relate to one another as part of the veracity prediction model. To do so, we employ the multi-task learning (MTL) approach inspired by collaborative filtering presented in BIBREF30 (MTL with LEL–multitask learning with label embedding layer) that excels on pairwise sequence classification tasks with disparate label spaces. More concretely, each domain is modelled as its own task in a MTL architecture, and labels are projected into a fixed-length label embedding space. Predictions are then made by taking the dot product between the claim-evidence embeddings and the label embeddings. By doing so, the model implicitly learns how semantically close the labels are to one another, and can benefit from this knowledge when making predictions for individual tasks, which on their own might only have a small number of instances. When making predictions for individual domains/tasks, both at training and at test time, as well as when calculating the loss, a mask is applied such that the valid and invalid labels for that task are restricted to the set of known task labels.",
"Note that the setting here slightly differs from BIBREF30 . There, tasks are less strongly related to one another; for example, they consider stance detection, aspect-based sentiment analysis and natural language inference. Here, we have different domains, as opposed to conceptually different tasks, but use their framework, as we have the same underlying problem of disparate label spaces. A more formal problem definition follows next, as our evidence ranking and veracity prediction model in Section SECREF22 then builds on it.",
"We frame our problem as a multi-task learning one, where access to labelled datasets for INLINEFORM0 tasks INLINEFORM1 is given at training time with a target task INLINEFORM2 that is of particular interest. The training dataset for task INLINEFORM3 consists of INLINEFORM4 examples INLINEFORM5 and their labels INLINEFORM6 . The base model is a classic deep neural network MTL model BIBREF31 that shares its parameters across tasks and has task-specific softmax output layers that output a probability distribution INLINEFORM7 for task INLINEFORM8 : DISPLAYFORM0 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the weight matrix and bias term of the output layer of task INLINEFORM3 respectively, INLINEFORM4 is the jointly learned hidden representation, INLINEFORM5 is the number of labels for task INLINEFORM6 , and INLINEFORM7 is the dimensionality of INLINEFORM8 . The MTL model is trained to minimise the sum of individual task losses INLINEFORM9 using a negative log-likelihood objective.",
"To learn the relationships between labels, a Label Embedding Layer (LEL) embeds labels of all tasks in a joint Euclidian space. Instead of training separate softmax output layers as above, a label compatibility function INLINEFORM0 measures how similar a label with embedding INLINEFORM1 is to the hidden representation INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 is the dot product. Padding is applied such that INLINEFORM1 and INLINEFORM2 have the same dimensionality. Matrix multiplication and softmax are used for making predictions: DISPLAYFORM0 ",
"where INLINEFORM0 is the label embedding matrix for all tasks and INLINEFORM1 is the dimensionality of the label embeddings. We apply a task-specific mask to INLINEFORM2 in order to obtain a task-specific probability distribution INLINEFORM3 . The LEL is shared across all tasks, which allows the model to learn the relationships between labels in the joint embedding space."
],
[
"So far, we have ignored the issue of how to obtain claim representation, as the base model described in the previous section is agnostic to how instances are encoded. A very simple approach, which we report as a baseline, is to encode claim texts only. Such a model ignores evidence for and against a claim, and ends up guessing the veracity based on surface patterns observed in the claim texts.",
"We next introduce two variants of evidence-based veracity prediction models that encode 10 pieces of evidence in addition to the claim. Here, we opt to encode search snippets as opposed to whole retrieved pages. While the latter would also be possible, it comes with a number of additional challenges, such as encoding large documents, parsing tables or PDF files, and encoding images or videos on these pages, which we leave to future work. Search snippets also have the benefit that they already contain summaries of the part of the page content that is most related to the claim.",
"Our problem is to obtain encodings for INLINEFORM0 examples INLINEFORM1 . For simplicity, we will henceforth drop the task superscript and refer to instances as INLINEFORM2 , as instance encodings are learned in a task-agnostic fashion. Each example further consists of a claim INLINEFORM3 and INLINEFORM4 evidence pages INLINEFORM5 .",
"Each claim and evidence page is encoded with a BiLSTM to obtain a sentence embedding, which is the concatenation of the last state of the forward and backward reading of the sentence, i.e. INLINEFORM0 , where INLINEFORM1 is the sentence embedding.",
"Next, we want to combine claims and evidence sentence embeddings into joint instance representations. In the simplest case, referred to as model variant crawled_avg, we mean average the BiLSTM sentence embeddings of all evidence pages (signified by the overline) and concatenate those with the claim embeddings, i.e. DISPLAYFORM0 ",
"where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and INLINEFORM2 denotes vector concatenation. However, this has the disadvantage that all evidence pages are considered equal.",
"The here proposed alternative instance encoding model, crawled_ranked, which achieves the highest overall performance as discussed in Section SECREF5 , learns the compatibility between an instance's claim and each evidence page. It ranks evidence pages by their utility for the veracity prediction task, and then uses the resulting ranking to obtain a weighted combination of all claim-evidence pairs. No direct labels are available to learn the ranking of individual documents, only for the veracity of the associated claim, so the model has to learn evidence ranks implicitly.",
"To combine claim and evidence representations, we use the matching model proposed for the task of natural language inference by BIBREF32 and adapt it to combine an instance's claim representation with each evidence representation, i.e. DISPLAYFORM0 ",
"where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and evidence page INLINEFORM2 , INLINEFORM3 denotes vector concatenation, and INLINEFORM4 denotes the dot product.",
"All joint claim-evidence representations INLINEFORM0 are then projected into the binary space via a fully connected layer INLINEFORM1 , followed by a non-linear activation function INLINEFORM2 , to obtain a soft ranking of claim-evidence pairs, in practice a 10-dimensional vector, DISPLAYFORM0 ",
"where INLINEFORM0 denotes concatenation.",
"Scores for all labels are obtained as per ( EQREF28 ) above, with the same input instance embeddings as for the evidence ranker, i.e. INLINEFORM0 . Final predictions for all claim-evidence pairs are then obtained by taking the dot product between the label scores and binary evidence ranking scores, i.e. DISPLAYFORM0 ",
"Note that the novelty here is that, unlike for the model described in BIBREF32 , we have no direct labels for learning weights for this matching model. Rather, our model has to implicitly learn these weights for each claim-evidence pair in an end-to-end fashion given the veracity labels."
],
[
"We experiment with how useful claim metadata is, and encode the following as one-hot vectors: speaker, category, tags and linked entities. We do not encode `Reason' as it gives away the label, and do not include `Checker' as there are too many unique checkers for this information to be relevant. The claim publication date is potentially relevant, but it does not make sense to merely model this as a one-hot feature, so we leave incorporating temporal information to future work.",
"Since all metadata consists of individual words and phrases, a sequence encoder is not necessary, and we opt for a CNN followed by a max pooling operation as used in BIBREF3 to encode metadata for fact checking. The max-pooled metadata representations, denoted INLINEFORM0 , are then concatenated with the instance representations, e.g. for the most elaborate model, crawled_ranked, these would be concatenated with INLINEFORM1 ."
],
[
"The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 .",
"For claim veracity prediction without evidence documents with the MTL with LEL model, we use the following sentence encoding variants: claim-only, which uses a BiLSTM-based sentence embedding as input, and claim-only_embavg, which uses a sentence embedding based on mean averaged word embeddings as input.",
"We train one multi-task model per task (i.e., one model per domain). We perform a grid search over the following hyperparameters, tuned on the respective dev set, and evaluate on the correspoding test set (final settings are underlined): word embedding size [64, 128, 256], BiLSTM hidden layer size [64, 128, 256], number of BiLSTM hidden layers [1, 2, 3], BiLSTM dropout on input and output layers [0.0, 0.1, 0.2, 0.5], word-by-word-attention for BiLSTM with window size 10 BIBREF38 [True, False], skip-connections for the BiLSTM [True, False], batch size [32, 64, 128], label embedding size [16, 32, 64]. We use ReLU as an activation function for both the BiLSTM and the CNN. For the CNN, the following hyperparameters are used: number filters [32], kernel size [32]. We train using cross-entropy loss and the RMSProp optimiser with initial learning rate of INLINEFORM0 and perform early stopping on the dev set with a patience of 3."
],
[
"For each domain, we compute the Micro as well as Macro F1, then mean average results over all domains. Core results with all vs. no metadata are shown in Table TABREF30 . We first experiment with different base model variants and find that label embeddings improve results, and that the best proposed models utilising multiple domains outperform single-task models (see Table TABREF36 ). This corroborates the findings of BIBREF30 . Per-domain results with the best model are shown in Table TABREF34 . Domain names are from hereon after abbreviated for brevity, see Table TABREF1 in the appendix for correspondences to full website names. Unsurprisingly, it is hard to achieve a high Macro F1 for domains with many labels, e.g. tron and snes. Further, some domains, surprisingly mostly with small numbers of instances, seem to be very easy – a perfect Micro and Macro F1 score of 1.0 is achieved on ranz, bove, buca, fani and thal. We find that for those domains, the verdict is often already revealed as part of the claim using explicit wording.",
"Our evidence-based claim veracity prediction models outperform claim-only veracity prediction models by a large margin. Unsurprisingly, claim-only_embavg is outperformed by claim-only. Further, crawled_ranked is our best-performing model in terms of Micro F1 and Macro F1, meaning that our model captures that not every piece of evidence is equally important, and can utilise this for veracity prediction.",
"We perform an ablation analysis of how metadata impacts results, shown in Table TABREF35 . Out of the different types of metadata, topic tags on their own contribute the most. This is likely because they offer highly complementary information to the claim text of evidence pages. Only using all metadata together achieves a higher Macro F1 at similar Micro F1 than using no metadata at all. To further investigate this, we split the test set into those instances for which no metadata is available vs. those for which metadata is available. We find that encoding metadata within the model hurts performance for domains where no metadata is available, but improves performance where it is. In practice, an ensemble of both types of models would be sensible, as well as exploring more involved methods of encoding metadata."
],
[
"An analysis of labels frequently confused with one another, for the largest domain `pomt' and best-performing model crawled_ranked + meta is shown in Figure FIGREF39 . The diagonal represents when gold and predicted labels match, and the numbers signify the number of test instances. One can observe that the model struggles more to detect claims with labels `true' than those with label `false'. Generally, many confusions occur over close labels, e.g. `half-true' vs. `mostly true'. We further analyse what properties instances that are predicted correctly vs. incorrectly have, using the model crawled_ranked meta. We find that, unsurprisingly, longer claims are harder to classify correctly, and that claims with a high direct token overlap with evidence pages lead to a high evidence ranking. When it comes to frequently occurring tags and entities, very general tags such as `government-and-politics' or `tax' that do not give away much, frequently co-occur with incorrect predictions, whereas more specific tags such as `brisbane-4000' or `hong-kong' tend to co-occur with correct predictions. Similar trends are observed for bigrams. This means that the model has an easy time succeeding for instances where the claims are short, where specific topics tend to co-occur with certain veracities, and where evidence documents are highly informative. Instances with longer, more complex claims where evidence is ambiguous remain challenging."
],
[
"We present a new, real-world fact checking dataset, currently the largest of its kind. It consists of 34,918 claims collected from 26 fact checking websites, rich metadata and 10 retrieved evidence pages per claim. We find that encoding the metadata as well evidence pages helps, and introduce a new joint model for ranking evidence pages and predicting veracity."
],
[
"This research is partially supported by QUARTZ (721321, EU H2020 MSCA-ITN) and DABAI (5153-00004A, Innovation Fund Denmark)."
],
[
" Summary statistics for claim collection. “Domain” indicates the domain name used for the veracity prediction experiments, “–” indicates that the website was not used due to missing or insufficient claim labels, see Section SECREF12 .",
" Comparison of fact checking datasets. Doc = all doc types (including tweets, replies, etc.). SoA perform indicates state-of-the-art performance. INLINEFORM0 indicates that claims are not naturally occuring: BIBREF6 use events as claims; BIBREF7 use DBPedia tiples as claims; BIBREF9 use tweets as claims; and BIBREF2 rewrite sentences in Wikipedia as claims. INLINEFORM1 denotes that the SoA performance is from other papers. Best performance for BIBREF3 is from BIBREF40 ; BIBREF2 from BIBREF28 ; BIBREF10 from BIBREF42 in English, BIBREF12 from BIBREF26 ; and BIBREF13 from BIBREF39 ."
]
],
"section_name": [
"Introduction",
"Datasets",
"Methods",
"Dataset Construction",
"Selection of sources",
"Retrieving Evidence Pages",
"Entity Detection and Linking",
"Claim Veracity Prediction",
"Multi-Domain Claim Veracity Prediction with Disparate Label Spaces",
"Joint Evidence Ranking and Claim Veracity Prediction",
"Metadata",
"Experimental Setup",
"Results",
"Analysis and Discussion",
"Conclusions",
"Acknowledgments",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"67245e2849ab43cf331ac19b38ee2cc3e55989c0"
],
"answer": [
{
"evidence": [
"The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 ."
],
"extractive_spans": [
"a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30"
],
"free_form_answer": "",
"highlighted_evidence": [
"The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"9ec39c340c33823304dfde8893fade70c9a3116f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown."
],
"extractive_spans": [],
"free_form_answer": "besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish data and claim date",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"b5035cfa76231b5df9b02e846f59de10e3c56d4c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What were the baselines?",
"What metadata is included?",
"How many expert journalists were there?"
],
"question_id": [
"a74886d789a5d7ebcf7f151bdfb862c79b6b8a12",
"e9ccc74b1f1b172224cf9f01e66b1fa9e34d2593",
"2948015c2a5cd6a7f2ad99b4622f7e4278ceb0d4"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset",
"dataset"
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: An example of a claim instance. Entities are obtained via entity linking. Article and outlink texts, evidence search snippets and pages are not shown.",
"Table 2: Comparison of fact checking datasets. † indicates claims are not “naturally occuring”: Mitra and Gilbert (2015) use events as claims; Ciampaglia et al. (2015) use DBPedia tiples as claims; Shu et al. (2018) use tweets as claims; and Thorne et al. (2018) rewrite sentences in Wikipedia as claims.",
"Table 3: Top 30 most frequent entities listed by their Wikipedia URL with prefix omitted",
"Figure 1: Distribution of entities in claims.",
"Figure 2: The Joint Veracity Prediction and Evidence Ranking model, shown for one task.",
"Table 4: Results with different model variants on the test set, “meta” means all metadata is used.",
"Table 7: Ablation results with crawled ranked + meta encoding for STL vs. MTL vs. MTL + LEL training",
"Table 5: Total number of instances and unique labels per domain, as well as per-domain results with model crawled ranked + meta, sorted by label size",
"Table 6: Ablation results with base model crawled ranked for different types of metadata",
"Figure 3: Confusion matrix of predicted labels with best-performing model, crawled ranked + meta, on the ‘pomt’ domain",
"Table 8: The list of websites that we did not crawl and reasons for not crawling them.",
"Table 9: The top 30 most frequently occurring URL domains.",
"Table 10: Number of instances, and labels per domain sorted by number of occurrences"
],
"file": [
"1-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png",
"7-Table4-1.png",
"8-Table7-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Figure3-1.png",
"12-Table8-1.png",
"12-Table9-1.png",
"13-Table10-1.png"
]
} | [
"What metadata is included?"
] | [
[
"1909.03242-1-Table1-1.png"
]
] | [
"besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish data and claim date"
] | 641 |
1905.12260 | Learning Multilingual Word Embeddings Using Image-Text Data | There has been significant interest recently in learning multilingual word embeddings -- in which semantically similar words across languages have similar embeddings. State-of-the-art approaches have relied on expensive labeled data, which is unavailable for low-resource languages, or have involved post-hoc unification of monolingual embeddings. In the present paper, we investigate the efficacy of multilingual embeddings learned from weakly-supervised image-text data. In particular, we propose methods for learning multilingual embeddings using image-text data, by enforcing similarity between the representations of the image and that of the text. Our experiments reveal that even without using any expensive labeled data, a bag-of-words-based embedding model trained on image-text data achieves performance comparable to the state-of-the-art on crosslingual semantic similarity tasks. | {
"paragraphs": [
[
"Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings.",
"To remedy this, there has been recent work on learning multilingual word embeddings, in which semantically similar words within and across languages have similar word embeddings BIBREF3 . Multilingual embeddings are not just interesting as an interlingua between multiple languages; they are useful in many downstream applications. For example, one application of multilingual embeddings is to find semantically similar words and phrases across languages BIBREF4 . Another use of multilingual embeddings is in enabling zero-shot learning on unseen languages, just as monolingual word embeddings enable predictions on unseen words BIBREF5 . In other words, a classifier using pretrained multilingual word embeddings can generalize to other languages even if training data is only in English. Interestingly, multilingual embeddings have also been shown to improve monolingual task performance BIBREF6 , BIBREF7 .",
"Consequently, multilingual embeddings can be very useful for low-resource languages – they allow us to overcome the scarcity of data in these languages. However, as detailed in Section \"Related Work\" , most work on learning multilingual word embeddings so far has heavily relied on the availability of expensive resources such as word-aligned / sentence-aligned parallel corpora or bilingual lexicons. Unfortunately, this data can be prohibitively expensive to collect for many languages. Furthermore even for languages with such data available, the coverage of the data is a limiting factor that restricts how much of the semantic space can be aligned across languages. Overcoming this data bottleneck is a key contribution of our work.",
"We investigate the use of cheaply available, weakly-supervised image-text data for learning multilingual embeddings. Images are a rich, language-agnostic medium that can provide a bridge across languages. For example, the English word “cat” might be found on webpages containing images of cats. Similarly, the German word “katze” (meaning cat) is likely to be found on other webpages containing similar (or perhaps identical) images of cats. Thus, images can be used to learn that these words have similar semantic content. Importantly, image-text data is generally available on the internet even for low-resource languages.",
"As image data has proliferated on the internet, tools for understanding images have advanced considerably. Convolutional neural networks (CNNs) have achieved roughly human-level or better performance on vision tasks, particularly classification BIBREF8 , BIBREF9 , BIBREF10 . During classification of an image, CNNs compute intermediate outputs that have been used as generic image features that perform well across a variety of vision tasks BIBREF11 . We use these image features to enforce that words associated with similar images have similar embeddings. Since words associated with similar images are likely to have similar semantic content, even across languages, our learned embeddings capture crosslingual similarity.",
"There has been other recent work on reducing the amount of supervision required to learn multilingual embeddings (cf. Section \"Related Work\" ). These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space. A limitation with post-hoc alignment of monolingual embeddings, first noticed by BIBREF12 , is that doing training of monolingual embeddings and alignment separately may lead to worse results than joint training of embeddings in one step. Since the monolingual embedding objective is distinct from the multilingual embedding objective, monolingual embeddings are not required to capture all information helpful for post-hoc multilingual alignment. Post-hoc alignment loses out on some information, whereas joint training does not. BIBREF12 observe improved results using a joint training method compared to a similar post-hoc method. Thus, a joint training approach is desirable. To our knowledge, no previous method jointly learns multilingual word embeddings using weakly-supervised data available for low-resource languages.",
"To summarize: In this paper we propose an approach for learning multilingual word embeddings using image-text data jointly across all languages. We demonstrate that even a bag-of-words based embedding approach achieves performance competitive with the state-of-the-art on crosslingual semantic similarity tasks. We present experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We also provide a method for training and making predictions on multilingual word embeddings even when the language of the text is unknown."
],
[
"Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons BIBREF13 , BIBREF4 , BIBREF6 , BIBREF14 or parallel/aligned corpora BIBREF15 , BIBREF4 , BIBREF16 , BIBREF17 . These works are also largely bilingual due to either limitations of methods or the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in their applications to just one language pair.",
"For instance, BIBREF16 propose BiSkip, a model that extends the skip-gram approach of BIBREF18 to a bilingual parallel corpus. The embedding for a word is trained to predict not only its own context, but also the contexts for corresponding words in a second corpus in a different language. BIBREF4 extend this approach further to multiple languages. This method, called MultiSkip, is compared to our methods in Section \"Results and Conclusions\" .",
"There has been some recent work on reducing the amount of human-labeled data required to learn multilingual embeddings, enabling work on low-resource languages BIBREF19 , BIBREF20 , BIBREF21 . These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space, exploiting the structural similarity of monolingual embedding spaces first noticed by BIBREF13 . As discussed in Section \"Introduction\" , post-hoc alignment of monolingual embeddings is inherently suboptimal. For example, BIBREF19 and BIBREF20 use human-labeled data, along with shared surface forms across languages, to learn an alignment in the bilingual setting. BIBREF21 build on this for the multilingual setting, using no human-labeled data and instead using an adversarial approach to maximize alignment between monolingual embedding spaces given their structural similarities. This method (MUSE) outperforms previous approaches and represents the state-of-the-art. We compare it to our methods in Section \"Results and Conclusions\" .",
"There has been other work using image-text data to improve image and caption representations for image tasks and to learn word translations BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , but no work using images to learn competitive multilingual word-level embeddings."
],
[
"We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages.",
"Though the specific dataset we use is proprietary, BIBREF26 have obtained a similar dataset, using the Google Images search interface, that comprises queries in 100 languages."
],
[
"We present a series of experiments to investigate the usefulness of multimodal image-text data in learning multilingual embeddings. The crux of our method involves enforcing that for each query-image pair, the query representation ( $Q$ ) is similar to the image representation ( $I$ ). The query representation is a function of the word embeddings for each word in a (language-tagged) query, so enforcing this constraint on the query representation also has the effect of constraining the corresponding multilingual word embeddings.",
"Given some $Q$ and some $I$ , we enforce that the representations are similar by maximizing their cosine similarity. We use a combination of cosine similarity and softmax objective to produce our loss. This high-level approach is illustrated in Figure 1 . In particular, we calculate unweighted loss as follows for a query $q$ and a corresponding image $i$ : $\\textrm {loss}(\\textrm {Query} \\: q, \\textrm {Image} \\: i) = -\\log {\\frac{e^{\\frac{Q_q^T I_i}{|Q_q| |I_i|}}}{\\sum _{j} e^{\\frac{Q_q^T I_j}{{|Q_q| |I_j|}}}}}$ ",
"where $Q_q$ is the query representation for query $q$ ; $I_i$ is the image representation corresponding to image $i$ ; $j$ ranges over all images in the corpus; and $Q_q^T I_i$ is the dot product of the vectors $Q_q$ and $I_i$ . Note that this requires that $Q_q$ and $I_j$ have identical dimensionality. If a weight $q$0 is provided for the (query, image) pair, the loss is multiplied by the weight. Observe that $q$1 and $q$2 remain unspecified for now: we detail different experiments involving different representations below.",
"In practice, given the size of our dataset, calculating the full denominator of the loss for a query, image pair would involve iterating through each image for each query, which is $O(n^2)$ in the number of training examples. To remedy this, we calculated the loss within each batch separately. That is, the denominator of the loss only involved summing over images in the same batch as the query. We used a batch size of 1000 for all experiments. In principle, the negative sampling approach used by BIBREF0 could be used instead to prevent quadratic time complexity.",
"We can interpret this loss function as producing a softmax classification task for queries and images: given a query, the model needs to predict the image relevant to that query. The cosine similarity between the image representation $I_i$ and the query representation $Q_q$ is normalized under softmax to produce a “belief” that the image $i$ is the image relevant to the query $q$ . This is analogous to the skip-gram model proposed by BIBREF18 , although we use cosine similarity instead of dot product. Just as the skip-gram model ensures the embeddings of words are predictive of their contexts, our model ensures the embeddings of queries (and their constituent words) are predictive of images relevant to them."
],
[
"Given the natural co-occurrence of images and text on the internet and the availability of powerful generic features, a first approach is to use generic image features as the foundation for the image representation $I$ . We apply two fully-connected layers to learn a transformation from image features to the final representation. We can compute the image representation $I_i$ for image $i$ as: $I_i = ReLU(U * ReLU(Vf_i + b_1) + b_2)$ ",
"where $f_i$ is a $d$ -dimensional column vector representing generic image features for image $i$ , $V$ is a $m \\times d$ matrix, $b_1$ is an $m$ -dimensional column vector, $U$ is a $n \\times m$ matrix, and $b_2$ is an $d$0 -dimensional column vector. We use a rectified linear unit activation function after each fully-connected layer.",
"We use 64-dimensional image features derived from image-text data using an approach similar to that used by BIBREF27 , who train image features to discriminate between fine-grained semantic image labels. We run two experiments with $m$ and $n$ : in the first, $m = 200$ and $n = 100$ (producing 100-dimensional embeddings), and in the second, $m = 300$ and $n = 300$ (producing 300-dimensional embeddings).",
"For the query representation, we use a simple approach. The query representation is just the average of its constituent multilingual embeddings. Then, as the query representation is constrained to be similar to corresponding image representations, the multilingual embeddings (randomly initialized) are also constrained.",
"Note that each word in each query is prefixed with the language of the query. For example, the English query “back pain” is treated as “en:back en:pain”, and the multilingual embeddings that are averaged are those for “en:back” and “en:pain”. This means that words in different languages with shared surface forms are given separate embeddings. We experiment with shared embeddings for words with shared surface forms in Section \"Discussion\" .",
"In practice, we use a fixed multilingual vocabulary for the word embeddings, given the size of the dataset. Out-of-vocabulary words are handled by hashing them to a fixed number of embedding buckets (we use 1,000,000). That is, there are 1,000,000 embeddings for all out-of-vocabulary words, and the assignment of embedding for each word is determined by a hash function.",
"Our approach for leveraging image understanding is shown in Figure 2 ."
],
[
"Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding.",
"In this setting, we keep query representations the same, and we modify image representations as follows: the image representation for an image is a randomly initialized, trainable vector (of the same dimensionality as the query representation, to ensure the cosine similarity can be calculated). The intuition for this approach is that if two queries are both associated with an image, their query representations will both be constrained to be similar to the same vector, and so the query representations themselves are constrained to be similar. This approach is a simple way to adapt our method to make use of only co-occurrence statistics.",
"One concern with this approach is that many queries may not have significant image co-occurrences with other queries. In particular, there are likely many images associated with only a single query. These isolated images pull query representations toward their respective random image representations (adding noise), but do not provide any information about the relationships between queries. Additionally, even for images associated with multiple queries, if these queries are all within language, then they may not be very helpful for learning multilingual embeddings. Consequently, we run two experiments: one with the original dataset and one with a subset of the dataset that contains only images associated with queries in at least two different languages. This subset of the dataset has 540 million query, image pairs (down from 3 billion). For both experiments, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings."
],
[
"In Section \"Leveraging Image Understanding\" , our method for computing query representations involved prepending language prefixes to each token, ensuring that the multilingual embedding for the English word “pain” is distinct from that for the French word “pain” (meaning bread). These query representations are language aware, meaning that a language tag is required for each query during both training and prediction. In the weakly-supervised setting, we may want to relax this requirement, as language-tagged data is not always readily available.",
"In our language unaware setting, language tags are not necessary. Each surface form in each query has a distinct embedding, and words with shared surface forms across languages (e.g., English “pain” and French “pain”) have a shared embedding. In this sense, shared surface forms are used as a bridge between languages. This is illustrated in Figure 3 . This may be helpful in certain cases, as for English “actor” and Spanish “actor”. The image representations leverage generic image features, exactly as in Section \"Leveraging Image Understanding\" . In our language-unaware experiment, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings."
],
[
"We evaluate our learned multilingual embeddings using six crosslingual semantic similarity tasks, two multilingual document classification tasks, and 13 monolingual semantic similarity tasks. We adapt code from BIBREF4 and BIBREF28 for evaluation.",
"This task measures how well multilingual embeddings capture semantic similarity of words, as judged by human raters. The task consists of a series of crosslingual word pairs. For each word pair in the task, human raters judge how semantically similar the words are. The model also predicts how similar the words are, using the cosine similarity between the embeddings. The score on the task is the Spearman correlation between the human ratings and the model predictions.",
"The specific six subtasks we use are part of the Rubenstein-Goodenough dataset BIBREF29 and detailed by BIBREF4 . We also include an additional task aggregating the six subtasks.",
"In this task, a classifier built on top of learned multilingual embeddings is trained on the RCV corpus of newswire text as in BIBREF15 and BIBREF4 . The corpus consists of documents in seven languages on four topics, and the classifier predicts the topic. The score on the task is test accuracy. Note that each document is monolingual, so this task measures performance within languages for multiple languages (as opposed to crosslingual performance).",
"This task is the same as the crosslingual semantic similarity task described above, but all word pairs are in English. We use this to understand how monolingual performance differs across methods. We present an average score across the 13 subtasks provided by BIBREF28 .",
"Evaluation tasks also report a coverage, which is the fraction of the test data that a set of multilingual embeddings is able to make predictions on. This is needed because not every word in the evaluation task has a corresponding learned multilingual embedding. Thus, if coverage is low, scores are less likely to be reliable."
],
[
"We first present results on the crosslingual semantic similarity and multilingual document classification for our previously described experiments. We compare against the multiSkip approach by BIBREF4 and the state-of-the-art MUSE approach by BIBREF21 . Results for crosslingual semantic similarity are presented in Table 1 , and results for multilingual document classification are presented in Table 2 .",
"Our experiments corresponding to Section \"Leveraging Image Understanding\" are titled ImageVec 100-Dim and ImageVec 300-Dim in Tables 1 and 2 . Both experiments significantly outperform the multiSkip experiments in all crosslingual semantic similarity subtasks, and the 300-dimensional experiment slightly outperforms MUSE as well. Note that coverage scores are generally around 0.8 for these experiments. In multilingual document classification, MUSE achieves the best scores, and while our 300-dimensional experiment outperforms the multiSkip 40-dimensional experiment, it does not perform as well as the 512-dimensional experiment. Note that coverage scores are lower on these tasks.",
"One possible explanation for the difference in performance across the crosslingual semantic similarity task and multilingual document classification task is that the former measures crosslingual performance, whereas the latter measures monolingual performance in multiple languages, as described in Section UID10 . We briefly discuss further evidence that our models perform less well in the monolingual context below."
],
[
"We demonstrated how to learn competitive multilingual word embeddings using image-text data – which is available for low-resource languages. We have presented experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We have also proposed a method for training and making predictions on multilingual word embeddings even when language tags for words are unavailable. Using a simple bag-of-words approach, we achieve performance competitive with the state-of-the-art on crosslingual semantic similarity tasks.",
"We have also identified a direction for future work: within language performance is weaker than the state-of-the-art, likely because our work leveraged only image-text data rather than a large monolingual corpus. Fortunately, our joint training approach provides a simple extension of our method for future work: multi-task joint training. For example, in a triple-task setting, we can simultaneously (1) constrain query and relevant image representations to be similar and (2) constrain word embeddings to be predictive of context in large monolingual corpora and (3) constrain representations for parallel text across languages to be similar. For the second task, implementing recent advances in producing monolingual embeddings, such as using subword information, is likely to improve results. Multilingual embeddings learned in a multi-task setting would reap both the benefits of our methods and existing methods for producing word embeddings. For example, while our method is likely to perform worse for more abstract words, when combined with existing approaches it is likely to achieve more consistent performance.",
"An interesting effect of our approach is that queries and images are embedded into a shared space through the query and image representations. This setup enables a range of future research directions and applications, including better image features, better monolingual text representations (especially for visual tasks), nearest-neighbor search for text or images given one modality (or both), and joint prediction using text and images."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data",
"Methods",
"Leveraging Image Understanding",
"Co-Occurrence Only",
"Language Unaware Query Representation",
"Evaluation",
"Results and Conclusions",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"bc5d8595f6a04c19c76b0bf7c519158d7b72ede5"
],
"answer": [
{
"evidence": [
"We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages."
],
"extractive_spans": [],
"free_form_answer": "monolingual",
"highlighted_evidence": [
"The dataset consists of queries and the corresponding image search results.",
"Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"7920586570880af966bf79e5d519996b9222776b"
],
"answer": [
{
"evidence": [
"Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Another approach for generating query and image representations is treating images as a black box.",
"Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"67b3545ce5f67bbfbe55da6419ddec4b86c9a2f8"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Crosslingual semantic similarity scores (Spearman’s ρ) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded."
],
"extractive_spans": [],
"free_form_answer": "performance is significantly degraded without pixel data",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Crosslingual semantic similarity scores (Spearman’s ρ) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do the images have multilingual annotations or monolingual ones?",
"Could you learn such embedding simply from the image annotations and without using visual information?",
"How much important is the visual grounding in the learning of the multilingual representations?"
],
"question_id": [
"c33d0bc5484c38de0119c8738ffa985d1bd64424",
"93b1b94b301a46251695db8194a2536639a22a88",
"e8029ec69b0b273954b4249873a5070c2a0edb8a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Our high-level approach for constraining query and image representations to be similar. The English query “cat with big ears” is mapped to Q, while the corresponding image example is mapped to I . We use the cosine similarity of these representations as input to a softmax loss function. The model task can be understood as predicting if an image is relevant to a given query.",
"Figure 2: Our first method for calculating query and image representations, as presented in Section 4.1. To calculate the query representation, the multilingual embeddings for each language-prefixed token are averaged. To calculate the image representation, d-dimensional generic image features are passed through two fully-connected layers with m and n neurons.",
"Figure 3: In our language unaware approach, language tags are not prepended to each token, so the word “pain” in English and French share an embedding.",
"Table 1: Crosslingual semantic similarity scores (Spearman’s ρ) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded.",
"Table 2: Multilingual document classification accuracy scores across two subtasks for ImageVec (our method) and previous work. Coverage is in brackets. Best scores are bolded (ties broken by coverage).",
"Table 3: Average monolingual semantic similarity score (Spearman’s ρ) across 13 subtasks for ImageVec (our method) and previous work. Average coverage is in brackets. Best score is bolded."
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
} | [
"Do the images have multilingual annotations or monolingual ones?",
"How much important is the visual grounding in the learning of the multilingual representations?"
] | [
[
"1905.12260-Data-0"
],
[
"1905.12260-7-Table1-1.png"
]
] | [
"monolingual",
"performance is significantly degraded without pixel data"
] | 642 |
1606.01404 | Generating Natural Language Inference Chains | The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence. | {
"paragraphs": [
[
"The ability to determine entailment or contradiction between natural language text is essential for improving the performance in a wide range of natural language processing tasks. Recognizing Textual Entailment (RTE) is a task primarily designed to determine whether two natural language sentences are independent, contradictory or in an entailment relationship where the second sentence (the hypothesis) can be inferred from the first (the premise). Although systems that perform well in RTE could potentially be used to improve question answering, information extraction, text summarization and machine translation BIBREF0 , only in few of such downstream NLP tasks sentence-pairs are actually available. Usually, only a single source sentence (e.g. a question that needs to be answered or a source sentence that we want to translate) is present and models need to come up with their own hypotheses and commonsense knowledge inferences.",
"The release of the large Stanford Natural Language Inference (SNLI) corpus BIBREF1 allowed end-to-end differentiable neural networks to outperform feature-based classifiers on the RTE task BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .",
"In this work, we go a step further and investigate how well recurrent neural networks can produce true hypotheses given a source sentence. Furthermore, we qualitatively demonstrate that by only training on input-output pairs and recursively generating entailed sentence we can generate natural language inference chains (see Figure 1 for an example). Note that every inference step is interpretable as it is mapping one natural language sentence to another one.",
"Our contributions are fourfold: (i) we propose an entailment generation task based on the SNLI corpus (§ \"Entailment Generation\" ), (ii) we investigate a sequence-to-sequence model and find that $82\\%$ of generated sentences are correct (§ \"Example Generations\" ), (iii) we demonstrate the ability to generate natural language inference chains trained solely from entailment pairs (§ \"Entailment Generation\"1 ), and finally (iv) we can also generate sentences with more specific information by swapping source and target sentences during training (§ \"Entailment Generation\"6 )."
],
[
"In the section, we briefly introduce the entailment generation task and our sequence-to-sequence model."
],
[
"To create the entailment generation dataset, we simply filter the Stanford Natural Language Inference corpus for sentence-pairs of the entailment class. This results in a training set of $183,416$ sentence pairs, a development set of $3,329$ pairs and a test of $3,368$ pairs. Instead of a classification task, we can now use this dataset for a sequence transduction task."
],
[
"Sequence-to-sequence recurrent neural networks BIBREF7 have been successfully employed for many sequence transduction tasks in NLP such as machine translation BIBREF8 , BIBREF9 , constituency parsing BIBREF10 , sentence summarization BIBREF11 and question answering BIBREF12 . They consist of two recurrent neural networks (RNNs): an encoder that maps an input sequence of words into a dense vector representation, and a decoder that conditioned on that vector representation generates an output sequence. Specifically, we use long short-term memory (LSTM) RNNs BIBREF13 for encoding and decoding. Furthermore, we experiment with word-by-word attention BIBREF8 , which allows the decoder to search in the encoder outputs to circumvent the LSTM's memory bottleneck. We use greedy decoding at test time. The success of LSTMs with attention in sequence transduction tasks makes them a natural choice as a baseline for entailment generation, and we leave the investigation of more advanced models to future work."
],
[
"We use stochastic gradient descent with a mini-batch size of 64 and the ADAM optimizer BIBREF14 with a first momentum coefficient of $0.9$ and a second momentum coefficient of $0.999$ . Word embeddings are initialized with pre-trained word2vec vectors BIBREF15 . Out-of-vocabulary words ( $10.5\\%$ ) are randomly initialized by sampling values uniformly from $[-\\sqrt{3}, \\sqrt{3}]$ and optimized during training. Furthermore, we clip gradients using a norm of $5.0$ . We stop training after 25 epochs."
],
[
"We present results for various tasks: (i) given a premise, generate a sentence that can be inferred from the premise, (ii) construct inference chains by recursively generating sentences, and (iii) given a sentence, create a premise that would entail this sentence, i.e., make a more descriptive sentence by adding specific information."
],
[
"We train an LSTM with and without attention on the training set. After training, we take the best model in terms of BLEU score BIBREF16 on the development set and calculate the BLEU score on the test set. To our surprise, we found that using attention yields only a marginally higher BLEU score (43.1 vs. 42.8). We suspect that this is due to the fact that generating entailed sentences has a larger space of valid target sequences, which makes the use of BLEU problematic and penalizes correct solutions. Hence, we manually annotated 100 random test sentences and decided whether the generated sentence can indeed be inferred from the source sentence. We found that sentences generated by an LSTM with attention are substantially more accurate ( $82\\%$ accuracy) than those generated from an LSTM baseline ( $71.7\\%$ ). To gain more insights into the model's capabilities, we turn to a thorough qualitative analysis of the attention LSTM model in the remainder of this paper."
],
[
"Figure 2 shows examples of generated sentences from the development set. Syntactic simplification of the input sentence seems to be the most common approach. The model removes certain parts of the premise such as adjectives, resulting in a more abstract sentence (see Figure 2 . UID8 ).",
"Figure 2 . UID9 demonstrates that the system can recognize the number of subjects in the sentence and includes this information in the generated sentence. However, we did not observe such 'counting' behavior for more than four subjects, indicating that the system memorized frequency patterns from the training set.",
"Furthermore, we found predictions that hint to common-sense assumptions: if a sentence talks about a father holding a newborn baby, it is most likely that the newborn baby is his own child (Example 2 . UID10 ).",
"Two reappearing limitations of the proposed model are related to dealing with words that have a very different meaning but similar word2vec embeddings (e.g. colors), as well as ambiguous words. For instance, 'bar' in Figure 3 . UID8 refers to pole vault and not a place in which you can have a drink. Substituting one color by another one (Figure 3 . UID14 ) is a common mistake.",
"The SNLI corpus might not reflect the variety of sentences that can be encountered in downstream NLP tasks. In Figure 4 we present generated sentences for randomly selected examples of out-of-domain textual resources. They demonstrate that the model generalizes well to out-of-domain sentences, making it a potentially very useful component for improving systems for question answering, information extraction, sentence summarization etc."
],
[
"Next, we test how well the model can generate inference chains by repeatedly passing generated output sentences as inputs to the model. We stop once a sentence has already been generated in the chain. Figure 5 shows that this works well despite that the model was only trained on sentence-pairs.",
"Furthermore, by generating inference chains for all sentences in the development set we construct an entailment graph. In that graph we found that sentences with shared semantics are eventually mapped to the same sentence that captures the shared meaning.",
"A visualization of the topology of the entailment graph is shown in Figure 6 . Note that there are several long inference chains, as well as large clusters of sentences (nodes) that are mapped (links) to the same shared meaning."
],
[
"By swapping the source and target sequences for training, we can train a model that given a sentence invents additional information to generate a new sentence (Figure 7 ). We believe this might prove useful to increase the language variety and complexity of AI unit tests such as the Facebook bAbI task BIBREF17 , but we leave this for future work."
],
[
"We investigated the ability of sequence-to-sequence models to generate entailed sentences from a source sentence. To this end, we trained an attentive LSTM on entailment-pairs of the SNLI corpus. We found that this works well and generalizes beyond in-domain sentences. Hence, it could become a useful component for improving the performance of other NLP systems.",
"We were able to generate natural language inference chains by recursively generating sentences from previously inferred ones. This allowed us to construct an entailment graph for sentences of the SNLI development corpus. In this graph, the shared meaning of two related sentences is represented by the first natural language sentence that connects both sentences. Every inference step is interpretable as it maps a natural language sentence to another one.",
"Towards high-quality data augmentation, we experimented with reversing the generation task. We found that this enabled the model to learn to invent specific information.",
"For future work, we want to integrate the presented model into larger architectures to improve the performance of downstream NLP tasks such as information extraction and question answering. Furthermore, we plan to use the model for data augmentation to train expressive neural networks on tasks where only little annotated data is available. Another interesting research direction is to investigate methods for increasing the diversity of the generated sentences."
],
[
"We thank Guillaume Bouchard for suggesting the reversed generation task, and Dirk Weissenborn, Isabelle Augenstein and Matko Bosnjak for comments on drafts of this paper. This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award."
]
],
"section_name": [
"Introduction",
"Method",
"Entailment Generation",
"Sequence-to-Sequence",
"Optimization and Hyperparameters",
"Experiments and Results",
"Quantitative Evaluation",
"Example Generations",
"Inference Chain Generation",
"Inverse Inference",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"68b30fc1728aa490f328e0a09081ad45d7f918a2"
],
"answer": [
{
"evidence": [
"We train an LSTM with and without attention on the training set. After training, we take the best model in terms of BLEU score BIBREF16 on the development set and calculate the BLEU score on the test set. To our surprise, we found that using attention yields only a marginally higher BLEU score (43.1 vs. 42.8). We suspect that this is due to the fact that generating entailed sentences has a larger space of valid target sequences, which makes the use of BLEU problematic and penalizes correct solutions. Hence, we manually annotated 100 random test sentences and decided whether the generated sentence can indeed be inferred from the source sentence. We found that sentences generated by an LSTM with attention are substantially more accurate ( $82\\%$ accuracy) than those generated from an LSTM baseline ( $71.7\\%$ ). To gain more insights into the model's capabilities, we turn to a thorough qualitative analysis of the attention LSTM model in the remainder of this paper."
],
"extractive_spans": [],
"free_form_answer": "Comparing BLEU score of model with and without attention",
"highlighted_evidence": [
"We train an LSTM with and without attention on the training set. After training, we take the best model in terms of BLEU score BIBREF16 on the development set and calculate the BLEU score on the test set. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How is the generative model evaluated?"
],
"question_id": [
"f4e17b14318b9f67d60a8a2dad1f6b506a10ab36"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"natural language inference"
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 2: Valid sentences generated by the model.",
"Figure 5: Examples of inference chains where two premises (underlined) converge to the same sentence (highlighted).",
"Figure 6: Topology of the generated entailment graph."
],
"file": [
"3-Figure2-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png"
]
} | [
"How is the generative model evaluated?"
] | [
[
"1606.01404-Quantitative Evaluation-0"
]
] | [
"Comparing BLEU score of model with and without attention"
] | 643 |
1901.00439 | Deep Representation Learning for Clustering of Health Tweets | Twitter has been a prominent social media platform for mining population-level health data and accurate clustering of health-related tweets into topics is important for extracting relevant health insights. In this work, we propose deep convolutional autoencoders for learning compact representations of health-related tweets, further to be employed in clustering. We compare our method to several conventional tweet representation methods including bag-of-words, term frequency-inverse document frequency, Latent Dirichlet Allocation and Non-negative Matrix Factorization with 3 different clustering algorithms. Our results show that the clustering performance using proposed representation learning scheme significantly outperforms that of conventional methods for all experiments of different number of clusters. In addition, we propose a constraint on the learned representations during the neural network training in order to further enhance the clustering performance. All in all, this study introduces utilization of deep neural network-based architectures, i.e., deep convolutional autoencoders, for learning informative representations of health-related tweets. | {
"paragraphs": [
[
"Social media plays an important role in health informatics and Twitter has been one of the most influential social media channel for mining population-level health insights BIBREF0 , BIBREF1 , BIBREF2 . These insights range from forecasting of influenza epidemics BIBREF3 to predicting adverse drug reactions BIBREF4 . A notable challenge due to the short length of Twitter messages is categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering.",
"Classification of tweets into topics has been studied extensively BIBREF5 , BIBREF6 , BIBREF7 . Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e, topic categories to learn from for classification. On the other hand, annotated data is not always available as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter.",
"Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction which is then followed by a clustering algorithm BIBREF8 , BIBREF9 , BIBREF10 . Performance of such approaches depend highly on feature extraction in which careful engineering and domain knowledge is required BIBREF11 . Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner BIBREF12 , BIBREF13 . Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches including sentiment classification of tweets BIBREF14 , predicting potential suicide attempts from Twitter BIBREF15 and simulating epidemics from Twitter BIBREF16 .",
"In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available dataset from Twitter by comparing the performance of our approach and conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments of varying number of clusters."
],
[
"Devising efficient representations of tweets, i.e., features, for performing clustering has been studied extensively. Most frequently used features for representing the text in tweets as numerical vectors are bag-of-words (BoWs) and term frequency-inverse document frequency (tf-idf) features BIBREF17 , BIBREF9 , BIBREF10 , BIBREF18 , BIBREF19 . Both of these feature extraction methods are based on word occurrence counts and eventually, result in a sparse (most elements being zero) document-term matrix. Proposed algorithms for clustering tweets into topics include variants of hierarchical, density-based and centroid-based clustering methods; k-means algorithm being the most frequently used one BIBREF9 , BIBREF19 , BIBREF20 .",
"Numerous works on topic modeling of tweets are available as well. Topic models are generative models, relying on the idea that a given tweet is a mixture of topics, where a topic is a probability distribution over words BIBREF21 . Even though the objective in topic modeling is slightly different than that of pure clustering, representing each tweet as a topic vector is essentially a way of dimensionality reduction or feature extraction and can further be followed by a clustering algorithm. Proposed topic modeling methods include conventional approaches or variants of them such as Latent Dirichlet Allocation (LDA) BIBREF22 , BIBREF17 , BIBREF9 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF19 , BIBREF28 , BIBREF29 and Non-negative Matrix Factorization (NMF) BIBREF30 , BIBREF18 . Note that topic models such as LDA are based on the notion that words belonging to a topic are more likely to appear in the same document and do not assume a distance metric between discovered topics.",
"In contrary to abovementioned feature extraction methods which are not specific to representation of tweets but rather generic in natural language processing, various works propose custom feature extraction methods for certain health-related information retrieval tasks from Twitter. For instance, Lim et al. engineered sentiment analysis features to discover latent infectious diseases from Twitter BIBREF31 . In order to track public health condition trends from Twitter, specific features are proposed by Parker at al. employing Wikipedia article index, i.e., treating the retrieval of medically-related Wikipedia articles as an indicator of a health-related condition BIBREF32 . Custom user similarity features calculated from tweets were also proposed for building a framework for recommending health-related topics BIBREF27 .",
"The idea of learning effective representations from raw data using neural networks has been employed in numerous machine learning domains such as computer vision and natural language processing BIBREF12 , BIBREF13 . The concept relies on the hierarchical, layer-wise architecture of neural networks in which the raw input data is encoded into informative representations of lower dimensions (representations of higher dimensions are possible as well) in a highly non-linear fashion. Autoencoders, Denoising Autoencoders, Convolutional Autoencoders, Sparse Autoencoders, Stacked Autoencoders and combinations of these, e.g., Denoising Convolutional Autoencoders, are the most common deep neural network architectures specifically used for representation learning. In an autoencoder training, the network tries to reconstruct the input data at its output, which forces the model to capture the most salient features of the data at its intermediate layers. If the intermediate layers correspond to a lower dimensional latent space than the original input, such autoencoders are also known as undercomplete. Activations extracted from these layers can be considered as compact, non-linear representations of the input. Another significant advancement in neural network-based representation learning in NLP tasks is word embeddings (also called distributed representation of words). By representing each word in a given vocabulary with a real-valued vector of a fixed dimension, word embeddings enable capturing of lexical, semantic or even syntactic similarities between words. Typically, these vector representations are learned from large corpora and can be used to enhance the performance of numerous NLP tasks such as document classification, question answering and machine translation. Most frequently used word embeddings are word2vec BIBREF33 and GloVe (Global Vectors for Word Representation) BIBREF34 . Both of these are extracted in an unsupervised manner and are based on the distributional hypothesis BIBREF35 , i.e., the assumption that words that occur in the same contexts tend to have similar meanings. Both word2vec and GloVe treat a word as a smallest entity to train on. A shift in this paradigm was introduced by fastText BIBREF36 , which treats each word as a bag of character n-grams. Consequently, fastText embeddings are shown to have better representations for rare words BIBREF36 . In addition, one can still construct a vector representation for an out-of-vocabulary word which is not possible with word2vec or GloVe embeddings BIBREF36 . Enhanced methods for deducting better word and/or sentence representations were recently introduced as well by Peters et al. with the name ELMo (Embeddings from Language Models) BIBREF37 and by Devlin et al. with the name BERT (Bidirectional Encoder Representations from Transformers) BIBREF38 . All of these word embedding models are trained on large corpora such as Wikipedia, in an unsupervised manner. For analyzing tweets, word2vec and GloVe word embeddings have been employed for topical clustering of tweets BIBREF39 , topic modeling BIBREF40 , BIBREF41 and extracting depression symptoms from tweets BIBREF20 .",
"Metrics for evaluating the performance of clustering algorithms varies depending on whether the ground truth topic categories are available or not. If so, frequently used metrics are accuracy and normalized mutual information. In the case of absence of ground truth labels, one has to use internal clustering criterions such as Calinski-Harabasz (CH) score BIBREF42 and Davies-Bouldin index BIBREF43 . Arbelaitz et al. provides an extensive comparative study of cluster validity indices BIBREF44 ."
],
[
"For this study, a publicly available dataset is used BIBREF45 . The dataset consisting of tweets has been collected using Twitter API and was initially introduced by Karami et al. BIBREF46 . Earliest tweet dates back to 13 June 2011 where the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in English language, collected from Twitter channels of 16 major health news agencies. List of health news channels and the number of tweets in the dataset from each channel can be examined from Table 1 .",
"The outlook of a typical tweet from the dataset can be examined from Figure 1 . For every tweet, the raw data consists of the tweet text and in most cases followed by a url to the original news article of the particular news source. This url string, if available, is removed from each tweet as it does not possess any natural language information. As Twitter allows several ways for users to interact such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string of \"RT\" appears as a prefix in the raw data and for user mentions, a string of form \"@username\" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removal of the \"#\" sign appearing before them (e.g. <#pregnancy> becomes <pregnancy>). Number of words, number of unique words and mean word counts for each Twitter channel can also be examined from Table 1 . Longest tweet consists of 27 words."
],
[
"For representing tweets, 5 conventional representation methods are proposed as baselines.",
"Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of $N \\times P$ in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., $N$ data points and $P$ features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space is performed by the methods below.",
"Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation in such a way that the first principal component has the highest possible variance and similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of $\\mathcal {O}(NP^2 + P^3)$ .",
"Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated into a low-rank decomposition. Time complexity of SVD and t-SVD for $S$ components are $\\mathcal {O}(min(NP^2, N^2P))$ and $\\mathcal {O}(N^2S)$ , respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require data normalization. When the data matrix is obtained by BoWs or tf-idf representations as in our case, the technique is also known as Latent Semantic Analysis.",
"LDA: Our LDA implementation employs online variational Bayes algorithm introduced by Hoffman et al. which uses stochastic optimization to maximize the objective function for the topic model BIBREF47 .",
"NMF: As NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, it allows regularization. Our implementation did not employ any regularization and the divergence function is set to be squared error, i.e., Frobenius norm."
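A sketch of how the five baselines above could be produced with scikit-learn (the library the authors report using), reduced to 24 features to match the evaluation setup; parameters beyond those stated in the text are library defaults, and which word-frequency matrix feeds each reduction method is an illustrative choice:

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.decomposition import PCA, TruncatedSVD, LatentDirichletAllocation, NMF

    def conventional_representations(tweets, n_components=24):
        bows = CountVectorizer().fit_transform(tweets)    # bag-of-words counts
        tfidf = TfidfVectorizer().fit_transform(tweets)   # tf-idf weights
        return {
            "pca": PCA(n_components=n_components).fit_transform(tfidf.toarray()),  # PCA needs dense input
            "tsvd": TruncatedSVD(n_components=n_components).fit_transform(tfidf),
            "lda": LatentDirichletAllocation(n_components=n_components,
                                             learning_method="online").fit_transform(bows),
            "nmf": NMF(n_components=n_components).fit_transform(tfidf),
        }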
],
[
"We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, $t$ , consisting of $W$ words, the 2D input is $I_{t} \\in ^{W \\times D}$ where $D$ is the embedding vector dimension. We compare 4 different word embeddings namely word2vec, GloVe, fastText and BERT with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets having less number of words, the input matrix is padded with zeros. As word2vec and GloVe embeddings can not handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined from Figure 1 .",
"The CAE architecture can be considered as consisting of 2 parts, ie., the encoder and the decoder. The encoder, $f_{enc}(\\cdot )$ , is the part of the network that compresses the input, $I$ , into a latent space representation, $U$ , and the decoder, $f_{dec}(\\cdot )$ aims to reconstruct the input from the latent space representation (see equation 12 ). In essence, ",
"$$U = f_{enc}(I) = f_{L}(f_{L-1}(...f_{1}(I)))$$ (Eq. 12) ",
"where $L$ is the number of layers in the encoder part of the CAE.",
"The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively and an output convolutional layer of a single filter (see Figure 1 ). All convolutional layers have a kernel size of (3 $\\times $ 3) and an activation function of Rectified Linear Unit (ReLU) except the output layer which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and similarly each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (having the same parameters). The pooling sizes for pooling layers are (2 $\\times $ 5), (2 $\\times $ 5) and (2 $\\times $ 2), respectively for the architectures when word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size $32 \\times 300$ (corresponding to maximum sequence length $\\times $ embedding dimension, $D$ ) is downsampled to size of $4 \\times 6$ out of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling layer sizes are chosen to be (2 $\\times $ 8), (2 $\\times $ 8) and (2 $\\times $0 2), respectively for that case. In summary, a representation of $\\times $1 values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is as such : $\\times $2 .",
"In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of manual extraction. In our case, this was not possible because of nonexistence of an inversed embedding layer in the decoder (as in the relationship between MaxPooling layers and UpSampling layers) as an embedding layer is not differentiable.",
"Training of autoencoders tries to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. $L_2$ -loss or mean square error (MSE) is chosen to be the loss function. In autoencoders, minimizing the $L_2$ -loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones BIBREF48 . In addition, from a probabilistic point of view, minimizing the $L_2$ -loss is the same as maximizing the probability of the parameters given the data, corresponding to a maximum likelihood estimator. The optimizer for the autoencoder training is chosen to be Adam due to its faster convergence abilities BIBREF49 . The learning rate for the optimizer is set to $10^{-5}$ and the batch size for the training is set to 32. Random split of 80% training-20% validation set is performed for monitoring convergence. Maximum number of training epochs is set to 50."
],
[
"Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include $L_1$ regularization, $L_2$ regularization, orthogonal regularization etc. Even though regularization is a common practice, standard training of neural networks do not inherently impose any constraints on the learned representations (activations), $U$ , other than the ones compelled by the activation functions (e.g. ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance BIBREF50 , BIBREF51 . ",
"$$\\begin{aligned}\n& \\text{minimize}\n& & L = 1/_N \\left\\Vert I - f_{dec}(f_{enc}(I))\\right\\Vert ^2_{2} \\\\\n& \\text{subject to}\n& & \\left\\Vert f_{enc}(I)\\right\\Vert ^2_{2} = 1\n\\end{aligned}$$ (Eq. 14) ",
"We propose an $L_2$ norm constraint on the learned representations out of the bottleneck layer, $U$ . Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit $L_2$ norm out of the bottleneck layer (see equation 14 where $N$ is the number of data points). Training a deep convolutional autoencoder with such a constraint is shown to be much more effective for image data than applying $L_2$ normalization on the learned representations after training BIBREF51 . To the best of our knowledge, this is the first study to incorporate $L_2$ norm constraint in a task involving text data."
],
[
"In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. Distance metric for k-means clustering is chosen to be euclidean and the linkage criteria for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increases the within-cluster variance in a hierarchical manner. For spectral clustering, Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. CH score has a range of $[0, +\\infty ]$ and a higher CH score corresponds to a better clustering. Computational complexity of calculating CH score is $\\mathcal {O}(N)$ .",
"For a given dataset $X$ consisting of $N$ data points, i.e., $X = \\big \\lbrace x_1, x_2, ... , x_N\\big \\rbrace $ and a given set of disjoint clusters $C$ with $K$ clusters, i.e., $C = \\big \\lbrace c_1, c_2, ... , c_K\\big \\rbrace $ , Calinski-Harabasz score, $S_{CH}$ , is defined as ",
"$$S_{CH} = \\frac{N-K}{K-1}\\frac{\\sum _{c_k \\in C}^{}{N_k \\left\\Vert \\overline{c_k}-\\overline{X}\\right\\Vert ^2_{2}}}{\\sum _{c_k \\in C}^{}{}\\sum _{x_i \\in c_k}^{}{\\left\\Vert x_i-\\overline{c_k}\\right\\Vert ^2_{2}}}$$ (Eq. 16) ",
"where $N_k$ is the number of points belonging to the cluster $c_k$ , $\\overline{X}$ is the centroid of the entire dataset, $\\frac{1}{N}\\sum _{x_i \\in X}{x_i}$ and $\\overline{c_k}$ is the centroid of the cluster $c_k$ , $\\frac{1}{N_k}\\sum _{x_i \\in c_k}{x_i}$ .",
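A direct NumPy transcription of Eq. 16, which can serve as a sanity check against the scikit-learn score used above:

    import numpy as np

    def calinski_harabasz(X, labels):
        X, labels = np.asarray(X), np.asarray(labels)
        clusters = np.unique(labels)
        N, K = len(X), len(clusters)
        overall_mean = X.mean(axis=0)
        between, within = 0.0, 0.0
        for c in clusters:
            members = X[labels == c]
            centroid = members.mean(axis=0)
            between += len(members) * np.sum((centroid - overall_mean) ** 2)
            within += np.sum((members - centroid) ** 2)
        return (N - K) / (K - 1) * between / within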
"For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries BIBREF54 , BIBREF55 on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders are performed with a single NVIDIA Titan Xp GPU."
],
[
"Performance of the representations tested on 3 different clustering algorithms, i.e., CH scores, for 3 different cluster numbers can be examined from Table 2 . $L_2$ -norm constrained CAE is simply referred as $L_2$ -CAE in Table 2 . Same table shows the number of features used for each method as well. Document-term matrix extracted by BoWs and tf-idf features result in a sparse matrix of $63,326 \\times 13,026$ with a sparsity of 0.9994733. Similarly, concatenation of word embeddings result in a high number of features with $32 \\times 300 = 9,600$ for word2vec, GloVe and fastText, $32 \\times 768 = 24,576$ for BERT embeddings. In summary, the proposed method of learning representations of tweets with CAEs outperform all of the conventional algorithms. When representations are compared with Hotelling's $T^2$ test (multivariate version of $t$ -test), every representation distribution learned by CAEs are shown to be statistically significantly different than every other conventional representation distribution with $p<0.001$ . In addition, introducing the $L_2$ -norm constraint on the learned representations during training enhances the clustering performance further (again $p<0.001$ when comparing for example fastText+CAE vs. fastText+ $L_2$0 -CAE). An example learning curve for CAE and $L_2$1 -CAE with fastText embeddings as input can also be seen in Figure 2 .",
"Detailed inspection of tweets that are clustered into the same cluster as well as visual analysis of the formed clusters is also performed. Figure 3 shows the t-SNE and UMAP mappings (onto 2D plane) of the 10 clusters formed by k-means algorithm for LDA, CAE and $L_2$ -CAE representations. Below are several examples of tweets sampled from one of the clusters formed by k-means in the 50 clusters case (fastText embeddings fed into $L_2$ -CAE):"
],
[
"Overall, we show that deep convolutional autoencoder-based feature extraction, i.e., representation learning, from health related tweets significantly enhances the performance of clustering algorithms when compared to conventional text feature extraction and topic modeling methods (see Table 2 ). This statement holds true for 3 different clustering algorithms (k-means, Ward, spectral) as well as for 3 different number of clusters. In addition, proposed constrained training ( $L_2$ -norm constraint) is shown to further improve the clustering performance in each experiment as well (see Table 2 ). A Calinski-Harabasz score of 4,304 has been achieved with constrained representation learning by CAE for the experiment of 50 clusters formed by k-means clustering. The highest CH score achieved in the same experiment setting by conventional algorithms was 638 which was achieved by LDA applied of tf-idf features.",
"Visualizations of t-SNE and UMAP mappings in Figure 3 show that $L_2$ -norm constrained training results in higher separability of clusters. The benefit of this constraint is especially significant in the performance of k-means clustering (see Table 2 ). This phenomena is not unexpected as k-means clustering is based on $L_2$ distance as well. The difference in learning curves for regular and constrained CAE trainings is also expected. Constrained CAE training converges to local minimum slightly later than unconstrained CAE, i.e., training of $L_2$ -CAE is slightly slower than that of CAE due to the introduced contraint (see Figure 2 ).",
"When it comes to comparison between word embeddings, fastText and BERT word vectors result in the highest CH scores whereas word2vec and GloVe embeddings result in significantly lower performance. This observation can be explained by the nature of word2vec and GloVe embeddings which can not handle out-of-vocabulary tokens. Numerous tweets include names of certain drugs which are more likely to be absent in the vocabulary of these models, consequently resulting in vectors of zeros as embeddings. However, fastText embeddings are based on character n-grams which enables handling of out-of-vocabulary tokens, e.g., fastText word vectors of the tokens <acetaminophen> and <paracetamol> are closer to each other simply due to shared character sequence, <acetam>, even if one of them is not in the vocabulary. Note that, <acetaminophen> and <paracetamol> are different names for the same drug.",
"Using tf-idf or BoWs features directly results in very poor performance. Similarly, concatenating word embeddings to create thousands of features results in significantly low performance compared to methods that reduce these features to 24. The main reason is that the bias-variance trade-off is dominated by the bias in high dimensional settings especially in Euclidean spaces BIBREF56 . Due to very high number of features (relative to the number of observations), the radius of a given region varies with respect to the $n$ th root of its volume, whereas the number of data points in the region varies roughly linearly with the volume BIBREF56 . This phenomena is known as curse of dimensionality. As topic models such as LDA and NMF are designed to be used on documents that are sufficiently long to extract robust statistics from, extracted topic vectors fall short in performance as well when it comes to tweets due to short texts.",
"The main limitation of this study is the absence of topic labels in the dataset. As a result, internal clustering measure of Calinski-Harabasz score was used for evaluating the performance of the formed clusters instead of accuracy or normalized mutual information. Even though CH score is shown to be able to capture clusters of different densities and presence of subclusters, it has difficulties capturing highly noisy data and skewed distributions BIBREF57 . In addition, used clustering algorithms, i.e., k-means, Ward and spectral clustering, are hard clustering algorithms which results in non-overlapping clusters. However, a given tweet can have several topical labels.",
"Future work includes representation learning of health-related tweets using deep neural network architectures that can inherently learn the sequential nature of the textual data such as recurrent neural networks, e.g., Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) etc. Sequence-to-sequence autoencoders are main examples of such architectures and they have been shown to be effective in encoding paragraphs from Wikipedia and other corpora to lower dimensions BIBREF58 . Furthermore, encodings out of a bidirectional GRU will be tested for clustering performance, as such architectures have been employed to represent a given tweet in other studies BIBREF59 , BIBREF60 , BIBREF61 ."
],
[
"In summary, we show that deep convolutional autoencoders can effectively learn compact representations of health-related tweets in an unsupervised manner. Conducted analysis show that the proposed representation learning scheme outperforms conventional feature extraction methods in three different clustering algorithms. In addition, we propose a constraint on the learned representation in order to further increase the clustering performance. Future work includes comparison of our model with recurrent neural architectures for clustering of health-related tweets. We believe this study serves as an advancement in the field of natural language processing for health informatics especially in clustering of short-text social media data."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Conventional Representations",
"Representation Learning",
"L 2 L_2-norm Constrained Representation Learning",
"Evaluation",
"Results",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"f58cb61232502ccd2ca9b1374e7f75e4056e6bef"
],
"answer": [
{
"evidence": [
"In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. Distance metric for k-means clustering is chosen to be euclidean and the linkage criteria for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increases the within-cluster variance in a hierarchical manner. For spectral clustering, Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. CH score has a range of $[0, +\\infty ]$ and a higher CH score corresponds to a better clustering. Computational complexity of calculating CH score is $\\mathcal {O}(N)$ .",
"For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries BIBREF54 , BIBREF55 on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders are performed with a single NVIDIA Titan Xp GPU."
],
"extractive_spans": [
"Calinski-Harabasz score",
"t-SNE",
"UMAP"
],
"free_form_answer": "",
"highlighted_evidence": [
"For evaluation of clustering performance, we use Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. CH score has a range of $[0, +\\infty ]$ and a higher CH score corresponds to a better clustering. ",
"For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed"
]
},
{
"annotation_id": [
"68de8331567cb04667b48a874d2c9c5c9084106c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig. 1. Proposed representation learning method depicting the overall flow starting from a tweet to the learned features, including the architecture of the convolutional autoencoder.",
"FLOAT SELECTED: Fig. 1. Proposed representation learning method depicting the overall flow starting from a tweet to the learned features, including the architecture of the convolutional autoencoder."
],
"extractive_spans": [],
"free_form_answer": "The health benefits of alcohol consumption are more limited than previously thought, researchers say",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 1. Proposed representation learning method depicting the overall flow starting from a tweet to the learned features, including the architecture of the convolutional autoencoder.",
"FLOAT SELECTED: Fig. 1. Proposed representation learning method depicting the overall flow starting from a tweet to the learned features, including the architecture of the convolutional autoencoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they evaluate their method?",
"What is an example of a health-related tweet?"
],
"question_id": [
"fac052c4ad6b19a64d7db32fd08df38ad2e22118",
"aa54e12ff71c25b7cff1e44783d07806e89f8e54"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"TABLE I NUMBER OF TWEETS, TOTAL NUMBER OF WORDS, NUMBER OF UNIQUE WORDS AND AVERAGE NUMBER OF WORDS FOR TWEETS FROM 16 HEALTH-RELATED TWITTER CHANNELS.",
"Fig. 1. Proposed representation learning method depicting the overall flow starting from a tweet to the learned features, including the architecture of the convolutional autoencoder.",
"Fig. 2. Learning curves depicting training and validation losses of CAE and L2-norm constrained CAE architectures for fastText embeddings.",
"Fig. 3. UMAP and t-SNE visualizations of representations extracted by LDA, CAE and L2-norm constrained CAE (each having a length of 24) and coloring based on k-means clustering of the representations into 10 clusters.",
"TABLE II CALINSKI-HARABASZ SCORES FOR SEVERAL CONVENTIONAL METHODS AND PROPOSED CAE-BASED METHODS FOR 3 DIFFERENT CLUSTERING ALGORITHMS AND 3 DIFFERENT NUMBER OF CLUSTERS."
],
"file": [
"2-TableI-1.png",
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"6-TableII-1.png"
]
} | [
"What is an example of a health-related tweet?"
] | [
[
"1901.00439-3-Figure1-1.png"
]
] | [
"The health benefits of alcohol consumption are more limited than previously thought, researchers say"
] | 644 |
1908.04531 | Offensive Language and Hate Speech Detection for Danish | The presence of offensive language on social media platforms and the implications this poses is becoming a major concern in modern society. Given the enormous amount of content created every day, automatic methods are required to detect and deal with this type of content. Until now, most of the research has focused on solving the problem for the English language, while the problem is multilingual. We construct a Danish dataset containing user-generated comments from \textit{Reddit} and \textit{Facebook}. It contains user generated comments from various social media platforms, and to our knowledge, it is the first of its kind. Our dataset is annotated to capture various types and target of offensive language. We develop four automatic classification systems, each designed to work for both the English and the Danish language. In the detection of offensive language in English, the best performing system achieves a macro averaged F1-score of $0.74$, and the best performing system for Danish achieves a macro averaged F1-score of $0.70$. In the detection of whether or not an offensive post is targeted, the best performing system for English achieves a macro averaged F1-score of $0.62$, while the best performing system for Danish achieves a macro averaged F1-score of $0.73$. Finally, in the detection of the target type in a targeted offensive post, the best performing system for English achieves a macro averaged F1-score of $0.56$, and the best performing system for Danish achieves a macro averaged F1-score of $0.63$. Our work for both the English and the Danish language captures the type and targets of offensive language, and present automatic methods for detecting different kinds of offensive language such as hate speech and cyberbullying. | {
"paragraphs": [
[
"Offensive language in user-generated content on online platforms and its implications has been gaining attention over the last couple of years. This interest is sparked by the fact that many of the online social media platforms have come under scrutiny on how this type of content should be detected and dealt with. It is, however, far from trivial to deal with this type of language directly due to the gigantic amount of user-generated content created every day. For this reason, automatic methods are required, using natural language processing (NLP) and machine learning techniques.",
"Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed, annotated with the guidelines described in BIBREF0 . We, furthermore, set out to analyze the linguistic features that prove hard to detect by analyzing the patterns that prove hard to detect."
],
[
"Offensive language varies greatly, ranging from simple profanity to much more severe types of language. One of the more troublesome types of language is hate speech and the presence of hate speech on social media platforms has been shown to be in correlation with hate crimes in real life settings BIBREF1 . It can be quite hard to distinguish between generally offensive language and hate speech as few universal definitions exist BIBREF2 . There does, however, seem to be a general consensus that hate speech can be defined as language that targets a group with the intent to be harmful or to cause social chaos. This targeting is usually done on the basis of some characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . In section \"Background\" , hate speech is defined in more detail. Offensive language, on the other hand, is a more general category containing any type of profanity or insult. Hate speech can, therefore, be classified as a subset of offensive language. BIBREF0 propose guidelines for classifying offensive language as well as the type and the target of offensive language. These guidelines capture the characteristics of generally offensive language, hate speech and other types of targeted offensive language such as cyberbullying. However, despite offensive language detection being a burgeoning field, no dataset yet exists for Danish BIBREF4 despite this phenomenon being present BIBREF5 .",
"Many different sub-tasks have been considered in the literature on offensive and harmful language detection, ranging from the detection of general offensive language to more refined tasks such as hate speech detection BIBREF2 , and cyberbullying detection BIBREF6 .",
"A key aspect in the research of automatic classification methods for language of any kind is having substantial amount of high quality data that reflects the goal of the task at hand, and that also contains a decent amount of samples belonging to each of the classes being considered. To approach this problem as a supervised classification task the data needs to be annotated according to a well-defined annotation schema that clearly reflects the problem statement. The quality of the data is of vital importance, since low quality data is unlikely to provide meaningful results. Cyberbullying is commonly defined as targeted insults or threats against an individual BIBREF0 . Three factors are mentioned as indicators of cyberbullying BIBREF6 : intent to cause harm, repetitiveness, and an imbalance of power. This type of online harassment most commonly occurs between children and teenagers, and cyberbullying acts are prohibited by law in several countries, as well as many of the US states BIBREF7 .",
" BIBREF8 focus on classifying cyberbullying events in Dutch. They define cyberbullying as textual content that is published online by an individual and is aggressive or hurtful against a victim. The annotation-schema used consists of two steps. In the first step, a three-point harmfulness score is assigned to each post as well as a category denoting the authors role (i.e. harasser, victim, or bystander). In the second step a more refined categorization is applied, by annotating the posts using the the following labels: Threat/Blackmail, Insult, Curse/Exclusion, Defamation, Sexual Talk, Defense, and Encouragement to the harasser. Hate Speech. As discussed in Section \"Classification Structure\" , hate speech is generally defined as language that is targeted towards a group, with the intend to be harmful or cause social chaos. This targeting is usually based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . Hate speech is prohibited by law in many countries, although the definitions may vary. In article 20 of the International Covenant on Civil and Political Rights (ICCPR) it is stated that \"Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law\" BIBREF9 . In Denmark, hate speech is prohibited by law, and is formally defined as public statements where a group is threatened, insulted, or degraded on the basis of characteristics such as nationality, ethnicity, religion, or sexual orientation BIBREF10 . Hate speech is generally prohibited by law in the European Union, where it is defined as public incitement to violence or hatred directed against a group defined on the basis of characteristics such as race, religion, and national or ethnic origin BIBREF11 . Hate speech is, however, not prohibited by law in the United States. This is due to the fact that hate speech is protected by the freedom of speech act in the First Amendment of the U.S. Constitution BIBREF12 .",
" BIBREF2 focus is on classifying hate speech by distinguishing between general offensive language and hate speech. They define hate speech as \"language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group\". They argue that the high use of profanity on social media makes it vitally important to be able to effectively distinguish between generally offensive language and the more severe hate speech. The dataset is constructed by gathering data from Twitter, using a hate speech lexicon to query the data with crowdsourced annotations.",
"Contradicting definitions. It becomes clear that one of the key challenges in doing meaningful research on the topic are the differences in both the annotation-schemas and the definitions used, since it makes it difficult to effectively compare results to existing work, as pointed out by several authors ( BIBREF13 , BIBREF3 , BIBREF14 , BIBREF0 ). These issues become clear when comparing the work of BIBREF6 , where racist and sexist remarks are classified as a subset of insults, to the work of BIBREF15 , where similar remarks are split into two categories; hate speech and derogatory language. Another clear example of conflicting definitions becomes visible when comparing BIBREF16 , where hate speech is considered without any consideration of overlaps with the more general type of offensive language, to BIBREF2 where a clear distinction is made between the two, by classifying posts as either Hate speech, Offensive or Neither. This lack of consensus led BIBREF14 to propose annotation guidelines and introduce a typology. BIBREF17 argue that these proposed guidelines do not effectively capture both the type and target of the offensive language."
],
[
"In this section we give a comprehensive overview of the structure of the task and describe the dataset provided in BIBREF0 . Our work adopts this framing of the offensive language phenomenon."
],
[
"Offensive content is broken into three sub-tasks to be able to effectively identify both the type and the target of the offensive posts. These three sub-tasks are chosen with the objective of being able to capture different types of offensive language, such as hate speech and cyberbullying (section \"Background\" ).",
"In sub-task A the goal is to classify posts as either offensive or not. Offensive posts include insults and threats as well as any form of untargeted profanity BIBREF17 . Each sample is annotated with one of the following labels:",
"In English this could be a post such as #TheNunMovie was just as scary as I thought it would be. Clearly the critics don't think she is terrifyingly creepy. I like how it ties in with #TheConjuring series. In Danish this could be a post such as Kim Larsen var god, men hans død blev alt for hyped.",
". In English this could be a post such as USER is a #pervert himself!. In Danish this could be a post such as Kalle er faggot...",
"In sub-task B the goal is to classify the type of offensive language by determining if the offensive language is targeted or not. Targeted offensive language contains insults and threats to an individual, group, or others BIBREF17 . Untargeted posts contain general profanity while not clearly targeting anyone BIBREF17 . Only posts labeled as offensive (OFF) in sub-task A are considered in this task. Each sample is annotated with one of the following labels:",
"Targeted Insult (TIN). In English this could be a post such as @USER Please ban this cheating scum. In Danish this could be e.g. Hun skal da selv have 99 år, den smatso.",
"Untargeted (UNT). In English this could be a post such as 2 weeks of resp done and I still don't know shit my ass still on vacation mode. In Danish this could e.g. Dumme svin...",
"In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:",
"Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section \"Background\" .",
"Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!",
"Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.",
"One of the main concerns when it comes to collecting data for the task of offensive language detection is to find high quality sources of user-generated content that represent each class in the annotation-schema to some extent. In our exploration phase we considered various social media platforms such as Twitter, Facebook, and Reddit.",
"We consider three social media sites as data.",
"Twitter. Twitter has been used extensively as a source of user-generated content and it was the first source considered in our initial data collection phase. The platform provides excellent interface for developers making it easy to gather substantial amounts of data with limited efforts. However, Twitter was not a suitable source of data for our task. This is due to the fact that Twitter has limited usage in Denmark, resulting in low quality data with many classes of interest unrepresented.",
"Facebook. We next considered Facebook, and the public page for the Danish media company Ekstra Bladet. We looked at user-generated comments on articles posted by Ekstra Bladet, and initial analysis of these comments showed great promise as they have a high degree of variation. The user behaviour on the page and the language used ranges from neutral language to very aggressive, where some users pour out sexist, racist and generally hateful language. We faced obstacles when collecting data from Facebook, due to the fact that Facebook recently made the decision to shut down all access to public pages through their developer interface. This makes computational data collection approaches impossible. We faced restrictions on scraping public pages with Facebook, and turned to manual collection of randomly selected user-generated comments from Ekstra Bladet's public page, yielding 800 comments of sufficient quality.",
"Reddit. Given that language classification tasks in general require substantial amounts of data, our exploration for suitable sources continued and our search next led us to Reddit. We scraped Reddit, collecting the top 500 posts from the Danish sub-reddits r/DANMAG and r/Denmark, as well as the user comments contained within each post.",
"We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data.",
"We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section \"Classification Structure\" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.",
"We used a Jaccard index BIBREF18 to assess the similarity of our annotations. In sub-task A the Jaccard index of these initial 100 posts was 41.9%, 39.1% for sub-task B , and 42.8% for sub-task C. After some analysis of these results and the posts that we disagreed on it became obvious that to a large extent the disagreement was mainly caused by two reasons:",
"Guesswork of the context where the post itself was too vague to make a decisive decision on whether it was offensive or not without more context. An example of this is a post such as Skal de hjælpes hjem, næ nej de skal sendes hjem, where one might conclude, given the current political climate, that this is an offensive post targeted at immigrants. The context is, however, lacking so we cannot make a decisive decision. This post should, therefore, be labeled as non-offensive, since the post does not contain any profanity or a clearly stated group.",
"Failure to label posts containing some kind of profanity as offensive (typically when the posts themselves were not aggressive, harmful, or hateful). An example could be a post like @USER sgu da ikke hans skyld at hun ikke han finde ud af at koge fucking pasta, where the post itself is rather mild, but the presence of fucking makes this an offensive post according to our definitions.",
"In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples."
],
[
"In Table 1 the distribution of samples by sources in our final dataset is presented. Although a useful tool, using the hate speech lexicon as a filter only resulted in 232 comments. The remaining comments from Reddit were then randomly sampled from the remaining corpus.",
"The fully annotated dataset was split into a train and test set, while maintaining the distribution of labels from the original dataset. The training set contains 80% of the samples, and the test set contains 20%. Table 2 presents the distribution of samples by label for both the train and test set. The dataset is skewed, with around 88% of the posts labeled as not offensive (NOT). This is, however, generally the case when it comes to user-generated content on online platforms, and any automatic detection system needs be able to handle the problem of imbalanced data in order to be truly effective."
],
[
"One of the most important factors to consider when it comes to automatic classification tasks the the feature representation. This section discusses various representations used in the abusive language detection literature.",
"Top-level features. In BIBREF3 information comes from top-level features such as bag-of-words, uni-grams and more complex n-grams, and the literature certainly supports this. In their work on cyberbullying detection, BIBREF8 use word n-grams, character n-grams, and bag-of-words. They report uni-gram bag-of-word features as most predictive, followed by character tri-gram bag-of-words. Later work finds character n-grams are the most helpful features BIBREF15 , underlying the need for the modeling of un-normalized text. these simple top-level feature approaches are good but not without their limitations, since they often have high recall but lead to high rate of false positives BIBREF2 . This is due to the fact that the presence of certain terms can easily lead to misclassification when using these types of features. Many words, however, do not clearly indicate which category the text sample belongs to, e.g. the word gay can be used in both neutral and offensive contexts.",
"Linguistic Features BIBREF15 use a number of linguistic features, including the length of samples, average word lengths, number of periods and question marks, number of capitalized letters, number of URLs, number of polite words, number of unknown words (by using an English dictionary), and number of insults and hate speech words. Although these features have not proven to provide much value on their own, they have been shown to be a good addition to the overall feature space BIBREF15 .",
"Word Representations. Top-level features often require the predictive words to occur in both the training set and the test sets, as discussed in BIBREF3 . For this reason, some sort of word generalization is required. BIBREF15 explore three types of embedding-derived features. First, they explore pre-trained embeddings derived from a large corpus of news samples. Secondly, they use word2vec BIBREF19 to generate word embeddings using their own corpus of text samples. We use both approaches. Both the pre-trained and word2vec models represent each word as a 200 dimensional distributed real number vector. Lastly, they develop 100 dimensional comment2vec model, based on the work of BIBREF20 . Their results show that the comment2vec and the word2vec models provide the most predictive features BIBREF15 . In BIBREF21 they experiment with pre-trained GloVe embeddings BIBREF22 , learned FastText embeddings BIBREF23 , and randomly initialized learned embeddings. Interestingly, the randomly initialized embeddings slightly outperform the others BIBREF21 .",
"Sentiment Scores. Sentiment scores are a common addition to the feature space of classification systems dealing with offensive and hateful speech. In our work we experiment with sentiment scores and some of our models rely on them as a dimension in their feature space. To compute these sentiment score features our systems use two Python libraries: VADER BIBREF24 and AFINN BIBREF25 .Our models use the compound attribute, which gives a normalized sum of sentiment scores over all words in the sample. The compound attribute ranges from $-1$ (extremely negative) to $+1$ (extremely positive).",
"Reading Ease. As well as some of the top-level features mentioned so far, we also use Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The Flesch-Kincaid Grade Level is a metric assessing the level of reading ability required to easily understand a sample of text."
],
[
"We introduce a variety of models in our work to compare different approaches to the task at hand. First of all, we introduce naive baselines that simply classify each sample as one of the categories of interest (based on BIBREF0 ). Next, we introduce a logistic regression model based on the work of BIBREF2 , using the same set of features as introduced there. Finally, we introduce three deep learning models: Learned-BiLSTM, Fast-BiLSTM, and AUX-Fast-BiLSTM. The logistic regression model is built using Scikit Learn BIBREF26 and the deep learning models are built using Keras BIBREF27 . The following sections describe these model architectures in detail, the algorithms they are based on, and the features they use."
],
[
"For each sub-task (A, B, and C, Section \"Classification Structure\" ) we present results for all methods in each language.",
"A - Offensive language identification:",
"English. For English (Table 3 ) Fast-BiLSTM performs best, trained for 100 epochs, using the OLID dataset. The model achieves a macro averaged F1-score of $0.735$ . This result is comparable to the BiLSTM based methods in OffensEval.",
"Additional training data from HSAOFL BIBREF2 does not consistently improve results. For the models using word embeddings results are worse with additional training data. On the other hand, for models that use a range of additional features (Logistic Regression and AUX-Fast-BiLSTM), the additional training data helps.",
"Danish. Results are in Table 4 . Logistic Regression works best with an F1-score of $0.699$ . This is the second best performing model for English, though the best performing model for English (Fast-BiLSTM) is worst for Danish.",
"Best results are given in Table 5 . The low scores for Danish compared to English may be explained by the low amount of data in the Danish dataset. The Danish training set contains $2,879$ samples (table 2 ) while the English training set contains $13,240$ sample.Futher, in the English dataset around $33\\%$ of the samples are labeled offensive while in the Danish set this rate is only at around $12\\%$ . The effect that this under represented class has on the Danish classification task can be seen in more detail in Table 5 .",
"B - Categorization of offensive language type",
"English. In Table 6 the results are presented for sub-task B on English. The Learned-BiLSTM model trained for 60 epochs performs the best, obtaining a macro F1-score of $0.619$ .",
"Recall and precision scores are lower for UNT than TIN (Table 5 ). One reason is skew in the data, with only around $14\\%$ of the posts labeled as UNT. The pre-trained embedding model, Fast-BiLSTM, performs the worst, with a macro averaged F1-score of $0.567$ . This indicates this approach is not good for detecting subtle differences in offensive samples in skewed data, while more complex feature models perform better.",
"Danish. Table 7 presents the results for sub-task B and the Danish language. The best performing system is the AUX-Fast-BiLSTM model (section UID26 ) trained for 100 epochs, which obtains an impressive macro F1-score of $0.729$ . This suggests that models that only rely on pre-trained word embeddings may not be optimal for this task. This is be considered alongside the indication in Section \"Final Dataset\" that relying on lexicon-based selection also performs poorly.",
"The limiting factor seems to be recall for the UNT category (Table 8 ). As mentioned in Section \"Background\" , the best performing system for sub-task B in OffensEval was a rule-based system, suggesting that more refined features, (e.g. lexica) may improve performance on this task. The better performance of models for Danish over English can most likely be explained by the fact that the training set used for Danish is more balanced, with around $42\\%$ of the posts labeled as UNT.",
"C - Offensive language target identification",
"English. The results for sub-task C and the English language are presented in Table 9 . The best performing system is the Learned-BiLSTM model (section UID24 ) trained for 10 epochs, obtaining a macro averaged F1-score of $0.557$ . This is an improvement over the models introduced in BIBREF0 , where the BiLSTM based model achieves a macro F1-score of $0.470$ .",
"The main limitations of our model seems to be in the classification of OTH samples, as seen in Table 11 . This may be explained by the imbalance in the training data. It is interesting to see that this imbalance does not effect the GRP category as much, which only constitutes about $28\\%$ of the training samples. One cause for the differences in these, is the fact that the definitions of the OTH category are vague, capturing all samples that do not belong to the previous two.",
"Danish. Table 10 presents the results for sub-task C and the Danish language. The best performing system is the same as in English, the Learned-BiLSTM model (section UID24 ), trained for 100 epochs, obtaining a macro averaged F1-score of $0.629$ . Given that this is the same model as the one that performed the best for English, this further indicates that task specific embeddings are helpful for more refined classification tasks.",
"It is interesting to see that both of the models using the additional set of features (Logistic Regression and AUX-Fast-BiLSTM) perform the worst. This indicates that these additional features are not beneficial for this more refined sub-task in Danish. The amount of samples used in training for this sub-task is very low. Imbalance does have as much effect for Danish as it does in English, as can be seen in Table 11 . Only about $14\\%$ of the samples are labeled as OTH in the data (table 2 ), but the recall and precision scores are closer than they are for English."
],
[
"We perform analysis of the misclassified samples in the evaluation of our best performing models. To accomplish this, we compute the TF-IDF scores for a range of n-grams. We then take the top scoring n-grams in each category and try to discover any patterns that might exist. We also perform some manual analysis of these misclassified samples. The goal of this process is to try to get a clear idea of the areas our classifiers are lacking in. The following sections describe this process for each of the sub-tasks.",
"A - Offensive language identification",
"The classifier struggles to identify obfuscated offensive terms. This includes words that are concatenated together, such as barrrysoetorobullshit. The classifier also seems to associate she with offensiveness, and samples containing she are misclassified as offensive in several samples while he is less often associated with offensive language.",
"There are several examples where our classifier labels profanity-bearing content as offensive that are labeled as non-offensive in the test set. Posts such as Are you fucking serious? and Fuck I cried in this scene are labeled non-offensive in the test set, but according to annotation guidelines should be classified as offensive.",
"The best classifier is inclined to classify longer sequences as offensive. The mean character length of misclassified offensive samples is $204.7$ , while the mean character length of the samples misclassified not offensive is $107.9$ . This may be due to any post containing any form of profanity being offensive in sub-task A, so more words increase the likelihood of $>0$ profane words.",
"The classifier suffers from the same limitations as the classifier for English when it comes to obfuscated words, misclassifying samples such as Hahhaaha lær det biiiiiaaaatch as non-offensive. It also seems to associate the occurrence of the word svensken with offensive language, and quite a few samples containing that word are misclassified as offensive. This can be explained by the fact that offensive language towards Swedes is common in the training data, resulting in this association. From this, we can conclude that the classifier relies too much on the presence of individual keywords, ignoring the context of these keywords.",
"B - Categorization of offensive language type",
"Obfuscation prevails in sub-task B. Our classifier misses indicators of targeted insults such as WalkAwayFromAllDemocrats. It seems to rely too highly on the presence of profanity, misclassifying samples containing terms such as bitch, fuck, shit, etc. as targeted insults.",
"The issue of the data quality is also concerning in this sub-task, as we discover samples containing clear targeted insults such as HillaryForPrison being labeled as untargeted in the test set.",
"Our Danish classifier also seems to be missing obfuscated words such as kidsarefuckingstupid in the classification of targeted insults. It relies to some extent to heavily on the presence of profanity such as pikfjæs, lorte and fucking, and misclassifies untargeted posts containing these keywords as targeted insults.",
"C - Offensive language target identification Misclassification based on obfuscated terms as discussed earlier also seems to be an issue for sub-task C. This problem of obfuscated terms could be tackled by introducing character-level features such as character level n-grams."
],
[
"Offensive language on online social media platforms is harmful. Due to the vast amount of user-generated content on online platforms, automatic methods are required to detect this kind of harmful content. Until now, most of the research on the topic has focused on solving the problem for English. We explored English and Danish hate speed detection and categorization, finding that sharing information across languages and platforms leads to good models for the task.",
"The resources and classifiers are available from the authors under CC-BY license, pending use in a shared task; a data statement BIBREF29 is included in the appendix. Extended results and analysis are given in BIBREF30 ."
]
],
"section_name": [
"Introduction",
"Background",
"Dataset",
"Classification Structure",
"Final Dataset",
"Features",
"Models",
"Results and Analysis",
"Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"8be122dcee2cc1ec6d2b2268d0d966aa1fd0c439"
],
"answer": [
{
"evidence": [
"Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed, annotated with the guidelines described in BIBREF0 . We, furthermore, set out to analyze the linguistic features that prove hard to detect by analyzing the patterns that prove hard to detect."
],
"extractive_spans": [],
"free_form_answer": "not researched as much as English",
"highlighted_evidence": [
"Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"ed49112477038ada1c84719719ca892698771c1f"
],
"answer": [
{
"evidence": [
"In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:",
"Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section \"Background\" .",
"Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!",
"Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:\n\nIndividual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section \"Background\" .\n\nGroup (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!\n\nOther (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"6944bb7e5e800bb6e488e11d422b7b8d5540d537"
],
"answer": [
{
"evidence": [
"We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data."
],
"extractive_spans": [
"3600 user-generated comments"
],
"free_form_answer": "",
"highlighted_evidence": [
"We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"959546219b2b508cd5682af2e75df913bee83c63"
],
"answer": [
{
"evidence": [
"We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section \"Classification Structure\" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.",
"In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples."
],
"extractive_spans": [
"the author and the supervisor"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared.",
"The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the challenge for other language except English",
"How many categories of offensive language were there?",
"How large was the dataset of Danish comments?",
"Who were the annotators?"
],
"question_id": [
"5be94c7c54593144ba2ac79729d7545f27c79d37",
"32e8eda2183bcafbd79b22f757f8f55895a0b7b2",
"b69f0438c1af4b9ed89e531c056d9812d4994016",
"2e9c6e01909503020070ec4faa6c8bf2d6c0af42"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"bias and hate speech",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 4: Results from sub-task A in Danish.",
"Table 6: Results from sub-task B in English.",
"Table 10: Results from sub-task C in Danish."
],
"file": [
"7-Table4-1.png",
"8-Table6-1.png",
"9-Table10-1.png"
]
} | [
"What is the challenge for other language except English",
"How many categories of offensive language were there?"
] | [
[
"1908.04531-Introduction-1"
],
[
"1908.04531-Classification Structure-8",
"1908.04531-Classification Structure-7",
"1908.04531-Classification Structure-9",
"1908.04531-Classification Structure-10"
]
] | [
"not researched as much as English",
"3"
] | 646 |
1901.02222 | Multi-turn Inference Matching Network for Natural Language Inference | Natural Language Inference (NLI) is a fundamental and challenging task in Natural Language Processing (NLP). Most existing methods only apply one-pass inference process on a mixed matching feature, which is a concatenation of different matching features between a premise and a hypothesis. In this paper, we propose a new model called Multi-turn Inference Matching Network (MIMN) to perform multi-turn inference on different matching features. In each turn, the model focuses on one particular matching feature instead of the mixed matching feature. To enhance the interaction between different matching features, a memory component is employed to store the history inference information. The inference of each turn is performed on the current matching feature and the memory. We conduct experiments on three different NLI datasets. The experimental results show that our model outperforms or achieves the state-of-the-art performance on all the three datasets. | {
"paragraphs": [
[
"Natural Language Inference (NLI) is a crucial subtopic in Natural Language Processing (NLP). Most studies treat NLI as a classification problem, aiming at recognizing the relation types of hypothesis-premise sentence pairs, usually including “Entailment”, “Contradiction” and “Neutral”.",
"NLI is also called Recognizing Textual Entailment (RTE) BIBREF0 in earlier works and a lot of statistical-based BIBREF1 and rule-based approaches BIBREF2 are proposed to solve the problem. In 2015, Bowman released the SNLI corpus BIBREF3 that provides more than 570K hypothesis-premise sentence pairs. The large-scale data of SNLI allows a Neural Network (NN) based model to perform on the NLI. Since then, a variety of NN based models have been proposed, most of which can be divided into two kinds of frameworks. The first one is based on “Siamense\" network BIBREF3 , BIBREF4 . It first applies either Recurrent Neural Network (RNN) or Convolutional Neural Networks (CNN) to generates sentence representations on both premise and hypothesis, and then concatenate them for the final classification. The second one is called “matching-aggregation\" network BIBREF5 , BIBREF6 . It matches two sentences at word level, and then aggregates the matching results to generate a fixed vector for prediction. Matching is implemented by several functions based on element-wise operations BIBREF7 , BIBREF8 . Studies on SNLI show that the second one performs better.",
"Though the second framework has made considerable success on the NLI task, there are still some limitations. First, the inference on the mixed matching feature only adopts one-pass process, which means some detailed information would not be retrieved once missing. While the multi-turn inference can overcome this deficiency and make better use of these matching features. Second, the mixed matching feature only concatenates different matching features as the input for aggregation. It lacks interaction among various matching features. Furthermore, it treats all the matching features equally and cannot assign different importance to different matching features.",
"In this paper, we propose the MIMN model to tackle these limitations. Our model uses the matching features described in BIBREF5 , BIBREF9 . However, we do not simply concatenate the features but introduce a multi-turn inference mechanism to infer different matching features with a memory component iteratively. The merits of MIMN are as follows:",
"We conduct experiments on three NLI datasets: SNLI BIBREF3 , SCITAIL BIBREF10 and MPE BIBREF11 . On the SNLI dataset, our single model achieves 88.3% in accuracy and our ensemble model achieves 89.3% in terms of accuracy, which are both comparable with the state-of-the-art results. Furthermore, our MIMN model outperforms all previous works on both SCITAIL and MPE dataset. Especially, the model gains substantial (8.9%) improvement on MPE dataset which contains multiple premises. This result shows our model is expert in aggregating the information of multiple premises."
],
[
"Early work on the NLI task mainly uses conventional statistical methods on small-scale datasets BIBREF0 , BIBREF12 . Recently, the neural models on NLI are based on large-scale datasets and can be categorized into two central frameworks: (i) Siamense-based framework which focuses on building sentence embeddings separately and integrates the two sentence representations to make the final prediction BIBREF4 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 ; (ii) “matching-aggregation” framework which uses various matching methods to get the interactive space of two input sentences and then aggregates the matching results to dig for deep information BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF8 , BIBREF6 , BIBREF23 , BIBREF24 , BIBREF18 , BIBREF25 , BIBREF26 .",
"Our model is directly motivated by the approaches proposed by BIBREF7 , BIBREF9 . BIBREF7 introduces the “matching-aggregation\" framework to compare representations between words and then aggregate their matching results for final decision.",
" BIBREF9 enhances the comparing approaches by adding element-wise subtraction and element-wise multiplication, which further improve the performance on SNLI. The previous work shows that matching layer is an essential component of this framework and different matching methods can affect the final classification result.",
"Various attention-based memory neural networks BIBREF27 have been explored to solve the NLI problem BIBREF20 , BIBREF28 , BIBREF14 . BIBREF20 presents a model of deep fusion LSTMs (DF-LSTMs) (Long Short-Term Memory ) which utilizes a strong interaction between text pairs in a recursive matching memory. BIBREF28 uses a memory network to extend the LSTM architecture. BIBREF14 employs a variable sized memory model to enrich the LSTM-based input encoding information. However, all the above models are not specially designed for NLI and they all focus on input sentence encoding.",
"Inspired by the previous work, we propose the MIMN model. We iteratively update memory by feeding in different sequence matching features. We are the first to apply memory mechanism to matching component for the NLI task. Our experiment results on several datasets show that our MIMN model is significantly better than the previous models."
],
[
"In this section, we describe our MIMN model, which consists of the following five major components: encoding layer, attention layer, matching layer, multi-turn inference layer and output layer. Fig. FIGREF3 shows the architecture of our MIMN model.",
"We represent each example of the NLI task as a triple INLINEFORM0 , where INLINEFORM1 is a given premise, INLINEFORM2 is a given hypothesis, INLINEFORM3 and INLINEFORM4 are word embeddings of r-dimension. The true label INLINEFORM5 indicates the logical relationship between the premise INLINEFORM6 and the hypothesis INLINEFORM7 , where INLINEFORM8 . Our model aims to compute the conditional probability INLINEFORM9 and predict the label for examples in testing data set by INLINEFORM10 ."
],
[
"In this paper, we utilize a bidirectional LSTM (BiLSTM) BIBREF29 as our encoder to transform the word embeddings of premise and hypothesis to context vectors. The premise and the hypothesis share the same weights of BiLSTM. DISPLAYFORM0 ",
"where the context vectors INLINEFORM0 and INLINEFORM1 are the concatenation of the forward and backward hidden outputs of BiLSTM respectively. The outputs of the encoding layer are the context vectors INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 is the number of hidden units of INLINEFORM5 ."
],
[
"On the NLI task, the relevant contexts between the premise and the hypothesis are important clues for final classification. The relevant contexts can be acquired by a soft-attention mechanism BIBREF30 , BIBREF31 , which has been applied to a bunch of tasks successfully. The alignments between a premise and a hypothesis are based on a score matrix. There are three most commonly used methods to compute the score matrix: linear combination, bilinear combination, and dot product. For simplicity, we choose dot product in the following computation BIBREF8 . First, each element in the score matrix is computed based on the context vectors of INLINEFORM0 and INLINEFORM1 as follows: DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 are computed in Equations ( EQREF5 ) and (), and INLINEFORM2 is a scalar which indicates how INLINEFORM3 is related to INLINEFORM4 .",
"Then, we compute the alignment vectors for each word in the premise and the hypothesis as follows: DISPLAYFORM0 DISPLAYFORM1 ",
" where INLINEFORM0 is the weighted summaries of thehypothesis in terms of each word in the premise. The same operation is applied to INLINEFORM1 . The outputs of this layer are INLINEFORM2 and INLINEFORM3 . For the context vectors INLINEFORM4 , the relevant contexts in the hypothesis INLINEFORM5 are represented in INLINEFORM6 . The same is applied to INLINEFORM7 and INLINEFORM8 ."
],
[
"The goal of the matching layer is to match the context vectors INLINEFORM0 and INLINEFORM1 with the corresponding aligned vectors INLINEFORM2 and INLINEFORM3 from multi-perspective to generate a matching sequence.",
"In this layer, we match each context vector INLINEFORM0 against each aligned vector INLINEFORM1 to capture richer semantic information. We design three effective matching functions: INLINEFORM2 , INLINEFORM3 and INLINEFORM4 to match two vectors BIBREF32 , BIBREF5 , BIBREF9 . Each matching function takes the context vector INLINEFORM5 ( INLINEFORM6 ) and the aligned vector INLINEFORM7 ( INLINEFORM8 ) as inputs, then matches the inputs by an feed-forward network based on a particular matching operation and finally outputs a matching vector. The formulas of the three matching functions INLINEFORM9 , INLINEFORM10 and INLINEFORM11 are described in formulas ( EQREF11 ) () (). To avoid repetition, we will only describe the application of these functions to INLINEFORM12 and INLINEFORM13 . The readers can infer these equations for INLINEFORM14 and INLINEFORM15 . DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 represent concatenation, subtraction, and multiplication respectively, INLINEFORM3 , INLINEFORM4 and INLINEFORM5 are weight parameters to be learned, and INLINEFORM6 are bias parameters to be learned. The outputs of each matching function are INLINEFORM7 , which represent the matching result from three perspectives respectively. After matching the context vectors INLINEFORM8 and the aligned vectors INLINEFORM9 by INLINEFORM10 , INLINEFORM11 and INLINEFORM12 , we can get three matching features INLINEFORM13 , INLINEFORM14 and INLINEFORM15 .",
"After matching the context vectors INLINEFORM0 and the aligned vectors INLINEFORM1 by INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , we can get three matching features INLINEFORM5 , INLINEFORM6 and INLINEFORM7 . where INLINEFORM8 , INLINEFORM9 , and INLINEFORM10 .",
"The INLINEFORM0 can be considered as a joint-feature of combing the context vectors INLINEFORM1 with aligned vectors INLINEFORM2 , which preserves all the information. And the INLINEFORM3 can be seen as a diff-feature of the INLINEFORM4 and INLINEFORM5 , which preserves the different parts and removes the similar parts. And the INLINEFORM6 can be regarded as a sim-feature of INLINEFORM7 and INLINEFORM8 , which emphasizes on the similar parts and neglects the different parts between INLINEFORM9 and INLINEFORM10 . Each feature helps us focus on particular parts between the context vectors and the aligned vectors. These matching features are vector representations with low dimension, but containing high-order semantic information. To make further use of these matching features, we collect them to generate a matching sequence INLINEFORM11 . DISPLAYFORM0 ",
" where INLINEFORM0 .",
"The output of this layer is the matching sequence INLINEFORM0 , which stores three kinds of matching features. The order of the matching features in INLINEFORM1 is inspired by the attention trajectory of human beings making inference on premise and hypothesis. We process the matching sequence in turn in the multi-turn inference layer. Intuitively, given a premise and a hypothesis, we will first read the original sentences to find the relevant information. Next, it's natural for us to combine all the parts of the original information and the relevant information. Then we move the attention to the different parts. Finally, we pay attention to the similar parts."
],
[
"In this layer, we aim to acquire inference outputs by aggregating the information in the matching sequence by multi-turn inference mechanism. We regard the inference on the matching sequence as the multi-turn interaction among various matching features. In each turn, we process one matching feature instead of all the matching features BIBREF9 , BIBREF26 . To enhance the information interaction between matching features, a memory component is employed to store the inference information of the previous turns. Then, the inference of each turn is based on the current matching feature and the memory. Here, we utilize another BiLSTM for the inference. DISPLAYFORM0 ",
" where INLINEFORM0 is an inference vector in the current turn, INLINEFORM1 is the index current turn, INLINEFORM2 , INLINEFORM3 is a memory vector stores the historical inference information, and INLINEFORM4 is used for dimension reduction.",
"Then we update the memory by combining the current inference vector INLINEFORM0 with the memory vector of last turn INLINEFORM1 . An update gate is used to control the ratio of current information and history information adaptively BIBREF33 . The initial values of all the memory vectors are all zeros. DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 are parameters to be learned, and INLINEFORM2 is a sigmoid function to compress the ratio between 0-1. Finally, we use the latest memory matrix INLINEFORM3 as the inference output of premise INLINEFORM4 . Then we calculate INLINEFORM5 in a similar way. The final outputs of this layer are INLINEFORM6 and INLINEFORM7 . DISPLAYFORM0 ",
"where INLINEFORM0 stores the inference results of all matching features. The final outputs of multi-turn inference layer are INLINEFORM1 and INLINEFORM2 . The calculation of INLINEFORM3 is the same as INLINEFORM4 ."
],
[
"The final relationship judgment depends on the sentence embeddings of premise and hypothesis. We convert INLINEFORM0 and INLINEFORM1 to sentence embeddings of premise and hypothesis by max pooling and average pooling. Next, we concatenate the two sentence embeddings to a fixed-length output vector. Then we feed the output vector to a multilayer perceptron (MLP) classifier that includes a hidden layer with INLINEFORM2 activation and a softmax layer to get the final prediction. The model is trained end-to-end. We employ multi-class cross-entropy as the cost function when training the model."
],
[
"To verify the effectiveness of our model, we conduct experiments on three NLI datasets. The basic information about the three datasets is shown in Table TABREF19 .",
"The large SNLI BIBREF3 corpus is served as a major benchmark for the NLI task. The MPE corpus BIBREF11 is a newly released textual entailment dataset. Each pair in MPE consists of four premises, one hypothesis, and one label, which is different from the standard NLI datasets. Entailment relationship holds if the hypothesis comes from the same image as the four premises. The SCITAIL BIBREF10 is a dataset about science question answering. The premises are created from relevant web sentences, while hypotheses are created from science questions and the corresponding answer candidates.",
""
],
[
"We compare our model with “matching-aggregation” related and attention-based memory related models. In addition, to verify the effectiveness of these major components in our model, we design the following model variations for comparison: ESIM is considered as a typical model of “matching-aggregation”, so we choose ESIM as the principal comparison object. We choose the LSTMN model with deep attention fusion as a complement comparison, which is a memory related model. Besides above models, following variants of our model are designed for comparing:",
"ESIM We choose the ESIM model as our baseline. It mixes all the matching feature together in the matching layer and then infers the matching result in a single-turn with a BiLSTM.",
"600D MIMN: This is our main model described in section SECREF3 .",
"600D MIMN-memory: This model removes the memory component. The motivation of this experiment is to verify whether the multiple turns inference can acquire more sufficient information than one-pass inference. In this model, we process one matching feature in one iteration. The three matching features are encoded by INLINEFORM0 in multi-turns iteratively without previous memory information. The output of each iteration is concatenated to be the final output of the multi-turn inference layer: Then the Equation ( EQREF14 ) and ( EQREF16 ) are changed into Equation ( EQREF24 ) and () respectively and the Equation ( EQREF15 ) is removed. DISPLAYFORM0 ",
"600D MIMN-gate+ReLU : This model replaces the update gate in the memory component with a ReLU layer. The motivation of this model is to verify the effectiveness of update gate for combining current inference result and previous memory. Then the Equation ( EQREF15 ) is changed into Equation ( EQREF26 ). INLINEFORM0 stays the same as Equations ( EQREF16 ). DISPLAYFORM0 "
],
[
"We implement our model with Tensorflow BIBREF34 . We initialize the word embeddings by the pre-trained embeddings of 300D GloVe 840B vectors BIBREF35 . The word embeddings of the out-of-vocabulary words are randomly initialized. The hidden units of INLINEFORM0 and INLINEFORM1 are 300 dimensions. All weights are constrained by L2 regularization with the weight decay coefficient of 0.0003. We also apply dropout BIBREF36 to all the layers with a dropout rate of 0.2. Batch size is set to 32. The model is optimized with Adam BIBREF37 with an initial learning rate of 0.0005, the first momentum of 0.9 and the second of 0.999. The word embeddings are fixed during all the training time. We use early-stopping (patience=10) based on the validation set accuracy. We use three turns on all the datasets. The evaluation metric is the classification accuracy. To help duplicate our results, we will release our source code at https://github.com/blcunlp/RTE/tree/master/MIMN."
],
[
"Experimental results of the current state-of-the-art models and three variants of our model are listed in Table TABREF29 . The first group of models (1)-(3) are the attention-based memory models on the NLI task. BIBREF20 uses external memory to increase the capacity of LSTMs. BIBREF14 utilizes an encoding memory matrix to maintain the input information. BIBREF28 extends the LSTM architecture with a memory network to enhance the interaction between the current input and all previous inputs.",
"The next group of models (4)-(12) belong to the “matching-aggregation” framework with bidirectional inter-attention. Decomposable attention BIBREF8 first applies the “matching-aggregation” on SNLI dataset explicitly. BIBREF5 enriches the framework with several comparison functions. BiMPM BIBREF6 employs a multi-perspective matching function to match the two sentences. BiMPM BIBREF6 does not only exploit a multi-perspective matching function but also allows the two sentences to match from multi-granularity. ESIM BIBREF9 further sublimates the framework by enhancing the matching tuples with element-wise subtraction and element-wise multiplication. ESIM achieves 88.0% in accuracy on the SNLI test set, which exceeds the human performance (87.7%) for the first time. BIBREF18 and BIBREF1 both further improve the performance by taking the ESIM model as a baseline model. The studies related to “matching-aggregation” but without bidirectional interaction are not listed BIBREF19 , BIBREF7 .",
"Motivated by the attention-based memory models and the bidirectional inter-attention models, we propose the MIMN model. The last group of models (13)-(16) are models described in this paper. Our single MIMN model obtains an accuracy of 88.3% on SNLI test set, which is comparable with the current state-of-the-art single models. The single MIMN model improves 0.3% on the test set compared with ESIM, which shows that multi-turn inference based on the matching features and memory achieves better performance. From model (14), we also observe that memory is generally beneficial, and the accuracy drops 0.8% when the memory is removed. This finding proves that the interaction between matching features is significantly important for the final classification. To explore the way of updating memory, we replace the update gate in MIMN with a ReLU layer to update the memory, which drops 0.1%.",
"To further improve the performance on SNLI dataset, an ensemble model MIMN is built for comparison. We design the ensemble model by simply averaging the probability distributions BIBREF6 of four MIMN models. Each of the models has the same architecture but initialized by different seeds. Our ensemble model achieves the state-of-the-art performance by obtains an accuracy of 89.3% on SNLI test set."
],
[
"The MPE dataset is a brand-new dataset for NLI with four premises, one hypothesis, and one label. In order to maintain the same data format as other textual entailment datasets (one premise, one hypothesis, and one label), we concatenate the four premises as one premise.",
"Table TABREF31 shows the results of our models along with the published models on this dataset. LSTM is a conditional LSTM model used in BIBREF19 . WbW-Attention aligns each word in the hypothesis with the premise. The state-of-the-art model on MPE dataset is SE model proposed by BIBREF11 , which makes four independent predictions for each sentence pairs, and the final prediction is the summation of four predictions. Compared with SE, our MIMN model obtains a dramatic improvement (9.7%) on MPE dataset by achieving 66.0% in accuracy.",
"To compare with the bidirectional inter-attention model, we re-implement the ESIM, which obtains 59.0% in accuracy. We observe that MIMN-memory model achieves 61.6% in accuracy. This finding implies that inferring the matching features by multi-turns works better than single turn. Compared with the ESIM, our MIMN model increases 7.0% in accuracy. We further find that the performance of MIMN achieves 77.9% and 73.1% in accuracy of entailment and contradiction respectively, outperforming all previous models. From the accuracy distributions on N, E, and C in Table TABREF31 , we can see that the MIMN model is good at dealing with entailment and contradiction while achieves only average performance on neural.",
"Consequently, the experiment results show that our MIMN model achieves a new state-of-the-art performance on MPE test set. Besides, our MIMN-memory model and MIMN-gate+ReLU model both achieve better performance than previous models. All of our models perform well on the entailment label, which reveals that our models can aggregate information from multiple sentences for entailment judgment."
],
[
"In this section, we study the effectiveness of our model on the SCITAIL dataset. Table TABREF31 presents the results of our models and the previous models on this dataset. Apart from the results reported in the original paper BIBREF10 : Majority class, ngram, decomposable attention, ESIM and DGEM, we compare further with the current state-of-the-art model CAFE BIBREF18 .",
"We can see that the MIMN model achieves 84.0% in accuracy on SCITAIL test set, which outperforms the CAFE by a margin of 0.5%. Moreover, the MIMN-gate+ReLU model exceeds the CAFE slightly. The MIMN model increases 13.3% in test accuracy compared with the ESIM, which again proves that multi-turn inference is better than one-pass inference."
],
[
"In this paper, we propose the MIMN model for NLI task. Our model introduces a multi-turns inference mechanism to process multi-perspective matching features. Furthermore, the model employs the memory mechanism to carry proceeding inference information. In each turn, the inference is based on the current matching feature and previous memory. Experimental results on SNLI dataset show that the MIMN model is on par with the state-of-the-art models. Moreover, our model achieves new state-of-the-art results on the MPE and the SCITAL datasets. Experimental results prove that the MIMN model can extract important information from multiple premises for the final judgment. And the model is good at handling the relationships of entailment and contradiction."
],
[
"This work is funded by Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (No.17PT05) and Graduate Innovation Fund of BLCU (No.18YCX010)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Encoding Layer",
"Attention Layer",
"Matching Layer",
"Multi-turn Inference Layer",
"Output Layer",
"Data",
" Models for Comparison",
"Experimental Settings",
"Experiments on SNLI",
"Experiments on MPE",
"Experiments on SCITAIL",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"69765ac78112db239b9464f5414426f3856e21bb"
],
"answer": [
{
"evidence": [
"In this paper, we propose the MIMN model for NLI task. Our model introduces a multi-turns inference mechanism to process multi-perspective matching features. Furthermore, the model employs the memory mechanism to carry proceeding inference information. In each turn, the inference is based on the current matching feature and previous memory. Experimental results on SNLI dataset show that the MIMN model is on par with the state-of-the-art models. Moreover, our model achieves new state-of-the-art results on the MPE and the SCITAL datasets. Experimental results prove that the MIMN model can extract important information from multiple premises for the final judgment. And the model is good at handling the relationships of entailment and contradiction."
],
"extractive_spans": [],
"free_form_answer": "Matching features from matching sentences from various perspectives.",
"highlighted_evidence": [
"Our model introduces a multi-turns inference mechanism to process multi-perspective matching features. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Which matching features do they employ?"
],
"question_id": [
"5067e5eb2cddbb34b71e8b74ab9210cd46bb09c5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Fig. 1. Architecture of MIMN Model. The matching layer outputs a matching sequence by matching the context vectors with the aligned vectors (green and blue) based on three matching functions. The multi-turn inference layer generates inference vectors by aggregating the matching sequence over multi-turns.",
"Table 1. Basic information about the three NLI datasets. Sentence Pairs is the total examples of each dataset. N, E, and C indicate Neutral, Entailment, and Contradiction, respectively.",
"Table 2. Performance on SNLI",
"Table 4. Performance on SCITAIL. Models with ? are reported from [12]."
],
"file": [
"3-Figure1-1.png",
"7-Table1-1.png",
"9-Table2-1.png",
"10-Table4-1.png"
]
} | [
"Which matching features do they employ?"
] | [
[
"1901.02222-Conclusion-0"
]
] | [
"Matching features from matching sentences from various perspectives."
] | 648 |
1804.08050 | Multi-Head Decoder for End-to-End Speech Recognition | This paper presents a new network architecture called multi-head decoder for end-to-end speech recognition as an extension of a multi-head attention model. In the multi-head attention model, multiple attentions are calculated, and then they are integrated into a single attention. On the other hand, instead of integration at the attention level, our proposed method uses a separate decoder for each attention head and integrates their outputs to generate a final output. Furthermore, in order to make each head capture different modalities, different attention functions are used for each head, leading to the improvement of the recognition performance with an ensemble effect. To evaluate the effectiveness of our proposed method, we conduct an experimental evaluation using the Corpus of Spontaneous Japanese. Experimental results demonstrate that our proposed method outperforms the conventional methods such as location-based and multi-head attention models, and that it can capture different speech/linguistic contexts within the attention-based encoder-decoder framework. | {
"paragraphs": [
[
"Automatic speech recognition (ASR) is the task to convert a continuous speech signal into a sequence of discrete characters, and it is a key technology to realize the interaction between human and machine. ASR has a great potential for various applications such as voice search and voice input, making our lives more rich. Typical ASR systems BIBREF0 consist of many modules such as an acoustic model, a lexicon model, and a language model. Factorizing the ASR system into these modules makes it possible to deal with each module as a separate problem. Over the past decades, this factorization has been the basis of the ASR system, however, it makes the system much more complex.",
"With the improvement of deep learning techniques, end-to-end approaches have been proposed BIBREF1 . In the end-to-end approach, a continuous acoustic signal or a sequence of acoustic features is directly converted into a sequence of characters with a single neural network. Therefore, the end-to-end approach does not require the factorization into several modules, as described above, making it easy to optimize the whole system. Furthermore, it does not require lexicon information, which is handcrafted by human experts in general.",
"The end-to-end approach is classified into two types. One approach is based on connectionist temporal classification (CTC) BIBREF2 , BIBREF3 , BIBREF1 , which makes it possible to handle the difference in the length of input and output sequences with dynamic programming. The CTC-based approach can efficiently solve the sequential problem, however, CTC uses Markov assumptions to perform dynamic programming and predicts output symbols such as characters or phonemes for each frame independently. Consequently, except in the case of huge training data BIBREF4 , BIBREF5 , it requires the language model and graph-based decoding BIBREF6 .",
"The other approach utilizes attention-based method BIBREF7 . In this approach, encoder-decoder architecture BIBREF8 , BIBREF9 is used to perform a direct mapping from a sequence of input features into text. The encoder network converts the sequence of input features to that of discriminative hidden states, and the decoder network uses attention mechanism to get an alignment between each element of the output sequence and the encoder hidden states. And then it estimates the output symbol using weighted averaged hidden states, which is based on the alignment, as the inputs of the decoder network. Compared with the CTC-based approach, the attention-based method does not require any conditional independence assumptions including the Markov assumption, language models, and complex decoding. However, non-causal alignment problem is caused by a too flexible alignment of the attention mechanism BIBREF10 . To address this issue, the study BIBREF10 combines the objective function of the attention-based model with that of CTC to constrain flexible alignments of the attention. Another study BIBREF11 uses a multi-head attention (MHA) to get more suitable alignments. In MHA, multiple attentions are calculated, and then, they are integrated into a single attention. Using MHA enables the model to jointly focus on information from different representation subspaces at different positions BIBREF12 , leading to the improvement of the recognition performance.",
"Inspired by the idea of MHA, in this study we present a new network architecture called multi-head decoder for end-to-end speech recognition as an extension of a multi-head attention model. Instead of the integration in the attention level, our proposed method uses multiple decoders for each attention and integrates their outputs to generate a final output. Furthermore, in order to make each head to capture the different modalities, different attention functions are used for each head, leading to the improvement of the recognition performance with an ensemble effect. To evaluate the effectiveness of our proposed method, we conduct an experimental evaluation using Corpus of Spontaneous Japanese. Experimental results demonstrate that our proposed method outperforms the conventional methods such as location-based and multi-head attention models, and that it can capture different speech/linguistic contexts within the attention-based encoder-decoder framework."
],
[
"The overview of attention-based network architecture is shown in Fig. FIGREF1 .",
"The attention-based method directly estimates a posterior INLINEFORM0 , where INLINEFORM1 represents a sequence of input features, INLINEFORM2 represents a sequence of output characters. The posterior INLINEFORM3 is factorized with a probabilistic chain rule as follows: DISPLAYFORM0 ",
"where INLINEFORM0 represents a subsequence INLINEFORM1 , and INLINEFORM2 is calculated as follows: DISPLAYFORM0 DISPLAYFORM1 ",
"where Eq. ( EQREF3 ) and Eq. () represent encoder and decoder networks, respectively, INLINEFORM0 represents an attention weight, INLINEFORM1 represents an attention weight vector, which is a sequence of attention weights INLINEFORM2 , INLINEFORM3 represents a subsequence of attention vectors INLINEFORM4 , INLINEFORM5 and INLINEFORM6 represent hidden states of encoder and decoder networks, respectively, and INLINEFORM7 represents the letter-wise hidden vector, which is a weighted summarization of hidden vectors with the attention weight vector INLINEFORM8 .",
"The encoder network in Eq. ( EQREF3 ) converts a sequence of input features INLINEFORM0 into frame-wise discriminative hidden states INLINEFORM1 , and it is typically modeled by a bidirectional long short-term memory recurrent neural network (BLSTM): DISPLAYFORM0 ",
"In the case of ASR, the length of the input sequence is significantly different from the length of the output sequence. Hence, basically outputs of BLSTM are often subsampled to reduce the computational cost BIBREF7 , BIBREF13 .",
"The attention weight INLINEFORM0 in Eq. ( EQREF4 ) represents a soft alignment between each element of the output sequence INLINEFORM1 and the encoder hidden states INLINEFORM2 .",
"The decoder network in Eq. () estimates the next character INLINEFORM0 from the previous character INLINEFORM1 , hidden state vector of itself INLINEFORM2 and the letter-wise hidden state vector INLINEFORM3 , similar to RNN language model (RNNLM) BIBREF17 . It is typically modeled using LSTM as follows: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 represent trainable matrix and vector parameters, respectively.",
"Finally, the whole of above networks are optimized using back-propagation through time (BPTT) BIBREF18 to minimize the following objective function: DISPLAYFORM0 ",
"where INLINEFORM0 represents the ground truth of the previous characters."
],
[
"The overview of our proposed multi-head decoder (MHD) architecture is shown in Fig. FIGREF19 . In MHD architecture, multiple attentions are calculated with the same manner in the conventional multi-head attention (MHA) BIBREF12 . We first describe the conventional MHA, and extend it to our proposed multi-head decoder (MHD)."
],
[
"The layer-wise hidden vector at the head INLINEFORM0 is calculated as follows: DISPLAYFORM0 ",
"where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 represent trainable matrix parameters, and any types of attention in Eq. ( EQREF4 ) can be used for INLINEFORM3 in Eq. ( EQREF21 ).",
"In the case of MHA, the layer-wise hidden vectors of each head are integrated into a single vector with a trainable linear transformation: DISPLAYFORM0 ",
"where INLINEFORM0 is a trainable matrix parameter, INLINEFORM1 represents the number of heads."
],
[
"On the other hand, in the case of MHD, instead of the integration at attention level, we assign multiple decoders for each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, it is expected to improve the recognition performance with an ensemble effect. The calculation of the attention weight at the head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with following equation: DISPLAYFORM0 ",
"Instead of the integration of the letter-wise hidden vectors INLINEFORM0 with linear transformation, each letter-wise hidden vector INLINEFORM1 is fed to INLINEFORM2 -th decoder LSTM: DISPLAYFORM0 ",
"Note that each LSTM has its own hidden state INLINEFORM0 which is used for the calculation of the attention weight INLINEFORM1 , while the input character INLINEFORM2 is the same among all of the LSTMs. Finally, all of the outputs are integrated as follows: DISPLAYFORM0 ",
"where INLINEFORM0 represents a trainable matrix parameter, and INLINEFORM1 represents a trainable vector parameter."
],
[
"As a further extension, we propose heterogeneous multi-head decoder (HMHD). Original MHA methods BIBREF12 , BIBREF11 use the same attention function such as dot-product or additive attention for each head. On the other hand, HMHD uses different attention functions for each head. We expect that this extension enables to capture the further different context in speech within the attention-based encoder-decoder framework."
],
[
"To evaluate the performance of our proposed method, we conducted experimental evaluation using Corpus of Spontaneous Japanese (CSJ) BIBREF20 , including 581 hours of training data, and three types of evaluation data. To compare the performance, we used following dot, additive, location, and three variants of multi-head attention methods:",
"We used the input feature vector consisting of 80 dimensional log Mel filter bank and three dimensional pitch feature, which is extracted using open-source speech recognition toolkit Kaldi BIBREF21 . Encoder and decoder networks were six-layered BLSTM with projection layer BIBREF22 (BLSTMP) and one-layered LSTM, respectively. In the second and third bottom layers in the encoder, subsampling was performed to reduce the length of utterance, yielding the length INLINEFORM0 . For MHA/MHD, we set the number of heads to four. For HMHD, we used two kind of settings: (1) dot-product attention + additive attention + location-based attention + coverage mechanism attention (Dot+Add+Loc+Cov), and (2) two location-based attentions + two coverage mechanism attentions (2 INLINEFORM1 Loc+2 INLINEFORM2 Cov). The number of distinct output characters was 3,315 including Kanji, Hiragana, Katakana, alphabets, Arabic number and sos/eos symbols. In decoding, we used beam search algorithm BIBREF9 with beam size 20. We manually set maximum and minimum lengths of the output sequence to 0.1 and 0.5 times the length of the subsampled input sequence, respectively, and the length penalty to 0.1 times the length of the output sequence. All of the networks were trained using end-to-end speech processing toolkit ESPnet BIBREF23 with a single GPU (Titan X pascal). Character error rate (CER) was used as a metric. The detail of experimental condition is shown in Table TABREF28 .",
"Experimental results are shown in Table TABREF35 .",
"First, we focus on the results of the conventional methods. Basically, it is known that location-based attention yields better performance than additive attention BIBREF10 . However, in the case of Japanese sentence, its length is much shorter than that of English sentence, which makes the use of location-based attention less effective. In most of the cases, the use of MHA brings the improvement of the recognition performance. Next, we focus on the effectiveness of our proposed MHD architecture. By comparing with the MHA-Loc, MHD-Loc (proposed method) improved the performance in Tasks 1 and 2, while we observed the degradation in Task 3. However, the heterogeneous extension (HMHD), as introduced in Section SECREF27 , brings the further improvement for the performance of MHD, achieving the best performance among all of the methods for all test sets.",
"Finally, Figure FIGREF36 shows the alignment information of each head of HMHD (2 INLINEFORM0 Loc+2 INLINEFORM1 Cov), which was obtained by visualizing the attention weights.",
"Interestingly, the alignments of the right and left ends seem to capture more abstracted dynamics of speech, while the rest of two alignments behave like normal alignments obtained by a standard attention mechanism. Thus, we can see that the attention weights of each head have a different tendency, and it supports our hypothesis that HMHD can capture different speech/linguistic contexts within its framework."
],
[
"In this paper, we proposed a new network architecture called multi-head decoder for end-to-end speech recognition as an extension of a multi-head attention model. Instead of the integration in the attention level, our proposed method utilized multiple decoders for each attention and integrated their outputs to generate a final output. Furthermore, in order to make each head to capture the different modalities, we used different attention functions for each head. To evaluate the effectiveness of our proposed method, we conducted an experimental evaluation using Corpus of Spontaneous Japanese. Experimental results demonstrated that our proposed methods outperformed the conventional methods such as location-based and multi-head attention models, and that it could capture different speech/linguistic contexts within the attention-based encoder-decoder framework.",
"In the future work, we will combine the multi-head decoder architecture with Joint CTC/Attention architecture BIBREF10 , and evaluate the performance using other databases."
]
],
"section_name": [
"Introduction",
"Attention-Based End-to-End ASR",
"Multi-Head Decoder",
"Multi-head attention (MHA)",
"Multi-head decoder (MHD)",
"Heterogeneous multi-head decoder (HMHD)",
"Experimental Evaluation",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"c343336f82f04fe45f3cedf1bbd3085bba374586"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Experimental results."
],
"extractive_spans": [],
"free_form_answer": "Their average improvement in Character Error Rate over the best MHA model was 0.33 percent points.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Experimental results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6a51a06bfe1b56e085b91b0de318a94e8cf572f4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Experimental conditions."
],
"extractive_spans": [],
"free_form_answer": "449050",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Experimental conditions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e7f6a4372574d4206d380b061c254d9b8ae608f7"
],
"answer": [
{
"evidence": [
"On the other hand, in the case of MHD, instead of the integration at attention level, we assign multiple decoders for each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, it is expected to improve the recognition performance with an ensemble effect. The calculation of the attention weight at the head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with following equation: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand, in the case of MHD, instead of the integration at attention level, we assign multiple decoders for each head and then integrate their outputs to get a final output."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"By how much does their method outperform the multi-head attention model?",
"How large is the corpus they use?",
"Does each attention head in the decoder calculate the same output?"
],
"question_id": [
"5a9f94ae296dda06c8aec0fb389ce2f68940ea88",
"85912b87b16b45cde79039447a70bd1f6f1f8361",
"948327d7aa9f85943aac59e3f8613765861f97ff"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Overview of attention-based network architecture.",
"Figure 2: Overview of multi-head decoder architecture.",
"Table 1: Experimental conditions.",
"Figure 3: Attention weights of each head. Two left figures represent the attention weights of the location-based attention, and the remaining figures represent that of the coverage mechanism attention.",
"Table 2: Experimental results."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Figure3-1.png",
"4-Table2-1.png"
]
} | [
"By how much does their method outperform the multi-head attention model?",
"How large is the corpus they use?"
] | [
[
"1804.08050-4-Table2-1.png"
],
[
"1804.08050-4-Table1-1.png"
]
] | [
"Their average improvement in Character Error Rate over the best MHA model was 0.33 percent points.",
"449050"
] | 652 |
2002.06424 | Deeper Task-Specificity Improves Joint Entity and Relation Extraction | Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general. | {
"paragraphs": [
[
"Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.",
"Within the domain of natural language processing (NLP), MTL approaches have been applied to a wide range of problems BIBREF3. In recent years, one particularly fruitful application of MTL to NLP has been joint solving of named entity recognition (NER) and relation extraction (RE), two important information extraction tasks with applications in search, question answering, and knowledge base construction BIBREF4. NER consists in the identification of spans of text as corresponding to named entities and the classification of each span's entity type. RE consists in the identification of all triples $(e_i, e_j, r)$, where $e_i$ and $e_j$ are named entities and $r$ is a relation that holds between $e_i$ and $e_j$ according to the text. For example, in Figure FIGREF1, Edgar Allan Poe and Boston are named entities of the types People and Location, respectively. In addition, the text indicates that the Lives-In relation obtains between Edgar Allan Poe and Boston.",
"One option for solving these two problems is a pipeline approach using two independent models, each designed to solve a single task, with the output of the NER model serving as an input to the RE model. However, MTL approaches offer a number of advantages over the pipeline approach. First, the pipeline approach is more susceptible to error prorogation wherein prediction errors from the NER model enter the RE model as inputs that the latter model cannot correct. Second, the pipeline approach only allows solutions to the NER task to inform the RE task, but not vice versa. In contrast, the joint approach allows for solutions to either task to inform the other. For example, learning that there is a Lives-In relation between Edgar Allan Poe and Boston can be useful for determining the types of these entities. Finally, the joint approach can be computationally more efficient than the pipeline approach. As mentioned above, MTL approaches are generally more efficient than single-task learning alternatives. This is due to the fact that solutions to related tasks often rely on similar information, which in an MTL setting only needs to be represented in one model in order to solve all tasks. For example, the fact that Edgar Allan Poe is followed by was born can help a model determine both that Edgar Allan Poe is an instance of a People entity and that the sentence expresses a Lives-In relation.",
"While the choice as to which and how many layers to share between tasks is known to be an important factor relevant to the performance of MTL models BIBREF5, BIBREF2, this issue has received relatively little attention within the context of joint NER and RE. As we show below in Section 2, prior proposals for jointly solving NER and RE have typically made use of very few task-specific parameters or have mostly used task-specific parameters only for the RE task. We seek to correct for this oversight by proposing a novel neural architecture for joint NER and RE. In particular, we make the following contributions:",
"We allow for deeper task-specificity than does previous work via the use of additional task-specific bidirectional recurrent neural networks (BiRNNs) for both tasks.",
"Because the relatedness between the NER and RE tasks is not constant across all textual domains, we take the number of shared and task-specific layers to be an explicit hyperparameter of the model that can be tuned separately for different datasets.",
"We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture."
],
[
"We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.",
"Several approaches treat the NER and RE tasks as if they were a single task. For example, Gupta et al. gupta-etal-2016-table, following Miwa and Sasaki miwa-sasaki-2014-modeling, treat the two tasks as a table-filling problem where each cell in the table corresponds to a pair of tokens $(t_i, t_j)$ in the input text. For the diagonal of the table, the cell label is the BILOU tag for $t_i$. All other cells are labeled with the relation $r$, if it exists, such that $(e_i, e_j, r)$, where $e_i$ is the entity whose span's final token is $t_i$, is in the set of true relations. A BiRNN is trained to fill the cells of the table. Zheng et al. Zheng2017 introduce a BILOU tagging scheme that incorporates relation information into the tags, allowing them to treat both tasks as if they were a single NER task. A series of two bidirectional LSTM (BiLSTM) layers and a final softmax layer are used to produce output tags. Li et al. li2019entity solve both tasks as a form of multi-turn question answering in which the input text is queried with question templates first to detect entities and then, given the detected entities, to detect any relations between these entities. Li et al. use BERT BIBREF8 as the backbone of their question-answering model and produce answers by tagging the input text with BILOU tags to identify the span corresponding to the answer(s).",
"The above approaches allow for very little task-specificity, since both the NER task and the RE task are coerced into a single task. Other approaches incorporate greater task-specificity in one of two ways. First, several models share the majority of model parameters between the NER and RE tasks, but also have separate scoring and/or output layers used to produce separate outputs for each task. For example, Katiyar and Cardie katiyar-cardie-2017-going and Bekoulis et al. bekoulis2018joint propose models in which token representations first pass through one or more shared BiLSTM layers. Katiyar and Cardie use a softmax layer to tag tokens with BILOU tags to solve the NER task and use an attention layer to detect relations between each pair of entities. Bekoulis et al., following Lample et al. Lample2016, use a conditional random field (CRF) layer to produce BIO tags for the NER task. The output from the shared BiLSTM layer for every pair of tokens is passed through relation scoring and sigmoid layers to predict relations.",
"A second method of incorporating greater task-specificity into these models is via deeper layers for solving the RE task. Miwa and Bansal miwa-bansal-2016-end and Li et al. li2017neural pass token representations through a BiLSTM layer and then use a softmax layer to label each token with the appropriate BILOU label. Both proposals then use a type of tree-structured bidirectional LSTM layer stacked on top of the shared BiLSTM to solve the RE task. Nguyen and Verspoor nguyen2019end use BiLSTM and CRF layers to perform the NER task. Label embeddings are created from predicted NER labels, concatenated with token representations, and then passed through a RE-specific BiLSTM. A biaffine attention layer BIBREF9 operates on the output of this BiLSTM to predict relations.",
"An alternative to the BIO/BILOU scheme is the span-based approach, wherein spans of the input text are directly labeled as to whether they correspond to any entity and, if so, their entity types. Luan et al. Luan2018 adopt a span-based approach in which token representations are first passed through a BiLSTM layer. The output from the BiLSTM is used to construct representations of candidate entity spans, which are then scored for both the NER and RE tasks via feed forward layers. Luan et al. Luan2019 follow a similar approach, but construct coreference and relation graphs between entities to propagate information between entities connected in these graphs. The resulting entity representations are then classified for NER and RE via feed forward layers. To the best of our knowledge, the current SOTA model for joint NER and RE is the span-based proposal of Eberts and Ulges eberts2019span. In this architecture, token representations are obtained using a pre-trained BERT model that is fine-tuned during training. Representations for candidate entity spans are obtained by max pooling over all tokens in each span. Span representations are passed through an entity classification layer to solve the NER task. Representations of all pairs of spans that are predicted to be entities and representations of the contexts between these pairs are then passed through a final layer with sigmoid activation to predict relations between entities. With respect to their degrees of task-specificity, these span-based approaches resemble the BIO/BILOU approaches in which the majority of model parameters are shared, but each task possesses independent scoring and/or output layers.",
"Overall, previous approaches to joint NER and RE have experimented little with deep task-specificity, with the exception of those models that include additional layers for the RE task. To our knowledge, no work has considered including additional NER-specific layers beyond scoring and/or output layers. This may reflect a residual influence of the pipeline approach in which the NER task must be solved first before additional layers are used to solve the RE task. However, there is no a priori reason to think that the RE task would benefit more from additional task-specific layers than the NER task. We also note that while previous work has tackled joint NER and RE in variety of textual domains, in all cases the number of shared and task-specific parameters is held constant across these domains."
],
[
"The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared, BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model. Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers."
],
[
"We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token in the input text $t_i$, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\\mathbf {t}^{glove}_{i}$, a character-level word embedding $\\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15 and a one-hot encoded casing vector $\\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\\mathbf {v}_i$ (where $\\circ $ denotes concatenation):",
"For an input text with $n$ tokens, $\\mathbf {v}_{1:n}$ are fed as input to a sequence of one or more shared BiRNN layers, with the output sequence from the $i$th shared BiRNN layer serving as the input sequence to the $i + 1$st shared BiRNN layer."
],
[
"The final shared BiRNN layer is followed by a sequence of zero or more NER-specific BiRNN layers; the output of the final shared BiRNN layer serves as input to the first NER-specific BiRNN layer, if such a layer exists, and the output from from the $i$th NER-specific BiRNN layer serves as input to the $i + 1$st NER-specific BiRNN layer. For every token $t_i$, let $\\mathbf {h}^{e}_i$ denote an NER-specific hidden representation for $t_i$ corresponding to the $i$th element of the output sequence from the final NER-specific BiRNN layer or the final shared BiRNN layer if there are zero NER-specific BiRNN layers. An NER score for token $t_i$, $\\mathbf {s}^{e}_i$, is obtained by passing $\\mathbf {h}^{e}_i$ through a series of two feed forward layers:",
"The activation function of $\\text{FFNN}^{(e1)}$ and its output size are treated as hyperparameters. $\\text{FFNN}^{(e2)}$ uses linear activation and its output size is $|\\mathcal {E}|$, where $\\mathcal {E}$ is the set of possible entity types. The sequence of NER scores for all tokens, $\\mathbf {s}^{e}_{1:n}$, is then passed as input to a linear-chain CRF layer to produce the final BIO tag predictions, $\\hat{\\mathbf {y}}^e_{1:n}$. During inference, Viterbi decoding is used to determine the most likely sequence $\\hat{\\mathbf {y}}^e_{1:n}$."
],
[
"Similar to the NER-specific layers, the output sequence from the final shared BiRNN layer is fed through zero or more RE-specific BiRNN layers. Let $\\mathbf {h}^{r}_i$ denote the $i$th output from the final RE-specific BiRNN layer or the final shared BiRNN layer if there are no RE-specific BiRNN layers.",
"Following previous work BIBREF16, BIBREF11, BIBREF12, we predict relations between entities $e_i$ and $e_j$ using learned representations from the final tokens of the spans corresponding to $e_i$ and $e_j$. To this end, we filter the sequence $\\mathbf {h}^{r}_{1:n}$ to include only elements $\\mathbf {h}^{r}_{i}$ such that token $t_i$ is the final token in an entity span. During training, ground truth entity spans are used for filtering. During inference, predicted entity spans derived from $\\hat{\\mathbf {y}}^e_{1:n}$ are used. Each $\\mathbf {h}^{r}_{i}$ is concatenated to a learned NER label embedding for $t_i$, $\\mathbf {l}^{e}_{i}$:",
"Ground truth NER labels are used to obtain $\\mathbf {l}^{e}_{1:n}$ during training, and predicted NER labels are used during inference.",
"Next, RE scores are computed for every pair $(\\mathbf {g}^{r}_i, \\mathbf {g}^{r}_j)$. If $\\mathcal {R}$ is the set of possible relations, we calculate the DistMult score BIBREF17 for every relation $r_k \\in \\mathcal {R}$ and every pair $(\\mathbf {g}^{r}_i, \\mathbf {g}^{r}_j)$ as follows:",
"$M^{r_k}$ is a diagonal matrix such that $M^{r_k} \\in \\mathbb {R}^{p \\times p}$, where $p$ is the dimensionality of $\\mathbf {g}^r_i$. We also pass each RE-specific hidden representation $\\mathbf {g}^{r}_i$ through a single feed forward layer:",
"As in the case of $\\text{FFNN}^{(e1)}$, the activation function of $\\text{FFNN}^{(r1)}$ and its output size are treated as hyperparameters.",
"Let $\\textsc {DistMult}^r_{i,j}$ denote the concatenation of $\\textsc {DistMult}^{r_k}(\\mathbf {g}^r_i, \\mathbf {g}^r_j)$ for all $r_k \\in \\mathcal {R}$ and let $\\cos _{i,j}$ denote the cosine distance between vectors $\\mathbf {f}^{r}_i$ and $\\mathbf {f}^{r}_j$. We obtain RE scores for $(t_i, t_j)$ via a feed forward layer:",
"$\\text{FFNN}^{(r2)}$ uses linear activation, and its output size is $|\\mathcal {R}|$. Final relation predictions for a pair of tokens $(t_i, t_j)$, $\\hat{\\mathbf {y}}^r_{i,j}$, are obtained by passing $\\mathbf {s}^r_{i,j}$ through an elementwise sigmoid layer. A relation is predicted for all outputs from this sigmoid layer exceeding $\\theta ^r$, which we treat as a hyperparameter."
],
[
"During training, character embeddings, label embeddings, and weights for the weighted average layer, all BiRNN weights, all feed forward networks, and $M^{r_k}$ for all $r_k \\in \\mathcal {R}$ are trained in a supervised manner. As mentioned above, BIO tags for all tokens are used as labels for the NER task. For the the RE task, binary outputs are used. For every relation $r_k \\in R$ and for every pair of tokens $(t_i, t_j)$ such that $t_i$ is the final token of entity $e_i$ and $t_j$ is the final token of entity $e_j$, the RE label $y^{r_k}_{i,j} = 1$ if $(e_i, e_j, r_k)$ is a true relation. Otherwise, we have $y^{r_k}_{i,j} = 0$.",
"For both output layers, we compute the cross-entropy loss. If $\\mathcal {L}_{NER}$ and $\\mathcal {L}_{RE}$ denote the cross-entropy loss for the NER and RE outputs, respectively, then the total model loss is given by $\\mathcal {L} = \\mathcal {L}_{NER} + \\lambda ^r \\mathcal {L}_{RE}$. The weight $\\lambda ^r$ is treated as a hyperparameter and allows for tuning the relative importance of the NER and RE tasks during training. Final training for both datasets used a value of 5 for $\\lambda ^r$.",
"For the ADE dataset, we trained using the Adam optimizer with a mini-batch size of 16. For the CoNLL04 dataset, we used the Nesterov Adam optimizer with and a mini-batch size of 2. For both datasets, we used a learning rate of $5\\times 10^{-4}$, During training, dropout was applied before each BiRNN layer, other than the character BiRNN layer, and before the RE scoring layer."
],
[
"We evaluate the architecture described above using the following two publicly available datasets."
],
[
"The Adverse Drug Events (ADE) dataset BIBREF6 consists of 4,272 sentences describing adverse effects from the use of particular drugs. The text is annotated using two entity types (Adverse-Effect and Drug) and a single relation type (Adverse-Effect). Of the entity instances in the dataset, 120 overlap with other entities. Similar to prior work using BIO/BILOU tagging, we remove overlapping entities. We preserve the entity with the longer span and remove any relations involving a removed entity.",
"There are no official training, dev, and test splits for the ADE dataset, leading previous researchers to use some form of cross-validation when evaluating their models on this dataset. We split out 10% of the data to use as a held-out dev set. Final results are obtained via 10-fold cross-validation using the remaining 90% of the data and the hyperparameters obtained from tuning on the dev set. Following previous work, we report macro-averaged performance metrics averaged across each of the 10 folds."
],
[
"The CoNLL04 dataset BIBREF7 consists of 1,441 sentences from news articles annotated with four entity types (Location, Organization, People, and Other) and five relation types (Works-For, Kill, Organization-Based-In, Lives-In, and Located-In). This dataset contains no overlapping entities.",
"We use the three-way split of BIBREF16, which contains 910 training, 243 dev, and 288 test sentences. All hyperparameters are tuned against the dev set. Final results are obtained by averaging results from five trials with random weight initializations in which we trained on the combined training and dev sets and evaluated on the test set. As previous work using the CoNLL04 dataset has reported both micro- and macro-averages, we report both sets of metrics.",
"",
"In evaluating NER performance on these datasets, a predicted entity is only considered a true positive if both the entity's span and span type are correctly predicted. In evaluating RE performance, we follow previous work in adopting a strict evaluation method wherein a predicted relation is only considered correct if the spans corresponding to the two arguments of this relation and the entity types of these spans are also predicted correctly. We experimented with LSTMs and GRUs for all BiRNN layers in the model and experimented with using $1-3$ shared BiRNN layers and $0-3$ task-specific BiRNN layers for each task. Hyperparameters used for final training are listed in Table TABREF17."
],
[
"Full results for the performance of our model, as well as other recent work, are shown in Table TABREF18. In addition to precision, recall, and F1 scores for both tasks, we show the average of the F1 scores across both tasks. On the ADE dataset, we achieve SOTA results for both the NER and RE tasks. On the CoNLL04 dataset, we achieve SOTA results on the NER task, while our performance on the RE task is competitive with other recent models. On both datasets, we achieve SOTA results when considering the average F1 score across both tasks. The largest gain relative to the previous SOTA performance is on the RE task of the ADE dataset, where we see an absolute improvement of 4.5 on the macro-average F1 score.",
"While the model of Eberts and Ulges eberts2019span outperforms our proposed architecture on the CoNLL04 RE task, their results come at the cost of greater model complexity. As mentioned above, Eberts and Ulges fine-tune the BERTBASE model, which has 110 million trainable parameters. In contrast, given the hyperparameters used for final training on the CoNLL04 dataset, our proposed architecture has approximately 6 million trainable parameters.",
"The fact that the optimal number of task-specific layers differed between the two datasets demonstrates the value of taking the number of shared and task-specific layers to be a hyperparameter of our model architecture. As shown in Table TABREF17, the final hyperparameters used for the CoNLL04 dataset included an additional RE-specific BiRNN layer than did the final hyperparameters used for the ADE dataset. We suspect that this is due to the limited number of relations and entities in the ADE dataset. For most examples in this dataset, it is sufficient to correctly identify a single Drug entity, a single Adverse-Effect entity, and an Adverse-Effect relation between the two entities. Thus, the NER and RE tasks for this dataset are more closely related than they are in the case of the CoNLL04 dataset. Intuitively, cases in which the NER and RE problems can be solved by relying on more shared information should require fewer task-specific layers."
],
[
"To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:",
"We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.",
"We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.",
"We average the results for each set of hyperparameter across three trials with random weight initializations.",
"Table TABREF26 contains the results from the ablation study. These results show that the proposed architecture benefits from the inclusion of both NER- and RE-specific layers. However, the RE task benefits much more from the inclusion of these task-specific layers than does the NER task. We take this to reflect the fact that the RE task is more difficult than the NER task for the CoNLL04 dataset, and therefore benefits the most from its own task-specific layers. This is consistent with the fact that the hyperparameter setting that performs best on the RE task is that with no NER-specific BiRNN layers, i.e. the setting that retained RE-specific BiRNN layers. In contrast, the inclusion of task-specific BiRNN layers of any kind had relatively little impact on the performance on the NER task.",
"Note that the setting with no NER-specific layers is somewhat similar to the setup of Nguyen and Verspoor's nguyen2019end model, but includes an additional shared and an additional RE-specific layer. That this setting outperforms Nguyen et al.'s model reflects the contribution of having deeper shared and RE-specific layers, separate from the contribution of NER-specific layers."
],
[
"Our results demonstrate the utility of using deeper task-specificity in models for joint NER and RE and of tuning the level of task-specificity separately for different datasets. We conclude that prior work on joint NER and RE undervalues the importance of task-specificity. More generally, these results underscore the importance of correctly balancing the number of shared and task-specific parameters in MTL.",
"We note that other approaches that employ a single model architecture across different datasets are laudable insofar as we should prefer models that can generalize well across domains with little domain-specific hyperparameter tuning. On the other hand, the similarity between the NER and RE tasks varies across domains, and improved performance can be achieved on these tasks by tuning the number of shared and task-specific parameters. In our work, we treated the number of shared and task-specific layers as a hyperparameter to be tuned for each dataset, but future work may explore ways to select this aspect of the model architecture in a more principled way. For example, Vandenhende et al. vandenhende2019branched propose using a measure of affinity between tasks to determine how many layers to share in MTL networks. Task affinity scores of NER and RE could be computed for different textual domains or datasets, which could then guide the decision regarding the number of shared and task-specific layers to employ for joint NER and RE models deployed on these domains.",
"Other extensions to the present work could include fine-tuning the model used to obtain contextual word embeddings, e.g. ELMo or BERT, during training. In order to minimize the number of trainable parameters, we did not employ such fine-tuning in our model, but we suspect a fine-tuning approach could lead to improved performance relative to our results. An additional opportunity for future work would be an extension of this work to other related NLP tasks, such as co-reference resolution and cross-sentential relation extraction."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: Shared Layers",
"Model ::: NER-Specific Layers",
"Model ::: RE-Specific Layers",
"Model ::: Training",
"Experiments",
"Experiments ::: ADE",
"Experiments ::: CoNLL04",
"Experiments ::: Results",
"Experiments ::: Ablation Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ebd3deb882f8d1c33836b6a453bc6456773b32cd"
],
"answer": [
{
"evidence": [
"We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6bed2a7163c3fcd6d33218fbb09a7650d10a7822"
],
"answer": [
{
"evidence": [
"To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:",
"We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.",
"We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.",
"We average the results for each set of hyperparameter across three trials with random weight initializations."
],
"extractive_spans": [
"(i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind"
],
"free_form_answer": "",
"highlighted_evidence": [
"To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:\n\nWe used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.\n\nWe increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.\n\nWe average the results for each set of hyperparameter across three trials with random weight initializations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6b3f48502059543742b61485a5c7ed874ab9a6f8"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets."
],
"extractive_spans": [],
"free_form_answer": "1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"ad66364f90afe2ea62600c773f387823aeeb90c6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets."
],
"extractive_spans": [],
"free_form_answer": "2 for the ADE dataset and 3 for the CoNLL04 dataset",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they repot results only on English data?",
"What were the variables in the ablation study?",
"How many shared layers are in the system?",
"How many additional task-specific layers are introduced?"
],
"question_id": [
"eae13c9693ace504eab1f96c91b16a0627cd1f75",
"bcec22a75c1f899e9fcea4996457cf177c50c4c5",
"58f50397a075f128b45c6b824edb7a955ee8cba1",
"9adcc8c4a10fa0d58f235b740d8d495ee622d596"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 2: Illustration of our proposed architecture. Token representations are derived from a pre-trained ELMo model, pre-trained GloVe embeddings, learned character-based embeddings, and one-hot encoded casing vectors. The number of shared and task-specific BiRNN layers is treated as a hyperparameter of the model architecture. Only the final token in each entity span is used for predictions for the RE task; grey boxes indicate tokens that are not used for relation predictions. The output for the RE task is a vector of size |R| for all pairs of entities, where R is the set of all possible relations.",
"Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets.",
"Table 2: Precision, Recall, and F1 scores for our model and other recent models on the ADE and CoNLL04 datasets. Because our scores are averaged across multiple trials, F1 scores shown here cannot be directly calculated from the precision and recall scores shown here. Note that Nguyen and Verspoor do not report precision and recall scores.",
"Table 3: Results from an ablation study using the CoNLL04 dataset. All models have the same number of total parameters."
],
"file": [
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"How many shared layers are in the system?",
"How many additional task-specific layers are introduced?"
] | [
[
"2002.06424-4-Table1-1.png"
],
[
"2002.06424-4-Table1-1.png"
]
] | [
"1",
"2 for the ADE dataset and 3 for the CoNLL04 dataset"
] | 654 |
1909.05246 | Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems | Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. Self-attentional models have been used in the creation of the state-of-the-art models in many NLP tasks such as neural machine translation, but their usage has not been explored for the task of training end-to-end task-oriented dialogue generation systems yet. In this study, we apply these models on the three different datasets for training task-oriented chatbots. Our finding shows that self-attentional models can be exploited to create end-to-end task-oriented chatbots which not only achieve higher evaluation scores compared to recurrence-based models, but also do so more efficiently. | {
"paragraphs": [
[
"Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain which is the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers.",
"There are mainly two different ways to create a task-oriented chatbot which are either using set of hand-crafted and carefully-designed rules or use corpus-based method in which the chatbot can be trained with a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter method seems to be a better and a more general approach for developing task-oriented chatbots. The corpus-based method also falls into two main chatbot design architectures which are pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural networks based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, all sequence modelling methods can also be used in training end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words would be the corresponding response.",
"Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that entirely rely on the self-attention mechanism for both encoder and decoder, and that is why they are also referred to as a self-attentional models. The Transformer models has produced state-of-the-art results in the task neural machine translation BIBREF6 and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While in the Transformer model there is no recurrence, it turns out that the recurrence used in RNN models is essential for some tasks in NLP including language understanding tasks and thus the Transformer fails to generalize in those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model.",
"We focus on self-attentional sequence modelling for this study and intend to provide an answer for one specific question which is:",
"How effective are self-attentional models for training end-to-end task-oriented chatbots?",
"Our contribution in this study is as follows:",
"We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks and compare and analyze the results using different evaluation metrics on three different datasets.",
"We provide insight into how effective are self-attentional models for this task and benchmark the time performance of these models against the recurrence-based sequence modelling methods.",
"We try to quantify the effectiveness of self-attention mechanism in self-attentional models and compare its effect to recurrence-based models for the task of training end-to-end task-oriented chatbots."
],
[
"End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.",
"Another popular method for training chatbots is based on memory networks. Memory networks augment the neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents in which they store dialogue context in the memory module, and then the model uses it to select a system response (also stored in the memory module) from a set of candidates. A variation of Key-value memory networks BIBREF10 has been used in BIBREF11 for the training task-oriented chatbots which stores the knowledge base in the form of triplets (which is (subject,relation,object) such as (yoga,time,3pm)) in the key-value memory network and then the model tries to select the most relevant entity from the memory and create a relevant response. This approach makes the interaction with the knowledge base smoother compared to other models.",
"Another approach for training end-to-end task-oriented dialogue systems tries to model the task-oriented dialogue generation in a reinforcement learning approach in which the current state of the conversation is passed to some sequence learning network, and this network decides the action which the chatbot should act upon. End-to-end LSTM based model BIBREF12, and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots."
],
[
"Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attentional-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15, and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models used in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17 which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus can model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling."
],
[
"We compare the most commonly used recurrence-based models for sequence modelling and contrast them with Transformer and Universal Transformer models. The models that we train are:"
],
[
"Long Short-term Memory (LSTM) networks are a special kind of RNN networks which can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20 which makes it hard for RNN models to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates, and the model has the ability to decide how much of the previous information it needs to keep and how much of the new information it needs to integrate and thus this mechanism helps the model keep track of long-term dependencies.",
"Bi-directional LSTMs BIBREF21 are a variation of LSTMs which proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to not only look at past tokens, like LSTM does, but to future tokens, so the model has access to information both form the past and future. In the case of a task-oriented dialogue generation systems, in some cases, the information needed so that the model learns the dependencies between the tokens, comes from the tokens that are ahead of the current index, and if the model is able to take future tokens into accounts it can learn more efficiently."
],
[
"As discussed before, Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many of the sequence modelling methods, Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the results of the encoder is passed to the decoder to create the output sequence. The difference between Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention which intends to give the model the ability to look at different representations of the different positions of both the input (encoder self-attention), output (decoder self-attention) and also between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25."
],
[
"The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied on consecutive representation vectors of each token in the sequence (i.e., over depth) whereas in the RNN models this recurrence is applied on positions of the tokens in the sequence. A variation of the Universal Transformer, called Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique on the Universal Transformer model which makes the model train faster since it saves computation time and also in some cases can increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens.",
"We know, based on reported evidence that transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of transformers and universal-transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments to investigate the usage of self-attentional models performance against the aforementioned models for the task of training end-to-end task-oriented chatbots."
],
[
"We run our experiments on Tesla 960M Graphical Processing Unit (GPU). We evaluated the models using the aforementioned metrics and also applied early stopping (with delta set to 0.1 for 600 training steps)."
],
[
"We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). M2M stands for Machines Talking to Machines which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained on our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets which we will use to train and evaluate the models.",
"The M2M dataset has more diversity in both language and dialogue flow compared to the the commonly used DSTC2 dataset which makes it appealing for the task of creating task-oriented chatbots. This is also the reason that we decided to use M2M dataset in our experiments to see how well models can handle a more diversed dataset."
],
[
"We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step will be used as the target. For example, given we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target."
],
[
"We used the tensor2tensor library BIBREF29 in our experiments for training and evaluation of sequence modeling methods. We use Adam optimizer BIBREF30 for training the models. We set $\\beta _1=0.9$, $\\beta _2=0.997$, and $\\epsilon =1e-9$ for the Adam optimizer and started with learning rate of 0.2 with noam learning rate decay schema BIBREF6. In order to avoid overfitting, we use dropout BIBREF31 with dropout chosen from [0.7-0.9] range. We also conducted early stopping BIBREF14 to avoid overfitting in our experiments as the regularization methods. We set the batch size to 4096, hidden size to 128, and the embedding size to 128 for all the models. We also used grid search for hyperparameter tuning for all of the trained models. Details of our training and hyperparameter tuning and the code for reproducing the results can be found in the chatbot-exp github repository."
],
[
"In the inference time, there are mainly two methods for decoding which are greedy and beam search BIBREF32. Beam search has been proved to be an essential part in generative NLP task such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search could help alleviate the problem of having many possible valid outputs which do not match with the target but are valid and sensible outputs. Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” but the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match with the defined target.",
"We try to alleviate this problem in inference time by applying the beam search technique with a different beam size $\\alpha \\in \\lbrace 1, 2, 4\\rbrace $ and pick the best result based on the BLEU score. Note that when $\\alpha = 1$, we are using the original greedy search method for the generation task."
],
[
"BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.",
"Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.",
"Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.",
"F1-Entity Score: Datasets used in task-oriented chores have a set of entities which represent user preferences. For example, in the restaurant domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we could use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric used for the evaluation of named-entity recognition models which is the harmonic average of precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses."
],
[
"The results of running the experiments for the aforementioned models is shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model in each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) in inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in the BLEU, Per-turn accuracy, and entity F1 score. The reduction in the evalution numbers for the M2M dataset and in our investigation of the trained model we found that this considerable reduction is due to the fact that the diversity of M2M dataset is considerably more compared to DSTC2 dataset while the traning corpus size is smaller."
],
[
"Table TABREF22 shows the time performance of the models trained on DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model was trained to converge. We also show the loss in the development set for that specific checkpoint."
],
[
"As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented in order to increase their performance, as evident in Table TABREF14 which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and outperform the self-attentional models.",
"To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of Bi-LSTM model and also tried a different number of self-attention heads in self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are actually the best models that we found in our experiments on DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of models with beam size of 2 in inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to see its performance change. Note that increasing the number of attention heads makes the training time intractable and time consuming while the model size would increase significantly as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (both test and development set) we can see that Bi-LSTM performance decreases and thus there is no need to increase the attention heads further.",
"Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Trasnformer model with NH=4 and also its results are very close to the Transformer model with NH=1. This observation clearly depicts the power of self-attentional based models and demonstrates that the attention mechanism used in self-attentional models as the backbone for learning, outperforms recurrence-based models even if they are augmented with multiple attention heads."
],
[
"We have determined that Transformers and Universal-Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems. In actuality, their performance is even better than the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in the BLEU, Per-turn accuracy, and entity F1 score. The results of the Transformer model beats all other models in all of the evaluation metrics. Also, comparing the result of LSTM and LSTM with attention mechanism as well as the Bi-LSTM with Bi-LSTM with attention mechanism, it can be observed in the results that adding the attention mechanism can increase the performance of the models. Comparing the results of self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results.",
"In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as the Memory Augmented Networks."
]
],
"section_name": [
"Introduction",
"Related Work ::: Task-Oriented Chatbots Architectures",
"Related Work ::: Sequence Modelling Methods",
"Models",
"Models ::: LSTM and Bi-Directional LSTM",
"Models ::: Transformer",
"Models ::: Universal Transformer",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Datasets ::: Dataset Preparation",
"Experiments ::: Training",
"Experiments ::: Inference",
"Experiments ::: Evaluation Measures",
"Results and Discussion ::: Comparison of Models",
"Results and Discussion ::: Time Performance Comparison",
"Results and Discussion ::: Effect of (Self-)Attention Mechanism",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"cd1011788e81396ea998131ecacf8de802032d77"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Evaluation of effect of self-attention mechanism using DSTC2 dataset (Att: Attetnion mechanism; UT: Universal Transformers; ACT: Adaptive Computation Time; NH: Number of attention heads)"
],
"extractive_spans": [],
"free_form_answer": "1, 4, 8, 16, 32, 64",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Evaluation of effect of self-attention mechanism using DSTC2 dataset (Att: Attetnion mechanism; UT: Universal Transformers; ACT: Adaptive Computation Time; NH: Number of attention heads)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"d2251d783bc8d23d77f26228fa5cf885a775ccbe"
],
"answer": [
{
"evidence": [
"BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.",
"Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.",
"Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.",
"F1-Entity Score: Datasets used in task-oriented chores have a set of entities which represent user preferences. For example, in the restaurant domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we could use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric used for the evaluation of named-entity recognition models which is the harmonic average of precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.\n\nPer-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.\n\nPer-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.\n\nF1-Entity Score: Datasets used in task-oriented chores have a set of entities which represent user preferences. For example, in the restaurant domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we could use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric used for the evaluation of named-entity recognition models which is the harmonic average of precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6bb539e1393bf1ec8721a15552a38190f9f69eb9"
],
"answer": [
{
"evidence": [
"We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). M2M stands for Machines Talking to Machines which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained on our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets which we will use to train and evaluate the models."
],
"extractive_spans": [
"DSTC2",
"M2M-sim-M",
"M2M-sim-R"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots.",
" We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How many layers of self-attention does the model have?",
"Is human evaluation performed?",
"What are the three datasets used?"
],
"question_id": [
"8568c82078495ab421ecbae38ddd692c867eac09",
"2ea382c676e418edd5327998e076a8c445d007a5",
"bd7a95b961af7caebf0430a7c9f675816c9c527f"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Statistics of DSTC2, M2M-R, and M2MM Datasets",
"Table 2: Evaluation of Models on DSTC2 dataset for both test and development datasets (bs: shows the best beam size in inference; UT: Universal Transformers)",
"Table 3: Evaluation of models on M2M restaurant (M2M-R) and movie (M2M-M) dataset for test datasets (bs: The best beam size in inference; UT: Universal Transformers)",
"Table 4: Comparison of convergence performance of the models",
"Table 5: Comparison of convergence performance of the models",
"Table 6: Evaluation of effect of self-attention mechanism using DSTC2 dataset (Att: Attetnion mechanism; UT: Universal Transformers; ACT: Adaptive Computation Time; NH: Number of attention heads)"
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png"
]
} | [
"How many layers of self-attention does the model have?"
] | [
[
"1909.05246-8-Table6-1.png"
]
] | [
"1, 4, 8, 16, 32, 64"
] | 657 |
1606.04631 | Bidirectional Long-Short Term Memory for Video Description | Video captioning has been attracting broad research attention in the multimedia community. However, most existing approaches either ignore temporal information among video frames or just employ local contextual temporal knowledge. In this work, we propose a novel video captioning framework, termed as \emph{Bidirectional Long-Short Term Memory} (BiLSTM), which deeply captures bidirectional global temporal structure in video. Specifically, we first devise a joint visual modelling approach to encode video data by combining a forward LSTM pass, a backward LSTM pass, together with visual features from Convolutional Neural Networks (CNNs). Then, we inject the derived video representation into the subsequent language model for initialization. The benefits are in two folds: 1) comprehensively preserving sequential and visual information; and 2) adaptively learning dense visual features and sparse semantic representations for videos and sentences, respectively. We verify the effectiveness of our proposed video captioning framework on a commonly-used benchmark, i.e., Microsoft Video Description (MSVD) corpus, and the experimental results demonstrate the superiority of the proposed approach as compared to several state-of-the-art methods. | {
"paragraphs": [
[
"With the development of digital media technology and popularity of Mobile Internet, online visual content has increased rapidly in recent couple of years. Subsequently, visual content analysis for retrieving BIBREF0 , BIBREF1 and understanding becomes a fundamental problem in the area of multimedia research, which has motivated world-wide researchers to develop advanced techniques. Most previous works, however, have focused on classification task, such as annotating an image BIBREF2 , BIBREF3 or video BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 with given fixed label sets. With some pioneering methods BIBREF8 , BIBREF9 tackling the challenge of describing images with natural language proposed, visual content understanding has attracted more and more attention. State-of-the-art techniques for image captioning have been surpassed by new advanced approaches in succession BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent researches BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 have been focusing on describing videos with more comprehensive sentences instead of simple keywords. Different from image, video is sequential data with temporal structure, which may pose significant challenge to video caption. Most of the existing works in video description employed max or mean pooling across video frames to obtain video-level representation, which failed to capture temporal knowledge. To address this problem, Yao et al. proposed to use 3-D Convolutional Neural Networks to explore local temporal information in video clips, where the most relevant temporal fragments were automatically chosen for generating natural language description with attention mechanism BIBREF17 . In BIBREF19 , Venugopanlan et al. implemented a Long-Short Term Memory (LSTM) network, a variant of Recurrent Neural Networks (RNNs), to model the global temporal structure in whole video snippet. However, these methods failed to exploit bidirectional global temporal structure, which could benefit from not only previous video frames, but also information in future frames. Also, existing video captioning schemes cannot adaptively learn dense video representation and generate sparse semantic sentences.",
"In this work, we propose to construct a novel bidirectional LSTM (BiLSTM) network for video captioning. More specifically, we design a joint visual modelling to comprehensively explore bidirectional global temporal information in video data by integrating a forward LSTM pass, a backward LSTM pass, together with CNNs features. In order to enhance the subsequent sentence generation, the obtained visual representations are then fed into LSTM-based language model as initialization. We summarize the main contributions of this work as follows: (1) To our best knowledge, our approach is one of the first to utilize bidirectional recurrent neural networks for exploring bidirectional global temporal structure in video captioning; (2) We construct two sequential processing models for adaptive video representation learning and language description generation, respectively, rather than using the same LSTM for both video frames encoding and text decoding in BIBREF19 ; and (3) Extensive experiments on a real-world video corpus illustrate the superiority of our proposal as compared to state-of-the-arts."
],
[
"In this section, we elaborate the proposed video captioning framework, including an introduction of the overall flowchart (as illustrated in Figure FIGREF1 ), a brief review of LSTM-based Sequential Model, the joint visual modelling with bidirectional LSTM and CNNs, as well as the sentence generation process."
],
[
"With the success in speech recognition and machine translation tasks, recurrent neural structure, especially LSTM and its variants, have dominated sequence processing field. LSTM has been demonstrated to be able to effectively address the gradients vanishing or explosion problem BIBREF20 during back-propagation through time (BPTT) BIBREF21 and to exploit temporal dependencies in very long temporal structure. LSTM incorporates several control gates and a constant memory cell, the details of which are following: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 -like matrices are LSTM weight parameters, INLINEFORM1 and INLINEFORM2 are denote the sigmoid and hyperbolic non-linear functions, respectively, and INLINEFORM3 indicates element-wise multiplication operation. Inspired by the success of LSTM, we devise an LSTM-based network to investigate the video temporal structure for video representation. Then initializing language model with video representation to generate video description."
],
[
"Different from other video description approaches that represent video by implementing pooling across frames BIBREF16 or 3-D CNNs with local temporal structure BIBREF15 , we apply BiLSTM networks to exploit the bidirectional temporal structure of video clips. Convolutional Neural Networks (CNNs) has demonstrated overwhelming performance on image recognition, classification BIBREF2 and video content analysis BIBREF11 , BIBREF19 . Therefore, we extract caffe BIBREF22 INLINEFORM0 layer of each frame through VGG-16 layers BIBREF23 caffemodel. Following BIBREF19 , BIBREF16 , we sample one frame from every ten frames in the video and extract the INLINEFORM1 layer, the second fully-connected layer, to express selected frames. Then a INLINEFORM2 -by-4096 feature matrix generated to denote given video clip, where INLINEFORM3 is the number of frames we sampled in the video. As in Figure FIGREF1 , we then implement two LSTMs, forward pass and backward pass, to encode CNNs features of video frames, and then merge the output sequences at each time point with a learnt weight matrix. What is interesting is that at each time point in bidirectional structure, we not only “see” the past frames, but also “peek” at the future frames. In other words, our bidirectional LSTM structure encodes video by scanning the entire video sequence several times (same as the number of time steps at encoding stage), and each scan is relevant to its adjacent scans. To investigate the effect of reinforcement of original CNNs feature, we combine the merged hidden states of BiLSTM structure and INLINEFORM4 representation time step-wise. We further employ another forward pass LSTM network with incorporated sequence to generate our video representation. In BIBREF24 , BIBREF25 , Wu et al. had demonstrated that using the output of the last step could perform better than pooling approach across outputs of all the time steps in video classification task. Similarly, we represent the entire video clip using the state of memory cell and output of the last time point, and feed them into description generator as initialization of memory cell and hidden unit respectively."
],
[
"Existing video captioning approaches usually share common part of visual model and language model as representation BIBREF19 , BIBREF15 , which may lead to severe information loss. Besides, they also input the same pooled visual vector of the whole video into every sentence processing unit, thereby ignoring temporal structure. Such methods may easily result in undesirable outputs due to the duplicate inputs in every time point of the new sequence BIBREF16 . To address these issues, we generate descriptions for video clips using a sequential model initialized with visual representation. Inspired by the superior performance of probabilistic sequence generation machine, we generate each word recurrently at each time point. Then the log probability of sentence INLINEFORM0 can be expressed as below: DISPLAYFORM0 ",
"where INLINEFORM0 denotes all parameters in sentence generation model and INLINEFORM1 is the representation of given video, and INLINEFORM2 indicates the number of words in sentence. We identify the most likely sentence by maximizing the log likelihood in Equation ( EQREF10 ), then our object function can be described as: DISPLAYFORM0 ",
"The optimizer updates INLINEFORM0 with INLINEFORM1 across the entire training process applying Stochastic Gradient Descent (SGD). During training phrase, the loss is back propagated through time and each LSTM unit learns to derive an appropriate hidden representation INLINEFORM2 from input sequence. We then implement the Softmax function to get the probability distribution over the words in the entire vocabulary.",
"At the beginning of the sentence generation, as depicted in Figure FIGREF1 , an explicit starting token (<BOS>) is needed and we terminate each sentence when the end-of-sentence token (<EOS>) is feeding in. During test phrase, similar to BIBREF19 , our language model takes the word INLINEFORM0 with maximum likelihood as input at time INLINEFORM1 repeatedly until the <EOS> token is emitted."
],
[
"Video Dataset: We evaluate our approach by conducting experiments on the Microsoft Research Video Description (MSVD) BIBREF26 corpus, which is description for a collection of 1,970 video clips. Each video clip depicts a single action or a simple event, such as “shooting”, “cutting”, “playing the piano” and “cooking”, which with the duration between 8 seconds to 25 seconds. There are roughly 43 available sentences per video and 7 words in each sentence at average. Following the majority of prior works BIBREF15 , BIBREF16 , BIBREF19 , BIBREF17 , we split entire dataset into training, validation and test set with 1200, 100 and 670 snippets, respectively.",
"Image Dataset: Comparing to other LSTM structure and deep networks, the size of video dataset for caption task is small, thereby we apply transferring learning from image description. COCO 2014 image description dataset BIBREF27 has been used to perform experiments frequently BIBREF12 , BIBREF11 , BIBREF10 , BIBREF14 , which consists of more than 120,000 images, about 82,000 and 40,000 images for training and test respectively. We pre-train our language model on COCO 2014 training set first, then transfer learning on MSVD with integral video description model."
],
[
"Description Processing: Some minimal preprocessing has been implemented to the descriptions in both MSVD and COCO 2014 datasets. We first employ word_tokenize operation in NLTK toolbox to obtain individual words, and then convert all words to lower-case. All punctuation are removed, and then we start each sentence with <BOS> and end with <EOS>. Finally, we combine the sets of words in MSVD with COCO 2014, and generate a vocabulary with 12,984 unique words. Each word input to our system is represented by one-hot vector.",
"Video Preprocessing: As previous video description works BIBREF16 , BIBREF19 , BIBREF15 , we sample video frames once in every ten frames, then these frames could represent given video and 28.5 frames for each video averagely. We extract frame-wise caffe INLINEFORM0 layer features using VGG-16 layers model, then feed the sequential feature into our video caption system.",
"We employ a bidirectional S2VT BIBREF19 and a joint bidirectional LSTM structure to investigate the performance of our bidirectional approach. For convenient comparison, we set the size of hidden unit of all LSTMs in our system to 512 as BIBREF15 , BIBREF19 , except for the first video encoder in unidirectional joint LSTM. During training phrase, we set 80 as maximum number of time steps of LSTM in all our models and a mini-batch with 16 video-sentence pairs. We note that over 99% of the descriptions in MSVD and COCO 2014 contain no more than 40 words, and in BIBREF19 , Venugopalan et al. pointed out that 94% of the YouTube training videos satisfy our maximum length limit. To ensure sufficient visual content, we adopt two ways to truncate the videos and sentences adaptively when the sum of the number of frames and words exceed the limit. If the number of words is within 40, we arbitrarily truncate the frames to satisfy the maximum length. When the length of sentence is more than 40, we discard the words that beyond the length and take video frames with a maximum number of 40.",
"Bidirectional S2VT: Similar to BIBREF19 , we implement several S2VT-based models: S2VT, bidirectional S2VT and reinforced S2VT with bidirectional LSTM video encoder. We conduct experiment on S2VT using our video features and LSTM structure instead of the end-to-end model in BIBREF19 , which need original RGB frames as input. For bidirectional S2VT model, we first pre-train description generator on COCO 2014 for image caption. We next implement forward and backward pass for video encoding and merge the hidden states step-wise with a learnt weight while the language layer receives merged hidden representation with null padded as words. We also pad the inputs of forward LSTM and backward LSTM with zeros at decoding stage, and concatenate the merged hidden states to embedded words. In the last model, we regard merged bidirectional hidden states as complementary enhancement and concatenate to original INLINEFORM0 features to obtain a reinforced representation of video, then derive sentence from new feature using the last LSTM. The loss is computed only at decoding stage in all S2VT-based models.",
"Joint-BiLSTM: Different from S2VT-based models, we employ a joint bidirectional LSTM networks to encode video sequence and decode description applying another LSTM respectively rather than sharing the common one. We stack two layers of LSTM networks to encode video and pre-train language model as in S2VT-based models. Similarly, unidirectional LSTM, bidirectional LSTM and reinforced BiLSTM are executed to investigate the performance of each structure. We set 1024 hidden units of the first LSTM in unidirectional encoder so that the output could pass to the second encoder directly, and the memory cell and hidden state of the last time point are applied to initialize description decoder. Bidirectional structure and reinforced BiLSTM in encoder are implemented similarly to the corresponding type structure in S2VT-based models, respectively, and then feed the video representation into description generator as the unidirectional model aforementioned."
],
[
"BLEU BIBREF28 , METEOR BIBREF29 , ROUGE-L BIBREF30 and CIDEr BIBREF31 are common evaluation metrics in image and video description, the first three were originally proposed to evaluate machine translation at the earliest and CIDEr was proposed to evaluate image description with sufficient reference sentences. To quantitatively evaluate the performance of our bidirectional recurrent based approach, we adopt METEOR metric because of its robust performance. Contrasting to the other three metrics, METEOR could capture semantic aspect since it identifies all possible matches by extracting exact matcher, stem matcher, paraphrase matcher and synonym matcher using WordNet database, and compute sentence level similarity scores according to matcher weights. The authors of CIDEr also argued for that METEOR outperforms CIDEr when the reference set is small BIBREF31 .",
"We first compare our unidirectional, bidirectional structures and reinforced BiLSTM. As shown in Table TABREF19 , in S2VT-based model, bidirectional structure performs very little lower score than unidirectional structure while it shows the opposite results in joint LSTM case. It may be caused by the pad at description generating stage in S2VT-based structure. We note that BiLSTM reinforced structure gains more than 3% improvement than unidirectional-only model in both S2VT-based and joint LSTMs structures, which means that combining bidirectional encoding of video representation is beneficial to exploit some additional temporal structure in video encoder (Figure FIGREF17 ). On structure level, Table TABREF19 illustrates that our Joint-LSTMs based models outperform all S2VT based models correspondingly. It demonstrates our Joint-LSTMs structure benefits from encoding video and decoding natural language separately.",
"We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods. The result of “LSTM” in first row refer from BIBREF15 and the last row but one denotes the best model combining local temporal structure using C3D with global temporal structure utilizing temporal attention in BIBREF17 . From the first two rows, our unidirectional joint LSTM shows rapid improvement, and comparing with S2VT-VGG model in line 3, it also demonstrates some superiority. Even LSTM-E jointly models video and descriptions representation by minimizing the distance between video and corresponding sentence, our Joint-BiLSTM reinforced obtains better performance from bidirectional encoding and separated visual and language models.",
"We observed that while our unidirectional S2VT has the same deployment as BIBREF19 , our model gives a little poorer performance(line 1, Table TABREF19 and line 3, Table TABREF20 ). As mentioned in Section 3.2.2, they employed an end-to-end model reading original RGB frames and fine-tuning on the VGG caffemodel. The features of frames from VGG INLINEFORM0 layer are more compatible to MSVD dataset and the description task. However, our joint LSTM demonstrates better performance with general features rather than specific ones for data, even superior to their model with multiple feature aspects (RGB + Flow, line 4, Table TABREF20 ), which means that our Joint-BiLSTM could show more powerful descriptive ability in end-to-end case. Certainly, We would investigate effect of end-to-end type of our Joint-BiLSTM in future works."
],
[
"In this paper, we introduced a sequence to sequence approach to describe video clips with natural language. The core of our method was, we applied two LSTM networks for the visual encoder and natural language generator component of our model. In particular, we encoded video sequences with a bidirectional Long-Short Term Memory (BiLSTM) network, which could effectively capture the bidirectional global temporal structure in video. Experimental results on MSVD dataset demonstrated the superior performance over many other state-of-the-art methods.",
"We also note some limitations in our model, such as end-to-end framework employed in BIBREF19 and distance measured in BIBREF15 . In the future we will make more effort to fix these limitations and exploit the linguistic domain knowledge in visual content understanding."
]
],
"section_name": [
"Introduction",
"The Proposed Approach",
"LSTM-based Sequential Model",
"Bidirectional Video Modelling",
"Generating Video Description",
"Dataset",
"Experimental Setup",
"Results and Analysis",
"Conclusion and Future Works"
]
} | {
"answers": [
{
"annotation_id": [
"6f41ba2fb3f91bd1eaf960e7b74e0c3895e961a5"
],
"answer": [
{
"evidence": [
"BLEU BIBREF28 , METEOR BIBREF29 , ROUGE-L BIBREF30 and CIDEr BIBREF31 are common evaluation metrics in image and video description, the first three were originally proposed to evaluate machine translation at the earliest and CIDEr was proposed to evaluate image description with sufficient reference sentences. To quantitatively evaluate the performance of our bidirectional recurrent based approach, we adopt METEOR metric because of its robust performance. Contrasting to the other three metrics, METEOR could capture semantic aspect since it identifies all possible matches by extracting exact matcher, stem matcher, paraphrase matcher and synonym matcher using WordNet database, and compute sentence level similarity scores according to matcher weights. The authors of CIDEr also argued for that METEOR outperforms CIDEr when the reference set is small BIBREF31 ."
],
"extractive_spans": [
"METEOR"
],
"free_form_answer": "",
"highlighted_evidence": [
"To quantitatively evaluate the performance of our bidirectional recurrent based approach, we adopt METEOR metric because of its robust performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a6ad52bf694ebe56e40d14d6627d996b278946f1"
],
"answer": [
{
"evidence": [
"We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods. The result of “LSTM” in first row refer from BIBREF15 and the last row but one denotes the best model combining local temporal structure using C3D with global temporal structure utilizing temporal attention in BIBREF17 . From the first two rows, our unidirectional joint LSTM shows rapid improvement, and comparing with S2VT-VGG model in line 3, it also demonstrates some superiority. Even LSTM-E jointly models video and descriptions representation by minimizing the distance between video and corresponding sentence, our Joint-BiLSTM reinforced obtains better performance from bidirectional encoding and separated visual and language models.",
"FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better)."
],
"extractive_spans": [],
"free_form_answer": "S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al.",
"highlighted_evidence": [
"We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods.",
"FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what metrics were used for evaluation?",
"what are the state of the art methods?"
],
"question_id": [
"7c792cda220916df40edb3107e405c86455822ed",
"b3fcab006a9e51a0178a1f64d1d084a895bd8d5c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: The overall flowchart of the proposed video captioning framework. We first extract CNNs features of video frames and feed them into forward pass networks (FU, green box) and backward pass networks (BU, yellow box). We then combine the outputs of hidden states together with the original CNNs features, and pass the integrated sequence to another LSTM (MU, blue box) to generate final video representation. We initialize language model (SU, pink box) with video representation and start to generate words sequentially with <BOS> token, and terminate the process until the <EOS> token is emitted.",
"Figure 2: Video captioning examples of our proposed method. “Uni” in color blue, “Bi” in color brown and “Re” in color black are unidirectional Joint-LSTM, bidirectional Joint-LSTM and reinforced Joint-BiLSTM model, respectively.",
"Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better).",
"Table 1: Comparison results of unidirectional, bidirectional structures and reinforced BiLSTM in both S2VT-based and joint LSTMs structure with METEOR (reported in percentage, higher is better)."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png",
"4-Table1-1.png"
]
} | [
"what are the state of the art methods?"
] | [
[
"1606.04631-Results and Analysis-2",
"1606.04631-4-Table2-1.png"
]
] | [
"S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al."
] | 666 |
2003.07996 | Cross Lingual Cross Corpus Speech Emotion Recognition | The majority of existing speech emotion recognition models are trained and evaluated on a single corpus and a single language setting. These systems do not perform as well when applied in a cross-corpus and cross-language scenario. This paper presents results for speech emotion recognition for 4 languages in both single-corpus and cross-corpus settings. Additionally, since multi-task learning (MTL) with gender, naturalness and arousal as auxiliary tasks has been shown to enhance the generalisation capabilities of emotion models, this paper introduces language ID as another auxiliary task in the MTL framework to explore the role of the spoken language in emotion recognition, which has not been studied yet. | {
"paragraphs": [
[
"Speech conveys human emotions most naturally. In recent years there has been an increased research interest in speech emotion recognition domain. The first step in a typical SER system is extracting linguistic and acoustic features from speech signal. Some para-linguistic studies find Low-Level Descriptor (LLD) features of the speech signal to be most relevant to studying emotions in speech. These features include frequency related parameters like pitch and jitter, energy parameters like shimmer and loudness, spectral parameters like alpha ratio and other parameters that convey cepstral and dynamic information. Feature extraction is followed with a classification task to predict the emotions of the speaker.",
"Data scarcity or lack of free speech corpus is a problem for research in speech domain in general. This also means that there are even fewer resources for studying emotion in speech. For those that are available are dissimilar in terms of the spoken language, type of emotion (i.e. naturalistic, elicited, or acted) and labelling scheme (i.e. dimensional or categorical).",
"Across various studies involving SER we observe that performance of model depends heavily on whether training and testing is performed from the same corpus or not. Performance is best when focus is on a single corpus at a time, without considering the performance of model in cross-language and cross-corpus scenarios. In this work, we work with diverse SER datasets i.e. tackle the problem in both cross-language and cross-corpus setting. We use transfer learning across SER datasets and investigate the effects of language spoken on the accuracy of the emotion recognition system using our Multi-Task Learning framework.",
"The paper is organized as follows: Section 2 reviewed related work on SER, cross-lingual and cross-corpus SER and the recent studies on role of language identification in speech emotion recognition system, Section 3 describes the datasets that have been used, Section 4 presents detailed descriptions of three types of SER experiments we conduct in this paper. In Section 5, we present our results and evaluations of our models. Section 6 presents some additional experiments to draw a direct comparison with previously published research. Finally, we discuss future work and conclude the paper."
],
[
"Over the last two decades there have been considerable research work on speech emotion recognition. However, all these differ in terms of the training corpora, test conditions, evaluation strategies and more which create difficulty in reproducing exact results. In BIBREF0, the authors give an overview of types of features, classifiers and emotional speech databases used in various SER research.",
"Speech emotion recognition has evolved over time with regards to both the type of features and models used for classifiers. Different types of features that can be used can involve simple features like pitch and intensity BIBREF1, BIBREF2. Some studies use low-level descriptor features(LLDs) like jitter, shimmer, HNR and spectral/cepstral parameters like alpha ratio BIBREF3, BIBREF4. Other features include rhythm and sentence duration BIBREF5 and non-uniform perceptual linear predictive (UN- PLP) features BIBREF6. Sometimes, linear predictive cepstral coefficients(LPCCs) BIBREF7 are used in conjunction with mel-frequency cepstral coefficients (MFCCs).",
"There have been studies on SER in languages other than english. For example, BIBREF8 propose a deep learning model consisting of stacked auto-encoders and deep belief networks for SER on the famous German dataset EMODB. BIBREF9 were the first to study SER work on the GEES, a Serbian emotional speech corpus. The authors developed a multistage strategy with SVMs for emotion recognition on a single dataset.",
"Relatively fewer studies address the problem of cross-language and cross-corpus speech emotion recognition. BIBREF10, BIBREF11. Recent work by BIBREF12, BIBREF13 studies SER for languages belonging to different language families like Urdu vs. Italian or German. Other work involving cross-language emotion recognition includes BIBREF14 which studies speech emotion recognition for for mandarin language vs. western languages like German and Danish. BIBREF15 developed an ensemble SVM for emotion detection with a focus on emotion recognition in unseen languages.",
"Although there are a lot of psychological case studies on the effect of language and culture in SER, there are very few computational linguistic studies in the same domain. In BIBREF16, the authors support the fact that SER is language independent, however also reveal that there are language specific differences in emotion recognition in which English shows a higher recognition rate compared to Malay and Mandarin. In BIBREF17 the authors proposed two-pass method based on language identification and then emotion recognition. It showed significant improvement in performance. They used English IEMOCAP, the German Emo-DB, and a Japanese corpus to recognize four emotions based on the proposed two-pass method.",
"In BIBREF18, the authors also use language identification to enhance cross-lingual SER. They concluded that in order to recognize the emotions of a speaker whose language is unknown, it is beneficial to use a language identifier followed by model selection instead of using a model which is trained based on all available languages. This work is to the best of our knowledge the first work that jointly tries to learn the language and emotion in speech."
],
[
"This dataset was introduced by BIBREF19. Language of recordings is German and consists of acted speech with 7 categorical labels. The semantic content in this data is pre-defined in 10 emotionally neutral German short sentences. It contains 494 emotionally labeled phrases collected from 5 male and 5 female actors in age range of 21-35 years."
],
[
"Surrey Audio-Visual Expressed Emotion (SAVEE) database BIBREF20 is a famous acted-speech multimodal corpus. It consists of 480 British English utterances from 4 male actors in 7 different emotion categories. The text material consisted of 15 TIMIT BIBREF21 sentences per emotion: 3 common, 2 emotion-specific and 10 generic sentences that were different for each emotion and phonetically-balanced."
],
[
"This BIBREF22 is an Italian language acted speech emotional corpus that contains recordings of 6 actors who acted 14 emotionally neutral short sentences sentences to simulate 7 emotional states. It consists of 588 utterances and annotated by two different groups of 24 annotators."
],
[
"This is an Mandarin language acted speech emotional corpus that consist of 68 speakers (23 females, 45 males) each reading out read that consisted of five phrases, fifteen sentences and two paragraphs to simulate 5 emotional states. Altogether this database BIBREF23 contains 25,636 utterances."
],
[
"IEMOCAP database BIBREF24 is an English language multi-modal emotional speech database. It contains approximately 12 hours of audiovisual data, including video, speech, motion capture of face, text transcriptions. It consists of dyadic sessions where actors perform improvisations or scripted scenarios, specifically selected to elicit emotional expressions. It has categorical labels, such as anger, happiness, sadness, neutrality, as well as dimensional labels such as valence, activation and dominance."
],
[
"The first set of experiments focused on performing speech emotion recognition for the 5 datasets individually. We perform a 5-way classification by choosing 5 emotions common in all datasets i.e. happy, sad, fear, anger and neutral. For each dataset, we experiment with different types of features and classifiers. To generate Mel-frequency Cepstral Coefficients (MFCC) features we used the Kaldi-toolkit. We created spk2utt, utt2spk and wav.scp files for each dataset and generated MFCC features in .ark format. We leveraged kaldiio python library to convert .ark files to numpy arrays. Apart from MFCC's we also computed pitch features using the same toolkit. We keep a maximum of 120 frames of the input, and zero padded the extra signal for short utterances and clipped the extra signal for longer utterances to end up with (120,13) feature vector for each utterance.",
"To compare emotion classification performance using MFCC's as input features we also tried a different feature set i.e. IS09 emotion feature set BIBREF25 which has in previous research shown good performance on SER tasks. The IS09 feature set contains 384 features that result from a systematic combination of 16 Low-Level Descriptors (LLDs) and corresponding first order delta coefficients with 12 functionals. The 16 LLDs consist of zero-crossing-rate (ZCR), root mean square (RMS) frame energy, pitch frequency (normalized to 500 Hz), harmonics-to-noise ratio (HNR) by autocorrelation function, and mel-frequency cepstral coefficients (MFCC) 1–12 (in full accordance to HTK-based computation). The 12 functionals used are mean, standard deviation, kurtosis, skewness, minimum, maximum, relative position, range, and offset and slope of linear regression of segment contours, as well as its two regression coefficients with their mean square error (MSE) applied on a chunk. To get these features we had to install OpenSmile toolkit. Script to get these features after installation is included in code submitted (refer IS09 directory).",
"Once we had our input features ready we created test datasets from each of the 5 datasets by leaving one speaker out for small datasets (EMOVO, EMODB, SAVEE) and 2 speakers out for the larger datasets (IEMOCAP, MASC). Thus, for all corpora, the speakers in the test sets do not appear in the training set. We then performed SER using both classical machine learning and deep learning models. We used Support Vector one-vs-rest classifier and Logistic Regression Classifier for classical ML models and a stacked LSTM model for the deep learning based classifier. The LSTM network comprised of 2 hidden layers with 128 LSTM cells, followed by a dense layer of size 5 with softmax activation.",
"We present a comparative study across all datasets, feature sets and classifiers in table 2."
],
[
"In the next step of experiments we tried to improve on the results we got for individual datasets by trying to leverage the technique of transfer learning. While we had relatively large support for languages like English and Chinese, speech emotion datasets for other languages like Italian and German were very small i.e. only had a total of around 500 labeled utterances. Such small amount of training data is not sufficient specially when training a deep learning based model.",
"We used the same LSTM classifier as detailed in section 4.1. with an additional dense layer before the final dense layer with softmax. We train this base model using the large IEMOCAP English dataset. We then freeze the weights of LSTM layers i.e. only trainable weights in the classifier remain those of the penultimate dense layer. We fine tune the weights of this layer using the small datasets(eg. SAVEE, EMODB, EMOVO) and test performance on the same test sets we created in section 4.1.",
"Table 3 shows the results of transfer learning experiments."
],
[
"Last set of experiments focus on studying the role of language being spoken on emotion recognition. Due to the lack of adequately sized emotion corpus in many languages, researchers have previously tried training emotion recognition models on cross-corpus data i.e. training with data in one or more language and testing on another. This approach sounds valid only if we consider that expression of emotion is same in all languages i.e. no matter which language you speak, the way you convey your happiness, anger, sadness etc will remain the same. One example can be that low pitch signals are generally associated with sadness and high pitch and amplitude with anger. If expression of emotion is indeed language agnostic we could train emotion recognition models with high resource languages and use the same models for low resource languages.",
"To verify this hypothesis, we came up with a multi-task framework that jointly learns to predict emotion and the language in which the emotion is being expressed. The framework is illustrated in figure 2. The parameters of the LSTM model remain the same as mentioned in section 4.1. The SER performance of using training data from all languages and training a single classifier(same as shown in figure 1) vs. using training data from all languages in a multi-task setting is mentioned in table 4."
],
[
"We will discuss the results of each experiment in detail in this section:",
"For SER experiments on individual dataset we see from Table 2 that SVC classifier with IS09 input features gave the best performance for four out of 5 datasets. We also note a huge difference in accuracy scores when using the same LSTM classifier and only changing the input features i.e. MFCC and IS09. LSTM model with IS09 input features gives better emotion recognition performance for four out of 5 datasets. These experiments suggest the superiority of IS09 features as compared to MFCC's for SER tasks.",
"As expected the second set of experiments show that transfer learning is beneficial for SER task for small datasets. In table 3 we observe that training on IEMOCAP and then fine-tuning on train set of small dataset improves performance for german dataset EMODB and smaller english dataset SAVEE. However, we also note a small drop in performance for Italian dataset EMOVO.",
"Results in table 4 do not show improvement with using language as an auxiliary task in speech emotion recognition. While a improvement would have suggested that language spoken does affect the way people express emotions in speech, the current results are more suggestive of the fact that emotion in speech are universal i.e. language agnostic. People speaking different languages express emotions in the same way and SER models could be jointly trained across various SER corpus we have for different languages."
],
[
"In this section we present comparative study of two previous research papers with our work. We keep this report in a separate section because in order to give a direct comparison with these two papers we had to follow their train-test split, number of emotion classes etc.",
"In Analysis of Deep Learning Architectures for Cross-corpus Speech Emotion Recognition BIBREF26, the authors discuss cross-corpus training using 6 datasets. In one of their experiments, they report performance on test set of each corpus for models trained only on IEMOCAP dataset. When we perform the same experiment i.e. train our model only on IEMOCAP and test on other datasets using IS09 as input features and SVC classifier, we observe better results even while performing a 5 way classification task as compared to their 4 way classification. Results are shown in Table 5.",
"In multi modal emotion recognition on IEMOCAP with neural networks BIBREF27, the authors present three deep learning based speech emotion recognition models. We follow the exact same data pre-processing steps for obtaining same train-test split. We also use the same LSTM model as their best performing model to verify we get the same result i.e. accuracy of 55.65%. However, we could improve this performance to 56.45% by using IS09 features for input and a simple SVC classifier. This experiment suggested we could get equal or better performance in much less training time with classical machine learning models given the right input features as compared to sophisticated deep learning classifiers."
],
[
"In future we would like to experiment with more architectures and feature sets. We would also like to extend this study to include other languages, specially low resource languages. Since all datasets in this study were acted speech, another interesting study would be to note the differences that arise when dealing with natural speech."
],
[
"Some of the main conclusions that can be drawn from this study are that classical machine learning models may perform as well as deep learning models for SER tasks given we choose the right input features. IS09 features consistently perform well for SER tasks across datasets in different languages. Transfer learning proved to be an effective technique for performing SER for small datasets and multi-task learning experiments shed light on the language agnostic nature of speech emotion recognition task."
]
],
"section_name": [
"Introduction",
"Related Work",
"Datasets ::: EMO-DB",
"Datasets ::: SAVEE",
"Datasets ::: EMOVO",
"Datasets ::: MASC: Mandarin Affective Speech Corpus",
"Datasets ::: IEMOCAP: The Interactive Emotional Dyadic Motion Capture",
"Experiments ::: SER on Individual Datasets",
"Experiments ::: SER using Transfer learning for small sized datasets",
"Experiments ::: Multitask learning for SER",
"Results and Analysis",
"Comparison with Previous Research",
"Future Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"710d1d8c7b2dce62767e892d73222e2085305a54"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Datasets used for various SER experiments."
],
"extractive_spans": [],
"free_form_answer": "German, English, Italian, Chinese",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Datasets used for various SER experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Which four languages do they experiment with?"
],
"question_id": [
"6baf5d7739758bdd79326ce8f50731c785029802"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Datasets used for various SER experiments.",
"Table 2: SER performance for each of the 5 datasets using different feature sets and classifiers",
"Table 3: Transfer learning for small datasets. Row 1: Training on large English corpus, testing on test sets of small corpses. Row 2: Fine-tune base English model on say EMODB train set and test on EMODB test set",
"Table 4: Multitask-learning. Table only shows accuracy scores for emotion recognition. Model always predicted language ID with very high accuracy(>97%).",
"Figure 1: Transfer learning for small datasets",
"Figure 2: Multi-task learning for learning emotion and language ID simultaneously",
"Table 5: Comparative results with Parry et al."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table5-1.png"
]
} | [
"Which four languages do they experiment with?"
] | [
[
"2003.07996-2-Table1-1.png"
]
] | [
"German, English, Italian, Chinese"
] | 669 |
1910.10288 | Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis | Despite the ability to produce human-level speech for in-domain text, attention-based end-to-end text-to-speech (TTS) systems suffer from text alignment failures that increase in frequency for out-of-domain text. We show that these failures can be addressed using simple location-relative attention mechanisms that do away with content-based query/key comparisons. We compare two families of attention mechanisms: location-relative GMM-based mechanisms and additive energy-based mechanisms. We suggest simple modifications to GMM-based attention that allow it to align quickly and consistently during training, and introduce a new location-relative attention mechanism to the additive energy-based family, called Dynamic Convolution Attention (DCA). We compare the various mechanisms in terms of alignment speed and consistency during training, naturalness, and ability to generalize to long utterances, and conclude that GMM attention and DCA can generalize to very long utterances, while preserving naturalness for shorter, in-domain utterances. | {
"paragraphs": [
[
"Sequence-to-sequence models that use an attention mechanism to align the input and output sequences BIBREF0, BIBREF1 are currently the predominant paradigm in end-to-end TTS. Approaches based on the seminal Tacotron system BIBREF2 have demonstrated naturalness that rivals that of human speech for certain domains BIBREF3. Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances BIBREF4, BIBREF5, BIBREF6.",
"The original Tacotron system BIBREF2 used the content-based attention mechanism introduced in BIBREF1 to align the target text with the output spectrogram. This mechanism is purely content-based and does not exploit the monotonicity and locality properties of TTS alignment, making it one of the least stable choices. The Tacotron 2 system BIBREF3 used the improved hybrid location-sensitive mechanism from BIBREF7 that combines content-based and location-based features, allowing generalization to utterances longer than those seen during training.",
"The hybrid mechanism still has occasional alignment issues which led a number of authors to develop attention mechanisms that directly exploit monotonicity BIBREF8, BIBREF4, BIBREF5. These monotonic alignment mechanisms have demonstrated properties like increased alignment speed during training, improved stability, enhanced naturalness, and a virtual elimination of synthesis errors. Downsides of these methods include decreased efficiency due to a reliance on recursion to marginalize over possible alignments, the necessity of training hacks to ensure learning doesn't stall or become unstable, and decreased quality when operating in a more efficient hard alignment mode during inference.",
"Separately, some authors BIBREF9 have moved back toward the purely location-based GMM attention introduced by Graves in BIBREF0, and some have proposed stabilizing GMM attention by using softplus nonlinearities in place of the exponential function BIBREF10, BIBREF11. However, there has been no systematic comparison of these design choices.",
"In this paper, we compare the content-based and location-sensitive mechanisms used in Tacotron 1 and 2 with a variety of simple location-relative mechanisms in terms of alignment speed and consistency, naturalness of the synthesized speech, and ability to generalize to long utterances. We show that GMM-based mechanisms are able to generalize to very long (potentially infinite-length) utterances, and we introduce simple modifications that result in improved speed and consistency of alignment during training. We also introduce a new location-relative mechanism called Dynamic Convolution Attention that modifies the hybrid location-sensitive mechanism from Tacotron 2 to be purely location-based, allowing it to generalize to very long utterances as well."
],
[
"The system that we use in this paper is based on the original Tacotron system BIBREF2 with architectural modifications from the baseline model detailed in the appendix of BIBREF11. We use the CBHG encoder from BIBREF2 to produce a sequence of encoder outputs, $\\lbrace j\\rbrace _{j=1}^L$, from a length-$L$ input sequence of target phonemes, $\\lbrace \\mathbf {x}_j\\rbrace _{j=1}^L$. Then an attention RNN, (DISPLAY_FORM2), produces a sequence of states, $\\lbrace \\mathbf {s}_i\\rbrace _{i=1}^T$, that the attention mechanism uses to compute $\\mathbf {\\alpha }_i$, the alignment at decoder step $i$. Additional arguments to the attention function in () depend on the specific attention mechanism (e.g., whether it is content-based, location-based, or both). The context vector, $\\mathbf {c}_i$, that is fed to the decoder RNN is computed using the alignment, $\\mathbf {\\alpha }_i$, to produce a weighted average of encoder states. The decoder is fed both the context vector and the current attention RNN state, and an output function produces the decoder output, $\\mathbf {y}_i$, from the decoder RNN state, $\\mathbf {d}_i$."
],
[
"An early sequence-to-sequence attention mechanism was proposed by Graves in BIBREF0. This approach is a purely location-based mechanism that uses an unnormalized mixture of $K$ Gaussians to produce the attention weights, $\\mathbf {\\alpha }_i$, for each encoder state. The general form of this type of attention is shown in (DISPLAY_FORM4), where $\\mathbf {w}_i$, $\\mathbf {Z}_i$, $\\mathbf {\\Delta }_i$, and $\\mathbf {\\sigma }_i$ are computed from the attention RNN state. The mean of each Gaussian component is computed using the recurrence relation in (), which makes the mechanism location-relative and potentially monotonic if $\\mathbf {\\Delta }_i$ is constrained to be positive.",
"In order to compute the mixture parameters, intermediate parameters ($\\hat{\\mathbf {w}}_i,\\hat{\\mathbf {\\Delta }}_i,\\hat{\\mathbf {\\sigma }}_i$) are first computed using the MLP in (DISPLAY_FORM5) and then converted to the final parameters using the expressions in Table TABREF6.",
"The version 0 (V0) row in Table TABREF6 corresponds to the original mechanism proposed in BIBREF0. V1 adds normalization of the mixture weights and components and uses the exponential function to compute the mean offset and variance. V2 uses the softplus function to compute the mean offset and standard deviation.",
"Another modification we test is the addition of initial biases to the intermediate parameters $\\hat{\\mathbf {\\Delta }}_i$ and $\\hat{\\mathbf {\\sigma }}_i$ in order to encourage the final parameters $\\mathbf {\\Delta }_i$ and $\\mathbf {\\sigma }_i$ to take on useful values at initialization. In our experiments, we test versions of V1 and V2 GMM attention that use biases that target a value of $\\mathbf {\\Delta }_i=1$ for the initial forward movement and $\\mathbf {\\sigma }_i=10$ for the initial standard deviation (taking into account the different nonlinearities used to compute the parameters)."
],
[
"A separate family of attention mechanisms use an MLP to compute attention energies, $\\mathbf {e}_i$, that are converted to attention weights, $\\mathbf {\\alpha }_i$ using the softmax function. This family includes the content-based mechanism introduced in BIBREF1 and the hybrid location-sensitive mechanism from BIBREF7. A generalized formulation of this family is shown in (DISPLAY_FORM8).",
"Here we see the content-based terms, $W\\mathbf {s}_i$ and $Vj$, that represent query/key comparisons and the location-sensitive term, $U{i,j}$, that uses convolutional features computed from the previous attention weights as in () BIBREF7. Also present are two new terms, $T\\mathbf {g}_{i,j}$ and $p_{i,j}$, that are unique to our proposed Dynamic Convolution Attention. The $T\\mathbf {g}_{i,j}$ term is very similar to $U{i,j}$ except that it uses dynamic filters that are computed from the current attention RNN state as in (). The $p_{i,j}$ term is the output of a fixed prior filter that biases the mechanism to favor certain types of alignment. Table TABREF9 shows which of the terms are present in the three energy-based mechanisms we compare in this paper."
],
[
"In designing Dynamic Convolution Attention (DCA), we were motivated by location-relative mechanisms like GMM attention, but desired fully normalized attention weights. Despite the fact that GMM attention V1 and V2 use normalized mixture weights and components, the attention weights still end up unnormalized because they are sampled from a continuous probability density function. This can lead to occasional spikes or dropouts in the alignment, and attempting to directly normalize GMM attention weights results in unstable training. Attention normalization isn't a significant problem in fine-grained output-to-text alignment, but becomes more of an issue for coarser-grained alignment tasks where the attention window needs to gradually move to the next index (for example in variable-length prosody transfer applications BIBREF12). Because DCA is in the energy-based attention family, it is normalized by default and should work well for a variety of monotonic alignment tasks.",
"Another issue with GMM attention is that because it uses a mixture of distributions with infinite support, it isn't necessarily monotonic. At any time, the mechanism could choose to emphasize a component whose mean is at an earlier point in the sequence, or it could expand the variance of a component to look backward in time, potentially hurting alignment stability.",
"To address monotonicity issues, we make modifications to the hybrid location-sensitive mechanism. First we remove the content-based terms, $W\\mathbf {s}_i$ and $Wi$, which prevents the alignment from moving backward due to a query/key match at a past timestep. Doing this prevents the mechanism from adjusting its alignment trajectory as it is only left with a set of static filters, $U{i,j}$, that learn to bias the alignment to move forward by a certain fixed amount. To remedy this, we add a set of learned dynamic filters, $T\\mathbf {g}_{i,j}$, that are computed from the attention RNN state as in (). These filters serve to dynamically adjust the alignment relative to the alignment at the previous step.",
"In order to prevent the dynamic filters from moving things backward, we use a single fixed prior filter to bias the alignment toward short forward steps. Unlike the static and dynamic filters, the prior filter is a causal filter that only allows forward progression of the alignment. In order to enforce the monotonicity constraint, the output of the filter is converted to the logit domain via the log function before being added to the energy function in (DISPLAY_FORM8) (we also floor the prior logits at $-10^6$ to prevent underflow).",
"We set the taps of the prior filter using values from the beta-binomial distribution, which is a two-parameter discrete distribution with finite support.",
"where $\\textrm {B}(\\cdot )$ is the beta function. For our experiments we use the parameters $\\alpha =0.1$ and $\\beta =0.9$ to set the taps on a length-11 prior filter ($n=10$), Repeated application of the prior filter encourages an average forward movement of 1 encoder step per decoder step ($\\mathbb {E}[k] = \\alpha n/(\\alpha +\\beta )$) with the uncertainty in the prior alignment increasing after each step. The prior parameters could be tailored to reflect the phonemic rate of each dataset in order to optimize alignment speed during training, but for simplicity we use the same values for all experiments. Figure FIGREF12 shows the prior filter along with the alignment weights every 20 decoder steps when ignoring the contribution from other terms in (DISPLAY_FORM8)."
],
[
"In our experiments we compare the GMM and additive energy-based families of attention mechanisms enumerated in Tables TABREF6 and TABREF9. We use the Tacotron architecture described in Section SECREF1 and only vary the attention function used to compute the attention weights, $\\mathbf {\\alpha }_i$. The decoder produces two 128-bin, 12.5ms-hop mel spectrogram frames per step. We train each model using the Adam optimizer for 300,000 steps with a gradient clipping threshold of 5 and a batch size of 256, spread across 32 Google Cloud TPU cores. We use an initial learning rate of $10^{-3}$ that is reduced to $5\\times 10^{-4}$, $3\\times 10^{-4}$, $10^{-4}$, and $5\\times 10^{-5}$ at 50k, 100k, 150k, and 200k steps, respectively. To convert the mel spectrograms produced by the models into audio samples, we use a separately-trained WaveRNN BIBREF13 for each speaker.",
"For all attention mechanisms, we use a size of 128 for all tanh hidden layers. For the GMM mechanisms, we use $K=5$ mixture components. For location-sensitive attention (LSA), we use 32 static filters, each of length 31. For DCA, we use 8 static filters and 8 dynamic filters (all of length 21), and a length-11 causal prior filter as described in Section SECREF10.",
"We run experiments using two different single-speaker datasets. The first (which we refer to as the Lessac dataset) comprises audiobook recordings from Catherine Byers, the speaker from the 2013 Blizzard Challenge. For this dataset, we train on a 49,852-utterance (37-hour) subset, consisting of utterances up to 5 seconds long, and evaluate on a separate 935-utterance subset. The second is the LJ Speech dataset BIBREF14, a public dataset consisting of audiobook recordings that are segmented into utterances of up to 10 seconds. We train on a 12,764-utterance subset (23 hours) and evaluate on a separate 130-utterance subset."
],
[
"To test the alignment speed and consistency of the various mechanisms, we run 10 identical trials of 10,000 training steps and plot the MCD-DTW between a ground truth holdout set and the output of the model during training. The MCD-DTW is an objective similarity metric that uses dynamic time warping (DTW) to find the minimum mel cepstral distortion (MCD) BIBREF15 between two sequences. The faster a model is able to align with the text, the faster it will start producing reasonable spectrograms that produce a lower MCD-DTW.",
"Figure FIGREF15 shows these trials for 8 different mechanisms for both the Lessac and LJ datasets. Content-based (CBA), location-sensitive (LSA), and DCA are the three energy-based mechanisms from Table TABREF9, and the 3 GMM varieties are shown in Table TABREF6. We also test the V1 and V2 GMM mechanisms with an initial parameter bias as described in Section SECREF3 (abbreviated as GMMv1b and GMMv2b).",
"Looking at the plots for the Lessac dataset (top of Figure FIGREF15), we see that the mechanisms on the top row (the energy-based family and GMMv2b) all align consistently with DCA and GMMv2b aligning the fastest. The GMM mechanisms on the bottom row don't fare as well, and while they typically align more often than not, there are a significant number failures or cases of delayed alignment. It's interesting to note that adding a bias to the GMMv1 mechanism actually hurts its consistency while adding a bias to GMMv2 helps it.",
"Looking at the plots for the LJ dataset at bottom of Figure FIGREF15, we first see that the dataset is more difficult in terms of alignment. This is likely due to the higher maximum and average length of the utterances in the training data (most utterances in the LJ dataset are longer than 5 seconds) but could also be caused by an increased presence of intra-utterance pauses and overall lower audio quality. Here, the top row doesn't fare as well: CBA has trouble aligning within the first 10k steps, while DCA and GMMv2b both fail to align once. LSA succeeds on all 10 trials but tends to align more slowly than DCA and GMMv2b when they succeed. With these consistency results in mind, we will only be testing the top row of mechanisms in subsequent evaluations."
],
[
"We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters. Scores range from 1 to 5, with 5 representing “completely natural speech”. The Lessac and LJ models are evaluated on their respective test sets (hence in-domain), and the results are shown in Table TABREF17. We see that for these utterances, the LSA, DCA, and GMMV2b mechanisms all produce equivalent scores around 4.3, while the content-based mechanism is a bit lower due to occasional catastrophic attention failures."
],
[
"Now we evaluate our models on long utterances taken from two chapters of the Harry Potter novels. We use 1034 utterances that vary between 58 and 1648 characters (10 and 299 words). Google Cloud Speech-To-Text is used to produce transcripts of the resulting audio output, and we compute the character errors rate (CER) between the produced transcripts and the target transcripts.",
"Figure FIGREF20 shows the CER results as the utterance length is varied for the Lessac models (trained on up to 5 second utterances) and LJ models (trained on up to 10 second utterances). The plots show that CBA fares the worst with the CER shooting up when the test length exceeds the max training length. LSA shoots up soon after at around 3x the max training length, while the two location-relative mechanisms, DCA and GMMv2b, are both able to generalize to the whole range of utterance lengths tested."
],
[
"We have shown that Dynamic Convolution Attention (DCA) and our V2 GMM attention with initial bias (GMMv2b) are able to generalize to utterances much longer than those seen during training, while preserving naturalness on shorter utterances. This opens the door for synthesis of entire paragraph or long sentences (e.g., for book or news reading applications), which can improve naturalness and continuity compared to synthesizing each sentence or clause separately and then stitching them together.",
"These two location-relative mechanisms are simple to implement and do not rely on dynamic programming to marginalize over alignments. They also tend to align very quickly during training, which makes the occasional alignment failure easy to detect so training can be restarted. In our alignment trials, despite being slower to align on average, LSA attention seemed to have an edge in terms of alignment consistency; however, we have noticed that slower alignment can sometimes lead to worse quality models, probably because the other model components are being optimized in an unaligned state for longer.",
"Compared to GMMv2b, DCA can more easily bound its receptive field (because its prior filter numerically disallows excessive forward movement), which makes it easier to incorporate hard windowing optimizations in production. Another advantage of DCA over GMM attention is that its attention weights are normalized, which helps to stabilize the alignment, especially for coarse-grained alignment tasks.",
"For monotonic alignment tasks like TTS and speech recognition, location-relative attention mechanisms have many advantages and warrant increased consideration and further study. Supplemental materials, including audio examples, are available on the web."
]
],
"section_name": [
"Introduction",
"Two Families of Attention Mechanisms ::: Basic Setup",
"Two Families of Attention Mechanisms ::: GMM-Based Mechanisms",
"Two Families of Attention Mechanisms ::: Additive Energy-Based Mechanisms",
"Two Families of Attention Mechanisms ::: Dynamic Convolution Attention",
"Experiments ::: Experiment Setup",
"Experiments ::: Alignment Speed and Consistency",
"Experiments ::: In-Domain Naturalness",
"Experiments ::: Generalization to Long Utterances",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"719ea9298ff180b15c16136e924316bee9c669da"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. MOS naturalness results along with 95% confidence intervals for the Lessac and LJ datasets."
],
"extractive_spans": [],
"free_form_answer": "About the same performance",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. MOS naturalness results along with 95% confidence intervals for the Lessac and LJ datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"a040283fdf60d90e7155eef3e0ab19af3d471e33"
],
"answer": [
{
"evidence": [
"We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters. Scores range from 1 to 5, with 5 representing “completely natural speech”. The Lessac and LJ models are evaluated on their respective test sets (hence in-domain), and the results are shown in Table TABREF17. We see that for these utterances, the LSA, DCA, and GMMV2b mechanisms all produce equivalent scores around 4.3, while the content-based mechanism is a bit lower due to occasional catastrophic attention failures."
],
"extractive_spans": [
"using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters. Scores range from 1 to 5, with 5 representing “completely natural speech”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"Does DCA or GMM-based attention perform better in experiments?",
"How they compare varioius mechanisms in terms of naturalness?"
],
"question_id": [
"5c4c8e91d28935e1655a582568cc9d94149da2b2",
"e4024db40f4b8c1ce593f53b28718e52d5007cd2"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 2. The terms from (8) that are present in each of the three energy-based attention mechanisms we test.",
"Table 1. Conversion of intermediate parameters computed in (7) to final mixture parameters for the three tested GMM-based attention mechanisms. Smax(·) is the softmax function, while S+(·) is the softplus function.",
"Fig. 1. Initial alignment encouraged by the prior filter (ignoring the contribution of other term in (8)). The attention weights are shown every 20 decoders steps with the prior filter itself shown at the top.",
"Fig. 2. Alignment trials for 8 different mechanisms (10 runs each) trained on the Lessac (top) and LJ (bottom) datasets. The validation set MCD-DTW drops down after alignment has occurred.",
"Fig. 3. Utterance length robustness for models trained on the Lessac (top) and LJ (bottom) datasets.",
"Table 3. MOS naturalness results along with 95% confidence intervals for the Lessac and LJ datasets."
],
"file": [
"2-Table2-1.png",
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Table3-1.png"
]
} | [
"Does DCA or GMM-based attention perform better in experiments?"
] | [
[
"1910.10288-4-Table3-1.png"
]
] | [
"About the same performance"
] | 670 |
1908.06083 | Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack | The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available. | {
"paragraphs": [
[
"The detection of offensive language has become an important topic as the online community has grown, as so too have the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths to both avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural langauge processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.",
"In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and the “Build it Break it” approach consequently adapting it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers” were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations.",
"We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13.",
"Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community."
],
[
"The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.",
"To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, The Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examine only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language.",
"Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17.",
"The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using humans breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower quality model cannot. Nevertheless algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset.",
"As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction."
],
[
"In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results."
],
[
"The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge\" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into a class offensive while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% are dedicated to train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4."
],
[
"We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base\". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it the fastText classifier BIBREF27 for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus issued from Wikipedia. A linear layer is then applied on top to yield a binary classification."
],
[
"We compare the two aforementioned models with BIBREF26 who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the offensive-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently."
],
[
"In order to train models that are robust to adversarial behavior, we posit that it is crucial collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm:",
"Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$.",
"Break it: Ask crowdworkers to try to “beat the system\" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive.",
"Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks.",
"Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again.",
"See Figure FIGREF6 for a visualization of this process."
],
[
"Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online.\" We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online\" was meant to mimic the setting of a public forum."
],
[
"We ask crowdworkers to try to “beat the system\" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system,\" or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9."
],
[
"During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it\" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive."
],
[
"During the “fix it\" round, we update the models with the newly collected adversarial data from the “break it\" round.",
"The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \\le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks."
],
[
"We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history."
],
[
"We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method."
],
[
"In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.",
"In this set-up, there is no real notion of “rounds\", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round\". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \\le i$ of the standard data as $S_i$."
],
[
"Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we will also add safe examples to it.",
"The “safe data\" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before.",
"For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks."
],
[
"Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter which is also tuned on the validation set. Finally, after training weights with the cross entropy loss, we adjust the final bias also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively.",
"For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \\le i$ and optimize for performance on the validation sets $n \\le i$."
],
[
"We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it\" results comparing the data collected and “fix it\" results comparing the models obtained."
],
[
"Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not\" than examples from the standard task, indicating that users are easily able to fool the classifier with negations.",
"We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data, which contain no individually offensive words, the offensive nature is captured by reading the entire sentence. Other examples require sophisticated world knowledge such as that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge.",
"We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix."
],
[
"Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$.",
"Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior.",
"Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind.",
"Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase."
],
[
"In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!\" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?\""
],
[
"To this end, we collect data by asking crowdworkers to try to “beat\" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier.",
"We collected 3000 offensive examples in this manner. As in the single-turn set up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the s single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30."
],
[
"To measure the impact of the context, we train models on this dataset with and without the given context. We use the fastText and the BERT-based model described in Section SECREF3. In addition, we build a BERT-based model variant that splits the last utterance (to be classified) and the rest of the history into two dialogue segments. Each segment is assigned an embedding and the input provided to the transformer is the sum of word embedding and segment embedding, replicating the setup of the Next Sentence Prediction that is used in the training of BERT BIBREF17."
],
[
"During data collection, we observed that workers had an easier time bypassing the classifiers than in the single-turn set-up. See Table TABREF27. In the single-turn set-up, the task at hand gets harder with each round – the average score of the crowdworkers decreases from $4.56$ in round 1 to $1.6$ in round 3. Despite the fact that we are using our best single-turn classifier in the multi-turn set-up ($A_3$), the task becomes easier: the average score per round is $2.89$. This is because the workers are often able to use contextual information to suggest something offensive rather than say something offensive outright. See examples of submitted messages in Table TABREF29. Having context also allows one to express something offensive more efficiently: the messages supplied by workers in the multi-turn setting were significantly shorter on average, see Table TABREF21."
],
[
"During training, we multi-tasked the multi-turn adversarial task with the Wikipedia Toxic Comments task as well as the single-turn adversarial and standard tasks. We average the results of our best models from five different training runs. The results of these experiments are given in Table TABREF31.",
"As we observed during the training of our baselines in Section SECREF3, the fastText model architecture is ill-equipped for this task relative to our BERT-based architectures. The fastText model performs worse given the dialogue context (an average of 23.56 offensive-class F1 relative to 37.1) than without, likely because its bag-of-embeddings representation is too simple to take the context into account.",
"We see the opposite with our BERT-based models, indicating that more complex models are able to effectively use the contextual information to detect whether the response is safe or offensive. With the simple BERT-based architecture (that does not split the context and the utterance into separate segments), we observe an average of a 3.7 point increase in offensive-class F1 with the addition of context. When we use segments to separate the context from the utterance we are trying to classify, we observe an average of a 7.4 point increase in offensive-class F1. Thus, it appears that the use of contextual information to identify offensive language is critical to making these systems robust, and improving the model architecture to take account of this has large impact."
],
[
"We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account.",
"In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31.",
""
],
[
"Additional results regarding the crowdworkers' ability to “beat\" the classifiers are reported in Table TABREF38. In particular, we report the percent of messages sent by the crowdsource workers that were marked safe and offensive by both $A_0$ and $A_{i-1}$. We note that very infrequently ($<1\\%$ of the time) a message was marked offensive by $A_0$ but safe by $A_{i-1}$, showing that $A_0$ was relatively ineffective at catching adversarial behavior.",
"In Table TABREF39, we report the categorization of examples into classes of offensive language from the blind human annotation of round 1 of the single-turn adversarial and standard data. We observe that in the adversarial set-up, there were fewer examples of bullying language but more examples targeting a protected class."
],
[
"We report F1, precision, and recall for the offensive class, as well as weighted-F1 for models $S_i$ and $A_i$ on the single-turn standard and adversarial tasks in Table TABREF41."
],
[
"During the adversarial data collection, we asked users to generate a message that “[the user believes] is not ok but that our system marks as ok,\" using the definition of “ok\" and “not ok\" described in the paper (i.e. “ok to send in a friendly conversation with someone you just met online\").",
"In order to generate a variety of responses, during the single-turn adversarial collection, we provided users with a topic to base their response on 50% of the time. The topics were pulled from a set of 1365 crowd-sourced open-domain dialogue topics. Example topics include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger.",
"Users were able to earn up to five points per round, with two tries for each point (to allow them to get a sense of the models' weaknesses). Users were informed of their score after each message, and provided with bonuses for good effort. The points did not affect the user's compensation, but rather, were provided as a way of gamifying the data collection, as this has been showed to increase data quality BIBREF25.",
"Please see an example image of the chat interface in Figure FIGREF42."
]
],
"section_name": [
"Introduction",
"Related Work",
"Baselines: Wikipedia Toxic Comments",
"Baselines: Wikipedia Toxic Comments ::: Wikipedia Toxic Comments",
"Baselines: Wikipedia Toxic Comments ::: Models",
"Baselines: Wikipedia Toxic Comments ::: Experiments",
"Build it Break it Fix it Method",
"Build it Break it Fix it Method ::: Break it Details ::: Definition of offensive",
"Build it Break it Fix it Method ::: Break it Details ::: Crowderworker Task",
"Build it Break it Fix it Method ::: Break it Details ::: Models to Break",
"Build it Break it Fix it Method ::: Fix it Details",
"Single-Turn Task",
"Single-Turn Task ::: Data Collection ::: Adversarial Collection",
"Single-Turn Task ::: Data Collection ::: Standard Collection",
"Single-Turn Task ::: Data Collection ::: Task Formulation Details",
"Single-Turn Task ::: Data Collection ::: Model Training Details",
"Single-Turn Task ::: Experimental Results",
"Single-Turn Task ::: Experimental Results ::: Break it Phase",
"Single-Turn Task ::: Experimental Results ::: Fix it Phase",
"Multi-Turn Task",
"Multi-Turn Task ::: Task Implementation",
"Multi-Turn Task ::: Models",
"Multi-Turn Task ::: Experimental Results ::: Break it Phase",
"Multi-Turn Task ::: Experimental Results ::: Fix it Phase",
"Conclusion",
"Additional Experimental Results ::: Additional Break It Phase Results",
"Additional Experimental Results ::: Additional Fix It Phase Results",
"Data Collection Interface Details"
]
} | {
"answers": [
{
"annotation_id": [
"722b3a586935b6d3f181d3144e33701ed9cc2e50"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (“with context”) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results."
],
"extractive_spans": [],
"free_form_answer": "F1 and Weighted-F1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (“with context”) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d6d0539fa8e02acd98b0c217c4637baf86516d1f"
],
"answer": [
{
"evidence": [
"To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, The Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examine only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language."
],
"extractive_spans": [
"The Wikipedia Toxic Comments dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to benchmark toxic comment detection, The Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What evaluation metric is used?",
"What datasets are used?"
],
"question_id": [
"3f326c003be29c8eac76b24d6bba9608c75aa7ea",
"c84590ba32df470a7c5343d8b99e541b217f10cf"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Dataset statistics for our splits of Wikipedia Toxic Comments.",
"Table 2: Comparison between our models based on fastText and BERT with the BiLSTM used by (Khatri et al., 2018) on Wikipedia Toxic Comments.",
"Figure 1: The build it, break it, fix it algorithm we use to iteratively train better models A0, . . . , AN . In experiments we perform N = 3 iterations of the break it, fix it loop for the single-turn utterance detection task, and a further iteration for the multi-turn task in a dialogue context setting.",
"Table 3: Language analysis of the single-turn standard and adversarial (round 1) tasks by human annotation of various language properties. Standard collection examples contain more words found in an offensive words list, while adversarial examples require more sophisticated language understanding.",
"Table 4: Percent of OFFENSIVE examples in each task containing profanity, the token “not”, as well as the average number of characters and tokens in each example. Rows 1-4 are the single-turn task, and the last row is the multi-turn task. Later rounds have less profanity and more use of negation as human breakers have to find more sophisticated language to adversarially attack our models.",
"Table 5: Dataset statistics for the single-turn rounds of the adversarial task data collection. There are three rounds in total all of identical size, hence the numbers above can be divided for individual statistics. The standard task is an additional dataset of exactly the same size as above.",
"Table 6: Test performance of best standard models trained on standard task rounds (models Si for each round i) and best adversarial models trained on adversarial task rounds (models Ai). All models are evaluated using OFFENSIVE-class F1 on each round of both the standard task and adversarial task. A0 is the baseline model trained on the existing Wiki Toxic Comments (WTC) dataset. Adversarial models prove to be more robust than standard ones against attack (Adversarial Task 1-3), while still performing reasonably on Standard and WTC tasks.",
"Table 7: Adversarial data collection worker scores. Workers received a score out of 5 indicating how often (out of 5 rounds) they were able to get past our classifiers within two tries. In later single-turn rounds it is harder to defeat our models, but switching to multi-turn makes this easier again as new attacks can be found by using the dialogue context.",
"Table 8: Examples from the multi-turn adversarial task. Responses can be offensive only in context.",
"Table 9: Multi-turn adversarial task data statistics.",
"Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (“with context”) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results.",
"Table 11: Adversarial data collection statistics. A0 is the baseline model, trained on the Wikipedia Toxic Comments dataset. Ai−1 is the model for round i, trained on the adversarial data for rounds n ≤ i − 1. In the case of the multi-turn set-up, Ai−1 is A3.",
"Table 12: Human annotation of 100 examples from each the single-turn standard and adversarial (round 1) tasks into offensive classes.",
"Table 13: Full table of results from experiments on the single-turn standard and adversarial tasks. F1, precision, and recall are reported for the OFFENSIVEclass, as well as weighted F1."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"8-Table8-1.png",
"8-Table9-1.png",
"8-Table10-1.png",
"11-Table11-1.png",
"12-Table12-1.png",
"13-Table13-1.png"
]
} | [
"What evaluation metric is used?"
] | [
[
"1908.06083-8-Table10-1.png"
]
] | [
"F1 and Weighted-F1"
] | 671 |
1910.12129 | ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation | The uptake of deep learning in natural language generation (NLG) led to the release of both small and relatively large parallel corpora for training neural models. The existing data-to-text datasets are, however, aimed at task-oriented dialogue systems, and often thus limited in diversity and versatility. They are typically crowdsourced, with much of the noise left in them. Moreover, current neural NLG models do not take full advantage of large training data, and due to their strong generalizing properties produce sentences that look template-like regardless. We therefore present a new corpus of 7K samples, which (1) is clean despite being crowdsourced, (2) has utterances of 9 generalizable and conversational dialogue act types, making it more suitable for open-domain dialogue systems, and (3) explores the domain of video games, which is new to dialogue systems despite having excellent potential for supporting rich conversations. | {
"paragraphs": [
[
"The recent adoption of deep learning methods in natural language generation (NLG) for dialogue systems resulted in an explosion of neural data-to-text generation models, which depend on large training data. These are typically trained on one of the few parallel corpora publicly available, in particular the E2E BIBREF0 and the WebNLG BIBREF1 datasets. Crowdsourcing large NLG datasets tends to be a costly and time-consuming process, making it impractical outside of task-oriented dialogue systems. At the same time, current neural NLG models struggle to replicate the high language diversity of the training sentences present in these large datasets, and instead they learn to produce the same generic type of sentences as with considerably less training data BIBREF2, BIBREF3, BIBREF4.",
"Motivated by the rising interest in open-domain dialogue systems and conversational agents, we present ViGGO – a smaller but more comprehensive dataset in the video game domain, introducing several generalizable dialogue acts (DAs), making it more suitable for training versatile and more conversational NLG models. The dataset provides almost 7K pairs of structured meaning representations (MRs) and crowdsourced reference utterances about more than 100 video games. Table TABREF2 lists three examples.",
"Video games are a vast entertainment topic that can naturally be discussed in a casual conversation, similar to movies and music, yet in the dialogue systems community it does not enjoy popularity anywhere close to that of the latter two topics BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. Restaurants have served as the go-to topic in data-to-text NLG for decades, as they offer a sufficiently large set of various attributes and corresponding values to talk about. While they certainly can be a topic of a casual conversation, the existing restaurant datasets BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 are geared more toward a task-oriented dialogue where a system tries to narrow down a restaurant based on the user's preferences and ultimately give a recommendation. Our new video game dataset is designed to be more conversational, and to thus enable neural models to produce utterances more suitable for an open-domain dialogue system.",
"Even the most recent addition to the publicly available restaurant datasets for data-to-text NLG, the E2E dataset BIBREF0, suffers from the lack of a conversational aspect. It has become popular, thanks to its unprecedented size and multiple reference utterances per MR, for training end-to-end neural models, yet it only provides a single DA type. In contrast with the E2E dataset, ViGGO presents utterances of 9 different DAs.",
"Other domains have been represented by task-oriented datasets with multiple DA types, for example the Hotel, Laptop, and TV datasets BIBREF16, BIBREF17. Nevertheless, the DAs in these datasets vary greatly in complexity, and their distribution is thus heavily skewed, typically with two or three similar DAs comprising almost the entire dataset. In our video game dataset, we omitted simple DAs, in particular those that do not require any slots, such as greetings or short prompts, and focused on a set of substantial DAs only.",
"The main contribution of our work is thus a new parallel data-to-text NLG corpus that (1) is more conversational, rather than information seeking or question answering, and thus more suitable for an open-domain dialogue system, (2) represents a new, unexplored domain which, however, has excellent potential for application in conversational agents, and (3) has high-quality, manually cleaned human-produced utterances."
],
[
"ViGGO features more than 100 different video game titles, whose attributes were harvested using free API access to two of the largest online video game databases: IGDB and GiantBomb. Using these attributes, we generated a set of 2,300 structured MRs. The human reference utterances for the generated MRs were then crowdsourced using vetted workers on the Amazon Mechanical Turk (MTurk) platform BIBREF18, resulting in 6,900 MR-utterance pairs altogether. With the goal of creating a clean, high-quality dataset, we strived to obtain reference utterances with correct mentions of all slots in the corresponding MR through post-processing."
],
[
"The MRs in the ViGGO dataset range from 1 to 8 slot-value pairs, and the slots come from a set of 14 different video game attributes. Table TABREF6 details how these slots may be distributed across the 9 different DAs. The inform DA, represented by 3,000 samples, is the most prevalent one, as the average number of slots it contains is significantly higher than that of all the other DAs. Figure FIGREF7 visualizes the MR length distribution across the entire dataset.",
"The slots can be classified into 5 general categories covering most types of information MRs typically convey in data-to-text generation scenarios: Boolean, Numeric, Scalar, Categorical, and List. The first 4 categories are common in other NLG datasets, such as E2E, Laptop, TV, and Hotel, while the List slots are unique to ViGGO. List slots have values which may comprise multiple items from a discrete list of possible items."
],
[
"With neural language generation in mind, we crowdsourced 3 reference utterances for each MR so as to provide the models with the information about how the same content can be realized in multiple different ways. At the same time, this allows for a more reliable automatic evaluation by comparing the generated utterances with a set of different references each, covering a broader spectrum of correct ways of expressing the content given by the MR. The raw data, however, contains a significant amount of noise, as is inevitable when crowdsourcing. We therefore created and enforced a robust set of heuristics and regular expressions to account for typos, grammatical errors, undesirable abbreviations, unsolicited information, and missing or incorrect slot realizations."
],
[
"The crowdsourcing of utterances on MTurk took place in three stages. After collecting one third of the utterances, we identified a pool of almost 30 workers who wrote the most diverse and natural-sounding sentences in the context of video games. We then filtered out all utterances of poor quality and had the qualified workers write new ones for the corresponding inputs. Finally, the remaining two thirds of utterances were completed by these workers exclusively.",
"For each DA we created a separate task in order to minimize the workers' confusion. The instructions contained several different examples, as well as counter-examples, and they situated the DA in the context of a hypothetical conversation. The video game attributes to be used were provided for the workers in the form of a table, with their order shuffled so as to avoid any kind of bias. Further details on the data collection and cleaning are included in the Appendix."
],
[
"Despite the fact that the ViGGO dataset is not very large, we strived to make the test set reasonably challenging. To this end, we ensured that, after delexicalizing the name and the developer slots, there were no common MRs between the train set and either of the validation or test set. We maintained a similar MR length and slot distribution across the three partitions. The distribution of DA types, on the other hand, is skewed slightly toward fewer inform DA instances and a higher proportion of the less prevalent DAs in the validation and test sets (see Figure FIGREF11). With the exact partition sizes indicated in the diagram, the final ratio of samples is approximately $7.5:1:1.5$."
],
[
"Our new dataset was constructed under different constraints than the E2E dataset. First, in ViGGO we did not allow any omissions of slot mentions, as those are not justifiable for data-to-text generation with no previous context, and it makes the evaluation ambiguous. Second, the MRs in ViGGO are grounded by real video game data, which can encourage richer and more natural-sounding reference utterances.",
"While ViGGO is only 13% the size of the E2E dataset, the lexical diversity of its utterances is 77% of that in the E2E dataset, as indicated by the “delexicalized vocabulary” column in Table TABREF13. Part of the reason naturally is the presence of additional DAs in ViGGO, and therefore we also indicate the statistics in Table TABREF13 for the inform samples only. The average inform utterance length in ViGGO turns out to be over 30% greater, in terms of both words and sentences per utterance.",
"Finally, we note that, unlike the E2E dataset, our test set does not place any specific emphasis on longer MRs. While the average number of slots per MR in the inform DAs are comparable to the E2E dataset, in general the video game MRs are significantly shorter. This is by design, as shorter, more focused responses are more conversational than consistently dense utterances."
],
[
"The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model. For decoding we employ beam search of width 10 ($\\alpha = 1.0$). The generated candidates are then reranked according to the heuristically determined slot coverage score. Before training the model on the ViGGO dataset, we confirmed on the E2E dataset that it performed on par with, or even slightly better than, the strong baseline models from the E2E NLG Challenge, namely, TGen BIBREF20 and Slug2Slug BIBREF21."
],
[
"We evaluate our model's performance on the ViGGO dataset using the following standard NLG metrics: BLEU BIBREF22, METEOR BIBREF23, ROUGE-L BIBREF24, and CIDEr BIBREF25. Additionally, with our heuristic slot error rate (SER) metric we approximate the percentage of failed slot realizations (i.e., missed, incorrect, or hallucinated) across the test set. The results are shown in Table TABREF16."
],
[
"We let two expert annotators with no prior knowledge of the ViGGO dataset evaluate the outputs of our model. Their task was to rate 240 shuffled utterances (120 generated utterances and 120 human references) each on naturalness and coherence using a 5-point Likert scale. We define naturalness as a measure of how much one would expect to encounter an utterance in a conversation with a human, as opposed to sounding robotic, while coherence measures its grammaticality and fluency. Out of the 120 MRs in each partition, 40 were of the inform type, with the other 8 DAs represented by 10 samples each. In addition to that, we had the annotators rate a sample of 80 utterances from the E2E dataset (40 generated and 40 references) as a sort of a baseline for the human evaluation.",
"With both datasets, our model's outputs were highly rated on both naturalness and coherence (see Table TABREF18). The scores for the ViGGO utterances were overall higher than those for the E2E ones, which we understand as an indication of the video game data being more fluent and conversational. At the same time, we observed that the utterances generated by our model tended to score higher than the reference utterances, though significantly more so for the E2E dataset. This is likely a consequence of the ViGGO dataset being cleaner and less noisy than the E2E dataset.",
"In an additional evaluation of ViGGO, we asked the annotators to classify the utterance samples into the 9 DA groups. For this task they were provided with a brief description of each DA type. The annotators identified the DA incorrectly in only 7% of the samples, which we interpret as a confirmation that our DAs are well-defined. Most of the mistakes can be ascribed to the inherent similarity of the recommend and the suggest DA, as well as to our model often generating give_opinion utterances that resemble the inform ones."
],
[
"Among all 9 DAs, the one posing the greatest challenge for our model was give_opinion, due to its high diversity of reference utterances. Despite the occasional incoherence, it learned to produce rich and sensible utterances, for instance “Little Nightmares is a pretty good game. Tarsier Studios is a talented developer and the side view perspective makes it easy to play.”.",
"Since our baseline model does not implement any form of a copy mechanism, it fails on instances with out-of-vocabulary terms, such as the values of the specifier slot in the test set. These, in fact, account for almost half of the errors indicated by the SER metric in Table TABREF16. Therefore, more robust models have good potential for improving on our scores."
],
[
"In Table TABREF20 we demonstrate how the 9 DAs of the ViGGO dataset can support a natural multi-turn exchange on the topic of video games, as a part of a longer casual conversation on different topics. One caveat of using a language generator trained on this dataset in a dialogue system as-is is that multiple subsequent turns discussing the same video game would be repeating its full name. ViGGO was designed for grounded generation but without context, and therefore it is up to the dialogue manager to ensure that pronouns are substituted for the names whenever it would sound more natural in a dialogue. Alternately, the dataset can easily be augmented with automatically constructed samples which omit the name slot in the MR and replace the name with a pronoun in the reference utterance."
],
[
"In this paper we presented a new parallel corpus for data-to-text NLG, which contains 9 dialogue acts, making it more conversational than other similar datasets. The crowdsourced utterances were thoroughly cleaned in order to obtain high-quality human references, which we hope will support the recent trend in research to train neural models on small but high-quality data, like humans can. This could possibly be achieved by transferring fundamental knowledge from larger available corpora, such as the E2E dataset, but perhaps by other, completely new, methods."
],
[
"In Table TABREF22 we present one example of each DA in the ViGGO dataset, including the examples given in Table TABREF2."
],
[
"In Section SECREF5 we mentioned that the slots in the ViGGO dataset can be classified into 5 general categories. Here we provide more detailed descriptions of the categories:",
"Boolean – binary value, such as “yes”/“no” or “true”/“false” (e.g., has_multiplayer or available_on_steam),",
"Numeric – value is a number or contains number(s) as the salient part (e.g., release_year or exp_release_date),",
"Scalar – values are on a distinct scale (e.g., rating or esrb),",
"Categorical – takes on virtually any value, typically coming from a certain category, such as names or types (e.g., name or developer),",
"List – similar to categorical, where the value can, however, consist of multiple individual items (e.g., genres or player_perspective).",
"Note that in ViGGO the items in the value of a List slot are comma-separated, and therefore the individual items must not contain a comma. There are no restrictions as to whether the values are single-word or multi-word in any of the categories."
],
[
"When generating the MRs for the inform DA, we fixed the slot ratios: the name and genres slots were mandatory in every MR, the player_perspective and release_year were enforced in about half of the MRs, while the remaining slots are present in about 25% of the MRs. At the same time we imposed two constraints on the slot combinations: (1) whenever one of the Steam, Linux or Mac related boolean slots is present in an MR, the platforms slot must be included too, and (2) whenever either of the Linux or Mac slots was picked for an MR, the other one was automatically added too. These two constraints were introduced so as to encourage reference utterances with natural aggregations and contrast relations.",
"The remaining 8 DAs, however, contain significantly fewer slots each (see Table TABREF6). We therefore decided to have the MTurk workers select 5 unique slot combinations for each given video game before writing the corresponding utterances. Since for these DAs we collected less data, we tried to ensure in this way that we have a sufficient number of samples for those slot combinations that are most natural to be mentioned in each of the DAs. While fixing mandatory slots for each DA, we instructed the workers to choose 1 or 2 additional slots depending on the task. The data collection for MRs with only 1 additional slot and for those with 2 was performed separately, so as to prevent workers from taking the easy way out by always selecting just a single slot, given the option.",
"Leaving the slot selection to crowdworkers yields a frequency distribution of all slot combinations, which presumably indicates the suitability of different slots to be mentioned together in a sentence. This meta-information can be made use of in a system's dialogue manager to sample from the observed slot combination distributions instead of sampling randomly or hard-coding the combinations. Figure FIGREF30 shows the distributions of the 8 slot pairs most commonly mentioned together in different DAs. These account for 53% of the selections among the 6 DAs that can take 2 additional slots besides the mandatory ones. We can observe some interesting trends in the distributions, such as that the developer + release_year combination was the most frequent one in the confirm DA, while fairly rare in most of the other DAs. This might be because this pair of a game's attributes is arguably the next best identifier of a game after its name."
],
[
"A large proportion of the raw data collected contained typos and various errors, as is inevitable when crowdsourcing. We took the following three steps to clean the data.",
"First, we used regular expressions to enforce several standardization policies regarding special characters, punctuation, and the correction of undesired abbreviations/misspellings of standard domain-specific terms (e.g., we would change terms like “Play station” or “PS4” to the uniform “PlayStation”). At the same time, we removed or enforced hyphens uniformly in certain terms, for example, “single-player”. Although phrases such as “first person” should correctly have a hyphen when used as adjective, the turkers used this rule very inconsistently. In order to avoid model outputs being penalized during the evaluation by the arbitrary choice of a hyphen presence or absence in the reference utterances, we decided to remove the hyphen in all such phrases regardless of the noun/adjective use.",
"Second, we developed an extensive set of heuristics to identify slot-related errors. This process revealed the vast majority of missing or incorrect slot mentions, which we subsequently fixed according to the corresponding MRs. Turkers would sometimes also inject a piece of information which was not present in the MR, some of which is not even represented by any of the slots, e.g., plot or main characters. We remove this extraneous information from the utterances so as to avoid confusing the neural model. This step thus involved certain manual work and was thus performed jointly with the third step.",
"Finally, we further resolved the remaining typos, grammatical errors, and unsolicited information."
],
[
"Even though on the small datasets we work with we do not necessarily expect the Transformer model to perform better than recurrent neural networks, we chose this model for its significantly faster training, without sacrificing the performance. For our experiments a small 2-layer Transformer with 8 heads proved to be sufficient. The input tokens are encoded into embeddings of size 256, and the target sequences were truncated to 60 tokens. The model performed best with dropout values of 0.2. For training of the Transformer models we used the Adam optimizer with a custom learning rate schedule including a brief linear warm-up and a cosine decay."
]
],
"section_name": [
"Introduction",
"The ViGGO Dataset",
"The ViGGO Dataset ::: Meaning Representations",
"The ViGGO Dataset ::: Utterances",
"The ViGGO Dataset ::: Data Collection",
"The ViGGO Dataset ::: Train/Validation/Test Split",
"The ViGGO Dataset ::: ViGGO vs. E2E",
"Baseline System Evaluation",
"Baseline System Evaluation ::: Automatic Metrics",
"Baseline System Evaluation ::: Human Evaluation",
"Baseline System Evaluation ::: Qualitative Analysis",
"Discussion",
"Conclusion",
"Appendix ::: Additional ViGGO Dataset Examples",
"Appendix ::: Slot Categories",
"Appendix ::: Data Collection",
"Appendix ::: Dataset Cleaning",
"Appendix ::: Model Parameters"
]
} | {
"answers": [
{
"annotation_id": [
"8efc73143ed32f02de412437d3aa4b015617dd5b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8f11d08106d54afe50e011d2234af6c8c6817208"
],
"answer": [
{
"evidence": [
"The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model. For decoding we employ beam search of width 10 ($\\alpha = 1.0$). The generated candidates are then reranked according to the heuristically determined slot coverage score. Before training the model on the ViGGO dataset, we confirmed on the E2E dataset that it performed on par with, or even slightly better than, the strong baseline models from the E2E NLG Challenge, namely, TGen BIBREF20 and Slug2Slug BIBREF21.",
"We evaluate our model's performance on the ViGGO dataset using the following standard NLG metrics: BLEU BIBREF22, METEOR BIBREF23, ROUGE-L BIBREF24, and CIDEr BIBREF25. Additionally, with our heuristic slot error rate (SER) metric we approximate the percentage of failed slot realizations (i.e., missed, incorrect, or hallucinated) across the test set. The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 4: Baseline system performance on the ViGGO test set. Despite individual models (Bo3 – best of 3 experiments) often having better overall scores, we consider the Ao3 (average of 3) results the most objective."
],
"extractive_spans": [],
"free_form_answer": "Yes, Transformer based seq2seq is evaluated with average BLEU 0.519, METEOR 0.388, ROUGE 0.631 CIDEr 2.531 and SER 2.55%.",
"highlighted_evidence": [
"The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model.",
"The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 4: Baseline system performance on the ViGGO test set. Despite individual models (Bo3 – best of 3 experiments) often having better overall scores, we consider the Ao3 (average of 3) results the most objective."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"722fcd8ec012d958b553c838c273157f88cfa905"
],
"answer": [
{
"evidence": [
"The main contribution of our work is thus a new parallel data-to-text NLG corpus that (1) is more conversational, rather than information seeking or question answering, and thus more suitable for an open-domain dialogue system, (2) represents a new, unexplored domain which, however, has excellent potential for application in conversational agents, and (3) has high-quality, manually cleaned human-produced utterances."
],
"extractive_spans": [
"manually cleaned human-produced utterances"
],
"free_form_answer": "",
"highlighted_evidence": [
"The main contribution of our work is thus a new parallel data-to-text NLG corpus that (1) is more conversational, rather than information seeking or question answering, and thus more suitable for an open-domain dialogue system, (2) represents a new, unexplored domain which, however, has excellent potential for application in conversational agents, and (3) has high-quality, manually cleaned human-produced utterances."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Is the origin of the dialogues in corpus some video game and what game is that?",
"Is any data-to-text generation model trained on this new corpus, what are the results?",
"How the authors made sure that corpus is clean despite being crowdsourced?"
],
"question_id": [
"88e9e5ad0e4c369b15d81a4e18f7d12ff8fa9f1b",
"14e259a312e653f8fc0d52ca5325b43c3bdfb968",
"e93b4a15b54d139b768d5913fb5fd1aed8ab25da"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Examples of MRs and corresponding reference utterances in the ViGGO dataset. The DA of the MRs is indicated in italics, and the slots in small caps. The slot mentions in the utterances are bolded.",
"Table 2: Overview of mandatory and common possible slots for each DA in the ViGGO dataset. There is an additional slot, EXP RELEASE DATE, only possible in the inform and confirm DAs. Moreover, RATING is also possible in the inform DA, though not mandatory.",
"Figure 1: Distribution of the number of slots across all types of MRs, as well as the inform slot separately, and non-inform slots only.",
"Figure 2: Distribution of the DAs across the train/validation/test split. For each partition the total count of DAs/MRs is indicated.",
"Table 3: Dataset statistics comparing the ViGGO dataset, as well as its subset of inform DAs only (ViGGOinf ), with the E2E dataset. The average trigram frequency was calculated on trigrams that appear more than once.",
"Table 5: Naturalness and coherence scores of our model’s generated outputs compared to the reference utterances, as per the human evaluation. ViGGOinf corresponds to the subset of inform DAs only.",
"Table 4: Baseline system performance on the ViGGO test set. Despite individual models (Bo3 – best of 3 experiments) often having better overall scores, we consider the Ao3 (average of 3) results the most objective.",
"Table 6: An example of a chit-chat about video games comprising utterances of DAs defined in ViGGO. “S” denotes the system and “U” the user turns.",
"Figure 3: Distribution of the 8 most frequently selected slot combinations across different DAs."
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table3-1.png",
"4-Table5-1.png",
"4-Table4-1.png",
"5-Table6-1.png",
"8-Figure3-1.png"
]
} | [
"Is any data-to-text generation model trained on this new corpus, what are the results?"
] | [
[
"1910.12129-Baseline System Evaluation ::: Automatic Metrics-0",
"1910.12129-4-Table4-1.png",
"1910.12129-Baseline System Evaluation-0"
]
] | [
"Yes, Transformer based seq2seq is evaluated with average BLEU 0.519, METEOR 0.388, ROUGE 0.631 CIDEr 2.531 and SER 2.55%."
] | 672 |
1710.00341 | Fully Automated Fact Checking Using External Sources | Given the constantly growing proliferation of false claims online in recent years, there has been also a growing research interest in automatically distinguishing false rumors from factually true claims. Here, we propose a general-purpose framework for fully-automatic fact checking using external sources, tapping the potential of the entire Web as a knowledge source to confirm or reject a claim. Our framework uses a deep neural network with LSTM text encoding to combine semantic kernels with task-specific embeddings that encode a claim together with pieces of potentially-relevant text fragments from the Web, taking the source reliability into account. The evaluation results show good performance on two different tasks and datasets: (i) rumor detection and (ii) fact checking of the answers to a question in community question answering forums. | {
"paragraphs": [
[
"Recent years have seen the proliferation of deceptive information online. With the increasing necessity to validate the information from the Internet, automatic fact checking has emerged as an important research topic. It is at the core of multiple applications, e.g., discovery of fake news, rumor detection in social media, information verification in question answering systems, detection of information manipulation agents, and assistive technologies for investigative journalism. At the same time, it touches many aspects, such as credibility of users and sources, information veracity, information verification, and linguistic aspects of deceptive language.",
"In this paper, we present an approach to fact-checking with the following design principles: (i) generality, (ii) robustness, (iii) simplicity, (iv) reusability, and (v) strong machine learning modeling. Indeed, the system makes very few assumptions about the task, and looks for supportive information directly on the Web. Our system works fully automatically. It does not use any heavy feature engineering and can be easily used in combination with task-specific approaches as well, as a core subsystem. Finally, it combines the representational strength of recurrent neural networks with kernel-based classification.",
"The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False.",
"Figure FIGREF1 presents a real example from one of the datasets we experiment with. The left-hand side of the figure contains a True example, while the right-hand side shows a False one. We show the original claims from snopes.com, the query generated by our system, and the information retrieved from the Web (most relevant snippet and text selection from the web page). The veracity of the claim can be inferred from the textual information.",
"Our contributions can be summarized as follows:",
"The remainder of this paper is organized as follows. Section SECREF2 introduces our method for fact checking claims using external sources. Section SECREF3 presents our experiments and discusses the results. Section SECREF4 describes an application of our approach to a different dataset and a slightly different task: fact checking in community question answering forums. Section SECREF5 presents related work. Finally, Section SECREF6 concludes and suggests some possible directions for future work."
],
[
"Given a claim, our system searches for support information on the Web in order to verify whether the claim is likely to be true. The three steps in this process are (i) external support retrieval, (ii) text representation, and (iii) veracity prediction."
],
[
"This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 .",
"As we aim to develop a general-purpose fact checking system, we use an approach for query generation that does not incorporate any features that are specific to claim verification (e.g., no temporal indicators).",
"We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable. Finally, if our query has returned no results, we iteratively relax it by dropping the final tokens one at a time."
],
[
"Next, we build the representation of a claim and the corresponding snippets and Web pages. First, we calculate three similarities (a) between the claim and a snippet, or (b) between the claim and a Web page: (i) cosine with tf-idf, (ii) cosine over embeddings, and (iii) containment BIBREF1 . We calculate the embedding of a text as the average of the embeddings of its words; for this, we use pre-trained embeddings from GloVe BIBREF2 . Moreover, as a Web page can be long, we first split it into a set of rolling sentence triplets, then we calculate the similarities between the claim and each triplet, and we take the highest scoring triplet. Finally, as we have up to ten hits from the search engine, we take the maximum and also the average of the three similarities over the snippets and over the Web pages.",
"We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN."
],
[
"Next, we build classifiers: neural network (NN), support vector machines (SVM), and a combination thereof (SVM+NN).",
"The architecture of our NN is shown on top of Figure FIGREF7 . We have five LSTM sub-networks, one for each of the text sources from two search engines: Claim, Google Web page, Google snippet, Bing Web page, and Bing snippet. The claim is fed into the neural network as-is. As we can have multiple snippets, we only use the best-matching one as described above. Similarly, we only use a single best-matching triple of consecutive sentences from a Web page. We further feed the network with the similarity features described above.",
"All these vectors are concatenated and fully connected to a much more compact hidden layer that captures the task-specific embeddings. This layer is connected to a softmax output unit to classify the claim as true or false. The bottom of Figure FIGREF7 represents the generic architecture of each of the LSTM components. The input text is transformed into a sequence of word embeddings, which is then passed to the bidirectional LSTM layer to obtain a representation for the full sequence.",
"Our second classifier is an SVM with an RBF kernel. The input is the same as for the NN: word embeddings and similarities. However, the word embeddings this time are calculated by averaging rather than using a bi-LSTM.",
"Finally, we combine the SVM with the NN by augmenting the input to the SVM with the values of the units in the hidden layer. This represents a task-specific embedding of the input example, and in our experiments it turned out to be quite helpful. Unlike in the SVM only model, this time we use the bi-LSTM embeddings as an input to the SVM. Ultimately, this yields a combination of deep learning and task-specific embeddings with RBF kernels."
],
[
"We used part of the rumor detection dataset created by BIBREF3 . While they analyzed a claim based on a set of potentially related tweets, we focus on the claim itself and on the use of supporting information from the Web.",
"The dataset consists of 992 sets of tweets, 778 of which are generated starting from a claim on snopes.com, which ma2016detecting converted into a query. Another 214 sets of tweets are tweet clusters created by other researchers BIBREF4 , BIBREF5 with no claim behind them. ma2016detecting ignored the claim and did not release it as part of their dataset. We managed to find the original claim for 761 out of the 778 snopes.com-based clusters.",
"Our final dataset consists of 761 claims from snopes.com, which span various domains including politics, local news, and fun facts. Each of the claims is labeled as factually true (34%) or as a false rumor (66%). We further split the data into 509 for training, 132 for development, and 120 for testing. As the original split for the dataset was not publicly available, and as we only used a subset of their data, we had to make a new training and testing split. Note that we ignored the tweets, as we wanted to focus on a complementary source of information: the Web. Moreover, ma2016detecting used manual queries, while we use a fully automatic method. Finally, we augmented the dataset with Web-retrieved snippets, Web pages, and sentence triplets from Web pages."
],
[
"We tuned the architecture (i.e., the number of layers and their size) and the hyper-parameters of the neural network on the development dataset. The best configuration uses a bidirectional LSTM with 25 units. It further uses a RMSprop optimizer with 0.001 initial learning rate, L2 regularization with INLINEFORM0 =0.1, and 0.5 dropout after the LSTM layers. The size of the hidden layer is 60 with tanh activations. We use a batch of 32 and we train for 400 epochs.",
"For the SVM model, we merged the development and the training dataset, and we then ran a 5-fold cross-validation with grid-search, looking for the best kernel and its parameters. We ended up selecting an RBF kernel with INLINEFORM0 and INLINEFORM1 0.01."
],
[
"The evaluation metrics we use are P (precision), R (recall), and F INLINEFORM0 , which we calculate with respect to the false and to the true claims. We further report AvgR (macro-average recall), AvgF INLINEFORM1 (macro-average F INLINEFORM2 ), and Acc (accuracy)."
],
[
"Table TABREF14 shows the results on the test dataset. We can see that both the NN and the SVM models improve over the majority class baseline (all false rumors) by a sizable margin. Moreover, the NN consistently outperforms the SVM by a margin on all measures. Yet, adding the task-specific embeddings from the NN as features of the SVM yields overall improvements over both the SVM and the NN in terms of avgR, avgF INLINEFORM0 , and accuracy. We can see that both the SVM and the NN overpredict the majority class (false claims); however, the combined SVM+NN model is quite balanced between the two classes.",
"Table TABREF22 compares the performance of the SVM with and without task-specific embeddings from the NN, when training on Web pages vs. snippets, returned by Google vs. Bing vs. both. The NN embeddings consistently help the SVM in all cases. Moreover, while the baseline SVM using snippets is slightly better than when using Web pages, there is almost no difference between snippets vs. Web pages when NN embeddings are added to the basic SVM. Finally, gathering external support from either Google or Bing makes practically no difference, and using the results from both together does not yield much further improvement. Thus, (i) the search engines already do a good job at generating relevant snippets, and one does not need to go and download the full Web pages, and (ii) the choice of a given search engine is not an important factor. These are good news for the practicality of our approach.",
"Unfortunately, direct comparison with respect to BIBREF3 is not possible. First, we only use a subset of their examples: 761 out of 993 (see Section SECREF17 ), and we also have a different class distribution. More importantly, they have a very different formulation of the task: for them, the claim is not available as input (in fact, there has never been a claim for 21% of their examples); rather an example consists of a set of tweets retrieved using manually written queries.",
"In contrast, our system is fully automatic and does not use tweets at all. Furthermore, their most important information source is the change in tweets volume over time, which we cannot use. Still, our results are competitive to theirs when they do not use temporal features.",
"To put the results in perspective, we can further try to make an indirect comparison to the very recent paper by BIBREF6 . They also present a model to classify true vs. false claims extracted from snopes.com, by using information extracted from the Web. Their formulation of the task is the same as ours, but our corpora and label distributions are not the same, which makes a direct comparison impossible. Still, we can see that regarding overall classification accuracy they improve a baseline from 73.7% to 84.02% with their best model, i.e., a 39.2% relative error reduction. In our case, we go from 66.7% to 80.0%, i.e., an almost identical 39.9% error reduction. These results are very encouraging, especially given the fact that our model is much simpler than theirs regarding the sources of information used (they model the stance of the text, the reliability of the sources, the language style of the articles, and the temporal footprint)."
],
[
"Next, we tested the generality of our approach by applying it to a different setup: fact-checking the answers in community question answering (cQA) forums. As this is a new problem, for which no dataset exists, we created one. We augmented with factuality annotations the cQA dataset from SemEval-2016 Task 3 (CQA-QA-2016) BIBREF7 . Overall, we annotated 249 question–answer, or INLINEFORM0 - INLINEFORM1 , pairs (from 71 threads): 128 factually true and 121 factually false answers.",
"Each question in CQA-QA-2016 has a subject, a body, and meta information: ID, category (e.g., Education, and Moving to Qatar), date and time of posting, user name and ID. We selected only the factual questions such as “What is Ooredoo customer service number?”, thus filtering out all (i) socializing, e.g., “What was your first car?”, (ii) requests for opinion/advice/guidance, e.g., “Which is the best bank around??”, and (iii) questions containing multiple sub-questions, e.g., “Is there a land route from Doha to Abudhabi. If yes; how is the road and how long is the journey?”",
"Next, we annotated for veracity the answers to the retained questions. Note that in CQA-QA-2016, each answer has a subject, a body, meta information (answer ID, user name and ID), and a judgment about how well it addresses the question of its thread: Good vs. Potentially Useful vs. Bad . We only annotated the Good answers. We further discarded answers whose factuality was very time-sensitive (e.g., “It is Friday tomorrow.”, “It was raining last week.”), or for which the annotators were unsure.",
"We targeted very high quality, and thus we did not use crowdsourcing for the annotation, as pilot annotations showed that the task was very difficult and that it was not possible to guarantee that Turkers would do all the necessary verification, e.g., gathering evidence from trusted sources. Instead, all examples were first annotated independently by four annotators, and then each example was discussed in detail to come up with a final label. We ended up with 249 Good answers to 71 different questions, which we annotated for factuality: 128 Positive and 121 Negative examples. See Table TABREF26 for details.",
"We further split our dataset into 185 INLINEFORM0 – INLINEFORM1 pairs for training, 31 for development, and 32 for testing, preserving the general positive:negative ratio, and making sure that the questions for the INLINEFORM2 – INLINEFORM3 pairs did not overlap between the splits. Figure FIGREF23 presents an excerpt of an example from the dataset, with one question and three answers selected from a longer thread. Answer INLINEFORM4 contains false information, while INLINEFORM5 and INLINEFORM6 are true, as can be checked on an official governmental website.",
"We had to fit our system for this problem, as here we do not have claims, but a question and an answer. So, we constructed the query from the concatenation of INLINEFORM0 and INLINEFORM1 . Moreover, as Google and Bing performed similarly, we only report results using Google. We limited our run to snippets only, as we have found them rich enough above (see Section SECREF3 ). Also, we had a list of reputed and Qatar-related sources for the domain, and we limited our results to these sources only. This time, we had more options to calculate similarities compared to the rumors dataset: we can compare against INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 – INLINEFORM5 ; we chose to go with the latter. For the LSTM representations, we use both the question and the answer.",
"Table TABREF27 shows the results on the cQA dataset. Once again, our models outperformed all baselines by a margin. This time, the predictions of all models are balanced between the two classes, which is probably due to the dataset being well balanced in general. The SVM model performs better than the NN by itself. This is due to the fact that the cQA dataset is significantly smaller than the rumor detection one. Thus, the neural network could not be trained effectively by itself. Nevertheless, the task-specific representations were useful and combining them with the SVM model yielded consistent improvements on all the measures once again."
],
[
"Journalists, online users, and researchers are well aware of the proliferation of false information on the Web, and topics such as information credibility and fact checking are becoming increasingly important as research directions. For example, there was a recent 2016 special issue of the ACM Transactions on Information Systems journal on Trust and Veracity of Information in Social Media BIBREF9 , there was a SemEval-2017 shared task on Rumor Detection BIBREF10 , and there is an upcoming lab at CLEF-2018 on Automatic Identification and Verification of Claims in Political Debates BIBREF11 .",
"The credibility of contents on the Web has been questioned by researches for a long time. While in the early days the main research focus was on online news portals BIBREF12 , BIBREF13 , BIBREF14 , the interest has eventually shifted towards social media BIBREF4 , BIBREF15 , BIBREF6 , BIBREF16 , which are abundant in sophisticated malicious users such as opinion manipulation trolls, paid BIBREF17 or just perceived BIBREF18 , BIBREF19 , sockpuppets BIBREF20 , Internet water army BIBREF21 , and seminar users BIBREF22 .",
"For instance, BIBREF23 studied the credibility of Twitter accounts (as opposed to tweet posts), and found that both the topical content of information sources and social network structure affect source credibility. Other work, closer to ours, aims at addressing credibility assessment of rumors on Twitter as a problem of finding false information about a newsworthy event BIBREF4 . This model considers user reputation, writing style, and various time-based features, among others.",
"Other efforts have focused on news communities. For example, several truth discovery algorithms are combined in an ensemble method for veracity estimation in the VERA system BIBREF24 . They proposed a platform for end-to-end truth discovery from the Web: extracting unstructured information from multiple sources, combining information about single claims, running an ensemble of algorithms, and visualizing and explaining the results. They also explore two different real-world application scenarios for their system: fact checking for crisis situations and evaluation of trustworthiness of a rumor. However, the input to their model is structured data, while here we are interested in unstructured text as input.",
"Similarly, the task defined by BIBREF25 combines three objectives: assessing the credibility of a set of posted articles, estimating the trustworthiness of sources, and predicting user's expertise. They considered a manifold of features characterizing language, topics and Web-specific statistics (e.g., review ratings) on top of a continuous conditional random fields model. In follow-up work, BIBREF26 proposed a model to support or refute claims from snopes.com and Wikipedia by considering supporting information gathered from the Web. They used the same task formulation for claims as we do, but different datasets. In yet another follow-up work, Popat:2017:TLE:3041021.3055133 proposed a complex model that considers stance, source reliability, language style, and temporal information.",
"Our approach to fact checking is related: we verify facts on the Web. However, we use a much simpler and feature-light system, and a different machine learning model. Yet, our model performs very similarly to this latter work (even though a direct comparison is not possible as the datasets differ), which is a remarkable achievement given the fact that we consider less knowledge sources, we have a conceptually simpler model, and we have six times less training data than Popat:2017:TLE:3041021.3055133.",
"Another important research direction is on using tweets and temporal information for checking the factuality of rumors. For example, BIBREF27 used temporal patterns of rumor dynamics to detect false rumors and to predict their frequency. BIBREF27 focused on detecting false rumors in Twitter using time series. They used the change of social context features over a rumor's life cycle in order to detect rumors at an early stage after they were broadcast.",
"A more general approach for detecting rumors is explored by BIBREF3 , who used recurrent neural networks to learn hidden representations that capture the variation of contextual information of relevant posts over time. Unlike this work, we do not use microblogs, but we query the Web directly in search for evidence. Again, while direct comparison to the work of BIBREF3 is not possible, due to differences in dataset and task formulation, we can say that our framework is competitive when temporal information is not used. More importantly, our approach is orthogonal to theirs in terms of information sources used, and thus, we believe there is potential in combining the two approaches.",
"In the context of question answering, there has been work on assessing the credibility of an answer, e.g., based on intrinsic information BIBREF28 , i.e., without any external resources. In this case, the reliability of an answer is measured by computing the divergence between language models of the question and of the answer. The spawn of community-based question answering Websites also allowed for the use of other kinds of information. Click counts, link analysis (e.g., PageRank), and user votes have been used to assess the quality of a posted answer BIBREF29 , BIBREF30 , BIBREF31 . Nevertheless, these studies address the answers' credibility level just marginally.",
"Efforts to determine the credibility of an answer in order to assess its overall quality required the inclusion of content-based information BIBREF32 , e.g., verbs and adjectives such as suppose and probably, which cast doubt on the answer. Similarly, BIBREF33 used source credibility (e.g., does the document come from a government Website?), sentiment analysis, and answer contradiction compared to other related answers.",
"Overall, credibility assessment for question answering has been mostly modeled at the feature level, with the goal of assessing the quality of the answers. A notable exception is the work of BIBREF34 , where credibility is treated as a task of its own right. Yet, note that credibility is different from factuality (our focus here) as the former is a subjective perception about whether a statement is credible, rather than verifying it as true or false as a matter of fact; still, these notions are often wrongly mixed in the literature. To the best of our knowledge, no previous work has targeted fact-checking of answers in the context of community Question Answering by gathering external support."
],
[
"We have presented and evaluated a general-purpose method for fact checking that relies on retrieving supporting information from the Web and comparing it to the claim using machine learning. Our method is lightweight in terms of features and can be very efficient because it shows good performance by only using the snippets provided by the search engines. The combination of the representational power of neural networks with the classification of kernel-based methods has proven to be crucial for making balanced predictions and obtaining good results. Overall, the strong performance of our model across two different fact-checking tasks confirms its generality and potential applicability for different domains and for different fact-checking task formulations.",
"In future work, we plan to test the generality of our approach by applying it to these and other datasets in combination with complementary methods, e.g., those focusing on microblogs and temporal information in Twitter to make predictions about rumors BIBREF27 , BIBREF3 . We also want to explore the possibility of providing justifications for our predictions, and we plan to integrate our method into a real-world application."
],
[
"This research was performed by the Arabic Language Technologies group at Qatar Computing Research Institute, HBKU, within the Interactive sYstems for Answer Search project (Iyas)."
]
],
"section_name": [
"Introduction",
"The Fact-Checking System",
"External Support Retrieval",
"Text Representation",
"Veracity Prediction",
"Dataset",
"Experimental Setup",
"Evaluation Metrics",
"Results",
"Application to cQA",
"Related Work",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"78f1fc6be70125c3ef5d14b90acdb78424e414e7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 3: Example from the cQA forum dataset."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: Example from the cQA forum dataset."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"973538abc7f77ce0a0a50e5cbd5c280f52d53b33"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"731e59641baa58931e452b44250939ac37c6fc48"
],
"answer": [
{
"evidence": [
"This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 .",
"We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable. Finally, if our query has returned no results, we iteratively relax it by dropping the final tokens one at a time."
],
"extractive_spans": [],
"free_form_answer": " Generate a query out of the claim and querying a search engine, rank the words by means of TF-IDF, use IBM's AlchemyAPI to identify named entities, generate queries of 5–10 tokens, which execute against a search engine, and collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable.",
"highlighted_evidence": [
"This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 .",
"We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"839ef05467ebe0cb1bbb1febc908968c410c2b2d"
],
"answer": [
{
"evidence": [
"We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN."
],
"extractive_spans": [
" task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"78a170014ed131979d5c3e947b77b897e7edd4d3"
],
"answer": [
{
"evidence": [
"The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False."
],
"extractive_spans": [
"embedding of the claim",
"Web evidence"
],
"free_form_answer": "",
"highlighted_evidence": [
" We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"Does this system improve on the SOTA?",
"How are the potentially relevant text fragments identified?",
"What algorithm and embedding dimensions are used to build the task-specific embeddings?",
"What data is used to build the task-specific embeddings?"
],
"question_id": [
"21f615bf19253fc27ea838012bc088f4d10cdafd",
"1ed006dde28f6946ad2f8bd204f61eda0059a515",
"29d917cc38a56a179395d0f3a2416fca41a01659",
"ad4658c64056b6eddda00d3cbc55944ae01eb437",
"89b9e298993dbedd3637189c3f37c0c4791041a1"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Example claims and the information we use to predict whether they are factually true or false.",
"Figure 2: Our general neural network architecture (top) and detailed LSTM representation (bottom). Each blue box in the top consists of the bi-LSTM structure in the bottom.",
"Table 1: Results on the rumor detection dataset using Web pages returned by the search engines.",
"Table 2: Results using an SVM with and without task-specific embeddings from the NN on the Rumor detection dataset. Training on Web pages vs. snippets vs. both.",
"Figure 3: Example from the cQA forum dataset.",
"Table 3: Distribution of the answer labels.",
"Table 4: Results on the cQA answer fact-checking problem."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure3-1.png",
"6-Table3-1.png",
"7-Table4-1.png"
]
} | [
"How are the potentially relevant text fragments identified?"
] | [
[
"1710.00341-External Support Retrieval-2",
"1710.00341-External Support Retrieval-0"
]
] | [
" Generate a query out of the claim and querying a search engine, rank the words by means of TF-IDF, use IBM's AlchemyAPI to identify named entities, generate queries of 5–10 tokens, which execute against a search engine, and collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable."
] | 677 |
1911.02821 | Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention | Most Chinese pre-trained encoders take a character as a basic unit and learn representations according to a character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful unit in Chinese. Hence, we propose a novel word-aligned attention to incorporate word segmentation information, which is complementary to various Chinese pre-trained language models. Specifically, we devise a mixed-pooling strategy to align the character-level attention to the word level, and propose an effective fusion method to solve the potential issue of segmentation error propagation. As a result, word and character information are explicitly integrated during the fine-tuning procedure. Experimental results on various Chinese NLP benchmarks demonstrate that our model could bring another significant gain over several pre-trained models. | {
"paragraphs": [
[
"Pre-trained language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on.",
"Generally, most of PLMs focus on using attention mechanism BIBREF4 to represent the natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that character is the smallest linguistic unit. However, in most cases, the semantic of single Chinese character is ambiguous. UTF8gbsn For example, in Table 1, using the attention over word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering the word segmentation information can lead to better language understanding and accordingly benefits various Chines NLP tasks BIBREF5, BIBREF6, BIBREF7.",
"All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words . To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into character-level attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge.",
"In this paper, we propose a new architecture, named Multi-source Word Alignd Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximate attention to each character in one Chinese word. Drawing inspiration from such finding, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy BIBREF11. (2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate output. In this way, we can implicitly reduce the error caused by automatic annotation.",
"Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks."
],
[
"The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLM and enhance original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\\mathbf {H}$ from the last layer of encoder are treated as the enriched contextual representations."
],
[
"Although the character-level PLM can well capture language knowledge from text, it neglects the semantic information expressed in the word level. Therefore we apply a word-aligned layer on top of the encoder to integrate the word boundary information into representation of character with the attention aggregation mechanism.",
"For an input sequence with with $n$ characters $S=[c_1, c_2, ... , c_n]$, where $c_j$ denotes the $j$-th character, Chinese words segmentation tool $\\pi $ is used to partition $S$ into non-overlapping word blocks:",
"where $w_i = \\lbrace c_s, c_{s+1}, ..., c_{s+l-1}\\rbrace $ is the $i$-th segmented word of length $l$ and $s$ is the index of $w_i$'s first character in $S$. We apply the self-attention with the representations of all input characters to get the character-level attention score matrix $\\textbf {A}_c \\in \\mathbb {R}^{n \\times n}$. It can be formulated as:",
"where $\\textbf {Q}$ and $\\textbf {K}$ are both equal to the collective representation $\\textbf {H}$ at the last layer of the Chinese PLM, $\\textbf {W}_k \\in \\mathbb {R}^{d\\times d}$ and $\\textbf {W}_q \\in \\mathbb {R}^{d\\times d}$ are trainable parameters for projection. While $\\textbf {A}_c$ models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atoms in the attention can better represent the semantics, as the literal meaning of each individual characters can be quite different from the implied meaning of the whole word, and the simple weighted sum in character-level cannot capture the semantic interaction between words.",
"To this end, we propose to align $\\textbf {A}_c$ in the word level and integrate the inner-word attention. For the sake of simplicity, we rewrite $\\textbf {A}_c$ as $[\\textbf {a}_c^1, \\textbf {a}_c^2, ... ,\\textbf {a}_c^n]$, where $\\textbf {a}_c^i \\in \\mathbb {R}^n $ denotes the $i$-th row vector of $\\textbf {A}_c$ and the attention score vector of the $i$-th character. Then we deploy $\\pi $ to segment $\\textbf {A}_c$ according to $\\pi (S)$. For example, if $\\pi (S) = [\\lbrace c_1, c_2\\rbrace , \\lbrace c_3\\rbrace , ...,\\lbrace c_{n-1}, c_{n}\\rbrace ]$, then",
"In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\\lbrace \\textbf {a}_c^s,..., \\textbf {a}_c^{s+l-1}\\rbrace $ into one attention vector $\\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows:",
"where $\\lambda \\in R^1 $ is a weighting trainable variable to balance the mean and max pooling, $\\textbf {e}_l=[1,...,1]^T$ represents a $l$-dimensional all-ones vector, $l$ is the length of word $w_i$, $\\textbf {e}_l \\otimes \\textbf {a}_w^i=[\\textbf {a}_w^i,...,\\textbf {a}_w^i]$ denotes the kronecker product operation between $\\textbf {e}_l$ and $\\textbf {a}_w^i$, $\\hat{\\textbf {A}}_c \\in \\mathbb {R}^{n \\times n}$ is the aligned attention matrix. The Eq. (DISPLAY_FORM9-) can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. Finally, we get the enhanced character representation produced by word-aligned attention:",
"where $\\textbf {V} = \\textbf {H}$, $\\textbf {W}_v \\in \\mathbb {R}^{d\\times d}$ is a trainable projection matrix. Besides, we also use multi-head attention BIBREF4 to capture information from different representation subspaces jointly, thus we have $K$ different aligned attention matrices $\\hat{\\textbf {A}}_c^k (1\\le k\\le K)$ and corresponding output $\\hat{\\textbf {H}}^k$. With multi-head attention architecture, the output can be expressed as follows:"
],
[
"As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\\pi $. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. In practice, The ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different $\\pi $ may provide diverse $\\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assume that there are $M$ popular CWS tools employed, we can obtain $M$ different representations $\\overline{\\textbf {H}}^1, ..., \\overline{\\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows:",
"where $\\textbf {W}_g$ is the parameter matrix and $\\tilde{\\textbf {H}}$ is the final output of the MWA attention layer."
],
[
"To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words.",
"We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix."
],
[
"Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models.",
"In detail, On the EC task, we observe 1.46% absolute improvement in F1 score over ERINE. SPM and NLI tasks can also gain benefits from our enhanced representation, achieving an absolute F1 increase of 0.68% and 0.55% over original models averagely. For the NER task, our method improves the performance of BERT by 1.54%, and obtains 1.23% improvement averagely over all baselines. We attribute such significant gain in NER to the particularity of this task. Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries. Thus the potential boundary information presented by the additional segmentation input can provide a better guidance to label each character, which is consistent with the conclusion in BIBREF6, BIBREF7."
],
[
"To demonstrate the effectiveness of our multi-source fusion method in reducing the segmentation error introduced by CWS tools, We further carry out experiments on the EC task with different segmentation inputs. Table TABREF16 presents the comprehensive results on the three segmentation inputs produced by three CWS tools aforementioned. Experimental results show that our model gives quite stable improvement no matter the segmentation input quality. This again suggests the effectiveness of incorporating word segmentation information into character-level PLMs. And by employing multiple segmenters and fusing them together could introduce richer segmentation information and reduce the impact of general existent segmentation error."
],
[
"In this paper, we propose an effective architecture Word-aligned Attention to incorporate word segmentation information into character-based pre-trained language models, which is adopted to a variety of downstream NLP tasks as an extend layer in fine-tuned process. And we also employ more segmenters into via proposed Multi-source Word-aligned Attention for reducing segmentation error. The experimental results show the effectiveness of our method. Comparing to BERT, ERNIE and BERT-wwm, our model obtains substantial improvements on various NLP benchmarks. Although we mainly focused on Chinese PLM in this paper, our model would take advantage the capabilities of Word-aligned Attention for word-piece in English NLP task. We are also considering applying this model into pre-training language model for various Language Model task in different grain to capture multi-level language features."
]
],
"section_name": [
"Introduction",
"Methodology ::: Character-level Pre-trained Encoder",
"Methodology ::: Word-aligned Attention",
"Methodology ::: Multi-source Word-aligned Attention",
"Experiments ::: Experiments Setup",
"Experiments ::: Experiment Results",
"Experiments ::: Ablation Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"946ddcbd46cbfb059d571d9aebf27201a80ebf4a"
],
"answer": [
{
"evidence": [
"To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words."
],
"extractive_spans": [
"BERT, ERNIE, and BERT-wwm"
],
"free_form_answer": "",
"highlighted_evidence": [
"To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c24f3d0ee8ad87b9ee1f75ab6bf4f7700316d311"
],
"answer": [
{
"evidence": [
"In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\\lbrace \\textbf {a}_c^s,..., \\textbf {a}_c^{s+l-1}\\rbrace $ into one attention vector $\\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows:"
],
"extractive_spans": [
"ttention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word",
"we devise an appropriate aggregation module to fuse the inner-word character attention"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"742f7e119a52e7321a38141df06264e5db7294f2"
],
"answer": [
{
"evidence": [
"Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models.",
"FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019)."
],
"extractive_spans": [],
"free_form_answer": "weibo-100k, Ontonotes, LCQMC and XNLI",
"highlighted_evidence": [
"Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets.",
"FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cf20997985621f7c70e1f870101131f42e888772"
],
"answer": [
{
"evidence": [
"We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix."
],
"extractive_spans": [
"Emotion Classification (EC)",
"Named Entity Recognition (NER)",
"Sentence Pair Matching (SPM)",
"Natural Language Inference (NLI)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What pre-trained models did they compare to?",
"How does the fusion method work?",
"What dataset did they use?",
"What benchmarks did they experiment on?"
],
"question_id": [
"6b4de7fef3a543215f16042ce6a29186bf84fea4",
"3a62dd5fece70f8bf876dcbb131223682e3c54b7",
"34fab25d9ceb9c5942daf4ebdab6c5dd4ff9d3db",
"2c20426c003f7e3053f8e6c333f8bb744f6f31f8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Results from different tokenizers over北京西 山森林公园(Beijing west mount forest park).",
"Figure 1: Architecture of Word-aligned Attention",
"Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019).",
"Table 3: Results of word-aligned attention produced by difference segmenters, and results of aggregated model over multi tokenizers on weibo sentiment-100k dataset."
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What dataset did they use?"
] | [
[
"1911.02821-4-Table2-1.png",
"1911.02821-Experiments ::: Experiment Results-0"
]
] | [
"weibo-100k, Ontonotes, LCQMC and XNLI"
] | 680 |
1909.02265 | Towards Task-Oriented Dialogue in Mixed Domains | This work investigates the task-oriented dialogue problem in mixed-domain settings. We study the effect of alternating between different domains in sequences of dialogue turns using two related state-of-the-art dialogue systems. We first show that a specialized state tracking component in multiple domains plays an important role and gives better results than an end-to-end task-oriented dialogue system. We then propose a hybrid system which is able to improve the belief tracking accuracy by about 28% absolute on average on a standard multi-domain dialogue dataset. These experimental results give some useful insights for improving our commercial chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications. | {
"paragraphs": [
[
"In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue System (SDS), namely task-oriented dialogue system and multi-domain dialogue system. We briefly review the recent literature related to these topics as follows.",
"Task-oriented dialogue systems are computer programs which can assist users to complete tasks in specific domains by understanding user requests and generating appropriate responses within several dialogue turns. Such systems are useful in domain-specific chatbot applications which help users find a restaurant or book a hotel. Conventional approach for building a task-oriented dialogue system is concerned with building a quite complex pipeline of many connected components. These components are usually independently developed which include at least four crucial modules: a natural language understanding module, a dialogue state tracking module, a dialogue policy learning module, and a answer generation module. Since these systems components are usually trained independently, their optimization targets may not fully align with the overall system evaluation criteria BIBREF0. In addition, such a pipeline system often suffers from error propagation where error made by upstream modules are accumuated and got amplified to the downstream ones.",
"To overcome the above limitations of pipeline task-oriented dialogue systems, much research has focused recently in designing end-to-end learning systems with neural network-based models. One key property of task-oriented dialogue model is that it is required to reason and plan over multiple dialogue turns by aggregating useful information during the conversation. Therefore, sequence-to-sequence models such as the encoder-decoder based neural network models are proven to be suitable for both task-oriented and non-task-oriented systems. Serban et al. proposed to build end-to-end dialogue systems using generative hierarchical recurrent encoder-decoder neural network BIBREF1. Li et al. presented persona-based models which incorporate background information and speaking style of interlocutors into LSTM-based seq2seq network so as to improve the modeling of human-like behavior BIBREF2. Wen et al. designed an end-to-end trainable neural dialogue model with modularly connected components BIBREF3. Bordes et al. BIBREF4 proposed a task-oriented dialogue model using end-to-end memory networks. At the same time, many works explored different kinds of networks to model the dialogue state, such as copy-augmented networks BIBREF5, gated memory networks BIBREF6, query-regression networks BIBREF7. These systems do not perform slot-filling or user goal tracking; they rank and select a response from a set of response candidates which are conditioned on the dialogue history.",
"One of the significant effort in developing end-to-end task-oriented systems is the recent Sequicity framework BIBREF8. This framework also relies on the sequence-to-sequence model and can be optimized with supervised or reinforcement learning. The Sequicity framework introduces the concept of belief span (bspan), which is a text span that tracks the dialogue states at each turn. In this framework, the task-oriented dialogue problem is decomposed into two stages: bspan generation and response generation. This framework has been shown to significantly outperform state-of-the-art pipeline-based methods.",
"The second line of work in SDS that is related to this work is concerned with multi-domain dialogue systems. As presented above, one of the key components of a dialogue system is dialogue state tracking, or belief tracking, which maintains the states of conversation. A state is usually composed of user's goals, evidences and information which is accumulated along the sequence of dialogue turns. While the user's goal and evidences are extracted from user's utterances, the useful information is usually aggregated from external resources such as knowledge bases or dialogue ontologies. Such knowledge bases contain slot type and slot value entries in one or several predefined domains. Most approaches have difficulty scaling up with multiple domains due to the dependency of their model parameters on the underlying knowledge bases. Recently, Ramadan et al. BIBREF9 has introduced a novel approach which utilizes semantic similarity between dialogue utterances and knowledge base terms, allowing the information to be shared across domains. This method has been shown not only to scale well to multi-domain dialogues, but also outperform existing state-of-the-art models in single-domain tracking tasks.",
"The problem that we are interested in this work is task-oriented dialogue in mixed-domain settings. This is different from the multi-domain dialogue problem above in several aspects, as follows:",
"First, we investigate the phenomenon of alternating between different dialogue domains in subsequent dialogue turns, where each turn is defined as a pair of user question and machine answer. That is, the domains are mixed between turns. For example, in the first turn, the user requests some information of a restaurant; then in the second turn, he switches to the a different domain, for example, he asks about the weather at a specific location. In a next turn, he would either switch to a new domain or come back to ask about some other property of the suggested restaurant. This is a realistic scenario which usually happens in practical chatbot applications in our observations. We prefer calling this problem mixed-domain dialogue rather than multiple-domain dialogue.",
"Second, we study the effect of the mixed-domain setting in the context of multi-domain dialogue approaches to see how they perform in different experimental scenarios.",
"The main findings of this work include:",
"A specialized state tracking component in multiple domains still plays an important role and gives better results than a state-of-the-art end-to-end task-oriented dialogue system.",
"A combination of specialized state tracking system and an end-to-end task-oriented dialogue system is beneficial in mix-domain dialogue systems. Our hybrid system is able to improve the belief tracking accuracy of about 28% of average absolute point on a standard multi-domain dialogue dataset.",
"These experimental results give some useful insights on data preparation and acquisition in the development of the chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications.",
"The remainder of this paper is structured as follows. First, Section SECREF2 discusses briefly the two methods in building dialogue systems that our method relies on. Next, Section SECREF3 presents experimental settings and results. Finally, Section SECREF4 concludes the paper and gives some directions for future work."
],
[
"In this section, we present briefly two methods that we use in our experiments which have been mentioned in the previous section. The first method is the Sequicity framework and the second one is the state-of-the-art multi-domain dialogue state tracking approach."
],
[
"Figure FIGREF1 shows the architecture of the Sequicity framework as described in BIBREF8. In essence, in each turn, the Sequicity model first takes a bspan ($B_1$) and a response ($R_1$) which are determined in the previous step, and the current human question ($U_2$) to generate the current bspan. This bspan is then used together with a knowledge base to generate the corresponding machine answer ($R_2$), as shown in the right part of Figure FIGREF1.",
"The left part of that figure shows an example dialogue in a mixed-domain setting (which will be explained in Section SECREF3)."
],
[
"Figure FIGREF8 shows the architecture of the multi-domain belief tracking with knowledge sharing as described in BIBREF9. This is the state-of-the-art belief tracker for multi-domain dialogue.",
"This system encodes system responses with 3 bidirectional LSTM network and encodes user utterances with 3+1 bidirectional LSTM network. There are in total 7 independent LSTMs. For tracking domain, slot and value, it uses 3 corresponding LSTMs, either for system response or user utterance. There is one special LSTM to track the user affirmation. The semantic similarity between the utterances and ontology terms are learned and shared between domains through their embeddings in the same semantic space."
],
[
"In this section, we present experimental settings, different scenarios and results. We first present the datasets, then implementation settings, and finally obtained results."
],
[
"We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12.",
"In this original dataset, each dialogue is of a single domain where all of its turns are on that domain. Each turn is composed of a sentence pair, one sentence is a user utterance, the other sentence is the corresponding machine response. A dialogue is a sequence of turns. To create mix-domain dialogues for our experiments, we make some changes in this dataset as follows:",
"We keep the dialogues in the calendar domain as they are.",
"We take a half of dialogues in the weather domain and a half of dialogues in the POI domain and mix their turns together, resulting in a dataset of mixed weather-POI dialogues. In this mixed-domain dialogue, there is a turn in the weather domain, followed by a turn in POI domain or vice versa.",
"We call this dataset the sequential turn dataset. Since the start turn of a dialogue has a special role in triggering the learning systems, we decide to create another and different mixed-domain dataset with the following mixing method:",
"The first turn and the last turn of each dialogue are kept as in their original.",
"The internal turns are mixed randomly.",
"We call this dataset the random turn dataset. Some statistics of these mixed-domain datasets are shown in the lower half of the Table TABREF12."
],
[
"For the task-oriented Sequicity model, we keep the best parameter settings as reported in the original framework, on the same KVRET dataset BIBREF8. In particular, the hidden size of GRU unit is set to 50; the learning rate of Adam optimizer is 0.003. In addition to the original GRU unit, we also re-run this framework with simple RNN unit to compare the performance of different recurrent network types. The Sequicity tool is freely available for download.",
"For the multi-domain belief tracker model, we set the hidden size of LSTM units to 50 as in the original model; word embedding size is 300 and number of training epochs is 100. The corresponding tool is also freely available for download."
],
[
"Our experimental results are shown in Table TABREF21. The first half of the table contains results for task-oriented dialogue with the Sequicity framework with two scenarios for training data preparation. For each experiment, we run our models for 3 times and their scores are averaged as the final score. The mixed training scenario performs the mixing of both the training data, development data and the test data as described in the previous subsection. The non-mixed training scenario performs the mixing only on the development and test data, keeps the training data unmixed as in the original KVRET dataset. As in the Sequicity framework, we report entity match rate, BLEU score and Success F1 score. Entity match rate evaluates task completion, it determines if a system can generate all correct constraints to search the indicated entities of the user. BLEU score evaluates the language quality of generated responses. Success F1 balances the recall and precision rates of slot answers. For further details on these metrics, please refer to BIBREF8.",
"In the first series of experiments, we evaluate the Sequicity framework on different mixing scenarios and different recurrent units (GRU or RNN), on two mixing methods (sequential turn or random turn), as described previously. We see that when the training data is kept unmixed, the match rates are better than those of the mixed training data. It is interesting to note that the GRU unit is much more sensitive with mixed data than the simple RNN unit with the corresponding absolute point drop of about 10%, compared to about 3.5%. However, the entity match rate is less important than the Success F1 score, where the GRU unit outperforms RNN in both sequential turn and random turn by a large margin. It is logical that if the test data are mixed but the training data are unmixed, we get lower scores than when both the training data and test data are mixed. The GRU unit is also better than the RNN unit on response generation in terms of BLEU scores.",
"We also see that the task-oriented dialogue system has difficulty running on mixed-domain dataset; it achieves only about 75.62% of Success F1 in comparison to about 81.1% (as reported in the Sequicity paper, not shown in our table). Appendix SECREF5 shows some example dialogues generated automatically by our implemented system.",
"In the second series of experiments, we evaluate the belief tracking components of two systems, the specialized multi-domain belief tracker and the Sequicity bspan component. As shown in the lower half of the Table TABREF21, Sequicity capability of belief tracking is much worse than that of the multi-domain belief tracker. The slot accuracy gap between the tools is about 21.6%, the value accuracy gap is about 34.4%; that is a large average gap of 28% of accuracy. This result suggests a future work on combining a specialized belief tracking module with an end-to-end task-oriented dialogue system to improve further the performance of the overall dialogue system."
],
[
"In this subsection, we present an example of erroneous mixed dialogue with multple turns. Table TABREF23 shows a dialogue in the test set where wrong generated responses of the Sequicity system are marked in bold font.",
"In the first turn, the system predicts incorrectly the bspan, thus generates wrong slot values (heavy traffic and Pizza Hut). The word Pizza Hut is an arbitrary value selected by the system when it cannot capture the correct value home in the bspan. In the second turn, the machine is not able to capture the value this_week. This failure does not manifest immediately at this turn but it is accumulated to make a wrong answer at the third turn (monday instead of this_week).",
"The third turn is of domain weather and the fourth turn is switched to domain POI. The bspan value cleveland is retained through cross domain, resulting in an error in the fourth turn, where cleveland is shown instead of home. This example demonstrates a weakness of the system when being trained on a mixed-domain dataset. In the fifth turn, since the system does not recognize the value fastest in the bspan, it generates a random and wrong value moderate traffic. Note that the generated answer of the sixth turn is correct despite of the wrong predicted bspan; however, it is likely that if the dialogue continues, this wrong bspan may result in more answer mistakes. In such situations, multi-domain belief tracker usually performs better at bspan prediction."
],
[
"We have presented the problem of mixed-domain task-oriented dialogue and its empirical results on two datasets. We employ two state-of-the-art, publicly available tools, one is the Sequicity framework for task-oriented dialogue, and another is the multi-domain belief tracking system. The belief tracking capability of the specialized system is much better than that of the end-to-end system. We also show the difficulty of task-oriented dialogue systems on mixed-domain datasets through two series of experiments. These results give some useful insights in combining the approaches to improve the performance of a commercial chatbot platform which is under active development in our company. We plan to extend this current research and integrate its fruitful results into a future version of the platform."
],
[
"The following is three example dialogues generated by our system. The first dialogue is in single-domain.",
"",
"The next two dialogues are in mixed-domains.",
"",
""
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Sequicity",
"Methodology ::: Multi-domain Dialogue State Tracking",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Experimental Settings",
"Experiments ::: Results",
"Experiments ::: Error Analysis",
"Conclusion",
"Example Dialogues"
]
} | {
"answers": [
{
"annotation_id": [
"af61fddeb75c99643d9e84e20b30cdde93fb4306"
],
"answer": [
{
"evidence": [
"Our experimental results are shown in Table TABREF21. The first half of the table contains results for task-oriented dialogue with the Sequicity framework with two scenarios for training data preparation. For each experiment, we run our models for 3 times and their scores are averaged as the final score. The mixed training scenario performs the mixing of both the training data, development data and the test data as described in the previous subsection. The non-mixed training scenario performs the mixing only on the development and test data, keeps the training data unmixed as in the original KVRET dataset. As in the Sequicity framework, we report entity match rate, BLEU score and Success F1 score. Entity match rate evaluates task completion, it determines if a system can generate all correct constraints to search the indicated entities of the user. BLEU score evaluates the language quality of generated responses. Success F1 balances the recall and precision rates of slot answers. For further details on these metrics, please refer to BIBREF8."
],
"extractive_spans": [
"entity match rate",
"BLEU score",
"Success F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"As in the Sequicity framework, we report entity match rate, BLEU score and Success F1 score. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"746f575b6c2c95a958b4f7d721222927bc094d0f"
],
"answer": [
{
"evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12."
],
"extractive_spans": [],
"free_form_answer": "3029",
"highlighted_evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments.",
"There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"80b456e9541dcdfb561a6a0f4e806a685bee97dd"
],
"answer": [
{
"evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12."
],
"extractive_spans": [
"KVRET"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments.",
"This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"fc426c6907468f5217077beff8da0541412054b3"
],
"answer": [
{
"evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12."
],
"extractive_spans": [
"calendar",
"weather",
"navigation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available dataset KVRET BIBREF5 in our experiments. ",
"This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What were the evaluation metrics used?",
"What is the size of the dataset?",
"What multi-domain dataset is used?",
"Which domains did they explored?"
],
"question_id": [
"d1909ce77d09983aa1b3ab5c56e2458caefbd442",
"fc3f0eb297b2308b99eb4661a510c9cdbb6ffba2",
"27c1c678d3862c7676320ca493537b03a9f0c77a",
"ccb3d21885250bdbfc4c320e99f25923896e70fa"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. Sequicity architecture.",
"Fig. 2. Multi-domain belief tracking with knowledge sharing.",
"TABLE I SOME STATISTICS OF THE DATASETS USED IN OUR EXPERIMENTS. THE ORIGINAL KVRET DATASET IS SHOWN IN THE UPPER HALF OF THE TABLE. THE MIXED DATASET IS SHOWN IN THE LOWER HALF OF THE TABLE.",
"TABLE II OUR EXPERIMENTAL RESULTS.MATCH. AND SUCC. F1 ARE ENTITY MATCH RATE AND SUCCESS F1. THE UPPER HALF OF THE TABLE SHOWS RESULTS OF TASK-ORIENTED DIALOGUE WITH THE SEQUICITY FRAMEWORK. THE LOWER HALF OF THE TABLE SHOWS RESULTS OF MULTI-DOMAIN BELIEF TRACKER.",
"TABLE III A MIXED DIALOGUE EXAMPLE IN THE TEST SET WITH ERRONEOUS GENERATED RESPONSES. THE LAST TWO COLUMNS SHOW RESPECTIVELY THE SYSTEM’S GENERATED BSPAN AND THE GOLD BSPAN OR BELIEF TRACKER."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-TableI-1.png",
"5-TableII-1.png",
"5-TableIII-1.png"
]
} | [
"What is the size of the dataset?"
] | [
[
"1909.02265-Experiments ::: Datasets-0"
]
] | [
"3029"
] | 681 |
1906.10551 | Assessing the Applicability of Authorship Verification Methods | Authorship verification (AV) is a research subject in the field of digital text forensics that concerns itself with the question, whether two documents have been written by the same person. During the past two decades, an increasing number of proposed AV approaches can be observed. However, a closer look at the respective studies reveals that the underlying characteristics of these methods are rarely addressed, which raises doubts regarding their applicability in real forensic settings. The objective of this paper is to fill this gap by proposing clear criteria and properties that aim to improve the characterization of existing and future AV approaches. Based on these properties, we conduct three experiments using 12 existing AV approaches, including the current state of the art. The examined methods were trained, optimized and evaluated on three self-compiled corpora, where each corpus focuses on a different aspect of applicability. Our results indicate that part of the methods are able to cope with very challenging verification cases such as 250 characters long informal chat conversations (72.7% accuracy) or cases in which two scientific documents were written at different times with an average difference of 15.6 years (>75% accuracy). However, we also identified that all involved methods are prone to cross-topic verification cases. | {
"paragraphs": [
[
"Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed.",
"In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics.",
"Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge.",
"The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work."
],
[
"Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes.",
"In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies."
],
[
"Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods."
],
[
"Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV.",
"In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability.",
"Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N."
],
[
"Another important property of an AV method is optimizability. We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category."
],
[
"From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us):",
"“In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 ",
"“One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 ",
"“In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 ",
"Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. However, our conducted literature research regarding existing AV approaches revealed that there are uncertainties how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us):",
"“Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 ",
"“On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 ",
"While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 .",
"A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach claimed to be a one-class classification method. CNG is intrinsic in that way that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered as a true unary AV method, according to the quoted definitions for unary classification.",
"In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 was determined on a set of problems labeled in a binary manner (Y and N) while after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf. Figure FIGREF15 ). A crucial aspect, which might have lead to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ).",
"Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 .",
"Based on the above observations, we conclude that the key requirement for judging the model category of an AV method depends solely on the aspect how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ):",
"An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 .",
"An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 .",
"An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 .",
"Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded to be unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject of the optimization procedure, the model category remains unary. The reason for this is obvious; Hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged."
],
[
"Each model category has its own implications regarding prerequisites, evaluability, and applicability.",
"One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion.",
"However, a far more important implication of unary AV approaches concerns their performance assessment. Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space.",
"Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample.",
"On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 .",
"If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable."
],
[
"In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations."
],
[
"A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed.",
"As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems.",
"As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. Moreover, we did not normalize words (for example, shorten words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style.",
"As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that both the known and unknown document INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to which extent AV methods can be fooled in cases, where the topic matches but not the authorship and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used for the resulting 100 verification problems a 40/60% hold-out split, where both training and test set are entirely disjoint."
],
[
"As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 .",
"All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD) we had to use an alternative impostors generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not flexible like using a search engine, it has one advantage that, here, it is assumed that the true author of an unknown document is not among the impostors, since in our corpora the user/author names are known beforehand."
],
[
"According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods. In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes."
],
[
"Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 :",
"The Effect of Stylistic Variation Across Large Time Spans",
"The Effect of Topical Influence",
"The Effect of Limited Text Length",
"In the following each experiment is described in detail.",
"In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus.",
"The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015).",
"The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand.",
"Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus.",
"Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insights regarding the question which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed mostly to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features.",
"In this experiment, we investigate the question if the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can lead to a serious challenge regarding the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) while at the same time differ regarding their topic. In such a case, an AV method that it focusing “too much” on the topic (for example on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic, while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 .",
"In the following we show to which extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, perform almost entirely inverse to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger is the focus of the respective AV method on the topic of the documents. Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence.",
"In our third experiment, we investigate the question how text lengths affect the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and by this to analyze the effect between the text lengths and the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods on the original test corpus.",
"As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enable a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top performing AV methods with a c@1 value INLINEFORM0 on the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents.",
"The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpora versions, where even for the corpus with the 250 characters long unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite of the fact that the method has been designed for longer texts i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus.",
"One explanation of the resilience of GLAD across the varying text lengths might be due to its decision model INLINEFORM0 (an SVM with a linear kernel) that withstands the absence of missing features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250 characters version of INLINEFORM2 .",
"Here, it can be seen that the underlying SVM model of GLAD and Unmasking is able to regulate its Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side. To gain a better picture regarding the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look on the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpora versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similar to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively) it becomes apparent that GLAD's model behaves stable, while the one of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced.",
"While inspecting the 250 characters long documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts."
],
[
"We highlighted the problem that underlying characteristics of authorship verification approaches have not been paid much attention in the past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and by this a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow to classify the model category of an AV method, which in turn influences its design and the way how it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability.",
"In an experimental setup, we applied 12 existing AV methods on three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods applicability. Our findings regarding the examined approaches can be summarized as follows: Despite of the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can be truly considered as reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (excepting Unmasking) on an impostor corpus. Here, it must be guaranteed that the true author is not among the candidates, but also that the impostor documents are suitable such that the AV task not inadvertently degenerates from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding Y/N-distribution is known beforehand in order to set the threshold. In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by the wrong threshold setting, as both tend to generate more N-predictions. From the remaining approaches, GLAD and COAV seem to be a good choice for realistic scenarios. However, the former has been shown to be more robust in regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equal in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar-threshold INLINEFORM1 . Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41 ) in order to control the topic influence in the documents.",
"As one next step, we will compile additional and larger corpora to investigate the question whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question how the results of AV methods can be interpreted in a more systematic manner, which will further influence the practicability of AV methods besides the proposed properties.",
"This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project \"DORIAN\" (Scrutinise and thwart disinformation)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Characteristics of Authorship Verification",
"Reliability (Determinism)",
"Optimizability",
"Model Category",
"Implications",
"Methodology",
"Corpora",
"Examined Authorship Verification Methods",
"Performance Measures",
"Experiments",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"ee03f7af0b4987771e0a53b47e6df22c43038dc0"
],
"answer": [
{
"evidence": [
"A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bb93471d6d2bbd108c18a5902cf6245c03dde2c2"
],
"answer": [
{
"evidence": [
"The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand."
],
"extractive_spans": [
"Caravel, COAV and NNCD"
],
"free_form_answer": "",
"highlighted_evidence": [
"The top performing approaches Caravel, COAV and NNCD deserve closer attention."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"98800acffd718241859030133e25d96765a38c4f"
],
"answer": [
{
"evidence": [
"As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems.",
"As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. Moreover, we did not normalize words (for example, shorten words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style.",
"As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that both the known and unknown document INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to which extent AV methods can be fooled in cases, where the topic matches but not the authorship and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used for the resulting 100 verification problems a 40/60% hold-out split, where both training and test set are entirely disjoint."
],
"extractive_spans": [
"80 excerpts from scientific works",
"collection of 1,645 chat conversations",
"collection of 200 aggregated postings"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform.",
"As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal.",
"As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7471625cd84416b1da7e795334fcb9cc5227f965"
],
"answer": [
{
"evidence": [
"As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems."
],
"extractive_spans": [
" restrict the content of each text to the abstract and conclusion of the original work",
"considered other parts of the original works such as introduction or discussion sections",
"extracted text portions are appropriate for the AV task, each original work was preprocessed manually",
"removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms"
],
"free_form_answer": "",
"highlighted_evidence": [
"Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a6a3c24f448a5cbdb28cb50807e690c5c3701d72"
],
"answer": [
{
"evidence": [
"As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 .",
"FLOAT SELECTED: Table 2: All 12 AVmethods, classified according to their properties."
],
"extractive_spans": [],
"free_form_answer": "MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD",
"highlighted_evidence": [
"The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 .",
"FLOAT SELECTED: Table 2: All 12 AVmethods, classified according to their properties."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"Which is the best performing method?",
"What size are the corpora?",
"What is a self-compiled corpus?",
"What are the 12 AV approaches which are examined?"
],
"question_id": [
"61b0db2b5718d409b07f83f912bad6a788bfee5a",
"b217d9730ba469f48426280945dbb77542b39183",
"8c0846879771c8f3915cc2e0718bee448f5cb007",
"3fae289ab1fc023bce2fa4f1ce4d9f828074f232",
"863d5c6305e5bb4b14882b85b6216fa11bcbf053"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The three possible model categories of authorship verification approaches. Here,U refers to the instance (for example, a document or a feature vector) of the unknown author. A is the target class (known author) and ¬A the outlier class (any other possible author). In the binary-intrinsic case, ρ denotes the verification problem (subject of classification), and Y and N denote the regions of the problem feature space where, according to a training corpus, the authorship holds or not.",
"Table 1: All training and testing corpora used in our experiments. Here, |C | denotes the number of verification problems in each corpus C and |DA | the number of the known documents. The average character length of the unknown document DU and the known document DA (concatenation of all known documents DA) is denoted by avg_len(DU ) and avg_len(DA), respectively.",
"Table 2: All 12 AVmethods, classified according to their properties.",
"Table 3: Evaluation results for the test corpus CDBLP in terms of c@1 and AUC. TP, FN, FP and TN represent the four confusion matrix outcomes, while UP denotes the number of unanswered verification problems. Note that AUC scores for the non-optimizable and unary AV methods are grayed out.",
"Table 4: Evaluation results for the test corpus CReddit.",
"Figure 2: Evaluation results for the four versions of the test corpus CPerv in terms of c@1.",
"Table 5: Evaluation results for the test corpus CPerv.",
"Table 6: Confusionmatrix outcomes for the 250 characters version of the test corpus CPerv.",
"Figure 3: ROC curves forGLAD,Caravel andCOAV (applied on the four corpora versions of CPerv). The circles and triangles depict the current and maximum achievable c@1 values on the corpus, respectively. Note that Caravel’s thresholds always lie along the EER-line."
],
"file": [
"5-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure2-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"10-Figure3-1.png"
]
} | [
"What are the 12 AV approaches which are examined?"
] | [
[
"1906.10551-6-Table2-1.png",
"1906.10551-Examined Authorship Verification Methods-0"
]
] | [
"MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD"
] | 682 |
1704.02385 | A Trolling Hierarchy in Social Media and A Conditional Random Field For Trolling Detection | An ever-increasing number of social media websites, electronic newspapers and Internet forums allow visitors to leave comments for others to read and interact with. This exchange is not free from participants with malicious intentions, who do not contribute to the written conversation. Across different communities, users adopt strategies to handle such users. In this paper we present a comprehensive categorization of the trolling phenomenon, inspired by politeness research, and propose a model that jointly predicts four crucial aspects of trolling: intention, interpretation, intention disclosure and response strategy. Finally, we present a new annotated dataset containing excerpts of conversations involving trolls and their interactions with other users, which we hope will be a useful resource for the research community. | {
"paragraphs": [
[
"In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. One of these forms of interaction is the presence of comments sections that are found in many websites. The comments section allows visitors, authenticated in some cases and unauthenticated in others, to leave a message for others to read. This is a type of multi-party asynchronous conversation that offers interesting insights: one can learn what is the commenting community thinking about the topic being discussed, their sentiment, recommendations among many other. There are some comment sections in which commentators are allowed to directly respond to others, creating a comment hierarchy. These kind of written conversations are interesting because they bring light to the types interaction between participants with minimal supervision. This lack of supervision and in some forums, anonymity, give place to interactions that may not be necessarily related with the original topic being discussed, and as in regular conversations, there are participants with not the best intentions. Such participants are called trolls in some communities.",
"Even though there are some studies related to trolls in different research communities, there is a lack of attention from the NLP community. We aim to reduce this gap by presenting a comprehensive categorization of trolling and propose two models to predict trolling aspects. First, we revise the some trolling definitions: “Trolling is the activity of posting messages via communication networks that are in tended to be provocative, offensive or menacing” by BIBREF0 , this definition considers trolling from the most negative perspective where a crime might be committed. In a different tone, BIBREF1 provides a working definition for troll: “A troller in a user in a computer mediated communication who constructs the identity of sincerely wishing to be part of the group in question, including professing, or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. These definitions inspire our trolling categorization, but first, we define a trolling event: a comment in a conversation whose intention is to cause conflict, trouble; be malicious, purposely seek or disseminate false information or advice; give a dishonest impression to deceive; offend, insult, cause harm, humiliation or aggravation. Also, a troll or troller is the individual that generates a trolling event, trolling is the overall phenomena that involves a troll, trolling event and generates responses from others. Any participant in a forum conversation may become a troll at any given point, as we will see, the addressee of a trolling event may choose to reply with a trolling comment or counter-trolling, effectively becoming a troll as well.",
"We believe our work makes four contributions. First, unlike previous computational work on trolling, which focused primarily on analyzing the narrative retrospectively by the victim (e.g., determining the trolling type and the role played by each participant), we study trolling by analyzing comments in a conversation, aiming instead to identify trollers, who, once identified, could be banned from posting. Second, while previous work has focused on analyzing trolling from the troll's perspective, we additionally model trolling from the target's perspective, with the goal understanding the psychological impact of a trolling event on the target, which we believe is equally important from a practical standpoint. Third, we propose a comprehensive categorization of trolling that covers not only the troll's intention but also the victim and other commenters' reaction to the troll's comment. We believe such categorization will provide a solid basis on which future computational approaches to trolling can be built. Finally, we make our annotated data set consisting of 1000 annotated trolling events publicly available. We believe that our data set will be a valuable resource to any researcher interested in the computational modeling of trolling."
],
[
"Based on the previous definitions we identify four aspects that uniquely define a trolling event-response pair: 1) Intention: what is the author of the comment in consideration purpose, a) trolling, the comment is malicious in nature, aims to disrupt, annoy, offend, harm or spread purposely false information, b) playing the comment is playful, joking, teasing others without the malicious intentions as in a), or c) none, the comment has no malicious intentions nor is playful, it is a simple comment. 2) Intention Disclosure: this aspect is meant to indicate weather a trolling comment is trying to deceive its readers, the possible values for this aspect are a) the comment's author is a troll and is trying to hide its real intentions, and pretends to convey a different meaning, at least temporarily, b) the comment's author is a troll but is clearly exposing its malicious intentions and c) the comment's author is not a troll, therefore there are not hidden or exposed malicious or playful intentions. There are two aspects defined on the comments that direct address the comment in consideration, 3) Intentions Interpretation: this aspect refers to the responder's understanding of the parent's comment intentions. There possible interpretations are the same as the intentions aspect: trolling, playing or none. The last element, is the 4) Response strategy employed by the commentators directly replaying to a comment, which can be a trolling event. The response strategy is influenced directly by the responder's interpretation of the parent's comment intention. We identify 14 possible response strategies. Some of these strategies are tied with combinations of the three other aspects. We briefly define each of them in the appendix.",
"Figure FIGREF2 shows this categories as a hierarchy. Using this trolling formulation, the suspected troll event and the responses are correlated and one cannot independently name the strategy response without learning about the other three aspects. This is challenging prediction problem that we address in this work."
],
[
"To illustrate this hierarchy, we present some examples. These are excerpts from original conversations; the first comment, generated by author C0, on each excerpt is given as a minimal piece of context, the second comment, by the author C1 in italics, is the comment suspected to be a trolling event. The rest of the comments, are all direct responses to the suspected trolling comment. When the response author “name” is the same as the first comment, it indicates that the that same individual also replied to the suspected troll.",
"Example 1.",
"",
"[noitemsep,nolistsep] ",
"My friend who makes $20,000 a year leased a brand new Chevy Spark EV, for only $75 per month and he got a California rebate for driving an electric car. Much cheaper than buying older car which usually require heavy upkeep due to its mileage. At this point I think you're just trolling.",
"[noitemsep,nolistsep]",
"IYour friend has a good credit score, which can't be said about actual poor people. Did you grow up sheltered by any chance?",
"[noitemsep,nolistsep]",
"Judging by your post history, you're indeed a troll. Have a good one.",
"In this example, when C1 asks “Did you grow up sheltered by any chance?\", her intention is to denigrate or be offensive, and it is not hiding it, instead he is clearly disclosing her trolling intentions. In C0's response, we see that has came to the conclusion that C1 is trolling and his response strategy is frustrate the trolling event by ignoring the malicious troll's intentions.",
"Example 2.",
"",
"[noitemsep,nolistsep] ",
"What do you mean look up ?:( I don't see anything lol",
"[noitemsep,nolistsep]",
"Look up! Space is cool! :)",
"[noitemsep,nolistsep]",
"why must you troll me :(",
"Keep going, no matter how many times you say it, he will keep asking",
"In this example, we hypothesize that C0 is requesting some information and C1 is given an answer that is unfit to C0's' request. We do so based on the last C0's comment; CO is showing disappointment or grievance. Also, we deduct that C1 is trying to deceive C0, therefore, C1's comment is a trolling event. This is a trolling event whose intention is to purposely convey false information, and that hiding its intentions. As for the response, in the last C0's comment, he has finally realized or interpreted that C1's real intentions are deceiving and since his comment shows a “sad emoticon” his reply is emotionally, with aggravation, so we say that CO got engaged. C2 on the other hand, acknowledges the malicious and play along with the troll.",
"Given these examples, address the task of predicting the four aspects of a trolling event based on the methodology described in the next section."
],
[
"We collected all available comments in the stories from Reddit from August 2015. Reddit is popular website that allows registered users (without identity verification) to participate in forums specific a post or topic. These forums are of they hierarchical type, those that allow nested conversation, where the children of a comment are its direct response. To increase recall and make the annotation process feasible we created an inverted index with Lucene and queried for comments containing the word troll with an edit distance of 1, to include close variations of this word. We do so inspired by the method by BIBREF2 to created a bullying dataset, and because we hypothesize that such comments will be related or involved in a trolling event. As we observed in the dataset, people use the word troll in many different ways, sometimes it is to point out that some used is indeed trolling him or her or is accusing someone else of being a troll. Other times, people use the term, to express their frustration or dislike about a particular user, but there is no trolling event. Other times, people simple discuss about trolling and trolls, without actually participating or observing one directly. Nonetheless, we found that this search produced a dataset in which 44.3 % of the comments directly involved a trolling event. Moreover, as we exposed our trolling definition, it is possible for commentators in a conversation to believe that they are witnessing a trolling event and respond accordingly even where there is none. Therefore, even in the comments that do not involve trolling, we are interested in learning what triggers users interpretation of trolling where it is not present and what kind of response strategies are used. We define as a suspected trolling event in our dataset a comment in which at least one of its children contains the word troll.",
"With the gathered comments, we reconstructed the original conversation trees, from the original post, the root, to the leaves, when they were available and selected a subset to annotated. For annotation purposes, we created snippets of conversations as the ones shown in Example 1 and Example 2 consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event. We added an extra constraint that the parent of the suspected trolling event should also be part of the direct responses, we hypothesize that if the suspected trolling event is indeed trolling, its parent should be the object of its trolling and would have a say about it. We recognize that this limited amount of information is not always sufficient to recover the original message conveyed by all of the participants in the snippet, and additional context would be beneficial. However, the trade off is that snippets like this allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a “turker” to work on an individual snippet in exchange for a small pay, and expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described. Before annotating, we set up a qualification test along with borderline examples to guide them in process and align them with our criteria. The qualification test turned out to be very selective since only 5% of all of the turkers that attempted it passed the exam. Our dataset consists of 1000 conversations with 5868 sentences and 71033 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF24 in the column “Size”.",
"Inter-Annotator Agreement. Due to the subjective nature of the task we did not expected perfect agreement. However, we obtained substantial inter-annotator agreement as we measured the fleiss-kappa statistic BIBREF3 for each of the trolling aspects: Intention: 0.578, Intention Disclosure: 0.556, Interpretation: 0.731 and Response 0.632. After inspecting the dataset, we manually reconciled aspects of the threads that found no majority on the turkers annotation and verified and corrected consistency on the four tasks on each thread."
],
[
"In this section we propose to solve the following problem: given a comment in a conversation, suspected to a trolling event, it's parent comment and all it's direct responses, we aim to predict the suspected comment I: intention, its D: intention disclosure and from the responses point of view, for each response comment the R: interpretation of the suspected troll comment's intentions, and identify its B: response strategy. This problem can be seen as a multi-task prediction. To do so, we split the dataset into training and testing sets using a 5-fold cross validation setup."
],
[
"For prediction we define two sets of features, a basic and an enhanced dataset, extracted from each of the comments in the dataset. The features are described below.",
"N-gram features. We encode each unigram and bigram collected from the training comments a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF2 . To extract these features we used the most current version of the Stanford CoreNLP BIBREF4 . Each token's",
"Lemmas as in BIBREF5 as a binary feature.",
"Harmful Vocabulary. In their research on bullying BIBREF6 identified a small set of words that are highly offensive. We encode them as well as binary features.",
"Emotions Synsets. As in BIBREF5 we extracted all lemmas associated with each of Synsets extracted from WordNet BIBREF7 from these emotions: anger, embarrassment, empathy, fear, pride, relief and sadness. As well all the synonyms from these emotions extracted from the dictionary. Also,",
"Emoticons. Reddit's comments make extensive use of emoticons, we argue that some emoticons are specially used in trolling events and to express a variety of emotions, which we hypothesize would be useful to identify a comments intention, interpretation and response. For that we use the emoticon dictionary BIBREF8 and we set a binary feature for each emoticon that is found in the dictionary.",
"Sentiment Polarity. Using a similar idea, we hypothesize that the overall comment emotion would be useful to identify the response and intention in a trolling event. So, we apply the Vader Sentiment Polarity Analyzer BIBREF9 and include a four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.",
"Subjectivity Lexicon. From the MPQA Subjective Lexicon BIBREF10 we include all tokens that are found in the lexicon as binary features. This lexicon was created from a news domains, so the words in it don't necessarily align with the informal vocabulary used in Reddit, but, there are serious Reddit users that use proper language and formal constructions. We believe that these features will allow us to discriminate formal comments from being potentially labeled as trolling events, which tend to be vulgar.",
"Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature for each word or short phrase in a comment if it appears in the swearing dictionary.",
"Framenet. Following BIBREF11 use of FrameNet, we apply the Semaphore Parser BIBREF12 to each sentence in every comment in the training set, and construct three different binary features: every frame name that is present in the sentence, the frame name a the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We argue that some frames are especially interesting from the trolling perspective. For example, the frame “Deception_success” precisely models one of the trolling models, and we argue that these features will be particularly to identify trolling events in which semantic and not just syntactic information is necessary.",
"Politeness Queues. BIBREF13 identified queues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming troll and engaged or emotional responses would use impolite queues. On the contrary, neutralizing and frustrating responses to troll avoid falling in confrontation and their vocabulary tends to be more polite. So use these queues as binary features as they appear in the comments in consideration."
],
[
"The most naïve approach is to consider each of the four tasks as an independent classification problem. Such system would be deprived from the other's tasks information that we've mentioned is strictly necessary to make a correct prediction of the response strategy. Instead, as our baseline we follow a pipeline approach, using the tasks oder: I, D, R and B, so that each of the subsequent subtasks' feature set is extended with a feature per each of previously computed subtasks. We argue that this setup is a competitive baseline, as can be verified in the results table TABREF24 . For the classifier in the pipeline approach we choose a log-linear model, a logistic regression classifier. In addition to logistic regression, we tried the generative complement of logistic regression, naïve bayes and max-margin classifier, a support vector machine, but their performance was not superior to the logistic regression. It is noteworthy to mention that the feature set used for the intention predict is the combined features sets of the suspected troll comment as well as its parent. We do so in all of our experiments the learner can take advantage of the conversation context."
],
[
"The nature of this problem makes the use of a joint model a logical choice. Among different options for joint inference, we choose a (conditional) probabilistic graphical model (henceforth PGM) BIBREF15 because, in contrast to ILP formulations, has the ability to learn parameters and not just impose hard constraints. Also, compared to Markov Logic Networks BIBREF16 , a relatively recent formulation that combines logic and Markov Random Fields, PGMs in practice have proved to be more scalable, even though, inference in general models is shown to be intractable. Finally, we are also interested in choosing a PGM because it allow to directly compare the strength of joint inference with the baseline, because the our model is a collection of logistic regressors trained simultaneously.",
"A conditional random field factorizes the conditional probability distribution over all possible values of the query variables in the model, given a set of observations as in equation EQREF22 . In our model, the query variables are the four tasks we desire to predict, INLINEFORM0 and the observations is their combined feature sets INLINEFORM1 . Each of the factors INLINEFORM2 in this distribution is a log-linear model as in equation EQREF23 and represents the probability distribution of the clique of variables INLINEFORM3 in it, given the observation set INLINEFORM4 . This is identical to the independent logistic regression model described in the baseline, except for the fact that all variables or tasks are consider a the same time. To do so, we add additional factors that connect task variables among them, permitting the flow of information of one task to the other.",
"Specifically, our model represent each task with a random variable, shown in figure FIGREF15 (left), represented by the circles. The plate notation that surrounds variables INLINEFORM0 and INLINEFORM1 indicates that there will as many variables INLINEFORM2 and INLINEFORM3 and edges connecting them to INLINEFORM4 as the number of responses in the problem snippet. The edges connecting INLINEFORM5 and INLINEFORM6 with INLINEFORM7 attempts to model influence of these two variables on the response, and how this information is passed along to the response strategy variable INLINEFORM8 . Figure FIGREF15 (right) explicitly represents the cliques in the underlying factor graph. We can see that there are unary factors, INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 , that model the influence of the observation features over their associated variables, just as the logistic regression model does. Factors INLINEFORM13 models the interaction between variables INLINEFORM14 and INLINEFORM15 , INLINEFORM16 the interaction between variables INLINEFORM17 and INLINEFORM18 and INLINEFORM19 models the interactions between variables INLINEFORM20 and INLINEFORM21 , using a log-linear model over the possible values of the pair of variables in that particular clique.",
"Due to the size of the model, we are able to perform exact inference at train and test time. For parameter learning we employ limited memory lbfgs optimizer BIBREF17 as we provide the cost function and gradient based on the equations described in BIBREF18 .",
"2 pass Model A hybrid mode that we experiment with is model that performs joint inference on three tasks: I: intention, D: intention disclosure and R: responders' intention interpretation. The remaining task B: response strategy is performed in a second step, with the input the other three tasks. We do so because we observed in our experiments that the close coupling between the first three tasks allow them to perform better independently of the response strategy, as we will elaborate in the results section. DISPLAYFORM0 DISPLAYFORM1 "
],
[
"We perform 5-fold cross validation on the dataset. We use the first fold to tune the parameters and the remaining four folds to report results. The system performance is measured using precision, recall and F-1 as shown in table TABREF24 . The left side of the table, reports results obtained using the basic feature set, while the right side does so on the enhanced feature set. In order to maintain consistency folds are created based on the threads or snippets and for the case of the baseline system, all instances in the particular fold for task in consideration are considered independent of each other. On the table, rows show the classes performance for each of the tasks, indicated by a heard with the task name. For the response strategy we present results for those class values that are at least 5% of the total distribution, we do so, because the number of labeled instances for this classes is statistically insignificant compared to the majority classes."
],
[
"From the result table TABREF24 , we observe that hybrid model significantly outperform the baseline, by more than 20 points in intention and intention disclosure prediction. For the response strategy, it is clear that none of the systems offer satisfying results; this showcases the difficult of such a large number of classes. Nonetheless, the hybrid model outperforms the fully joint model and baseline in all but one the response strategy classes. However, the differences are far less impressive as in the other tasks. It is surprisingly; that the full joint model did not offered the best performance. One of the reasons behind this is that intention, intentions disclosure and interpretation tasks are hurt by the difficulty of learning parameters that maximize the response strategy, this last task drags the other three down in performance. Another reason is that, the features response strategy is not informative enough to learn the correct concept, and due to the joint inference process, all tasks receive a hit. Also, it is counter-intuitive that the augmented set of features did not outperform in all tasks but in intentions disclosure and interpretation, and just by a small margin. A reason explaining this unexpected behavior is that the majority of enhanced features are already represented in the basic feature set by means of the unigrams and bigrams, and the Framenet and Sentiment features are uninformative or redundant. Lastly, we observe that for interpretation category, none of systems were able to predict the “playing” class. This is because of the relative size of the number of instances annotated with that value, 1% of the entire dataset. We hypothesize those instances labeled by the annotators, of which a majority agreed on, incorrectly selected the playing category instead of the trolling class, and that, at the interpretation level, one can only expect to reliably differentiate between trolling and trolling."
],
[
"In this section, we discuss related work in the areas of trolling, bullying and politeness, as they intersect in their scope and at least partially address the problem presented in this work.",
" BIBREF19 address the problem of identifying manipulation trolls in news community forums. The major difference with this work is that all their predictions are based on meta-information such as number of votes, dates, number of comments and so on. There is no NLP approach to the problem and their task is limited to identifying trolls. BIBREF0 and BIBREF20 elaborate a deep description of the trolls personality, motivations, effects on the community that trolls interfere and the criminal and psychological aspects of trolls. Their main focus are flaming trolls, but have no NLP insights do not propose and automated prediction tasks as in this work. In a networks related framework BIBREF21 and BIBREF22 present a methodology to identify malicious individuals in a network based solely on the network's properties. Even though they offer present and evaluate a methodology, their focus is different from NLP. BIBREF23 proposes a method that involves NLP components, but fails to provide a evaluation of their system. Finally, BIBREF2 and BIBREF5 address bullying traces. That is self reported events of individuals describing being part of bullying events, but their focus is different from trolling event and the interactions with other participants."
],
[
"In this paper we address the under-attended problem of trolling in Internet forums. We presented a comprehensive categorization of trolling events and defined a prediction tasks that does not only considers trolling from the troll's perspective but includes the responders to the trolls comment. Also, we evaluated three different models and analyzed their successes and shortcomings. Finally we provide an annotated dataset which we hope will be useful for the research community. We look forward to investigate trolling phenomena in larger conversations, formalize the concepts of changing roles among the participants in trolling events, and improve response strategy performance."
]
],
"section_name": [
"Introduction",
"Trolling Categorization",
"Conversations Excerpts Examples",
"Corpus and Annotations",
"Trolling Events Prediction",
"Feature Set",
"Baseline System",
"Joint Models",
"Evaluation and Results",
"Results Discussion",
"Related Work",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"74782daa65019ee4208f68fd0fcb74687f4caea5"
],
"answer": [
{
"evidence": [
"With the gathered comments, we reconstructed the original conversation trees, from the original post, the root, to the leaves, when they were available and selected a subset to annotated. For annotation purposes, we created snippets of conversations as the ones shown in Example 1 and Example 2 consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event. We added an extra constraint that the parent of the suspected trolling event should also be part of the direct responses, we hypothesize that if the suspected trolling event is indeed trolling, its parent should be the object of its trolling and would have a say about it. We recognize that this limited amount of information is not always sufficient to recover the original message conveyed by all of the participants in the snippet, and additional context would be beneficial. However, the trade off is that snippets like this allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a “turker” to work on an individual snippet in exchange for a small pay, and expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described. Before annotating, we set up a qualification test along with borderline examples to guide them in process and align them with our criteria. The qualification test turned out to be very selective since only 5% of all of the turkers that attempted it passed the exam. Our dataset consists of 1000 conversations with 5868 sentences and 71033 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF24 in the column “Size”."
],
"extractive_spans": [],
"free_form_answer": "Annotation was done with the help of annotators from Amazon Mechanical Turk on snippets of conversations",
"highlighted_evidence": [
"For annotation purposes, we created snippets of conversations as the ones shown in Example 1 and Example 2 consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event.",
"However, the trade off is that snippets like this allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a “turker” to work on an individual snippet in exchange for a small pay, and expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"cbc0387f96782d9d4d70abd866ef2a4441c8b469"
],
"answer": [
{
"evidence": [
"We collected all available comments in the stories from Reddit from August 2015. Reddit is popular website that allows registered users (without identity verification) to participate in forums specific a post or topic. These forums are of they hierarchical type, those that allow nested conversation, where the children of a comment are its direct response. To increase recall and make the annotation process feasible we created an inverted index with Lucene and queried for comments containing the word troll with an edit distance of 1, to include close variations of this word. We do so inspired by the method by BIBREF2 to created a bullying dataset, and because we hypothesize that such comments will be related or involved in a trolling event. As we observed in the dataset, people use the word troll in many different ways, sometimes it is to point out that some used is indeed trolling him or her or is accusing someone else of being a troll. Other times, people use the term, to express their frustration or dislike about a particular user, but there is no trolling event. Other times, people simple discuss about trolling and trolls, without actually participating or observing one directly. Nonetheless, we found that this search produced a dataset in which 44.3 % of the comments directly involved a trolling event. Moreover, as we exposed our trolling definition, it is possible for commentators in a conversation to believe that they are witnessing a trolling event and respond accordingly even where there is none. Therefore, even in the comments that do not involve trolling, we are interested in learning what triggers users interpretation of trolling where it is not present and what kind of response strategies are used. We define as a suspected trolling event in our dataset a comment in which at least one of its children contains the word troll."
],
"extractive_spans": [
"Reddit"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected all available comments in the stories from Reddit from August 2015. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"how was annotation done?",
"what is the source of the new dataset?"
],
"question_id": [
"37c7c62c9216d6cf3d0858cf1deab6db4b815384",
"539eb559744641e6a4aefe267cbc4c79e2bcceae"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Trolling categorization based on four aspects: Comment’s Intention and Intentions Disclosure, and Response’s Interpretation and Strategy",
"Figure 2: Trolling tasks modeled as a (conditional) probabilistic graphical model (left). Factor graph showing all cliques or direct interactions in the model (right). R is the number of direct responses to the suspected trolling comment.",
"Table 1: Prediction Results for the four aspects of trolling: Intention, Intentions Disclosure, Interpretation, and Response strategy. Three models are evaluated: a logistic regression classifier: Baseline, a four-tasks CRF: Full Joint, and a two steps process: three-tasks CRF followed by the Response Strategy prediction tasks given the the outcome of the CRF"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"8-Table1-1.png"
]
} | [
"how was annotation done?"
] | [
[
"1704.02385-Corpus and Annotations-1"
]
] | [
"Annotation was done with the help of annotators from Amazon Mechanical Turk on snippets of conversations"
] | 683 |
1708.01776 | e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations | In this paper we present a new dataset and user simulator e-QRAQ (explainable Query, Reason, and Answer Question) which tests an Agent's ability to read an ambiguous text; ask questions until it can answer a challenge question; and explain the reasoning behind its questions and answer. The User simulator provides the Agent with a short, ambiguous story and a challenge question about the story. The story is ambiguous because some of the entities have been replaced by variables. At each turn the Agent may ask for the value of a variable or try to answer the challenge question. In response the User simulator provides a natural language explanation of why the Agent's query or answer was useful in narrowing down the set of possible answers, or not. To demonstrate one potential application of the e-QRAQ dataset, we train a new neural architecture based on End-to-End Memory Networks to successfully generate both predictions and partial explanations of its current understanding of the problem. We observe a strong correlation between the quality of the prediction and explanation. | {
"paragraphs": [
[
"In recent years deep neural network models have been successfully applied in a variety of applications such as machine translation BIBREF0 , object recognition BIBREF1 , BIBREF2 , game playing BIBREF3 , dialog BIBREF4 and more. However, their lack of interpretability makes them a less attractive choice when stakeholders must be able to understand and validate the inference process. Examples include medical diagnosis, business decision-making and reasoning, legal and safety compliance, etc. This opacity also presents a challenge simply for debugging and improving model performance. For neural systems to move into realms where more transparent, symbolic models are currently employed, we must find mechanisms to ground neural computation in meaningful human concepts, inferences, and explanations. One approach to this problem is to treat the explanation problem itself as a learning problem and train a network to explain the results of a neural computation. This can be done either with a single network learning jointly to explain its own predictions or with separate networks for prediction and explanation. Regardless, the availability of sufficient labelled training data is a key impediment. In previous work BIBREF5 we developed a synthetic conversational reasoning dataset in which the User presents the Agent with a simple, ambiguous story and a challenge question about that story. Ambiguities arise because some of the entities in the story have been replaced by variables, some of which may need to be known to answer the challenge question. A successful Agent must reason about what the answers might be, given the ambiguity, and, if there is more than one possible answer, ask for the value of a relevant variable to reduce the possible answer set. In this paper we present a new dataset e-QRAQ constructed by augmenting the QRAQ simulator with the ability to provide detailed explanations about whether the Agent's response was correct and why. Using this dataset we perform some preliminary experiments, training an extended End-to-End Memory Network architecture BIBREF6 to jointly predict a response and a partial explanation of its reasoning. We consider two types of partial explanation in these experiments: the set of relevant variables, which the Agent must know to ask a relevant, reasoned question; and the set of possible answers, which the Agent must know to answer correctly. We demonstrate a strong correlation between the qualities of the prediction and explanation."
],
[
"Current interpretable machine learning algorithms for deep learning can be divided into two approaches: one approach aims to explain black box models in a model-agnostic fashion BIBREF7 , BIBREF8 ; another studies learning models, in particular deep neural networks, by visualizing for example the activations or gradients inside the networks BIBREF9 , BIBREF10 , BIBREF11 . Other work has studied the interpretability of traditional machine learning algorithms, such as decision trees BIBREF12 , graphical models BIBREF13 , and learned rule-based systems BIBREF14 . Notably, none of these algorithms produces natural language explanations, although the rule-based system is close to a human-understandable form if the features are interpretable. We believe one of the major impediments to getting NL explanations is the lack of datasets containing supervised explanations.",
"Datasets have often accelerated the advance of machine learning in their perspective areas BIBREF15 , including computer vision BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , natural language BIBREF21 , BIBREF22 , BIBREF23 , reasoning BIBREF24 , BIBREF25 , BIBREF5 , etc. Recently, natural language explanation was added to complement existing visual datasets via crowd-sourcing labeling BIBREF26 . However, we know of no question answering or reasoning datasets which offer NL explanations. Obviously labeling a large number of examples with explanations is a difficult and tedious task – and not one which is easily delegated to an unskilled worker. To make progress until such a dataset is available or other techniques obviate its need, we follow the approach of existing work such as BIBREF24 , BIBREF4 , and generate synthetic natural language explanations from a simulator."
],
[
"A QRAQ domain, as introduced in BIBREF5 , has two actors, the User and the Agent. The User provides a short story set in a domain similar to the HomeWorld domain of BIBREF24 , BIBREF27 given as an initial context followed by a sequence of events, in temporal order, and a challenge question. The stories are semantically coherent but may contain hidden, sometimes ambiguous, entity references, which the Agent must potentially resolve to answer the question.",
"To do so, the Agent can query the User for the value of variables which hide the identity of entities in the story. At each point in the interaction, the Agent must determine whether it knows the answer, and if so, provide it; otherwise it must determine a variable to query which will reduce the potential answer set (a “relevant” variable).",
"In example SECREF1 the actors $v, $w, $x and $y are treated as variables whose value is unknown to the Agent. In the first event, for example, $v refers to either Hannah or Emma, but the Agent can't tell which. In a realistic text this entity obfuscation might occur due to spelling or transcription errors, unknown descriptive references such as “Emma's sibling”, or indefinite pronouns such as “somebody”. Several datasets with 100k problems each and of varying difficulty have been released to the research community and are available for download BIBREF28 ."
],
[
"This paper's main contribution is an extension to the original QRAQ simulator that provides extensive explanations of the reasoning process required to solve a QRAQ problem. These explanations are created dynamically at runtime, in response to the Agent's actions. The following two examples illustrate these explanations, for several different scenarios:",
"The context (C), events (E), and question (Q) parts of the problem are identical to those in a QRAQ problem. In addition there is a trace of the interaction of a trained Agent (A) model with the User (U) simulator. The simulator provides two kinds of explanations in response to the Agent's query or answer. The first kind denoted “U” indicates whether the Agent's response is correct or not and why. The second kind of explanation, denoted “U INLINEFORM0 ” provides a full description of what can be inferred in the current state of the interaction. In this case the relevant information is the set of possible answers at different points in the interaction (Porch, Boudoir / Porch for Example UID13 ) and the set of relevant variables ($V0 / none for Example UID13 ).",
"In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why. Specifically, in this instance it was helpful because it enabled an inference which reduced the possible answer set (and reduced the set of relevant variables). On the other hand, in Example UID30 , we see an example of a bad query and corresponding critical explanation.",
"In general, the e-QRAQ simulator offers the following explanations to the Agent:",
"When answering, the User will provide feedback depending on whether or not the Agent has enough information to answer; that is, on whether the set of possible answers contains only one answer. If the Agent has enough information, the User will only provide feedback on whether or not the answer was correct and on the correct answer if the answer was false. If the agent does not have enough information, and is hence guessing, the User will say so and list all still relevant variables and the resulting possible answers.",
"When querying, the User will provide several kinds of feedback, depending on how useful the query was. A query on a variable not even occurring in the problem will trigger an explanation that says that the variable is not in the problem. A query on an irrelevant variable will result in an explanation showing that the story's protagonist cannot be the entity hidden by that variable. Finally, a useful (i.e. relevant) query will result in feedback showing the inference that is possible by knowing that variable's reference. This set of inference can also serve as the detailed explanation to obtain the correct answer above.",
"The e-QRAQ simulator will be available upon publication of this paper at the same location as QRAQ BIBREF28 for researchers to test their interpretable learning algorithms."
],
[
"The normal interaction flow between the User and the Agent during runtime of the simulator is shown in Figure FIGREF49 , and is - with the exception of the additional explanations - identical to the interaction flow for the original QRAQ proglems BIBREF5 . This means that the User acts as a scripted counterpart to the Agent in the simulated e-QRAQ environment. We show interaction flows for both supervised and reinforcement learning modes. Additionally, we want to point out that INLINEFORM0 in Figure FIGREF49 can be both U and U INLINEFORM1 , i.e. both the natural language explanation and the internal state explanations. Performance and accuracy are measured by the User, that compares the Agent's suggested actions and the Agent's suggested explanations with the ground truth known by the User."
],
[
"For the experiments, we use the User simulator explanations to train an extended memory network. As shown in Figure FIGREF50 , our network architecture extends the End-to-End Memory architecture of BIBREF6 , adding a two layer Multi-Layer Perceptron to a concatenation of all “hops” of the network. The explanation and response prediction are trained jointly. In these preliminary experiments we do not train directly with the natural language explanation from U, just the explanation of what can be inferred in the current state U INLINEFORM0 . In future experiments we will work with the U explanations directly.",
"Specifically, for our experiments, we provide a classification label for the prediction output generating the Agent's actions, and a vector INLINEFORM0 of the following form to the explanation output (where INLINEFORM1 is an one-hot encoding of dimensionality (or vocabulary size) INLINEFORM2 of word INLINEFORM3 , and INLINEFORM4 is the explanation set: DISPLAYFORM0 ",
"For testing, we consider the network to predict a entity in the explanation if the output vector INLINEFORM0 surpasses a threshold for the index corresponding to that entity. We tried several thresholds, some adaptive (such as the average of the output vector's values), but found that a fixed threshold of .5 works best."
],
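To make the explanation encoding and the fixed 0.5 threshold described in this section concrete, the following NumPy sketch builds a multi-hot target vector from an explanation set and recovers predicted entities from model scores; the vocabulary, explanation set, and scores are invented for illustration and are not taken from the released data:

```python
import numpy as np

# Toy vocabulary and an explanation set (e.g., the possible answers or
# relevant variables at the current turn) -- both are illustrative assumptions.
vocab = ["Hannah", "Emma", "Silvia", "$V0", "$V1", "Porch", "Boudoir"]
word_to_idx = {w: i for i, w in enumerate(vocab)}
explanation_set = {"Porch", "Boudoir"}

# Multi-hot target: the sum of one-hot encodings of the words in the set,
# matching the target vector fed to the explanation output during training.
target = np.zeros(len(vocab))
for word in explanation_set:
    target[word_to_idx[word]] = 1.0

# At test time, entities whose predicted score exceeds the 0.5 threshold
# are taken as the predicted explanation.
scores = np.array([0.1, 0.2, 0.05, 0.3, 0.1, 0.8, 0.6])  # assumed model outputs
predicted = {vocab[i] for i, s in enumerate(scores) if s > 0.5}

# Precision / recall / F1 of the predicted explanation against the target set.
tp = len(predicted & explanation_set)
precision = tp / max(len(predicted), 1)
recall = tp / max(len(explanation_set), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
print(predicted, precision, recall, f1)
```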
[
"To evaluate the model's ability to jointly learn to predict and explain its predictions we performed two experiments. First, we investigate how the prediction accuracy is affected by jointly training the network to produce explanations. Second, we evaluate how well the model learns to generate explanations. To understand the role of the explanation content in the learning process we perform both of these experiments for each of the two types of explanation: relevant variables and possible answers. We do not perform hyperparameter optimization on the E2E Memory Network, since we are more interested in relative performance. While we only show a single experimental run in our Figures, results were nearly identical for over five experimental runs.",
"The experimental results differ widely for the two kinds of explanation considered, where an explanation based on possible answers provides better scores for both experiments. As illustrated in Figure FIGREF52 , simultaneously learning possible-answer explanations does not affect prediction, while learning relevant-variable explanation learning severely impairs prediction performance, slowing the learning by roughly a factor of four. We can observe the same outcome for the quality of the explanations learned, shown in Figure FIGREF53 . Here again the performance on possible-answer explanations is significantly higher than for relevant-variable explanations. Possible-answer explanations reach an F-Score of .9, while relevant-variable explanations one of .09 only, with precision and recall only slightly deviating from the F-Score in all experiments.",
"We would expect that explanation performance should correlate with prediction performance. Since Possible-answer knowledge is primarily needed to decide if the net has enough information to answer the challenge question without guessing and relevant-variable knowledge is needed for the net to know what to query, we analyzed the network's performance on querying and answering separately. The memory network has particular difficulty learning to query relevant variables, reaching only about .5 accuracy when querying. At the same time, it learns to answer very well, reaching over .9 accuracy there. Since these two parts of the interaction are what we ask it to explain in the two modes, we find that the quality of the explanations strongly correlates with the quality of the algorithm executed by the network."
],
[
"We have constructed a new dataset and simulator, e-QRAQ, designed to test a network's ability to explain its predictions in a set of multi-turn, challenging reasoning problems. In addition to providing supervision on the correct response at each turn, the simulator provides two types of explanation to the Agent: A natural language assessment of the Agent's prediction which includes language about whether the prediction was correct or not, and a description of what can be inferred in the current state – both about the possible answers and the relevant variables. We used the relevant variable and possible answer explanations to jointly train a modified E2E memory network to both predict and explain it's predictions. Our experiments show that the quality of the explanations strongly correlates with the quality of the predictions. Moreover, when the network has trouble predicting, as it does with queries, requiring it to generate good explanations slows its learning. For future work, we would like to investigate whether we can train the net to generate natural language explanations and how this might affect prediction performance."
]
],
"section_name": [
"Introduction",
"Related Work",
"The QRAQ Dataset",
"The Dataset",
"The “interaction flow”",
"Experimental Setup",
"Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"e7acf448a61cb3f5a453f14fe6834ec0dfa31d3b"
],
"answer": [
{
"evidence": [
"We would expect that explanation performance should correlate with prediction performance. Since Possible-answer knowledge is primarily needed to decide if the net has enough information to answer the challenge question without guessing and relevant-variable knowledge is needed for the net to know what to query, we analyzed the network's performance on querying and answering separately. The memory network has particular difficulty learning to query relevant variables, reaching only about .5 accuracy when querying. At the same time, it learns to answer very well, reaching over .9 accuracy there. Since these two parts of the interaction are what we ask it to explain in the two modes, we find that the quality of the explanations strongly correlates with the quality of the algorithm executed by the network."
],
"extractive_spans": [],
"free_form_answer": "They look at the performance accuracy of explanation and the prediction performance",
"highlighted_evidence": [
"We would expect that explanation performance should correlate with prediction performance. Since Possible-answer knowledge is primarily needed to decide if the net has enough information to answer the challenge question without guessing and relevant-variable knowledge is needed for the net to know what to query, we analyzed the network's performance on querying and answering separately. The memory network has particular difficulty learning to query relevant variables, reaching only about .5 accuracy when querying. At the same time, it learns to answer very well, reaching over .9 accuracy there. Since these two parts of the interaction are what we ask it to explain in the two modes, we find that the quality of the explanations strongly correlates with the quality of the algorithm executed by the network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7508d156bb952297c013ed3d0a83d09e5e451216"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1. The User-Agent Interaction",
"In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why. Specifically, in this instance it was helpful because it enabled an inference which reduced the possible answer set (and reduced the set of relevant variables). On the other hand, in Example UID30 , we see an example of a bad query and corresponding critical explanation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1. The User-Agent Interaction",
"In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they measure correlation between the prediction and explanation quality?",
"Does the Agent ask for a value of a variable using natural language generated text?"
],
"question_id": [
"2ec97cf890b537e393c2ce4c2b3bd05dfe46f683",
"41174d8b176cb8549c2d83429d94ba8218335c84"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. The User-Agent Interaction",
"Figure 2. The modified E2E-Memory Network architecture simultaneously generating answers to the challenge question and explanations of its internal belief state, shown with four internal “hops”.",
"Figure 3. The Interaction Accuracy (over 50 epochs with 1000 problems each)",
"Figure 4. The Explanation Accuracies (over 50 epochs with 1000 problems each)"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png"
]
} | [
"How do they measure correlation between the prediction and explanation quality?"
] | [
[
"1708.01776-Results-2"
]
] | [
"They look at the performance accuracy of explanation and the prediction performance"
] | 685 |
1808.03430 | Lingke: A Fine-grained Multi-turn Chatbot for Customer Service | Traditional chatbots usually need a mass of human dialogue data, especially when using supervised machine learning method. Though they can easily deal with single-turn question answering, for multi-turn the performance is usually unsatisfactory. In this paper, we present Lingke, an information retrieval augmented chatbot which is able to answer questions based on given product introduction document and deal with multi-turn conversations. We will introduce a fine-grained pipeline processing to distill responses based on unstructured documents, and attentive sequential context-response matching for multi-turn conversations. | {
"paragraphs": [
[
" $\\dagger $ Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), National Natural Science Foundation of China (No. 61672343 and No. 61733011), Key Project of National Society Science Foundation of China (No. 15-ZDA041), The Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04). This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/.",
"Recently, dialogue and interactive systems have been emerging with huge commercial values BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , especially in the e-commerce field BIBREF6 , BIBREF7 . Building a chatbot mainly faces two challenges, the lack of dialogue data and poor performance for multi-turn conversations. This paper describes a fine-grained information retrieval (IR) augmented multi-turn chatbot - Lingke. It can learn knowledge without human supervision from conversation records or given product introduction documents and generate proper response, which alleviates the problem of lacking dialogue corpus to train a chatbot. First, by using Apache Lucene to select top 2 sentences most relevant to the question and extracting subject-verb-object (SVO) triples from them, a set of candidate responses is generated. With regard to multi-turn conversations, we adopt a dialogue manager, including self-attention strategy to distill significant signal of utterances, and sequential utterance-response matching to connect responses with conversation utterances, which outperforms all other models in multi-turn response selection. An online demo is available via accessing http://47.96.2.5:8080/ServiceBot/demo/."
],
[
"This section presents the architecture of Lingke, which is overall shown in Figure 1 .",
"The technical components include 1) coreference resolution and document separation, 2) target sentences retrieval, 3) candidate responses generation, followed by a dialouge manager including 4) self-matching attention, 5) response selection and 6) chit-chat response generation.",
"The first three steps aim at selecting candidate responses, and in the remaining steps, we utilize sentences from previous conversations to select the most proper response. For multi-turn conversation modeling, we develop a dialogue manager which employs self-matching attention strategy and sequential utterance-response matching to distill pivotal information from the redundant context and determine the most proper response from the candidates."
],
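The candidate-response step described above (top-2 relevant sentences retrieved with Apache Lucene, then SVO extraction) can be approximated for quick experimentation with a simple TF-IDF retriever; the sketch below uses scikit-learn as a stand-in for Lucene on a toy product document, so it only illustrates the retrieval part of the pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy product-introduction sentences (assumed; a real document would be
# sentence-split after coreference resolution and document separation).
sentences = [
    "The ZenBook Pro has a 15.6-inch 4K UHD touchscreen display.",
    "It weighs 1.8 kg and is 18.9 mm thin.",
    "The laptop ships with a 71 Wh battery rated for up to 9.5 hours.",
    "A backlit keyboard and a fingerprint reader are included.",
]
question = "How long does the battery last?"

# Rank sentences by TF-IDF cosine similarity and keep the top 2,
# mirroring the top-2 target-sentence retrieval step.
vectorizer = TfidfVectorizer().fit(sentences + [question])
sent_vecs = vectorizer.transform(sentences)
q_vec = vectorizer.transform([question])
scores = cosine_similarity(q_vec, sent_vecs)[0]
top2 = [sentences[i] for i in scores.argsort()[::-1][:2]]
print(top2)
# Candidate responses would then be distilled from these sentences,
# e.g., by extracting subject-verb-object triples with a dependency parser.
```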
[
"In this section, we will discuss the usability of Lingke. In situation of lacking enough dialogue data such as when a new product is put on an online shop, Lingke only needs an introduction document to respond to customers. Because of the chit-chat response generation engine, Lingke can easily deal with any commodity-independent conversations. Thanks to our multi-turn model, Lingke will not get confused when customer gives incomplete questions which need to be understood based on context.",
"Figure UID17 - UID17 show two typical application scenarios of Lingke, namely, conversation record based and document-based ones, which vary based on the training corpus. Figure UID17 shows Linke can effectively respond to the customer shopping consultations. The customer sends a product link and then Lingke recognizes it, and when the customer asks production specifications Lingke will give responses based on information from the context and the conversation record. Figure UID17 shows a typical scenario when a customer consults Lingke about a new product. The customer starts with a greeting, which is answered by chit-chat engine. Then the customer asks certain features of a product. Note that the second response comes from a sentence which has a redundant clause, and main information the customer cares about has been extracted. In the third user utterance, words like “What\" and “ZenBook Pro\" are omitted, which can be deducted from the prior question. Such pivotal information from the context is distilled and utilized to determine proper response with the merit of self-matching attention and multi-turn modeling.",
"The user utterances of examples in this paper and our online demo are relatively simple and short, which usually aim at only one feature of the product. In some cases, when the customer utterance becomes more complex, for example, focusing on more than one feature of the product, Lingke may fail to give complete response. A possible solution is to concatenate two relevant candidate responses, but the key to the problem is to determine the intents of the customer."
],
[
"We have presented a fine-grained information retrieval augmented chatbot for multi-turn conversations. In this paper, we took e-commerce product introduction as example, but our solution will not be limited to this domain. In our future work, we will add the mechanism of intent detection, and try to find solutions of how to deal with introduction document that contains more than one object."
]
],
"section_name": [
"Introduction",
"Architecture",
"Usability and Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"eaa315ee3d5254c8cd33ea7eaf98c2897dbba78b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "They used a dataset from Taobao which contained a collection of conversation records between customers and customer service staffs. It contains over five kinds of conversations,\nincluding chit-chat, product and discount consultation, querying delivery progress and after-sales feedback. ",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"755967324fd50dd9453c569faa9f03e6d2167769"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of different models."
],
"extractive_spans": [],
"free_form_answer": "Their model resulted in values of 0.476, 0.672 and 0.893 for recall at position 1,2 and 5 respectively in 10 candidates.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of different models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What datasets are used to evaluate the introduced method?",
"What are the results achieved from the introduced method?"
],
"question_id": [
"255fb6e20b95092c548ba47d8a295468e06698bd",
"01edeca7b902ae3fd66264366bf548acea1db364"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 2: Example of SVO Extraction.",
"Figure 1: Architecture of Lingke.",
"Table 1: Comparison of different models.",
"Figure 5: A document-based example."
],
"file": [
"2-Figure2-1.png",
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Figure5-1.png"
]
} | [
"What are the results achieved from the introduced method?"
] | [
[
"1808.03430-3-Table1-1.png"
]
] | [
"Their model resulted in values of 0.476, 0.672 and 0.893 for recall at position 1,2 and 5 respectively in 10 candidates."
] | 687 |
1607.00424 | Learning Relational Dependency Networks for Relation Extraction | We consider the task of KBP slot filling -- extracting relation information from newswire documents for knowledge base construction. We present our pipeline, which employs Relational Dependency Networks (RDNs) to learn linguistic patterns for relation extraction. Additionally, we demonstrate how several components such as weak supervision, word2vec features, joint learning and the use of human advice, can be incorporated in this relational framework. We evaluate the different components in the benchmark KBP 2015 task and show that RDNs effectively model a diverse set of features and perform competitively with current state-of-the-art relation extraction. | {
"paragraphs": [
[
"The problem of knowledge base population (KBP) – constructing a knowledge base (KB) of facts gleaned from a large corpus of unstructured data – poses several challenges for the NLP community. Commonly, this relation extraction task is decomposed into two subtasks – entity linking, in which entities are linked to already identified identities within the document or to entities in the existing KB, and slot filling, which identifies certain attributes about a target entity.",
"We present our work-in-progress for KBP slot filling based on our probabilistic logic formalisms and present the different components of the system. Specifically, we employ Relational Dependency Networks BIBREF0 , a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data. We consider our RDN system against the current state-of-the-art for KBP to demonstrate the effectiveness of our probabilistic relational framework.",
"Additionally, we show how RDNs can effectively incorporate many popular approaches in relation extraction such as joint learning, weak supervision, word2vec features, and human advice, among others. We provide a comprehensive comparison of settings such as joint learning vs learning of individual relations, use of weak supervision vs gold standard labels, using expert advice vs only learning from data, etc. These questions are extremely interesting from a general machine learning perspective, but also critical to the NLP community. As we show empirically, some of the results such as human advice being useful in many relations and joint learning being beneficial in the cases where the relations are correlated among themselves are on the expected lines. However, some surprising observations include the fact that weak supervision is not as useful as expected and word2vec features are not as predictive as the other domain-specific features.",
"We first present the proposed pipeline with all the different components of the learning system. Next we present the set of 14 relations that we learn on before presenting the experimental results. We finally discuss the results of these comparisons before concluding by presenting directions for future research."
],
[
"We present the different aspects of our pipeline, depicted in Figure FIGREF1 . We will first describe our approach to generating features and training examples from the KBP corpus, before describing the core of our framework – the RDN Boost algorithm."
],
[
"Given a training corpus of raw text documents, our learning algorithm first converts these documents into a set of facts (i.e., features) that are encoded in first order logic (FOL). Raw text is processed using the Stanford CoreNLP Toolkit BIBREF1 to extract parts-of-speech, word lemmas, etc. as well as generate parse trees, dependency graphs and named-entity recognition information. The full set of extracted features is listed in Table TABREF3 . These are then converted into features in prolog (i.e., FOL) format and are given as input to the system.",
"In addition to the structured features from the output of Stanford toolkit, we also use deeper features based on word2vec BIBREF2 as input to our learning system. Standard NLP features tend to treat words as individual objects, ignoring links between words that occur with similar meanings or, importantly, similar contexts (e.g., city-country pairs such as Paris – France and Rome – Italy occur in similar contexts). word2vec provide a continuous-space vector embedding of words that, in practice, capture many of these relationships BIBREF2 , BIBREF3 . We use word vectors from Stanford and Google along with a few specific words that, experts believe, are related to the relations learned. For example, we include words such as “father” and “mother” (inspired by the INLINEFORM0 relation) or “devout”,“convert”, and “follow” ( INLINEFORM1 relation). We generated features from word vectors by finding words with high similarity in the embedded space. That is, we used word vectors by considering relations of the following form: INLINEFORM2 , where INLINEFORM3 is the cosine similarity score between the words. Only the top cosine similarity scores for a word are utilized."
],
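To illustrate the word-vector features described above (facts of an assumed form such as similar(word1, word2, cosine), keeping only the top similarity scores), here is a minimal sketch that emits prolog-style facts; the tiny embedding dictionary is a stand-in for the Stanford/Google vectors actually used, and the predicate name is an assumption:

```python
import numpy as np

# Stand-in embeddings; in practice these would be loaded from pre-trained
# word2vec / GloVe files.
vectors = {
    "father": np.array([0.9, 0.1, 0.0]),
    "mother": np.array([0.85, 0.15, 0.05]),
    "devout": np.array([0.1, 0.9, 0.2]),
    "convert": np.array([0.15, 0.8, 0.3]),
    "city":   np.array([0.0, 0.2, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word2vec_facts(word, top_k=2):
    """Emit prolog-style facts for the top-k most similar words."""
    sims = [(other, cosine(vectors[word], vec))
            for other, vec in vectors.items() if other != word]
    sims.sort(key=lambda x: x[1], reverse=True)
    return ["similar(%s, %s, %.3f)." % (word, other, score)
            for other, score in sims[:top_k]]

print(word2vec_facts("father"))
```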
[
"One difficulty with the KBP task is that very few documents come labeled as gold standard labels, and further annotation is prohibitively expensive beyond a few hundred documents. This is problematic for discriminative learning algorithms, like the RDN learning algorithm, which excel when given a large supervised training corpus. To overcome this obstacle, we employ weak supervision – the use of external knowledge (e.g., a database) to heuristically label examples. Following our work in Soni et al. akbc16, we employ two approaches for generating weakly supervised examples – distant supervision and knowledge-based weak supervision.",
"Distant supervision entails the use of external knowledge (e.g., a database) to heuristically label examples. Following standard procedure, we use three data sources – Never Ending Language Learner (NELL) BIBREF4 , Wikipedia Infoboxes and Freebase. For a given target relation, we identify relevant database(s), where the entries in the database form entity pairs (e.g., an entry of INLINEFORM0 for a parent database) that will serve as a seed for positive training examples. These pairs must then be mapped to mentions in our corpus – that is, we must find sentences in our corpus that contain both entities together BIBREF5 . This process is done heuristically and is fraught with potential errors and noise BIBREF6 .",
"An alternative approach, knowledge-based weak supervision is based on previous work BIBREF7 , BIBREF8 with the following insight: labels are typically created by “domain experts” who annotate the labels carefully, and who typically employ some inherent rules in their mind to create examples. For example, when identifying family relationship, we may have an inductive bias towards believing two persons in a sentence with the same last name are related, or that the words “son” or “daughter” are strong indicators of a parent relation. We call this world knowledge as it describes the domain (or the world) of the target relation.",
"To this effect, we encode the domain expert's knowledge in the form of first-order logic rules with accompanying weights to indicate the expert's confidence. We use the probabilistic logic formalism Markov Logic Networks BIBREF9 to perform inference on unlabeled text (e.g., the TAC KBP corpus). Potential entity pairs from the corpus are queried to the MLN, yielding (weakly-supervised) positive examples. We choose MLNs as they permit domain experts to easily write rules while providing a probabilistic framework that can handle noise, uncertainty, and preferences while simultaneously ranking positive examples.",
"We use the Tuffy system BIBREF10 to perform inference. The inference algorithm implemented inside Tuffy appears to be robust and scales well to millions of documents.",
"For the KBP task, some rules that we used are shown in Table TABREF8 . For example, the first rule identifies any number following a person's name and separated by a comma is likely to be the person's age (e.g., “Sharon, 42”). The third and fourth rule provide examples of rules that utilize more textual features; these rules state the appearance of the lemma “mother” or “father” between two persons is indicative of a parent relationship (e.g.,“Malia's father, Barack, introduced her...”).",
"To answer Q1, we generated positive training examples using the weak supervision techniques specified earlier. Specifically, we evaluated 10 relations as show in Table TABREF20 . Based on experiments from BIBREF8 , we utilized our knowledge-based weak supervision approach to provide positive examples in all but two of our relations. A range of 4 to 8 rules are derived for each relation. Examples for the organization relations INLINEFORM0 and INLINEFORM1 were generated using standard distant supervision techniques – Freebase databases were mapped to INLINEFORM2 while Wikipedia Infoboxes provides entity pairs for INLINEFORM3 . Lastly, only 150 weakly supervised examples were utilized in our experiments (all gold standard examples were utilized). Performing larger runs is part of work in progress.",
"The results are presented in Table TABREF20 . We compared our standard pipeline (individually learned relations with only standard features) learned on gold standard examples only versus our system learned with weak and gold examples combined. Surprisingly, weak supervision does not seem to help learn better models for inferring relations in most cases. Only two relations – INLINEFORM0 , INLINEFORM1 – see substantial improvements in AUC ROC, while F1 shows improvements for INLINEFORM2 and, INLINEFORM3 , and INLINEFORM4 . We hypothesize that generating more examples will help (some relations produced thousands of examples), but nonetheless find the lack of improved models from even a modest number of examples a surprising result. Alternatively, the number of gold standard examples provided may be sufficient to learn RDN models. Thus Q1 is answered equivocally, but in the negative."
],
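The two weak-supervision routes described in this section can be sketched in a few lines: distant supervision maps seed entity pairs from an external database to sentences where both entities co-occur, while a knowledge-based rule (here, the "Sharon, 42" age pattern mentioned above) labels examples directly. Both heuristics below are simplified stand-ins for the actual NELL/Freebase/Wikipedia lookups and the MLN inference run with Tuffy:

```python
import re

sentences = [
    "Malia's father, Barack, introduced her to the crowd.",
    "Sharon, 42, was appointed as the new director.",
    "Barack met supporters in Chicago.",
]

# --- Distant supervision: seed pairs from an external database (assumed). ---
parent_db = [("Barack", "Malia")]
weak_parent_examples = [
    (parent, child, s)
    for (parent, child) in parent_db
    for s in sentences
    if parent in s and child in s          # both entities mentioned together
]

# --- Knowledge-based rule: "<Person>, <number>" suggests the person's age. ---
age_pattern = re.compile(r"\b([A-Z][a-z]+), (\d{1,3})\b")
weak_age_examples = [
    (m.group(1), int(m.group(2)), s)
    for s in sentences
    for m in age_pattern.finditer(s)
]

print(weak_parent_examples)
print(weak_age_examples)
```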
[
"Previous research BIBREF11 has demonstrated that joint inferences of the relations are more effective than considering each relation individually. Consequently, we have considered a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data called Relational Dependency Networks (RDNs) BIBREF0 , BIBREF12 . RDNs extend dependency networks (DN) BIBREF13 to the relational setting. The key idea in a DN is to approximate the joint distribution over a set of random variables as a product of their marginal distributions, i.e., INLINEFORM0 INLINEFORM1 INLINEFORM2 . It has been shown that employing Gibbs sampling in the presence of a large amount of data allows this approximation to be particularly effective. Note that, one does not have to explicitly check for acyclicity making these DNs particularly easy to be learned.",
"In an RDN, typically, each distribution is represented by a relational probability tree (RPT) BIBREF14 . However, following previous work BIBREF12 , we replace the RPT of each distribution with a set of relational regression trees BIBREF15 built in a sequential manner i.e., replace a single tree with a set of gradient boosted trees. This approach has been shown to have state-of-the-art results in learning RDNs and we adapted boosting to learn for relation extraction. Since this method requires negative examples, we created negative examples by considering all possible combinations of entities that are not present in positive example set and sampled twice as many negatives as positive examples."
],
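The negative-example construction described above (entity pairs not present in the positive set, sub-sampled to twice the number of positives) can be written directly; the entities and positive pairs below are toy values:

```python
import itertools
import random

entities = ["Barack", "Michelle", "Malia", "Sasha", "Chicago"]
positives = {("Barack", "Malia"), ("Michelle", "Sasha")}   # e.g., parent pairs

# All ordered entity pairs not labelled positive are negative candidates.
candidates = [p for p in itertools.permutations(entities, 2) if p not in positives]

# Sample twice as many negatives as positives, as done when training the RDN.
random.seed(0)
negatives = random.sample(candidates, 2 * len(positives))
print(negatives)
```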
[
"While most relational learning methods restrict the human to merely annotating the data, we go beyond and request the human for advice. The intuition is that we as humans read certain patterns and use them to deduce the nature of the relation between two entities present in the text. The goal of our work is to capture such mental patterns of the humans as advice to the learning algorithm. We modified the work of Odom et al. odomAIME15,odomAAAI15 to learn RDNs in the presence of advice. The key idea is to explicitly represent advice in calculating gradients. This allows the system to trade-off between data and advice throughout the learning phase, rather than only consider advice in initial iterations. Advice, in particular, become influential in the presence of noisy or less amout of data. A few sample advice rules in English (these are converted to first-order logic format and given as input to our algorithm) are presented in Table TABREF11 . Note that some of the rules are “soft\" rules in that they are not true in many situations. Odom et al. odomAAAI15 weigh the effect of the rules against the data and hence allow for partially correct rules."
],
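One way to make the data-versus-advice trade-off concrete is to blend the usual functional-gradient term (label minus predicted probability) with a term counting how many advice rules prefer the example to be true versus false; the blending below is an assumed illustration and may differ from the exact formulation of Odom et al.:

```python
def advice_gradient(label, prob, n_advice_true, n_advice_false, alpha=0.5):
    """Blend the data gradient (I - P) with an advice term.

    label: 1.0 for a positive example, 0.0 for a negative one.
    prob: current predicted probability that the example is true.
    n_advice_true / n_advice_false: number of advice rules pushing the
        example toward true / false.
    alpha: assumed trade-off between fitting the data and following advice
        (kept fixed here; a learner could adjust it across iterations).
    """
    data_term = label - prob
    advice_term = n_advice_true - n_advice_false
    return alpha * data_term + (1.0 - alpha) * advice_term

# Example: a noisy negative example that two advice rules say should be true.
print(advice_gradient(label=0.0, prob=0.4, n_advice_true=2, n_advice_false=0))
```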
[
"We now present our experimental evaluation. We considered 14 specific relations from two categories, person and organization from the TAC KBP competition. The relations considered are listed in the left column of Table TABREF13 . We utilize documents from KBP 2014 for training while utilizing documents from the 2015 corpus for testing.",
"All results presented are obtained from 5 different runs of the train and test sets to provide more robust estimates of accuracy. We consider three standard metrics – area under the ROC curve, F-1 score and the recall at a certain precision. We chose the precision as INLINEFORM0 since the fraction of positive examples to negatives is 1:2 (we sub-sampled the negative examples for the different training sets). Negative examples are re-sampled for each training run. It must be mentioned that not all relations had the same number of hand-annotated (gold standard) examples because the 781 documents that we annotated had different number of instances for these relations. The train/test gold-standard sizes are provided in the table, including weakly supervised examples, if available. Lastly, to control for other factors, the default setting for our experiments is individual learning, standard features, with gold standard examples only (i.e., no weak supervision, word2vec, advice, or advice).",
"Since our system had different components, we aimed to answer the following questions:"
],
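The "recall at a certain precision" metric used above can be computed from scored predictions by sweeping the precision-recall curve and reading off recall where precision meets the chosen operating point; the 0.66 value and the toy labels/scores below are assumptions for illustration, since the exact operating point is hidden behind the INLINEFORM placeholder:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, y_score, min_precision=0.66):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    # Keep the best recall among operating points meeting the precision requirement.
    feasible = recall[precision >= min_precision]
    return float(feasible.max()) if feasible.size else 0.0

# Toy labels (1:2 positive-to-negative ratio) and assumed model scores.
y_true = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.4, 0.6, 0.1, 0.5])
print(recall_at_precision(y_true, y_score))
```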
[
"To address our next question, we assessed our pipeline when learning relations independently (i.e., individually) versus learning relations jointly within the RDN, displayed in Table TABREF22 . Recall and F1 are omitted for conciseness – the conclusions are the same across all metrics. Joint learning appears to help in about half of the relations (8/14). Particularly, in person category, joint learning with gold standard outperforms their individual learning counterparts. This is due to the fact that some relations such as parents, spouse, siblings etc. are inter-related and learning them jointly indeed improves performance. Hence Q2 can be answered affirmatively for half the relations."
],
[
"Table TABREF24 shows the results of experiments comparing the RDN framework with and without word2vec features. word2vec appears to largely have no impact, boosting results in just 4 relations. We hypothesize that this may be due to a limitation in the depth of trees learned. Learning more and/or deeper trees may improve use of word2vec features, and additional work can be done to generate deep features from word vectors. Q3 is answered cautiously in the negative, although future work could lead to improvements."
],
[
"Table TABREF26 shows the results of experiments that test the use of advice within the joint learning setting. The use of advice improves or matches the performance of using only joint learning. The key impact of advice can be mostly seen in the improvement of recall in several relations. This clearly shows that using human advice patterns allows us to extract more relations effectively making up for noisy or less number of training examples. This is in-line with previously published machine learning literature BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 in that humans can be more than mere labelers by providing useful advice to learning algorithms that can improve their performance. Thus Q4 can be answered affirmatively."
],
[
"Relation factory (RF) BIBREF16 is an efficient, open source system for performing relation extraction based on distantly supervised classifiers. It was the top system in the TAC KBP 2013 competition BIBREF21 and thus serves as a suitable baseline for our method. RF is very conservative in its responses, making it very difficult to adjust the precision levels. To be most generous to RF, we present recall for all returned results (i.e., score INLINEFORM0 ). The AUC ROC, recall, and F1 scores of our system against RF are presented in Table TABREF28 .",
"Our system performs comparably, and often better than the state-of-the-art Relation Factory system. In particular, our method outperforms Relation Factory in AUC ROC across all relations. Recall provides a more mixed picture with both approaches showing some improvements – RDN outperforms in 6 relations while Relation Factory does so in 8. Note that in the instances where RDN provides superior recall, it does so with dramatic improvements (RF often returns 0 positives in these relations). F1 also shows RDN's superior performance, outperforming RF in most relations. Thus, the conclusion for Q5 is that our RDN framework performas comparably, if not better, across all metrics against the state-of-the-art."
],
[
"We presented our fully relational system utilizing Relational Dependency Networks for the Knowledge Base Population task. We demonstrated RDN's ability to effectively learn the relation extraction task, performing comparably (and often better) than the state-of-art Relation Factory system. Furthermore, we demonstrated the ability of RDNs to incorporate various concepts in a relational framework, including word2vec, human advice, joint learning, and weak supervision. Some surprising results are that weak supervision and word2vec did not significantly improve performance. However, advice is extremely useful thus validating the long-standing results inside the Artificial Intelligence community for the relation extraction task as well. Possible future directions include considering a larger number of relations, deeper features and finally, comparisons with more systems. We believe further work on developing word2vec features and utilizing more weak supervision examples may reveal further insights into how to effectively utilize such features in RDNs."
]
],
"section_name": [
"Introduction",
"Proposed Pipeline",
"Feature Generation",
"Weak Supervision",
"Learning Relational Dependency Networks",
"Incorporating Human Advice",
"Experiments and Results",
"Joint learning",
"word2vec",
"Advice",
"RDN Boost vs Relation Factory",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"b4d823a4991cae336e6ab2abaa489cffec793b07"
],
"answer": [
{
"evidence": [
"While most relational learning methods restrict the human to merely annotating the data, we go beyond and request the human for advice. The intuition is that we as humans read certain patterns and use them to deduce the nature of the relation between two entities present in the text. The goal of our work is to capture such mental patterns of the humans as advice to the learning algorithm. We modified the work of Odom et al. odomAIME15,odomAAAI15 to learn RDNs in the presence of advice. The key idea is to explicitly represent advice in calculating gradients. This allows the system to trade-off between data and advice throughout the learning phase, rather than only consider advice in initial iterations. Advice, in particular, become influential in the presence of noisy or less amout of data. A few sample advice rules in English (these are converted to first-order logic format and given as input to our algorithm) are presented in Table TABREF11 . Note that some of the rules are “soft\" rules in that they are not true in many situations. Odom et al. odomAAAI15 weigh the effect of the rules against the data and hence allow for partially correct rules."
],
"extractive_spans": [],
"free_form_answer": "by converting human advice to first-order logic format and use as an input to calculate gradient",
"highlighted_evidence": [
"A few sample advice rules in English (these are converted to first-order logic format and given as input to our algorithm) are presented in Table TABREF11 . ",
"We modified the work of Odom et al. odomAIME15,odomAAAI15 to learn RDNs in the presence of advice. The key idea is to explicitly represent advice in calculating gradients."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"75b8338883fd7c1bd88118b650a6e53d7cc2e624"
],
"answer": [
{
"evidence": [
"To address our next question, we assessed our pipeline when learning relations independently (i.e., individually) versus learning relations jointly within the RDN, displayed in Table TABREF22 . Recall and F1 are omitted for conciseness – the conclusions are the same across all metrics. Joint learning appears to help in about half of the relations (8/14). Particularly, in person category, joint learning with gold standard outperforms their individual learning counterparts. This is due to the fact that some relations such as parents, spouse, siblings etc. are inter-related and learning them jointly indeed improves performance. Hence Q2 can be answered affirmatively for half the relations."
],
"extractive_spans": [
"relations"
],
"free_form_answer": "",
"highlighted_evidence": [
"To address our next question, we assessed our pipeline when learning relations independently (i.e., individually) versus learning relations jointly within the RDN, displayed in Table TABREF22 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they incorporate human advice?",
"What do they learn jointly?"
],
"question_id": [
"496b4ae3c0e26ec95ff6ded5e6790f24c35f0f5b",
"281cb27cfa0eea12180fd82ae33035945476609e"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Pipeline Full RDN relation extraction pipeline",
"Table 1. Standard NLP Features Features derived from the training corpus used by our learning system. POS - part of speech. NE - Named Entity. DPR - root of dependency path tree.",
"Table 2. Rules for KB Weak Supervision A sample of knowledge-based rules for weak supervision. The first value defines a weight, or confidence in the accuracy of the rule. The target relation appears at the end of each clause. “PER”, “ORG”, “NUM” represent entities that are persons, organizations, and numbers, respectively.",
"Table 3. Advice Rules Sample advice rules used for relation extraction. We employed a total of 72 such rules for our 14 relations.",
"Fig. 2. Example regression tree for the siblings relation. This tree states that the weight for the relation being true is higher if either “husband” or “wife” appear between the entities.",
"Table 4. Relations The relations considered from TAC KBP. Columns indicate the number of training examples utilized – both human annotated (Gold) and weakly supervised (WS), when available – from TAC KBP 2014 and number of test examples from TAC KBP 2015. 10 relations describe person entities (per) while the last 4 describe organizations (org).",
"Table 5. Weak Supervision Results comparing models trained with gold standard examples only (G) and models trained with gold standard and weakly supervised examples combined (G+WS).",
"Table 6. Joint Learning Results comparing relation models learned individually (IL) and jointly (JL).",
"Table 8. Advice Results comparing models trained without (-Adv) and with advice (+Adv).",
"Table 9. RelationFactory (RF) vs RDN Values in bold indicate superiour performance against the alternative approach."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"6-Table3-1.png",
"6-Figure2-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table8-1.png",
"11-Table9-1.png"
]
} | [
"How do they incorporate human advice?"
] | [
[
"1607.00424-Incorporating Human Advice-0"
]
] | [
"by converting human advice to first-order logic format and use as an input to calculate gradient"
] | 688 |
1901.01911 | Stance Classification for Rumour Analysis in Twitter: Exploiting Affective Information and Conversation Structure | Analysing how people react to rumours associated with news in social media is an important task to prevent the spreading of misinformation, which is nowadays widely recognized as a dangerous tendency. In social media conversations, users show different stances and attitudes towards rumourous stories. Some users take a definite stance, supporting or denying the rumour at issue, while others just comment it, or ask for additional evidence related to the veracity of the rumour. On this line, a new shared task has been proposed at SemEval-2017 (Task 8, SubTask A), which is focused on rumour stance classification in English tweets. The goal is predicting user stance towards emerging rumours in Twitter, in terms of supporting, denying, querying, or commenting the original rumour, looking at the conversation threads originated by the rumour. This paper describes a new approach to this task, where the use of conversation-based and affective-based features, covering different facets of affect, has been explored. Our classification model outperforms the best-performing systems for stance classification at SemEval-2017 Task 8, showing the effectiveness of the feature set proposed. | {
"paragraphs": [
[
"Nowadays, people increasingly tend to use social media like Facebook and Twitter as their primary source of information and news consumption. There are several reasons behind this tendency, such as the simplicity to gather and share the news and the possibility of staying abreast of the latest news and updated faster than with traditional media. An important factor is also that people can be engaged in conversations on the latest breaking news with their contacts by using these platforms. Pew Research Center's newest report shows that two-thirds of U.S. adults gather their news from social media, where Twitter is the most used platform. However, the absence of a systematic approach to do some form of fact and veracity checking may also encourage the spread of rumourous stories and misinformation BIBREF0 . Indeed, in social media, unverified information can spread very quickly and becomes viral easily, enabling the diffusion of false rumours and fake information.",
"Within this scenario, it is crucial to analyse people attitudes towards rumours in social media and to resolve their veracity as soon as possible. Several approaches have been proposed to check the rumour veracity in social media BIBREF1 . This paper focus on a stance-based analysis of event-related rumours, following the approach proposed at SemEval-2017 in the new RumourEval shared task (Task 8, sub-task A) BIBREF2 . In this task English tweets from conversation threads, each associated to a newsworthy event and the rumours around it, are provided as data. The goal is to determine whether a tweet in the thread is supporting, denying, querying, or commenting the original rumour which started the conversation. It can be considered a stance classification task, where we have to predict the user's stance towards the rumour from a tweet, in the context of a given thread. This task has been defined as open stance classification task and is conceived as a key step in rumour resolution, by providing an analysis of people reactions towards an emerging rumour BIBREF0 , BIBREF3 . The task is also different from detecting stance towards a specific target entity BIBREF4 .",
"Contribution We describe a novel classification approach, by proposing a new feature matrix, which includes two new groups: (a) features exploiting the conversational structure of the dataset BIBREF2 ; (b) affective features relying on the use of a wide range of affective resources capturing different facets of sentiment and other affect-related phenomena. We were also inspired by the fake news study on Twitter in BIBREF5 , showing that false stories inspire fear, disgust, and surprise in replies, while true stories inspire anticipation, sadness, joy, and trust. Meanwhile, from a dialogue act perspective, the study of BIBREF6 found that a relationship exists between the use of an affective lexicon and the communicative intention of an utterance which includes AGREE-ACCEPT (support), REJECT (deny), INFO-REQUEST (question), and OPINION (comment). They exploited several LIWC categories to analyse the role of affective content.",
"Our results show that our model outperforms the state of the art on the Semeval-2017 benchmark dataset. Feature analysis highlights the contribution of the different feature groups, and error analysis is shedding some light on the main difficulties and challenges which still need to be addressed.",
"Outline The paper is organized as follows. Section 2 introduces the SemEval-2017 Task 8. Section 3 describes our approach to deal with open stance classification by exploiting different groups of features. Section 4 describes the evaluation and includes a qualitative error analysis. Finally, Section 5 concludes the paper and points to future directions."
],
[
"The SemEval-2017 Task 8 Task A BIBREF2 has as its main objective to determine the stance of the users in a Twitter thread towards a given rumour, in terms of support, denying, querying or commenting (SDQC) on the original rumour. Rumour is defined as a “circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth” BIBREF7 . The task was very timing due to the growing importance of rumour resolution in the breaking news and to the urgency of preventing the spreading of misinformation.",
"Dataset The data for this task are taken from Twitter conversations about news-related rumours collected by BIBREF3 . They were annotated using four labels (SDQC): support - S (when tweet's author support the rumour veracity); deny -D (when tweet's author denies the rumour veracity); query - Q (when tweet's author ask for additional information/evidence); comment -C (when tweet's author just make a comment and does not give important information to asses the rumour veracity). The distribution consists of three sets: development, training and test sets, as summarized in Table TABREF3 , where you can see also the label distribution and the news related to the rumors discussed. Training data consist of 297 Twitter conversations and 4,238 tweets in total with related direct and nested replies, where conversations are associated to seven different breaking news. Test data consist of 1049 tweets, where two new rumourous topics were added.",
"Participants Eight teams participated in the task. The best performing system was developed by Turing (78.4 in accuracy). ECNU, MamaEdha, UWaterloo, and DFKI-DKT utilized ensemble classifier. Some systems also used deep learning techniques, including Turing, IKM, and MamaEdha. Meanwhile, NileTRMG and IITP used classical classifier (SVM) to build their systems. Most of the participants exploited word embedding to construct their feature space, beside the Twitter domain features."
],
[
"We developed a new model by exploiting several stylistic and structural features characterizing Twitter language. In addition, we propose to utilize conversational-based features by exploiting the peculiar tree structure of the dataset. We also explored the use of affective based feature by extracting information from several affective resources including dialogue-act inspired features."
],
[
"They were designed taking into account several Twitter data characteristics, and then selecting the most relevant features to improve the classification performance. The set of structural features that we used is listed below.",
"Retweet Count: The number of retweet of each tweet.",
"Question Mark: presence of question mark \"?\"; binary value (0 and 1).",
"Question Mark Count: number of question marks present in the tweet.",
"Hashtag Presence: this feature has a binary value 0 (if there is no hashtag in the tweet) or 1 (if there is at least one hashtag in the tweet).",
"Text Length: number of characters after removing Twitter markers such as hashtags, mentions, and URLs.",
"URL Count: number of URL links in the tweet."
],
[
"These features are devoted to exploit the peculiar characteristics of the dataset, which have a tree structure reflecting the conversation thread.",
"Text Similarity to Source Tweet: Jaccard Similarity of each tweet with its source tweet.",
"Text Similarity to Replied Tweet: the degree of similarity between the tweet with the previous tweet in the thread (the tweet is a reply to that tweet).",
"Tweet Depth: the depth value is obtained by counting the node from sources (roots) to each tweet in their hierarchy."
],
[
"The idea to use affective features in the context of our task was inspired by recent works on fake news detection, focusing on emotional responses to true and false rumors BIBREF5 , and by the work in BIBREF6 reflecting on the role of affect in dialogue acts BIBREF6 . Multi-faceted affective features have been already proven to be effective in some related tasks BIBREF9 , including the stance detection task proposed at SemEval-2016 (Task 6).",
"We used the following affective resources relying on different emotion models.",
"Emolex: it contains 14,182 words associated with eight primary emotion based on the Plutchik model BIBREF10 , BIBREF11 .",
"EmoSenticNet(EmoSN): it is an enriched version of SenticNet BIBREF12 including 13,189 words labeled by six Ekman's basic emotion BIBREF13 , BIBREF14 .",
"Dictionary of Affect in Language (DAL): includes 8,742 English words labeled by three scores representing three dimensions: Pleasantness, Activation and Imagery BIBREF15 .",
"Affective Norms for English Words (ANEW): consists of 1,034 English words BIBREF16 rated with ratings based on the Valence-Arousal-Dominance (VAD) model BIBREF17 .",
"Linguistic Inquiry and Word Count (LIWC): this psycholinguistic resource BIBREF18 includes 4,500 words distributed into 64 emotional categories including positive (PosEMO) and negative (NegEMO)."
],
[
"We also included additional 11 categories from bf LIWC, which were already proven to be effective in dialogue-act task in previous work BIBREF6 . Basically, these features are part of the affective feature group, but we present them separately because we are interested in exploring the contribution of such feature set separately. This feature set was obtained by selecting 4 communicative goals related to our classes in the stance task: agree-accept (support), reject (deny), info-request (question), and opinion (comment). The 11 LIWC categories include:",
"Agree-accept: Assent, Certain, Affect;",
"Reject: Negate, Inhib;",
"Info-request: You, Cause;",
"Opinion: Future, Sad, Insight, Cogmech."
],
[
"We used the RumourEval dataset from SemEval-2017 Task 8 described in Section SECREF2 . We defined the rumour stance detection problem as a simple four-way classification task, where every tweet in the dataset (source and direct or nested reply) should be classified into one among four classes: support, deny, query, and comment. We conducted a set of experiments in order to evaluate and analyze the effectiveness of our proposed feature set..",
"The results are summarized in Table TABREF28 , showing that our system outperforms all of the other systems in terms of accuracy. Our best result was obtained by a simple configuration with a support vector classifier with radial basis function (RBF) kernel. Our model performed better than the best-performing systems in SemEval 2017 Task 8 Subtask A (Turing team, BIBREF19 ), which exploited deep learning approach by using LTSM-Branch model. In addition, we also got a higher accuracy than the system described in BIBREF20 , which exploits a Random Forest classifier and word embeddings based features.",
"We experimented with several classifiers, including Naive Bayes, Decision Trees, Support Vector Machine, and Random Forest, noting that SVM outperforms the other classifiers on this task. We explored the parameter space by tuning the SVM hyperparameters, namely the penalty parameter C, kernel type, and class weights (to deal with class imbalance). We tested several values for C (0.001, 0.01, 0.1, 1, 10, 100, and 1000), four different kernels (linear, RBF, polynomial, and sigmoid) and weighted the classes based on their distribution in the training data. The best result was obtained with C=1, RBF kernel, and without class weighting.",
"An ablation test was conducted to explore the contribution of each feature set. Table TABREF32 shows the result of our ablation test, by exploiting several feature sets on the same classifier (SVM with RBF kernel) . This evaluation includes macro-averages of precision, recall and INLINEFORM0 -score as well as accuracy. We also presented the scores for each class in order to get a better understanding of our classifier's performance.",
"Using only conversational, affective, or dialogue-act features (without structural features) did not give a good classification result. Set B (conversational features only) was not able to detect the query and deny classes, while set C (affective features only) and D (dialogue-act features only) failed to catch the support, query, and deny classes. Conversational features were able to improve the classifier performance significantly, especially in detecting the support class. Sets E, H, I, and K which utilize conversational features induce an improvement on the prediction of the support class (roughly from 0.3 to 0.73 on precision). Meanwhile, the combination of affective and dialogue-act features was able to slightly improve the classification of the query class. The improvement can be seen from set E to set K where the INLINEFORM0 -score of query class increased from 0.52 to 0.58. Overall, the best result was obtained by the K set which encompasses all sets of features. It is worth to be noted that in our best configuration system, not all of affective and dialogue-act features were used in our feature vector. After several optimization steps, we found that some features were not improving the system's performance. Our final list of affective and dialogue-act based features includes: DAL Activation, ANEW Dominance, Emolex Negative, Emolex Fear, LIWC Assent, LIWC Cause, LIWC Certain and LIWC Sad. Therefore, we have only 17 columns of features in the best performing system covering structural, conversational, affective and dialogue-act features.",
"We conducted a further analysis of the classification result obtained by the best performing system (79.50 on accuracy). Table TABREF30 shows the confusion matrix of our result. On the one hand, the system is able to detect the comment tweets very well. However, this result is biased due to the number of comment data in the dataset. On the other hand, the system is failing to detect denying tweets, which were falsely classified into comments (68 out of 71). Meanwhile, approximately two thirds of supporting tweets and almost half of querying tweets were classified into the correct class by the system.",
"In order to assess the impact of class imbalance on the learning, we performed an additional experiment with a balanced dataset using the best performing configuration. We took a subset of the instances equally distributed with respect to their class from the training set (330 instances for each class) and test set (71 instances for each class). As shown in Table TABREF31 , our classifier was able to correctly predict the underrepresented classes much better, although the overall accuracy is lower (59.9%). The result of this analysis clearly indicates that class imbalance has a negative impact on the system performance."
],
[
"We conducted a qualitative error analysis on the 215 misclassified in the test set, to shed some light on the issues and difficulties to be addressed in future work and to detect some notable error classes.",
"Denying by attacking the rumour's author. An interesting finding from the analysis of the Marina Joyce rumour data is that it contains a lot of denying tweets including insulting comments towards the author of the source tweet, like in the following cases:",
"Rumour: Marina Joyce",
"Misclassified tweets:",
"(da1) stfu you toxic sludge",
"(da2) @sampepper u need rehab ",
"Misclassification type: deny (gold) INLINEFORM0 comment (prediction)",
"Source tweet:",
"(s1) Anyone who knows Marina Joyce personally knows she has a serious drug addiction. she needs help, but in the form of rehab #savemarinajoyce",
"Tweets like (da1) and (da2) seem to be more inclined to show the respondent's personal hatred towards the s1-tweet's author than to deny the veracity of the rumour. In other words, they represent a peculiar form of denying the rumour, which is expressed by personal attack and by showing negative attitudes or hatred towards the rumour's author. This is different from denying by attacking the source tweet content, and it was difficult to comprehend for our system, that often misclassified such kind of tweets as comments.",
"Noisy text, specific jargon, very short text. In (da1) and (da2) (as in many tweets in the test set), we also observe the use of noisy text (abbreviations, misspellings, slang words and slurs, question statements without question mark, and so on) that our classifier struggles to handle . Moreover, especially in tweets from the Marina Joyce rumour's group, we found some very short tweets in the denying class that do not provide enough information, e.g. tweets like “shut up!\", “delete\", and “stop it. get some help\".",
"Argumentation context. We also observed misclassification cases that seem to be related to a deeper capability of dealing with the argumentation context underlying the conversation thread.",
"Rumour: Ferguson",
"Misclassified tweet:",
"(arg1)@QuadCityPat @AP I join you in this demand. Unconscionable.",
"Misclassification type: deny (gold) INLINEFORM0 comment (prediction)",
"Source tweet:",
"(s2) @AP I demand you retract the lie that people in #Ferguson were shouting “kill the police\", local reporting has refuted your ugly racism",
"",
"Here the misclassified tweet is a reply including an explicit expression of agreement with the author of the source tweet (“I join you”). Tweet (s2) is one of the rare cases of source tweets denying the rumor (source tweets in the RumourEval17 dataset are mostly supporting the rumor at issue). Our hypothesis is that it is difficult for a system to detect such kind of stance without a deeper comprehension of the argumentation context (e.g., if the author's stance is denying the rumor, and I agree with him, then I am denying the rumor as well). In general, we observed that when the source tweet is annotated by the deny label, most of denying replies of the thread include features typical of the support class (and vice versa), and this was a criticism.",
"Mixed cases. Furthermore, we found some borderline mixed cases in the gold standard annotation. See for instance the following case:",
"Rumour: Ferguson",
"Misclassified tweet: ",
"(mx1) @MichaelSkolnik @MediaLizzy Oh do tell where they keep track of \"vigilante\" stats. That's interesting.",
"Misclassification type: query (gold) INLINEFORM0 comment (prediction)",
"Source tweet:",
"(s3) Every 28 hours a black male is killed in the United States by police or vigilantes. #Ferguson",
"",
"Tweet (mx1) is annotated with a query label rather than as a comment (our system prediction), but we can observe the presence of a comment (“That's interesting”) after the request for clarification, so it seems to be a kind of mixed case, where both labels make sense.",
"Citation of the source's tweet. We have noticed many misclassified cases of replying tweets with error pattern support (gold) INLINEFORM0 comment (our prediction), where the text contains a literal citation of the source tweet, like in the following tweet: THIS HAS TO END “@MichaelSkolnik: Every 28 hours a black male is killed in the United States by police or vigilantes. #Ferguson” (the text enclosed in quotes is the source tweet). Such kind of mistakes could be maybe addressed by applying some pre-processing to the data, for instance by detecting the literal citation and replacing it with a marker.",
"Figurative language devices. Finally, the use of figurative language (e.g., sarcasm) is also an issue that should be considered for the future work. Let us consider for instance the following misclassified tweets:",
"Rumour: Hillary's Illness",
"Misclassified tweets:",
"(fg1) @mitchellvii True, after all she can open a pickle jar.",
"(fg2) @mitchellvii Also, except for having a 24/7 MD by her side giving her Valium injections, Hillary is in good health! https://t.co/GieNxwTXX7",
"(fg3) @mitchellvii @JoanieChesnutt At the very peak yes, almost time to go down a cliff and into the earth.",
"Misclassification type: support (gold) INLINEFORM0 comment (prediction)",
"Source tweet:",
"(s4) Except for the coughing, fainting, apparent seizures and \"short-circuits,\" Hillary is in the peak of health.",
"All misclassified tweets (fg1-fg3) from the Hillary's illness data are replies to a source tweet (s4), which is featured by sarcasm. In such replies authors support the rumor by echoing the sarcastic tone of the source tweet. Such more sophisticated cases, where the supportive attitude is expressed in an implicit way, were challenging for our classifier, and they were quite systematically misclassified as simple comments."
],
[
"In this paper we proposed a new classification model for rumour stance classification. We designed a set of features including structural, conversation-based, affective and dialogue-act based feature. Experiments on the SemEval-2017 Task 8 Subtask A dataset show that our system based on a limited set of well-engineered features outperforms the state-of-the-art systems in this task, without relying on the use of sophisticated deep learning approaches. Although achieving a very good result, several research challenges related to this task are left open. Class imbalance was recognized as one the main issues in this task. For instance, our system was struggling to detect the deny class in the original dataset distribution, but it performed much better in that respect when we balanced the distribution across the classes.",
"A re-run of the RumourEval shared task has been proposed at SemEval 2019 and it will be very interesting to participate to the new task with an evolution of the system here described."
],
[
"Endang Wahyu Pamungkas, Valerio Basile and Viviana Patti were partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618_L2_BOSC_01)."
]
],
"section_name": [
"Introduction",
"SemEval-2017 Task 8: RumourEval",
"Proposed Method",
"Structural Features",
"Conversation Based Features",
"Affective Based Features",
"Dialogue-Act Features",
"Experiments, Evaluation and Analysis",
"Error analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"9995b54d7adc2e4f74eac854ac0044f5278de8fe"
],
"answer": [
{
"evidence": [
"Within this scenario, it is crucial to analyse people attitudes towards rumours in social media and to resolve their veracity as soon as possible. Several approaches have been proposed to check the rumour veracity in social media BIBREF1 . This paper focus on a stance-based analysis of event-related rumours, following the approach proposed at SemEval-2017 in the new RumourEval shared task (Task 8, sub-task A) BIBREF2 . In this task English tweets from conversation threads, each associated to a newsworthy event and the rumours around it, are provided as data. The goal is to determine whether a tweet in the thread is supporting, denying, querying, or commenting the original rumour which started the conversation. It can be considered a stance classification task, where we have to predict the user's stance towards the rumour from a tweet, in the context of a given thread. This task has been defined as open stance classification task and is conceived as a key step in rumour resolution, by providing an analysis of people reactions towards an emerging rumour BIBREF0 , BIBREF3 . The task is also different from detecting stance towards a specific target entity BIBREF4 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" In this task English tweets from conversation threads, each associated to a newsworthy event and the rumours around it, are provided as data. The goal is to determine whether a tweet in the thread is supporting, denying, querying, or commenting the original rumour which started the conversation. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"761d9d08c60bdf105f6874771c55f4d3e301147d"
],
"answer": [
{
"evidence": [
"We used the following affective resources relying on different emotion models.",
"Emolex: it contains 14,182 words associated with eight primary emotion based on the Plutchik model BIBREF10 , BIBREF11 .",
"EmoSenticNet(EmoSN): it is an enriched version of SenticNet BIBREF12 including 13,189 words labeled by six Ekman's basic emotion BIBREF13 , BIBREF14 .",
"Dictionary of Affect in Language (DAL): includes 8,742 English words labeled by three scores representing three dimensions: Pleasantness, Activation and Imagery BIBREF15 .",
"Affective Norms for English Words (ANEW): consists of 1,034 English words BIBREF16 rated with ratings based on the Valence-Arousal-Dominance (VAD) model BIBREF17 .",
"Linguistic Inquiry and Word Count (LIWC): this psycholinguistic resource BIBREF18 includes 4,500 words distributed into 64 emotional categories including positive (PosEMO) and negative (NegEMO)."
],
"extractive_spans": [],
"free_form_answer": "affective features provided by different emotion models such as Emolex, EmoSenticNet, Dictionary of Affect in Language, Affective Norms for English Words and Linguistics Inquiry and Word Count",
"highlighted_evidence": [
"We used the following affective resources relying on different emotion models.\n\nEmolex: it contains 14,182 words associated with eight primary emotion based on the Plutchik model BIBREF10 , BIBREF11 .\n\nEmoSenticNet(EmoSN): it is an enriched version of SenticNet BIBREF12 including 13,189 words labeled by six Ekman's basic emotion BIBREF13 , BIBREF14 .\n\nDictionary of Affect in Language (DAL): includes 8,742 English words labeled by three scores representing three dimensions: Pleasantness, Activation and Imagery BIBREF15 .\n\nAffective Norms for English Words (ANEW): consists of 1,034 English words BIBREF16 rated with ratings based on the Valence-Arousal-Dominance (VAD) model BIBREF17 .\n\nLinguistic Inquiry and Word Count (LIWC): this psycholinguistic resource BIBREF18 includes 4,500 words distributed into 64 emotional categories including positive (PosEMO) and negative (NegEMO)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"a91b81370c829f06c8067a357517b3934c08c21e"
],
"answer": [
{
"evidence": [
"Conversation Based Features",
"These features are devoted to exploit the peculiar characteristics of the dataset, which have a tree structure reflecting the conversation thread.",
"Text Similarity to Source Tweet: Jaccard Similarity of each tweet with its source tweet.",
"Text Similarity to Replied Tweet: the degree of similarity between the tweet with the previous tweet in the thread (the tweet is a reply to that tweet).",
"Tweet Depth: the depth value is obtained by counting the node from sources (roots) to each tweet in their hierarchy."
],
"extractive_spans": [
"Text Similarity to Source Tweet",
"Text Similarity to Replied Tweet",
"Tweet Depth"
],
"free_form_answer": "",
"highlighted_evidence": [
"Conversation Based Features\nThese features are devoted to exploit the peculiar characteristics of the dataset, which have a tree structure reflecting the conversation thread.\n\nText Similarity to Source Tweet: Jaccard Similarity of each tweet with its source tweet.\n\nText Similarity to Replied Tweet: the degree of similarity between the tweet with the previous tweet in the thread (the tweet is a reply to that tweet).\n\nTweet Depth: the depth value is obtained by counting the node from sources (roots) to each tweet in their hierarchy."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Is this an English-language dataset?",
"What affective-based features are used?",
"What conversation-based features are used?"
],
"question_id": [
"04a4b0c6c8bd4c170c93ea7ea1bf693965ef38f4",
"dbfce07613e6d0d7412165e14438d5f92ad4b004",
"b7e419d2c4e24c40b8ad0fae87036110297d6752"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Semeval-2017 Task 8 (A) dataset distribution.",
"Table 2: Results and comparison with state of the art",
"Table 3: Confusion Matrix",
"Table 4: Confusion Matrix on Balanced Dataset",
"Table 5: Ablation test on several feature sets."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png"
]
} | [
"What affective-based features are used?"
] | [
[
"1901.01911-Affective Based Features-4",
"1901.01911-Affective Based Features-3",
"1901.01911-Affective Based Features-5",
"1901.01911-Affective Based Features-2",
"1901.01911-Affective Based Features-1",
"1901.01911-Affective Based Features-6"
]
] | [
"affective features provided by different emotion models such as Emolex, EmoSenticNet, Dictionary of Affect in Language, Affective Norms for English Words and Linguistics Inquiry and Word Count"
] | 689 |
1911.05153 | Improving Robustness of Task Oriented Dialog Systems | Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results. | {
"paragraphs": [
[
"In computer vision, it is well known that otherwise competitive models can be \"fooled\" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them.",
"There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not.",
"Unrestricted adversarial examples BIBREF9 lift the constraint on the size of added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, that practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball from the input and, on the other hand, the output is strongly conditioned on the input and hence, not unrestricted.",
"Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems.",
"In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging.",
"Our contributions are: 1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and noisy autoencoder, and 2. Two techniques to improve upon a model's robustness – data augmentation using back-translation, and adversarial logit pairing loss. Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving the robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results."
],
[
"block = [text width=15em, text centered]",
"In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively. This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder compared to intent classification since the spans need to align as well.",
"We use the data provided by BIBREF13, which consists of task-oriented queries in weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, neither jointly nor at the beginning of the pipeline. If a query is not supported by the system but it is unambiguously part of the alarm or weather domains, they are marked as alarm/unsupported and weather/unsupported respectively."
],
[
"To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes.",
"Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence. We, however, do not use the original annotation for the perturbed sentences – instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section."
],
[
"We describe two methods of devising untargeted (not targeted towards a particular label) paraphrase generation to find a subset that dramatically reduce the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences."
],
[
"Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, a MT system is used to translate the original sentences to an auxiliary language and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are the variations of the original sentence. See Fig. FIGREF5. BIBREF11 which showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases and breaking the correctly predicted examples.",
"To increase diversity, we use two different MT systems and two different auxiliary languages - Czech (cs) and Spanish (es), to use with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs."
],
[
"Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data. At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations to the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences."
],
[
"For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations in the original sentences. Hence, we expect that the true adversarial examples would be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake. For example, “Cancel the alarm” and “Pause the alarm” may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens).",
"The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification where the disagreements are resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control in which new label a perturbed example would be sent to."
],
[
"We have shown adversarial examples from different sources alongside their original sentence in Table TABREF3. We observe that some patterns, such as addition of a definite article or gerund appear more often in the es test set which perhaps stems from the properties of the Spanish language (i.e., most nouns have an article and present simple/continuous tense are often interchangeable). On the other hand, there is more verbal diversity in the cs test set which may be because of the linguistic distance of Czech from English compared with Spanish. Moreover, we observe many imperative-to-declarative transformation in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom but that can tip them off into the meaningless realm more often too."
],
[
"A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged.",
"We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\\%$ accuracy over the test set.",
"The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little."
],
[
"In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that it improves the robustness of the model. Note that for our task we have to automatically label both sentence labels (intent) and word tags (slots) for such sentences.",
"The second method we propose is adding logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions with small perturbations."
],
[
"We generate back-translated data from the training data using pre-trained FB MT system. We keep the top 5 beams after the back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model errors. Since the sentence intents usually remain the same after paraphrasing for each paraphrase, we annotate its intent as the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets with the loss function being the original loss plus the loss on back-translated data weighted by 0.1 for which the accuracy on the clean dev set is still negligible. The model seemed to be quite insensitive against this weight, though and the clean dev accuracy was hurt by less than 1 point using weighing the augmented data equally as the original data. The accuracy over the clean test set using the augmented training data having Czech (cs) and Spanish (es) as the auxiliary languages are shown in Table TABREF8.",
"We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation, however we see that it also improves accuracy on sentences generated using seq2seq autoencoder. We discuss the results in more detail in the next section."
],
[
"BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples."
],
[
"Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In the paper, we study the effects when we pair the sentences that have the same annotations, i.e., same intent and same slot labels. Consider a batch $M$, with $m$ clean sentences. For each tuple of intent and slot labels, we identify corresponding sentences in the batch, $M_k$ and sample pairs of sentences. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and same-label slots of those pairs of sentences to have similar distributions:",
"where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\\lambda _{sf}$ is a hyper-parameter. We sum the above loss for all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, with the only difference being that we use learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both clean and adversarial test sets. Our hypothesis is that the logit smoothing resulted by this method prevents the model from using weakly correlated features BIBREF5, which could have helped the accuracy of both the clean and adversarial test sets."
],
[
"In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and impose the logit distributions to be similar. We generate the paraphrases using the FB MT system as in the previous section using es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrase $\\tilde{m}^{(i)}_k$, we add the following loss",
"",
"where $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term however, we need to know the position of the slots in the paraphrases in order to be matched with the original slots. We use self-training again to tag the slots in the paraphrased sentence. Then, we pair the logit vectors corresponding to the common labels found among the original and paraphrases slots left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e. matching the logit vectors corresponding to the intent and slots, can help the performance on the accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (using both the original-paraphrase and paraphrase-paraphrase pairs) for $\\lambda _a=0.01$."
],
[
"We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from that which was used at the training time. More importantly, even if the generation process is different altogether, that is, the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets.",
"This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred for adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets.",
"We observe that ALP results in similarly competitive performance on the adversarial test sets as using the data augmentation but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps with smoothing the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small. This is in contrast with adversarial defense mechanisms such as ALP BIBREF5 which makes the model regress much more on the clean test set.",
"We also combine ALP with the data augmentation technique that yields the highest accuracy on the adversarial test sets but incurs additional costs to the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise."
],
[
"Adversarial examples BIBREF4 refer to intentionally devised inputs by an adversary which causes the model's accuracy to make highly-confident but erroneous predictions, e.g. Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have shown to be transferable to other models which makes it possible to devise black-box attacks for a machine learning model by transferring from a known model BIBREF25, BIBREF1.",
"Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. Adversarial training BIBREF4 augments the training data with carefully picked perturbations during the training time, which is robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27.",
"BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain by perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples.",
"There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14.",
"In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and similar to BIBREF7 minimize the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task."
],
[
"In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss.",
"We first generate adversarial test sets using two methods, back-translation with two languages and sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Secondly, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets."
],
[
"Though the adversarial accuracy has significantly improved using the above techniques, there is still a huge gap between the adversarial and clean test accuracy. Exploring other techniques to augment the data as well as other methods to leverage the augmented data is left for future work. For example, using sampling at the decoding time BIBREF17 or conditioning the seq2seq model on structure BIBREF11 has shown to produce more diverse examples. On the other hand, using more novel techniques such as multi-task tri-training BIBREF32 to label the unlabeled data rather than the simple self-training may yield better performance. Moreover, augmenting the whole dataset through back-translation and keeping the top k beams is not practical for large datasets. Exploring more efficient augmentations, i.e., which sentences to augment and which back-translated beams to keep, and adapting techniques such as in BIBREF33 are also interesting research directions to pursue.",
"In this paper, we studied various ways of devising untargeted adversarial examples. This is in contrast with targeted attacks, which can perturb the original input data toward a particular label class. Encoding this information in the seq2seq model, e.g., feeding a one-hot encoded label vector, may deserve attention for future research."
]
],
"section_name": [
"Introduction",
"Task and Data",
"Robustness Evaluation",
"Robustness Evaluation ::: Automatically Generating Examples",
"Robustness Evaluation ::: Automatically Generating Examples ::: Back-translation",
"Robustness Evaluation ::: Automatically Generating Examples ::: Noisy Sequence Autoencoder",
"Robustness Evaluation ::: Annotation",
"Robustness Evaluation ::: Analysis",
"Base Model",
"Approaches to Improve Robustness",
"Approaches to Improve Robustness ::: Data Augmentation",
"Approaches to Improve Robustness ::: Model smoothing via Logit Pairing",
"Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Clean Logit Pairing",
"Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Adversarial Logit Pairing (ALP)",
"Results and Discussion",
"Related Work",
"Conclusion",
"Conclusion ::: Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"76ff67baf34aab9780aab1e66289a31eb4bb18a5"
],
"answer": [
{
"evidence": [
"The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little.",
"FLOAT SELECTED: Table 3: Accuracy over clean and adversarial test sets. Note that data augmentation and logit pairing loss decrease accuracy on clean test sets and increase accuracy on the adversarial test sets."
],
"extractive_spans": [],
"free_form_answer": "Data augmentation (es) improved Adv es by 20% comparing to baseline \nData augmentation (cs) improved Adv cs by 16.5% comparing to baseline\nData augmentation (cs+es) improved both Adv cs and Adv es by at least 10% comparing to baseline \nAll models show improvements over adversarial sets \n",
"highlighted_evidence": [
"The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg).",
"FLOAT SELECTED: Table 3: Accuracy over clean and adversarial test sets. Note that data augmentation and logit pairing loss decrease accuracy on clean test sets and increase accuracy on the adversarial test sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e3272e2e3206192c46a4c77f50cf255c075e9475"
],
"answer": [
{
"evidence": [
"To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes."
],
"extractive_spans": [
"we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. ",
"We use two approaches described in literature: back-translation and noisy sequence autoencoder."
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big is performance improvement proposed methods are used?",
"How authors create adversarial test set to measure model robustness?"
],
"question_id": [
"234ccc1afcae4890e618ff2a7b06fc1e513ea640",
"4bd894c365d85e20753d9d2cb6edebb8d6f422e9"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example sentence and its annotation",
"Table 1: Pairs of sentences whose oracle labels are same but the base model predictions are different. Note that small changes to an sentence results in different model predictions.",
"Figure 2: An example of back-translation. Translating the utterance back into the original language (English) but via an auxiliary language (Spanish) results in paraphrased variations in the beam.",
"Table 2: Adversarial examples alongside their original sentence. Note that we choose sentences on which the base model predicts intent differently than the original sentence.",
"Table 3: Accuracy over clean and adversarial test sets. Note that data augmentation and logit pairing loss decrease accuracy on clean test sets and increase accuracy on the adversarial test sets."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"3-Table2-1.png",
"5-Table3-1.png"
]
} | [
"How big is performance improvement proposed methods are used?"
] | [
[
"1911.05153-5-Table3-1.png",
"1911.05153-Base Model-2"
]
] | [
"Data augmentation (es) improved Adv es by 20% comparing to baseline \nData augmentation (cs) improved Adv cs by 16.5% comparing to baseline\nData augmentation (cs+es) improved both Adv cs and Adv es by at least 10% comparing to baseline \nAll models show improvements over adversarial sets \n"
] | 693 |
1811.02906 | Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter | We investigate different strategies for automatic offensive language classification on German Twitter data. For this, we employ a sequentially combined BiLSTM-CNN neural network. Based on this model, three transfer learning tasks to improve the classification performance with background knowledge are tested. We compare 1. Supervised category transfer: social media data annotated with near-offensive language categories, 2. Weakly-supervised category transfer: tweets annotated with emojis they contain, 3. Unsupervised category transfer: tweets annotated with topic clusters obtained by Latent Dirichlet Allocation (LDA). Further, we investigate the effect of three different strategies to mitigate negative effects of 'catastrophic forgetting' during transfer learning. Our results indicate that transfer learning in general improves offensive language detection. Best results are achieved from pre-training our model on the unsupervised topic clustering of tweets in combination with thematic user cluster information. | {
"paragraphs": [
[
"User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content.",
"Although this topic now has been studied for more than two decades, so far there has been little work on offensive language detection for German social media content. Regarding this, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question how to apply transfer learning for neural network-based text classification systems.",
"In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the increase of right-wing populism, and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments in German Facebook is authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that already such small groups are able to severely disturb social media debates for large audiences.",
"From the perspective of natural language processing, the task of automatic detection of offensive language in social media is complex due to three major reasons. First, we can expect `atypical' language data due to incorrect spellings, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive keyterms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence and discourse level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but still sentence-level semantics and context or `world knowledge' remains important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task."
],
[
"Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' have been published already during the 1990s BIBREF4 . An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning recently has gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea to pre-train a neural network model for an actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies."
],
[
"Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'.",
"The training data contains 5,008 manually labeled tweets sampled from Twitter from selected accounts that are suspected to contain a high share of offensive language. Manual inspection reveals a high share of political tweets among those labeled as offensive. These tweets range from offending single Twitter users, politicians and parties to degradation of whole social groups such as Muslims, migrants or refugees. The test data contains 3,532 tweets. To create a realistic scenario of truly unseen test data, training and test set are sampled from disjoint user accounts. No standard validation set is provided for the task. To optimize hyper-parameters of our classification models and allow for early stopping to prevent the neural models from overfitting, we created our own validation set. For this, we used the last 808 examples from the provided training set. The remaining first 4,200 examples were used to train our models."
],
[
"Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. For this, we use the following labeled as well as unlabeled datasets.",
"A recently published resource of German language social media data has been published by Schabus2017. Among other things, the dataset contains 11,773 labeled user comments posted to the Austrian newspaper website `Der Standard'. Comments have not been annotated for offensive language, but for categories such as positive/negative sentiment, off-topic, inappropriate or discriminating.",
"As a second resource, we use a background corpus of German tweets that were collected using the Twitter streaming API from 2011 to 2017. Since the API provides a random fraction of all tweets (1%), language identification is performed using `langid.py' BIBREF6 to filter for German tweets. For all years combined, we obtain about 18 million unlabeled German tweets from the stream, which can be used as a large, in-domain background corpus.",
"For a transfer learning setup, we need to specify a task to train the model and prepare the corresponding dataset. We compare the following three methods.",
"As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from on the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to the Task 1 of GermEval 2018)",
"Following the approach of Felbo.2017, we constructed a weakly-supervised training dataset from our Twitter background corpus. From all tweets posted between 2013 and 2017, we extract those containing at least one emoji character. In the case of several emojis in one tweet, we duplicate the tweet for each unique emoji type. Emojis are then removed from the actual tweets and treated as a label to predict by the neural model. This results in a multi-class classification task to predict the right emoji out of 1,297 different ones. Our training dataset contains 1,904,330 training examples.",
"As a final method, we create a training data set for transfer learning in a completely unsupervised manner. For this, we compute an LDA clustering with INLINEFORM0 topics on 10 million tweets sampled from 2016 and 2017 from our Twitter background corpus containing at least two meaningful words (i.e. alphanumeric sequences that are not stopwords, URLs or user mentions). Tweets also have been deduplicated before sampling. From the topic-document distribution of the resulting LDA model, we determined the majority topic id for each tweet as a target label for prediction during pre-training our neural model. Pre-training of the neural model was conducted on the 10 million tweets with batch size 128 for 10 epochs."
],
[
"In the following section, we describe one linear classification model in combination with specifically engineered features, which we use as a baseline for the classification task. We further introduce a neural network model as a basis for our approach to transfer learning. This model achieves the highest performance for offensive language detection, as compared to our baseline."
],
[
"The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before.",
"We induce token features based on the Twitter background corpus. Because tweets are usually very short, they are not an optimal source to obtain good estimates on inverse document frequencies (IDF). To obtain a better feature weighting, we calculate IDF scores based on the Twitter corpus combined with an in-house product review dataset (cf. ibid.). From this combined corpus, we compute the IDF scores and 300-dimensional word embeddings BIBREF9 for all contained features. Following Ruppert2017, we use the IDF scores to obtain the highest-weighted terms per category in the training data. Here, we obtain words like Staatsfunk, Vasall (state media, vassal) or deutschlandfeindlichen (Germany-opposing) for the category `abuse' and curse words for `insult'. Further, IDF scores are used to weight the word vectors of all terms in a tweet. Additionally, we employ a polarity lexicon and perform lexical expansion on it to obtain new entries from our in-domain background corpus that are weighted on a `positive–negative' continuum. Lexical expansion is based on distributional word similarity as described in Kumar.2016."
],
[
"For transfer learning, we rely on a neural network architecture implemented in the Keras framework for Python. Our model (see Fig. FIGREF15 ) combines a bi-directional LSTM layer BIBREF1 with 100 units followed by three parallel convolutional layers (CNN), each with a different kernel size INLINEFORM0 , and a filter size 200. The outputs of the three CNN blocks are max-pooled globally and concatenated. Finally, features encoded by the CNN blocks are fed into a dense layer with 100 units, followed by the prediction layer. Except for this final layer which uses Softmax activation, we rely on LeakyReLU activation BIBREF10 for the other model layers. For regularization, dropout is applied to the LSTM layer and to each CNN block after global max-pooling (dropout rate 0.5). For training, we use the Nesterov Adam optimization and categorical cross-entropy loss with a learning rate of 0.002.",
"The intuition behind this architecture is that the recurrent LSTM layer can serve as a feature encoder for general language characteristics from sequences of semantic word embeddings. The convolutional layers on top of this can then encode category related features delivered by the LSTM while the last dense layers finally fine-tune highly category-specific features for the actual classification task.",
"As input, we feed 300-dimensional word embeddings obtained from fastText BIBREF11 into our model. Since fastText also makes use of sub-word information (character n-grams), it has the great advantage that it can provide semantic embeddings also for words that have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by mikolov2018advances. First, we unify all Twitter-typical user mentions (`@username') and URLs into a single string representation and reduce all characters to lower case. Then, we split tweets into tokens at boundaries of changing character classes. As an exception, sequences of emoji characters are split into single character tokens. Finally, for each token, an embedding vector is obtained from the fastText model.",
"For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model."
],
[
"As mentioned earlier, we investigate potential strategies for transfer learning to achieve optimal performance. For this, we compare three different methods to pre-train our model with background data sets. We also compare three different strategies to combat `catastrophic forgetting' during training on the actual target data."
],
[
"Once the neural model has been pre-trained on the above-specified targets and corresponding datasets, we can apply it for learning our actual target task. For this, we need to remove the final prediction layer of the pre-trained model (i.e. Layer 4 in Fig. FIGREF15 ), and add a new dense layer for prediction of one of the actual label sets (two for Task 1, four for Task 2). The training for the actual GermEval tasks is conducted with batch size 32 for up to 50 epochs. To prevent the aforementioned effect of forgetting pre-trained knowledge during this task-specific model training, we evaluate three different strategies.",
"In Howard.2018, gradual unfreezing of pre-trained model weights is proposed as one strategy to mitigate forgetting. The basic idea is to initially freeze all pre-trained weights of the neural model and keep only the newly added last layer trainable (i.e. Layer 4 in Fig. FIGREF15 ). After training that last layer for one epoch on the GermEval training data, the next lower frozen layer is unfrozen and training will be repeated for another epoch. This will be iterated until all layers (4 to 1) are unfrozen.",
"Following the approach of Felbo.2017, we do not iteratively unfreeze all layers of the model, but only one at a time. First, the newly added final prediction layer is trained while all other model weights remain frozen. Training is conducted for up to 50 epochs. The best performing model during these epochs with respect to our validation set is then used in the next step of fine-tuning the pre-trained model layers. For the bottom-up strategy, we unfreeze the lowest layer (1) containing the most general knowledge first, then we continue optimization with the more specific layers (2 and 3) one after the other. During fine-tuning of each single layer, all other layers remain frozen and training is performed for 50 epochs selecting the best performing model at the end of each layer optimization. In a final round of fine-tuning, all layers are unfrozen.",
"This proceeding is similar the one described above, but inverts the order of unfreezing single layers from top to bottom sequentially fine-tuning layers 4, 3, 2, 1 individually, and all together in a final round.",
"All strategies are compared to the baseline of no freezing of model weights, but training all layers at once directly after pre-training with one of the three transfer datasets."
],
[
"Since there is no prior state-of-the-art for the GermEval Shared Task 2018 dataset, we evaluate the performance of our neural model compared to the baseline SVM architecture. We further compare the different tasks and strategies for transfer learning introduced above and provide some first insights on error analysis."
],
[
"In this paper, we presented our neural network text classification approach for offensive language detection on the GermEval 2018 Shared Task dataset. We used a combination of BiLSTM and CNN architectures for learning. As task-specific adaptations of standard text classification, we evaluated different datasets and strategies for transfer learning, as well as additional features obtained from users addressed in tweets. The coarse-grained offensive language detection could be realized to a much better extent than the fine-grained task of separating four different categories of insults (accuracy 77.5% vs. 73.7%). From our experiments, four main messages can be drawn:",
"The fact that our unsupervised, task-agnostic pre-training by LDA topic transfer performed best suggests that this approach will also contribute beneficially to other text classification tasks such as sentiment analysis. Thus, in future work, we plan to evaluate our approach with regard to such other tasks. We also plan to evaluate more task-agnostic approaches for transfer learning, for instance employing language modeling as a pre-training task."
]
],
"section_name": [
"Introduction",
"Related Work",
"GermEval 2018 Shared Task",
"Background Knowledge",
"Text Classification",
"SVM baseline:",
"BiLSTM-CNN for Text Classification",
"Transfer Learning",
"Transfer Learning Strategies",
"Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ca550118493e1ac68127df6a071a437793d8aebd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Transfer learning performance (Task 1)"
],
"extractive_spans": [],
"free_form_answer": "In task 1 best transfer learning strategy improves F1 score by 4.4% and accuracy score by 3.3%, in task 2 best transfer learning strategy improves F1 score by 2.9% and accuracy score by 1.7%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Transfer learning performance (Task 1)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"927fd843c5f73dc2b826d25ce555bd0021e5f811"
],
"answer": [
{
"evidence": [
"The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before."
],
"extractive_spans": [],
"free_form_answer": "SVM",
"highlighted_evidence": [
"The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"77c708508fd38d6e143235cf6fed37596aa68372"
],
"answer": [
{
"evidence": [
"For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model."
],
"extractive_spans": [],
"free_form_answer": "Clusters of Twitter user ids from accounts of American or German political actors, musicians, media websites or sports club",
"highlighted_evidence": [
"Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"930c4bbeecefdbe76d98f4fce851def5630e08ae"
],
"answer": [
{
"evidence": [
"As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from on the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to the Task 1 of GermEval 2018)"
],
"extractive_spans": [
"inappropriate",
"discriminating"
],
"free_form_answer": "",
"highlighted_evidence": [
"As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"By how much does transfer learning improve performance on this task?",
"What baseline is used?",
"What topic clusters are identified by LDA?",
"What are the near-offensive language categories?"
],
"question_id": [
"4704cbb35762d0172f5ac6c26b67550921567a65",
"38a5cc790f66a7362f91d338f2f1d78f48c1e252",
"0da6cfbc8cb134dc3d247e91262f5050a2200664",
"9003c7041d3d2addabc2c112fa2c7efe5fab493c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: BiLSTM-CNN model architecture. We use a combination of recurrent and convolutional cells for learning. As input, we rely on (sub-)word embeddings. The final architecture also includes clustering information obtained from Twitter user ids. Dotted lines indicate dropout with rate 0.5 between layers. The last dense layer contains n units for prediction of the probability of each of the n classification labels per task.",
"Table 1: Examples of Twitter user clusters",
"Table 2: Transfer learning performance (Task 1)",
"Table 3: Transfer learning performance (Task 2)",
"Table 4: Offensive language detection performance % (Task 1)",
"Table 5: Offensive language detection performance % (Task 2)"
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png"
]
} | [
"By how much does transfer learning improve performance on this task?",
"What baseline is used?",
"What topic clusters are identified by LDA?"
] | [
[
"1811.02906-6-Table2-1.png"
],
[
"1811.02906-SVM baseline:-0"
],
[
"1811.02906-BiLSTM-CNN for Text Classification-3"
]
] | [
"In task 1 best transfer learning strategy improves F1 score by 4.4% and accuracy score by 3.3%, in task 2 best transfer learning strategy improves F1 score by 2.9% and accuracy score by 1.7%",
"SVM",
"Clusters of Twitter user ids from accounts of American or German political actors, musicians, media websites or sports club"
] | 695 |
1903.09588 | Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence | Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. | {
"paragraphs": [
[
"Sentiment analysis (SA) is an important task in natural language processing. It solves the computational processing of opinions, emotions, and subjectivity - sentiment is collected, analyzed and summarized. It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services. The underlying assumption of this task is that the entire text has an overall polarity.",
"However, the users' comments may contain different aspects, such as: “This book is a hardcover version, but the price is a bit high.\" The polarity in `appearance' is positive, and the polarity regarding `price' is negative. Aspect-based sentiment analysis (ABSA) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 aims to identify fine-grained polarity towards a specific aspect. This task allows users to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality.",
"Both SA and ABSA are sentence-level or document-level tasks, but one comment may refer to more than one object, and sentence-level tasks cannot handle sentences with multiple targets. Therefore, BIBREF4 introduce the task of targeted aspect-based sentiment analysis (TABSA), which aims to identify fine-grained opinion polarity towards a specific aspect associated with a given target. The task can be divided into two steps: (1) the first step is to determine the aspects associated with each target; (2) the second step is to resolve the polarity of aspects to a given target.",
"The earliest work on (T)ABSA relied heavily on feature engineering BIBREF5 , BIBREF6 , and subsequent neural network-based methods BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 achieved higher accuracy. Recently, BIBREF12 incorporate useful commonsense knowledge into a deep neural network to further enhance the result of the model. BIBREF13 optimize the memory network and apply it to their model to better capture linguistic structure.",
"More recently, the pre-trained language models, such as ELMo BIBREF14 , OpenAI GPT BIBREF15 , and BERT BIBREF16 , have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in QA and NLI. However, there is not much improvement in (T)ABSA task with the direct use of the pre-trained BERT model (see Table TABREF19 ). We think this is due to the inappropriate use of the pre-trained BERT model.",
"Since the input representation of BERT can represent both a single text sentence and a pair of text sentences, we can convert (T)ABSA into a sentence-pair classification task and fine-tune the pre-trained BERT.",
"In this paper, we investigate several methods of constructing an auxiliary sentence and transform (T)ABSA into a sentence-pair classification task. We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on (T)ABSA task. We also conduct a comparative experiment to verify that the classification based on a sentence-pair is better than the single-sentence classification with fine-tuned BERT, which means that the improvement is not only from BERT but also from our method. In particular, our contribution is two-fold:",
"1. We propose a new solution of (T)ABSA by converting it to a sentence-pair classification task.",
"2. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets."
],
[
"In this section, we describe our method in detail."
],
[
"In TABSA, a sentence INLINEFORM0 usually consists of a series of words: INLINEFORM1 , and some of the words INLINEFORM2 are pre-identified targets INLINEFORM3 , following BIBREF4 , we set the task as a 3-class classification problem: given the sentence INLINEFORM4 , a set of target entities INLINEFORM5 and a fixed aspect set INLINEFORM6 , predict the sentiment polarity INLINEFORM7 over the full set of the target-aspect pairs INLINEFORM8 . As we can see in Table TABREF6 , the gold standard polarity of (LOCATION2, price) is negative, while the polarity of (LOCATION1, price) is none.",
"In ABSA, the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting is equivalent to learning subtasks 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity) of SemEval-2014 Task 4 at the same time."
],
[
"For simplicity, we mainly describe our method with TABSA as an example.",
"We consider the following four methods to convert the TABSA task into a sentence pair classification task:",
"The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is “what do you think of the safety of location - 1 ?\"",
"For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCATION1, safety) pair as an example: the auxiliary sentence is: “location - 1 - safety\".",
"For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as “the polarity of the aspect safety of location - 1 is positive\", “the polarity of the aspect safety of location - 1 is negative\", “the polarity of the aspect safety of location - 1 is none\". We use the probability value of INLINEFORM1 as the matching score. For a target-aspect pair which generates three sequences ( INLINEFORM2 ), we take the class of the sequence with the highest matching score for the predicted category.",
"The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: “location - 1 - safety - positive\", “location - 1 - safety - negative\", and “location - 1 - safety - none\".",
"After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table TABREF19 , this is a necessary operation that can significantly improve the experimental results of the TABSA task."
],
[
"BERT BIBREF16 is a new language representation model, which uses bidirectional transformers to pre-train a large corpus, and fine-tunes the pre-trained model on other tasks. We fine-tune the pre-trained BERT model on TABSA task. Let's take a brief look at the input representation and the fine-tuning procedure.",
"The input representation of the BERT can explicitly represent a pair of text sentences in a sequence of tokens. For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. For classification tasks, the first word of each sequence is a unique classification embedding ([CLS]).",
"BERT fine-tuning is straightforward. To obtain a fixed-dimensional pooled representation of the input sequence, we use the final hidden state (i.e., the output of the transformer) of the first token as the input. We denote the vector as INLINEFORM0 . Then we add a classification layer whose parameter matrix is INLINEFORM1 , where INLINEFORM2 is the number of categories. Finally, the probability of each category INLINEFORM3 is calculated by the softmax function INLINEFORM4 .",
"BERT for single sentence classification tasks. Suppose the number of target categories are INLINEFORM0 and aspect categories are INLINEFORM1 . We consider TABSA as a combination of INLINEFORM2 target-aspect-related sentiment classification problems, first classifying each sentiment classification problem, and then summarizing the results obtained. For ABSA, We fine-tune pre-trained BERT model to train INLINEFORM3 classifiers for all aspects and then summarize the results.",
"BERT for sentence pair classification tasks. Based on the auxiliary sentence constructed in Section SECREF7 , we use the sentence-pair classification approach to solve (T)ABSA. Corresponding to the four ways of constructing sentences, we name the models: BERT-pair-QA-M, BERT-pair-NLI-M, BERT-pair-QA-B, and BERT-pair-NLI-B."
],
[
"We evaluate our method on the SentiHood BIBREF4 dataset, which consists of 5,215 sentences, 3,862 of which contain a single target, and the remainder multiple targets. Each sentence contains a list of target-aspect pairs INLINEFORM0 with the sentiment polarity INLINEFORM1 . Ultimately, given a sentence INLINEFORM2 and the target INLINEFORM3 in the sentence, we need to:",
"",
"",
"(1) detect the mention of an aspect INLINEFORM0 for the target INLINEFORM1 ;",
"(2) determine the positive or negative sentiment polarity INLINEFORM0 for detected target-aspect pairs.",
"We also evaluate our method on SemEval-2014 Task 4 BIBREF1 dataset for aspect-based sentiment analysis. The only difference from the SentiHood is that the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting allows us to jointly evaluate subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity)."
],
[
"We use the pre-trained uncased BERT-base model for fine-tuning. The number of Transformer blocks is 12, the hidden layer size is 768, the number of self-attention heads is 12, and the total number of parameters for the pre-trained model is 110M. When fine-tuning, we keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 24.",
"",
""
],
[
"We compare our model with the following models:",
"LR BIBREF4 : a logistic regression classifier with n-gram and pos-tag features.",
"LSTM-Final BIBREF4 : a biLSTM model with the final state as a representation.",
"LSTM-Loc BIBREF4 : a biLSTM model with the state associated with the target position as a representation.",
"LSTM+TA+SA BIBREF12 : a biLSTM model which introduces complex target-level and sentence-level attention mechanisms.",
"SenticLSTM BIBREF12 : an upgraded version of the LSTM+TA+SA model which introduces external information from SenticNet BIBREF17 .",
"Dmu-Entnet BIBREF13 : a bi-directional EntNet BIBREF18 with external “memory chains” with a delayed memory update mechanism to track entities.",
"During the evaluation of SentiHood, following BIBREF4 , we only consider the four most frequently seen aspects (general, price, transit-location, safety). When evaluating the aspect detection, following BIBREF12 , we use strict accuracy and Macro-F1, and we also report AUC. In sentiment classification, we use accuracy and macro-average AUC as the evaluation indices.",
"Results on SentiHood are presented in Table TABREF19 . The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLstm and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively.",
"However, BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining 9.4 macro-average F1 and 2.6 accuracies improvement over Dmu-Entnet. Overall, the performance of the four BERT-pair models is close. It is worth noting that BERT-pair-NLI models perform relatively better on aspect detection, while BERT-pair-QA models perform better on sentiment classification. Also, the BERT-pair-QA-B and BERT-pair-NLI-B models can achieve better AUC values on sentiment classification than the other models."
],
[
"The benchmarks for SemEval-2014 Task 4 are the two best performing systems in BIBREF1 and ATAE-LSTM BIBREF8 . When evaluating SemEval-2014 Task 4 subtask 3 and subtask 4, following BIBREF1 , we use Micro-F1 and accuracy respectively.",
"Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings."
],
[
"Why is the experimental result of the BERT-pair model so much better? On the one hand, we convert the target and aspect information into an auxiliary sentence, which is equivalent to exponentially expanding the corpus. A sentence INLINEFORM0 in the original data set will be expanded into INLINEFORM1 in the sentence pair classification task. On the other hand, it can be seen from the amazing improvement of the BERT model on the QA and NLI tasks BIBREF16 that the BERT model has an advantage in dealing with sentence pair classification tasks. This advantage comes from both unsupervised masked language model and next sentence prediction tasks.",
"TABSA is more complicated than SA due to additional target and aspect information. Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth. However, when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized. Our approach is not limited to TABSA, and this construction method can be used for other similar tasks. For ABSA, we can use the same approach to construct the auxiliary sentence with only aspects.",
"In BERT-pair models, BERT-pair-QA-B and BERT-pair-NLI-B achieve better AUC values on sentiment classification, probably because of the modeling of label information."
],
[
"In this paper, we constructed an auxiliary sentence to transform (T)ABSA from a single sentence classification task to a sentence pair classification task. We fine-tuned the pre-trained BERT model on the sentence pair classification task and obtained the new state-of-the-art results. We compared the experimental results of single sentence classification and sentence pair classification based on BERT fine-tuning, analyzed the advantages of sentence pair classification, and verified the validity of our conversion method. In the future, we will apply this conversion method to other similar tasks."
],
[
"We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by Shanghai Municipal Science and Technology Commission (No. 16JC1420401 and 17JC1404100), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201)."
]
],
"section_name": [
"Introduction",
"Methodology",
"Task description",
"Construction of the auxiliary sentence",
"Fine-tuning pre-trained BERT",
"Datasets",
"Hyperparameters",
"Exp-I: TABSA",
"Exp-II: ABSA",
"Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"78abd9320095515f99542d027e4c9ab2b9e0c6ae"
],
"answer": [
{
"evidence": [
"Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings.",
"FLOAT SELECTED: Table 4: Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014).",
"FLOAT SELECTED: Table 5: Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014), NRCCanada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016). “-” means not reported."
],
"extractive_spans": [],
"free_form_answer": "On subtask 3 best proposed model has F1 score of 92.18 compared to best previous F1 score of 88.58.\nOn subtask 4 best proposed model has 85.9, 89.9 and 95.6 compared to best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity.",
"highlighted_evidence": [
"Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings.",
"FLOAT SELECTED: Table 4: Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014).",
"FLOAT SELECTED: Table 5: Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014), NRCCanada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016). “-” means not reported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"aae9c33d07b6165afae55f1fd5b7758f616f3d92"
],
"answer": [
{
"evidence": [
"Construction of the auxiliary sentence",
"For simplicity, we mainly describe our method with TABSA as an example.",
"We consider the following four methods to convert the TABSA task into a sentence pair classification task:",
"The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is “what do you think of the safety of location - 1 ?\"",
"For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCATION1, safety) pair as an example: the auxiliary sentence is: “location - 1 - safety\".",
"For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as “the polarity of the aspect safety of location - 1 is positive\", “the polarity of the aspect safety of location - 1 is negative\", “the polarity of the aspect safety of location - 1 is none\". We use the probability value of INLINEFORM1 as the matching score. For a target-aspect pair which generates three sequences ( INLINEFORM2 ), we take the class of the sequence with the highest matching score for the predicted category.",
"The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: “location - 1 - safety - positive\", “location - 1 - safety - negative\", and “location - 1 - safety - none\".",
"After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table TABREF19 , this is a necessary operation that can significantly improve the experimental results of the TABSA task."
],
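The four auxiliary-sentence construction methods quoted in the evidence above can be reproduced with a small helper. This is an illustrative reconstruction, not the authors' released code: the templates follow the quoted (LOCATION1, safety) examples, the surface form of the target (e.g. "location - 1") is assumed to be given, and wording for other pairs is an extrapolation.

```python
# Helper reconstructing the four auxiliary-sentence templates quoted above for a
# (target, aspect) pair. Templates mirror the quoted (LOCATION1, safety) examples,
# so the exact wording for other pairs is an assumption.
def auxiliary_sentences(target, aspect, labels=("positive", "negative", "none")):
    # target is assumed to already be in surface form, e.g. "location - 1"
    return {
        "QA-M": f"what do you think of the {aspect} of {target} ?",
        "NLI-M": f"{target} - {aspect}",
        "QA-B": [f"the polarity of the aspect {aspect} of {target} is {l}" for l in labels],
        "NLI-B": [f"{target} - {aspect} - {l}" for l in labels],
    }
```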
"extractive_spans": [
"The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same.",
"For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler.",
"For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution",
"auxiliary sentence changes from a question to a pseudo-sentence"
],
"free_form_answer": "",
"highlighted_evidence": [
"Construction of the auxiliary sentence\nFor simplicity, we mainly describe our method with TABSA as an example.\n\nWe consider the following four methods to convert the TABSA task into a sentence pair classification task:\n\nThe sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is “what do you think of the safety of location - 1 ?\"\n\nFor the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCATION1, safety) pair as an example: the auxiliary sentence is: “location - 1 - safety\".\n\nFor QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as “the polarity of the aspect safety of location - 1 is positive\", “the polarity of the aspect safety of location - 1 is negative\", “the polarity of the aspect safety of location - 1 is none\". We use the probability value of INLINEFORM1 as the matching score. For a target-aspect pair which generates three sequences ( INLINEFORM2 ), we take the class of the sequence with the highest matching score for the predicted category.\n\nThe difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: “location - 1 - safety - positive\", “location - 1 - safety - negative\", and “location - 1 - safety - none\".\n\nAfter we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table TABREF19 , this is a necessary operation that can significantly improve the experimental results of the TABSA task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How much do they outperform previous state-of-the-art?",
"How do they generate the auxiliary sentence?"
],
"question_id": [
"e9d9bb87a5c4faa965ceddd98d8b80d4b99e339e",
"3554ac92d4f2d00dbf58f7b4ff2b36a852854e95"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT Sentiment Analysis",
"BERT Sentiment Analysis"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: An example of SentiHood dataset.",
"Table 2: The construction methods. Due to limited space, we use the following abbreviations: S.P. for sentiment polarity, w/o for without, and w/ for with.",
"Table 3: Performance on SentiHood dataset. We boldface the score with the best performance across all models. We use the results reported in Saeidi et al. (2016), Ma et al. (2018) and Liu et al. (2018). “-” means not reported.",
"Table 4: Test set results for Semeval-2014 task 4 Subtask 3: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014) and NRC-Canada (Kiritchenko et al., 2014).",
"Table 5: Test set accuracy (%) for Semeval-2014 task 4 Subtask 4: Aspect Category Polarity. We use the results reported in XRCE (Brun et al., 2014), NRCCanada (Kiritchenko et al., 2014) and ATAE-LSTM (Wang et al., 2016). “-” means not reported."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"How much do they outperform previous state-of-the-art?"
] | [
[
"1903.09588-Exp-II: ABSA-1",
"1903.09588-5-Table4-1.png",
"1903.09588-5-Table5-1.png"
]
] | [
"On subtask 3 best proposed model has F1 score of 92.18 compared to best previous F1 score of 88.58.\nOn subtask 4 best proposed model has 85.9, 89.9 and 95.6 compared to best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity."
] | 696 |
1804.05868 | Universal Dependency Parsing for Hindi-English Code-switching | Code-switching is a phenomenon of mixing grammatical structures of two or more languages under varied social constraints. The code-switching data differ so radically from the benchmark corpora used in NLP community that the application of standard technologies to these data degrades their performance sharply. Unlike standard corpora, these data often need to go through additional processes such as language identification, normalization and/or back-transliteration for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of code-switching data of Hindi and English multilingual speakers from Twitter. We present a treebank of Hindi-English code-switching tweets under Universal Dependencies scheme and propose a neural stacking model for parsing that efficiently leverages part-of-speech tag and syntactic tree annotations in the code-switching treebank and the preexisting Hindi and English treebanks. We also present normalization and back-transliteration models with a decoding process tailored for code-switching data. Results show that our neural stacking parser is 1.5% LAS points better than the augmented parsing model and our decoding process improves results by 3.8% LAS points over the first-best normalization and/or back-transliteration. | {
"paragraphs": [
[
"Code-switching (henceforth CS) is the juxtaposition, within the same speech utterance, of grammatical units such as words, phrases, and clauses belonging to two or more different languages BIBREF0 . The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors BIBREF1 . Moreover, code-switching is mostly prominent in colloquial language use in daily conversations, both online and offline.",
"Most of the benchmark corpora used in NLP for training and evaluation are based on edited monolingual texts which strictly adhere to the norms of a language related, for example, to orthography, morphology, and syntax. Social media data in general and CS data, in particular, deviate from these norms implicitly set forth by the choice of corpora used in the community. This is the reason why the current technologies often perform miserably on social media data, be it monolingual or mixed language data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . CS data offers additional challenges over the monolingual social media data as the phenomenon of code-switching transforms the data in many ways, for example, by creating new lexical forms and syntactic structures by mixing morphology and syntax of two languages making it much more diverse than any monolingual corpora BIBREF4 . As the current computational models fail to cater to the complexities of CS data, there is often a need for dedicated techniques tailored to its specific characteristics.",
"Given the peculiar nature of CS data, it has been widely studied in linguistics literature BIBREF8 , BIBREF0 , BIBREF1 , and more recently, there has been a surge in studies concerning CS data in NLP as well BIBREF9 , BIBREF9 , BIBREF3 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Besides the individual computational works, a series of shared-tasks and workshops on preprocessing and shallow syntactic analysis of CS data have also been conducted at multiple venues such as Empirical Methods in NLP (EMNLP 2014 and 2016), International Conference on NLP (ICON 2015 and 2016) and Forum for Information Retrieval Evaluation (FIRE 2015 and 2016). Most of these works have attempted to address preliminary tasks such as language identification, normalization and/or back-transliteration as these data often need to go through these additional processes for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of Hindi-English code-switching data of multilingual Indian speakers from Twitter. Hindi-English code-switching presents an interesting scenario for the parsing community. Mixing among typologically diverse languages will intensify structural variations which will make parsing more challenging. For example, there will be many sentences containing: (1) both SOV and SVO word orders, (2) both head-initial and head-final genitives, (3) both prepositional and postpositional phrases, etc. More importantly, none among the Hindi and English treebanks would provide any training instance for these mixed structures within individual sentences. In this paper, we present the first code-switching treebank that provides syntactic annotations required for parsing mixed-grammar syntactic structures. Moreover, we present a parsing pipeline designed explicitly for Hindi-English CS data. The pipeline comprises of several modules such as a language identification system, a back-transliteration system, and a dependency parser. The gist of these modules and our overall research contributions are listed as follows:"
],
[
"As preliminary steps before parsing of CS data, we need to identify the language of tokens and normalize and/or back-transliterate them to enhance the parsing performance. These steps are indispensable for processing CS data and without them the performance drops drastically as we will see in Results Section. We need normalization of non-standard word forms and back-transliteration of Romanized Hindi words for addressing out-of-vocabulary problem, and lexical and syntactic ambiguity introduced due to contracted word forms. As we will train separate normalization and back-transliteration models for Hindi and English, we need language identification for selecting which model to use for inference for each word form separately. Moreover, we also need language information for decoding best word sequences."
],
[
"For language identification task, we train a multilayer perceptron (MLP) stacked on top of a recurrent bidirectional LSTM (Bi-LSTM) network as shown in Figure \"Results\" .",
" skip=0.5em figureLanguage identification network ",
"We represent each token by a concatenated vector of its English embedding, back-transliterated Hindi embedding, character Bi-LSTM embedding and flag embedding (English dictionary flag and word length flag with length bins of 0-3, 4-6, 7-10, and 10-all). These concatenated vectors are passed to a Bi-LSTM network to generate a sequence of hidden representations which encode the contextual information spread across the sentence. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the language tags. We train the network on our CS training set concatenated with the data set provided in ICON 2015 shared task (728 Facebook comments) on language identification and evaluate it on the datasets from bhat-EtAl:2017:EACLshort. We achieved the state-of-the-art performance on both development and test sets BIBREF13 . The results are shown in Table \"Results\" .",
" skip=0.5em tableLanguage Identification results on CS test set. "
],
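The language-identification architecture described above can be approximated with a short sketch. The original system is implemented in DyNet; the PyTorch version below is only an illustration. The Bi-LSTM, character Bi-LSTM and MLP sizes follow the Hyperparameters section, while the flag-embedding size and the vocabulary sizes are assumptions not taken from the paper.

```python
# Illustrative PyTorch sketch of the language-identification network described
# above (the paper's implementation uses DyNet). Vocabulary sizes and the
# flag-embedding dimension are placeholder assumptions.
import torch
import torch.nn as nn

class LanguageIdentifier(nn.Module):
    def __init__(self, en_vocab, hi_vocab, char_vocab, n_flags, n_tags,
                 word_dim=64, char_dim=32, hidden_dim=64, mlp_dim=64):
        super().__init__()
        self.en_emb = nn.Embedding(en_vocab, word_dim)    # English embedding
        self.hi_emb = nn.Embedding(hi_vocab, word_dim)    # back-transliterated Hindi embedding
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)
        self.flag_emb = nn.Embedding(n_flags, 8)          # dictionary + word-length flags
        token_dim = 2 * word_dim + 2 * char_dim + 8
        self.sent_lstm = nn.LSTM(token_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_dim, mlp_dim), nn.Tanh(),
                                 nn.Linear(mlp_dim, n_tags))

    def forward(self, en_ids, hi_ids, char_ids, flag_ids):
        # en_ids, hi_ids, flag_ids: (batch, seq); char_ids: (batch, seq, max_chars)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids).view(b * s, c, -1)
        _, (h, _) = self.char_lstm(chars)                 # h: (2, b*s, char_dim)
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        tokens = torch.cat([self.en_emb(en_ids), self.hi_emb(hi_ids),
                            char_repr, self.flag_emb(flag_ids)], dim=-1)
        hidden, _ = self.sent_lstm(tokens)
        return self.mlp(hidden)                           # (batch, seq, n_tags) logits
```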
[
"We learn two separate but similar character-level models for normalization-cum-transliteration of noisy Romanized Hindi words and normalization of noisy English words. We treat both normalization and back-transliteration problems as a general sequence to sequence learning problem. In general, our goal is to learn a mapping for non-standard English and Romanized Hindi word forms to standard forms in their respective scripts. In case of Hindi, we address the problem of normalization and back-transliteration of Romanized Hindi words using a single model. We use the attention-based encoder-decoder model of Luong BIBREF17 with global attention for learning. For Hindi, we train the model on the transliteration pairs (87,520) from the Libindic transliteration project and Brahmi-Net BIBREF18 which are further augmented with noisy transliteration pairs (1,75,668) for normalization. Similarly, for normalization of noisy English words, we train the model on noisy word forms (4,29,715) synthetically generated from the English vocabulary. We use simple rules such as dropping non-initial vowels and replacing consonants based on their phonological proximity to generate synthetic data for normalization. Figure \"Supplemental Material\" shows some of the noisy forms generated from standard word forms using simple and finite rules which include vowel elision (please $\\rightarrow $ pls), interchanging similar consonants and vowels (cousin $\\rightarrow $ couzin), replacing consonant or vowel clusters with a single letter (Twitter $\\rightarrow $ Twiter), etc. From here onwards, we will refer to both normalization and back-transliteration as normalization.",
" figureSynthetic normalization pairs generated for a sample of English words using hand crafted rules. ",
"At inference time, our normalization models will predict the most likely word form for each input word. However, the single-best output from the model may not always be the best option considering an overall sentential context. Contracted word forms in social media content are quite often ambiguous and can represent different standard word forms. For example, noisy form `pt' can expand to different standard word forms such as `put', `pit', `pat', `pot' and `pet'. The choice of word selection will solely depend on the sentential context. To select contextually relevant forms, we use exact search over n-best normalizations from the respective models extracted using beam-search decoding. The best word sequence is selected using the Viterbi decoding over $b^n$ word sequences scored by a trigram language model. $b$ is the size of beam-width and $n$ is the sentence length. The language models are trained on the monolingual data of Hindi and English using KenLM toolkit BIBREF19 . For each word, we extract five best normalizations ( $b$ =5). Decoding the best word sequence is a non-trivial problem for CS data due to lack of normalized and back-transliterated CS data for training a language model. One obvious solution is to apply decoding on individual language fragments in a CS sentence BIBREF20 . One major problem with this approach is that the language models used for scoring are trained on complete sentences but are applied on sentence fragments. Scoring individual CS fragments might often lead to wrong word selection due to incomplete context, particularly at fragment peripheries. We solve this problem by using a 3-step decoding process that works on two separate versions of a CS sentence, one in Hindi, and one in English. In the first step, we replace first-best back-transliterated forms of Hindi words by their translation equivalents using a Hindi-English bilingual lexicon. An exact search is used over the top `5' normalizations of English words, the translation equivalents of Hindi words and the actual word itself. In the second step, we decode best word sequence over Hindi version of the sentence by replacing best English word forms decoded from the first step by their translation equivalents. An exact search is used over the top `5' normalizations of Hindi words, the dictionary equivalents of decoded English words and the original words. In the final step, English and Hindi words are selected from their respective decoded sequences using the predicted language tags from the language identification system. Note that the bilingual mappings are only used to aid the decoding process by making the CS sentences lexically monolingual so that the monolingual language models could be used for scoring. They are not used in the final decoded output. The overall decoding process is shown in Figure 1 .",
"Both of our normalization and back-transliteration systems are evaluated on the evaluation set of bhat-EtAl:2017:EACLshort. Results of our systems are reported in Table \"Supplemental Material\" with a comparison of accuracies based on the nature of decoding used. The results clearly show the significance of our 3-step decoding over first-best and fragment-wise decoding.",
" skip=0.5em tableNormalization accuracy based on the number of noisy tokens in the evaluation set. FB = First Best, and FW = Fragment Wise "
],
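The candidate-selection step (n-best normalizations rescored with a trigram language model) can be sketched as a small beam search over per-token candidate lists. The scorer below is a toy stand-in for the KenLM models used in the paper, and the candidate lists in the usage example are hypothetical.

```python
# Minimal sketch of selecting the best word sequence from per-token candidate
# lists under a trigram scoring function, with a beam of fixed width. A real
# system would plug in a KenLM model as the scorer.
def decode(candidates, score_trigram, beam_size=5):
    """candidates: list of candidate lists, one per token position."""
    beams = [(("<s>", "<s>"), [], 0.0)]          # (trigram history, words so far, log-prob)
    for options in candidates:
        scored = []
        for (w1, w2), words, logp in beams:
            for w in options:
                scored.append(((w2, w), words + [w], logp + score_trigram(w1, w2, w)))
        scored.sort(key=lambda item: item[2], reverse=True)
        beams = scored[:beam_size]
    return max(beams, key=lambda item: item[2])[1]

# Toy scorer: prefers in-vocabulary words; stands in for a trained language model.
VOCAB = {"put", "it", "down", "please"}
def toy_score(w1, w2, w):
    return 0.0 if w in VOCAB else -2.0

print(decode([["pt", "put", "pet"], ["it"], ["down", "dwn"], ["please", "pls"]], toy_score))
# -> ['put', 'it', 'down', 'please']
```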
[
"Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Following bhat-EtAl:2017:EACLshort we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy–a Twitter API wrapper. We then used a language identification system trained on ICON dataset (see Section \"Preliminary Tasks\" ) to filter Hindi-English CS tweets from the crawled Twitter data. Only those tweets were selected that satisfied a minimum ratio of 30:70(%) code-switching. From this dataset, we manually selected 1,448 tweets for annotation. The selected tweets are thoroughly checked for code-switching ratio. For POS tagging and dependency annotation, we used Version 2 of Universal dependency guidelines BIBREF21 , while language tags are assigned based on the tag set defined in BIBREF22 , BIBREF23 . The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years. Nonetheless, we also ensured the quality of the manual annotations by carrying an inter-annotator agreement analysis. We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures. The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing.",
"We use our dataset for training while the development and evaluation sets from bhat-EtAl:2017:EACLshort are used for tuning and evaluation of our models. Since the annotations in these datasets follow version 1.4 of the UD guidelines, we converted them to version 2 by using carefully designed rules. The statistics about the data are given in Table \"Supplemental Material\" .",
" skip=0.5em tableData Statistics. Dev set is used for tuning model parameters, while Test set is used for evaluation. "
],
[
"We adapt Kiperwasser and Goldberg kiperwasser2016simple transition-based parser as our base model and incorporate POS tag and monolingual parse tree information into the model using neural stacking, as shown in Figures \"Parsing Algorithm\" and \"Stacking Models\" ."
],
[
"Our parsing models are based on an arc-eager transition system BIBREF24 . The arc-eager system defines a set of configurations for a sentence w $_1$ ,...,w $_n$ , where each configuration C = (S, B, A) consists of a stack S, a buffer B, and a set of dependency arcs A. For each sentence, the parser starts with an initial configuration where S = [ROOT], B = [w $_1$ ,...,w $_n$ ] and A = $\\emptyset $ and terminates with a configuration C if the buffer is empty and the stack contains the ROOT. The parse trees derived from transition sequences are given by A. To derive the parse tree, the arc-eager system defines four types of transitions ( $t$ ): Shift, Left-Arc, Right-Arc, and Reduce.",
"We use the training by exploration method of goldberg2012dynamic for decoding a transition sequence which helps in mitigating error propagation at evaluation time. We also use pseudo-projective transformations of nivre2005 to handle a higher percentage of non-projective arcs in the CS data ( $\\sim $ 2%). We use the most informative scheme of head+path to store the transformation information.",
" skip=0.5em figurePOS tagging and parsing network based on stack-propagation model proposed in BIBREF25 . "
],
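For reference, the four arc-eager transitions and their preconditions can be written down directly. This plain-Python sketch covers only the unlabeled transition system; the oracle, the dependency labels, the pseudo-projective transformation and the neural scorer are left out.

```python
# Plain-Python sketch of the arc-eager transition system used by the parser:
# a configuration is (stack, buffer, arcs); arcs are stored as (head, dependent).
def initial_config(n_words):
    return ([0], list(range(1, n_words + 1)), set())   # 0 is ROOT

def shift(stack, buffer, arcs):
    stack.append(buffer.pop(0))                        # move b0 onto the stack

def left_arc(stack, buffer, arcs):
    dep = stack.pop()                                  # b0 -> s0, pop s0
    arcs.add((buffer[0], dep))

def right_arc(stack, buffer, arcs):
    head = stack[-1]                                   # s0 -> b0, push b0
    dep = buffer.pop(0)
    arcs.add((head, dep))
    stack.append(dep)

def reduce_(stack, buffer, arcs):
    stack.pop()                                        # s0 already has a head

def has_head(node, arcs):
    return any(dep == node for _, dep in arcs)

def legal(transition, stack, buffer, arcs):
    if transition in ("shift", "right_arc"):
        return bool(buffer)
    if transition == "left_arc":
        return bool(buffer) and stack[-1] != 0 and not has_head(stack[-1], arcs)
    if transition == "reduce":
        return has_head(stack[-1], arcs)
    return False
```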
[
"Our base model is a stack of a tagger network and a parser network inspired by stack-propagation model of zhang-weiss:2016:P16-1. The parameters of the tagger network are shared and act as a regularization on the parsing model. The model is trained by minimizing a joint negative log-likelihood loss for both tasks. Unlike zhang-weiss:2016:P16-1, we compute the gradients of the log-loss function simultaneously for each training instance. While the parser network is updated given the parsing loss only, the tagger network is updated with respect to both tagging and parsing losses. Both tagger and parser networks comprise of an input layer, a feature layer, and an output layer as shown in Figure \"Parsing Algorithm\" . Following zhang-weiss:2016:P16-1, we refer to this model as stack-prop.",
"The input layer of the tagger encodes each input word in a sentence by concatenating a pre-trained word embedding with its character embedding given by a character Bi-LSTM. In the feature layer, the concatenated word and character representations are passed through two stacked Bi-LSTMs to generate a sequence of hidden representations which encode the contextual information spread across the sentence. The first Bi-LSTM is shared with the parser network while the other is specific to the tagger. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the Universal POS tags. We only use the forward and backward hidden representations of the focus word for classification.",
"Similar to the tagger network, the input layer encodes the input sentence using word and character embeddings which are then passed to the shared Bi-LSTM. The hidden representations from the shared Bi-LSTM are then concatenated with the dense representations from the feed-forward network of the tagger and passed through the Bi-LSTM specific to the parser. This ensures that the tagging network is penalized for the parsing error caused by error propagation by back-propagating the gradients to the shared tagger parameters BIBREF25 . Finally, we use a non-linear feed-forward network to predict the labeled transitions for the parser configurations. From each parser configuration, we extract the top node in the stack and the first node in the buffer and use their hidden representations from the parser specific Bi-LSTM for classification.",
" skip=0.5em figureCode-switching tweet showing grammatical fragments from Hindi and English."
],
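A compact sketch of the stack-prop wiring described above, shown in PyTorch for illustration (the paper's code is in DyNet): a shared Bi-LSTM feeds a tagger-specific Bi-LSTM and MLP, the parser-specific Bi-LSTM consumes the shared states concatenated with the tagger's MLP features, and the tagging and parsing losses are summed. Character embeddings and labeled transitions are omitted here; layer sizes follow the Hyperparameters section.

```python
# Compact PyTorch sketch of the stack-prop idea: shared BiLSTM -> tagger BiLSTM/MLP,
# parser BiLSTM over [shared states ; tagger MLP features], joint loss over both
# heads. Character embeddings and transition labels are omitted for brevity.
import torch
import torch.nn as nn

class StackProp(nn.Module):
    def __init__(self, n_words, n_tags, n_transitions,
                 word_dim=64, tag_lstm=128, parse_lstm=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.shared = nn.LSTM(word_dim, tag_lstm, bidirectional=True, batch_first=True)
        self.tagger = nn.LSTM(2 * tag_lstm, tag_lstm, bidirectional=True, batch_first=True)
        self.tag_mlp = nn.Sequential(nn.Linear(2 * tag_lstm, 128), nn.Tanh())
        self.tag_out = nn.Linear(128, n_tags)
        self.parser = nn.LSTM(2 * tag_lstm + 128, parse_lstm, bidirectional=True, batch_first=True)
        self.trans_mlp = nn.Sequential(nn.Linear(4 * parse_lstm, 256), nn.Tanh(),
                                       nn.Linear(256, n_transitions))

    def forward(self, word_ids, stack_top, buffer_front):
        shared, _ = self.shared(self.word_emb(word_ids))      # (b, n, 2*tag_lstm)
        tag_hidden, _ = self.tagger(shared)
        tag_feats = self.tag_mlp(tag_hidden)                  # also fed to the parser
        tag_logits = self.tag_out(tag_feats)
        parse_hidden, _ = self.parser(torch.cat([shared, tag_feats], dim=-1))
        rows = torch.arange(word_ids.size(0))
        s0 = parse_hidden[rows, stack_top]                    # top of stack
        b0 = parse_hidden[rows, buffer_front]                 # front of buffer
        trans_logits = self.trans_mlp(torch.cat([s0, b0], dim=-1))
        return tag_logits, trans_logits
    # Training sums cross-entropy losses over tag_logits and trans_logits, so the
    # shared and tagger parameters also receive gradients from the parsing loss.
```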
[
"It seems reasonable that limited CS data would complement large monolingual data in parsing CS data and a parsing model which leverages both data would significantly improve parsing performance. While a parsing model trained on our limited CS data might not be enough to accurately parse the individual grammatical fragments of Hindi and English, the preexisting Hindi and English treebanks are large enough to provide sufficient annotations to capture their structure. Similarly, parsing model(s) trained on the Hindi and English data may not be able to properly connect the divergent fragments of the two languages as the model lacks evidence for such mixed structures in the monolingual data. This will happen quite often as Hindi and English are typologicalls very diverse (see Figure UID16 ).",
" skip=0.5em figureNeural Stacking-based parsing architecture for incorporating monolingual syntactic knowledge. ",
"As we discussed above, we adapted feature-level neural stacking BIBREF25 , BIBREF26 for joint learning of POS tagging and parsing. Similarly, we also adapt this stacking approach for incorporating the monolingual syntactic knowledge into the base CS model. Recently, wang-EtAl:2017:Long6 used neural stacking for injecting syntactic knowledge of English into a graph-based Singlish parser which lead to significant improvements in parsing performance. Unlike wang-EtAl:2017:Long6, our base stacked models will allow us to transfer the POS tagging knowledge as well along the parse tree knowledge.",
"As shown in Figure \"Stacking Models\" , we transfer both POS tagging and parsing information from the source model trained on augmented Hindi and English data. For tagging, we augment the input layer of the CS tagger with the MLP layer of the source tagger. For transferring parsing knowledge, hidden representations from the parser specific Bi-LSTM of the source parser are augmented with the input layer of the CS parser which already includes the hidden layer of the CS tagger, word and character embeddings. In addition, we also add the MLP layer of the source parser to the MLP layer of the CS parser. The MLP layers of the source parser are generated using raw features from CS parser configurations. Apart from the addition of these learned representations from the source model, the overall CS model remains similar to the base model shown in Figure \"Parsing Algorithm\" . The tagging and parsing losses are back-propagated by traversing back the forward paths to all trainable parameters in the entire network for training and the whole network is used collectively for inference."
],
[
"We train all of our POS tagging and parsing models on training sets of the Hindi and English UD-v2 treebanks and our Hindi-English CS treebank. For tuning and evaluation, we use the development and evaluation sets from bhat-EtAl:2017:EACLshort. We conduct multiple experiments in gold and predicted settings to measure the effectiveness of the sub-modules of our parsing pipeline. In predicted settings, we use the POS taggers separately trained on the Hindi, English and CS training sets. All of our models use word embeddings from transformed Hindi and English embedding spaces to address the problem of lexical differences prevalent in CS sentences."
],
[
"For language identification, POS tagging and parsing models, we include the lexical features in the input layer of our neural networks using 64-dimension pre-trained word embeddings, while we use randomly initialized embeddings within a range of $[-0.1$ , $+0.1]$ for non-lexical units such as POS tags and dictionary flags. We use 32-dimensional character embeddings for all the three models and 32-dimensional POS tag embeddings for pipelined parsing models. The distributed representation of Hindi and English vocabulary are learned separately from the Hindi and English monolingual corpora. The English monolingual data contains around 280M sentences, while the Hindi data is comparatively smaller and contains around 40M sentences. The word representations are learned using Skip-gram model with negative sampling which is implemented in word2vec toolkit BIBREF27 . We use the projection algorithm of artetxe2016learning to transform the Hindi and English monolingual embeddings into same semantic space using a bilingual lexicon ( $\\sim $ 63,000 entries). The bilingual lexicon is extracted from ILCI and Bojar Hindi-English parallel corpora BIBREF28 , BIBREF29 . For normalization models, we use 32-dimensional character embeddings uniformly initialized within a range of $[-0.1, +0.1]$ .",
"The POS tagger specific Bi-LSTMs have 128 cells while the parser specific Bi-LSTMs have 256 cells. The Bi-LSTM in the language identification model has 64 cells. The character Bi-LSTMs have 32 cells for all three models. The hidden layer of MLP has 64 nodes for the language identification network, 128 nodes for the POS tagger and 256 nodes for the parser. We use hyperbolic tangent as an activation function in all tasks. In the normalization models, we use single layered Bi-LSTMs with 512 cells for both encoding and decoding of character sequences.",
"For language identification, POS tagging and parsing networks, we use momentum SGD for learning with a minibatch size of 1. The LSTM weights are initialized with random orthonormal matrices as described in BIBREF30 . We set the dropout rate to 30% for POS tagger and parser Bi-LSTM and MLP hidden states while for language identification network we set the dropout to 50%. All three models are trained for up to 100 epochs, with early stopping based on the development set.",
"In case of normalization, we train our encoder-decoder models for 25 epochs using vanilla SGD. We start with a learning rate of $1.0$ and after 8 epochs reduce it to half for every epoch. We use a mini-batch size of 128, and the normalized gradient is rescaled whenever its norm exceeds 5. We use a dropout rate of 30% for the Bi-LSTM.",
"Language identification, POS tagging and parsing code is implemented in DyNet BIBREF31 and for normalization without decoding, we use Open-NMT toolkit for neural machine translation BIBREF32 . All the code is available at https://github.com/irshadbhat/nsdp-cs and the data is available at https://github.com/CodeMixedUniversalDependencies/UD_Hindi_English."
],
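For convenience, the hyperparameters listed above can be collected into a single configuration object. The dictionary below only transcribes values stated in the text; it is not a file shipped with the released code.

```python
# Hyperparameters from the section above, gathered into one reference dictionary.
HPARAMS = {
    "word_emb_dim": 64, "char_emb_dim": 32, "pos_emb_dim": 32,
    "char_bilstm": 32, "lid_bilstm": 64, "tagger_bilstm": 128, "parser_bilstm": 256,
    "mlp_hidden": {"lid": 64, "tagger": 128, "parser": 256},
    "activation": "tanh",
    "dropout": {"lid": 0.5, "tagger": 0.3, "parser": 0.3},
    "optimizer": "momentum SGD", "batch_size": 1, "max_epochs": 100,
    "normalization": {"bilstm": 512, "optimizer": "SGD", "lr": 1.0, "epochs": 25,
                      "batch_size": 128, "grad_clip": 5, "dropout": 0.3},
}
```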
[
"In Table \"Results\" , we present the results of our main model that uses neural stacking for learning POS tagging and parsing and also for knowledge transfer from the Bilingual model. Transferring POS tagging and syntactic knowledge using neural stacking gives 1.5% LAS improvement over a naive approach of data augmentation. The Bilingual model which is trained on the union of Hindi and English data sets is least accurate of all our parsing models. However, it achieves better or near state-of-the-art results on the Hindi and English evaluation sets (see Table \"Results\" ). As compared to the best system in CoNLL 2017 Shared Task on Universal Dependencies BIBREF33 , BIBREF34 , our results for English are around 3% better in LAS, while for Hindi only 0.5% LAS points worse. The CS model trained only on the CS training data is slightly more accurate than the Bilingual model. Augmenting the CS data to Hindi-English data complements their syntactic structures relevant for parsing mixed grammar structures which are otherwise missing in the individual datasets. The average improvements of around $\\sim $ 5% LAS clearly show their complementary nature.",
" skip=0.5em tableAccuracy of different parsing models on the evaluation set. POS tags are jointly predicted with parsing. LID = Language tag, TRN = Transliteration/normalization. ",
"Table \"Results\" summarizes the POS tagging results on the CS evaluation set. The tagger trained on the CS training data is 2.5% better than the Bilingual tagger. Adding CS training data to Hindi and English train sets further improves the accuracy by 1%. However, our stack-prop tagger achieves the highest accuracy of 90.53% by leveraging POS information from Bilingual tagger using neural stacking.",
" skip=0.5em tablePOS and parsing results for Hindi and English monolingual test sets using pipeline and stack-prop models. ",
" skip=0.5em tablePOS tagging accuracies of different models on CS evaluation set. SP = stack-prop. "
],
[
"In this paper, we have presented a dependency parser designed explicitly for Hindi-English CS data. The parser uses neural stacking architecture of zhang-weiss:2016:P16-1 and chen-zhang-liu:2016:EMNLP2016 for learning POS tagging and parsing and for knowledge transfer from Bilingual models trained on Hindi and English UD treebanks. We have also presented normalization and back-transliteration models with a decoding process tailored for CS data. Our neural stacking parser is 1.5% LAS points better than the augmented parsing model and 3.8% LAS points better than the one which uses first-best normalizations."
]
],
"section_name": [
"Introduction",
"Preliminary Tasks",
"Language Identification",
"Normalization and Back-transliteration",
"Universal Dependencies for Hindi-English",
"Dependency Parsing",
"Parsing Algorithm",
"Base Models",
"Stacking Models",
"Experiments",
"Hyperparameters",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"988165c00acb794b1771513009c5973b17a5a2e4"
],
"answer": [
{
"evidence": [
"Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Following bhat-EtAl:2017:EACLshort we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy–a Twitter API wrapper. We then used a language identification system trained on ICON dataset (see Section \"Preliminary Tasks\" ) to filter Hindi-English CS tweets from the crawled Twitter data. Only those tweets were selected that satisfied a minimum ratio of 30:70(%) code-switching. From this dataset, we manually selected 1,448 tweets for annotation. The selected tweets are thoroughly checked for code-switching ratio. For POS tagging and dependency annotation, we used Version 2 of Universal dependency guidelines BIBREF21 , while language tags are assigned based on the tag set defined in BIBREF22 , BIBREF23 . The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years. Nonetheless, we also ensured the quality of the manual annotations by carrying an inter-annotator agreement analysis. We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures. The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing."
],
"extractive_spans": [],
"free_form_answer": "1448 sentences more than the dataset from Bhat et al., 2017",
"highlighted_evidence": [
"Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2d5eb3b62fdd55fc54d7803863a493e795720f0d"
]
},
{
"annotation_id": [
"7937d4de6d6d634561edf7034ddad867fb3b2bee"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"fe09bc2ef2737a3258f978e26226dcbac1b3f948"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big is the provided treebank?",
"What is LAS metric?"
],
"question_id": [
"df2839dbd68ed9d5d186e6c148fa42fce60de64f",
"3996438cef34eb7bedaa6745b190c69553cf246b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 2.1: Dependency tree of Example sentence 3.",
"Figure 2.2: Transition sequence for Example sentence 3 based on Arc-eager algorithm.",
"Table 3.1: Universal dependency relations",
"Table 3.2: UD lables with their meaning.",
"Figure 3.1: Few examples trees our Hindi-English Code-Mixed dependency treebank.",
"Table 3.3: Statistics on training, testing and development sets used in all the experiments reported in this thesis.",
"Figure 4.1: Language identification network",
"Table 4.1: Language Identification results on CS development set and test set.",
"Figure 4.2: Synthetic normalization pairs generated for a sample of English words using hand crafted rules.",
"Figure 4.3: The figure shows a 3-step decoding process for the sentence “Yar cn anyone tel me k twitr account bnd ksy krty hn plz” (Friend can anyone tell me how to close twitter account please).",
"Table 4.2: Normalization accuracy based on the number of noisy tokens in the evaluation set. FB = First Best, and FW = Fragment Wise",
"Figure 4.4: Devanagari to Roman character mapping table",
"Figure 5.2: Resolving structural ambiguity problem using a token-level language tag.",
"Figure 5.3: First Pass: Parse individual fragments using their respective parsing models. Second Pass: Parse the root nodes of the parsed fragments by the matrix language parsing model.",
"Figure 5.4: Example case of an imperfect segmentation",
"Figure 5.5: First Pass: Parse subordinate language first. Second Pass: Parse the roots of the subordinate fragments with the fragments of matrix language using the matrix language parser.",
"Figure 5.6: Example case of an imperfect segmentation",
"Table 5.1: POS Tagging accuracies for monolingual and multilingual models. LID = Language tag, G = Gold LID, A = Auto LID.",
"Table 5.2: Accuracy of different parsing strategies on Code-switching as well as Hindi and English evaluation sets. Multipass f |s = fragment-wise and subordinate-first parsing methods.",
"Table 5.3: Parsing accuracies with exact search and k-best search (k = 5)",
"Figure 6.1: POS tagging and parsing network based on stack-propagation model proposed in [82].",
"Figure 6.2: Code-switching tweet showing grammatical fragments from Hindi and English.",
"Figure 6.3: Neural Stacking-based parsing architecture for incorporating monolingual syntactic knowledge.",
"Table 6.1: Accuracy of different parsing models on the evaluation set. POS tags are jointly predicted with parsing. LID = Language tag, TRN = Transliteration/normalization.",
"Table 6.2: POS and parsing results for Hindi and English monolingual test sets using pipeline and stack-prop models.",
"Table 6.3: POS tagging accuracies of different models on CS evaluation set. SP = stack-prop.",
"Table 6.4: Accuracy of different parsing models on the test set using predicted language tags, normalized/back-transliterated words and predicted POS tags. POS tags are predicted separately before parsing. In Neural Stacking model, only parsing knowledge from the Bilingual model is transferred.",
"Table 6.5: Impact of normalization and back-transliteration on POS tagging and parsing models.",
"Table 6.6: Impact of monolingual and cross-lingual embeddings on stacking model performance."
],
"file": [
"20-Figure2.1-1.png",
"23-Figure2.2-1.png",
"29-Table3.1-1.png",
"30-Table3.2-1.png",
"32-Figure3.1-1.png",
"32-Table3.3-1.png",
"35-Figure4.1-1.png",
"36-Table4.1-1.png",
"37-Figure4.2-1.png",
"39-Figure4.3-1.png",
"39-Table4.2-1.png",
"41-Figure4.4-1.png",
"44-Figure5.2-1.png",
"45-Figure5.3-1.png",
"45-Figure5.4-1.png",
"46-Figure5.5-1.png",
"46-Figure5.6-1.png",
"48-Table5.1-1.png",
"48-Table5.2-1.png",
"50-Table5.3-1.png",
"54-Figure6.1-1.png",
"55-Figure6.2-1.png",
"56-Figure6.3-1.png",
"59-Table6.1-1.png",
"59-Table6.2-1.png",
"60-Table6.3-1.png",
"60-Table6.4-1.png",
"61-Table6.5-1.png",
"61-Table6.6-1.png"
]
} | [
"How big is the provided treebank?"
] | [
[
"1804.05868-Universal Dependencies for Hindi-English-0"
]
] | [
"1448 sentences more than the dataset from Bhat et al., 2017"
] | 699 |
1710.09589 | ALL-IN-1: Short Text Classification with One Model for All Languages | We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish. | {
"paragraphs": [
[
"Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support.",
"However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged.",
"In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are:"
],
[
"Motivated by the goal to evaluate how good a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that does not require any language-specific processing (beyond tokenization) nor requires any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective. Our model is depicted in Figure FIGREF7 .",
"Our key motivation is to provide a simple, general system as opposed to the usual ad-hoc setups one can expect in a multilingual shared task. So we rely on character n-grams, word embeddings, and a traditional classifier, motivated as follows.",
"First, character n-grams and traditional machine learning algorithms have proven successful for a variety of classification tasks, e.g., native language identification and language detection. In recent shared tasks simple traditional models outperformed deep neural approaches like CNNs or RNNs, e.g., BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . This motivated our choice of using a traditional model with character n-gram features.",
"Second, we build upon the recent success of multilingual embeddings. These are embedding spaces in which word types of different languages are embedded into the same high-dimensional space. Early approaches focus mainly on bilingual approaches, while recent research aims at mapping several languages into a single space. The body of literature is huge, but an excellent recent overview is given in xlingsurvey. We chose a very simple and recently proposed method that does not rely on any parallel data BIBREF4 and extend it to the multilingual case. In particular, the method falls under the broad umbrella of monolingual mappings. These approaches first train monolingual embeddings on large unlabeled corpora for the single languages. They then learn linear mappings between the monolingual embeddings to map them to the same space. The approach we apply here is particularly interesting as it does not require parallel data (parallel sentences/documents or dictionaries) and is readily applicable to off-the-shelf embeddings. In brief, the approach aims at learning a transformation in which word vector spaces are orthogonal (by applying SVD) and it leverages so-called “pseudo-dictionaries”. That is, the method first finds the common word types in two embedding spaces, and uses those as pivots to learn to align the two spaces (cf. further details in smith2017offline)."
],
[
"In this section we first describe the IJCNLP 2017 shared task 4 including the data, the features, model and evaluation metrics."
],
[
"The customer feedback analysis task BIBREF5 is a short text classification task. Given a customer feedback message, the goal is to detect the type of customer feedback. For each message, the organizers provided one or more labels. To give a more concrete idea of the data, the following are examples of the English dataset:",
"“Still calls keep dropping with the new update” (bug)",
"“Room was grubby, mold on windows frames.” (complaint)",
"“The new update is amazing.” (comment)",
"“Needs more control s and tricks..” (request)",
"“Enjoy the sunshine!!” (meaningless)"
],
[
"The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language.",
"We treat the customer feedback analysis problem as a single-class classification task and actually ignore multi-label instances, as motivated next. The final label distribution for the data is given in Figure FIGREF17 .",
"In initial investigations of the data we noticed that very few instances had multiple labels, e.g., “comment,complaint”. In the English training data this amounted to INLINEFORM0 4% of the data. We decided to ignore those additional labels (just picked the first in case of multiple labels) and treat the problem as a single-class classification problem. This was motivated by the fact that some labels were expected to be easily confused. Finally, there were some labels in the data that did not map to any of the labels in the task description (i.e., `undetermined', `undefined', `nonsense' and `noneless', they were presumably typos) so we mapped them all to the `meaningless' label. This frames the task as a 5-class classification problem with the following classes:",
"bug,",
"comment,",
"complaint,",
"meaningless and",
"request.",
"At test time the organizers additionally provided us with translations of the three language-specific test datasets back to English. These translations were obtained by Google translate. This allowed us to evaluate our English model on the translations, to gauge whether translation is a viable alternative to training a multilingual model."
],
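The label clean-up described above (keep the first label of multi-label instances and fold the stray tags into "meaningless") is straightforward to express. The function below is an illustrative reconstruction, not the author's released code.

```python
# Label clean-up as described: keep only the first label of multi-label instances
# and map the stray tags to "meaningless", yielding the five target classes.
CLASSES = ("bug", "comment", "complaint", "meaningless", "request")

def normalize_label(raw):
    first = raw.split(",")[0].strip().lower()
    strays = {"undetermined", "undefined", "nonsense", "noneless"}
    return "meaningless" if first in strays else first
```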
[
"We perform two simple preprocessing steps. First of all, we tokenize all data using off-the-shelf tokenizers. We use tinysegmenter for Japanese and the NLTK TweetTokenizer for all other languages. The Japanese segmenter was crucial to get sufficient coverage from the word embeddings later. No additional preprocessing is performed."
],
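A minimal tokenizer dispatch matching the description: TinySegmenter for Japanese and NLTK's TweetTokenizer for everything else. The exact call signature of the tinysegmenter package is assumed from its common Python port and may differ across versions.

```python
# Tokenizer dispatch as described above; treat the tinysegmenter API as an assumption.
from nltk.tokenize import TweetTokenizer
import tinysegmenter

_tweet_tok = TweetTokenizer()
_ja_tok = tinysegmenter.TinySegmenter()

def tokenize(text, lang):
    # Japanese needs segmentation for embedding coverage; others use the tweet tokenizer.
    return _ja_tok.tokenize(text) if lang == "ja" else _tweet_tok.tokenize(text)
```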
[
"Word embeddings for single languages are readily available, for example the Polyglot or Facebook embeddings BIBREF6 , which were recently released.",
"In this work we start from the monolingual embeddings provided by the Polyglot project BIBREF7 . We use the recently proposed approach based on SVD decomposition and a “pseudo-dictionary” BIBREF4 obtained from the monolingual embeddings to project embedding spaces. To extend their method from the bilingual to the multilingual case, we apply pair-wise projections by using English as pivot, similar in spirit to ammar2016massively. We took English as our development language. We also experimented with using larger embeddings (Facebook embeddings; larger in the sense of both trained on more data and having higher dimensionality), however, results were comparable while training time increased, therefore we decided to stick to the smaller 64-dimensional Polyglot embeddings."
],
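The projection step can be sketched in NumPy: the "pseudo-dictionary" is simply the set of word types shared by two vocabularies, and the orthogonal map comes from an SVD (orthogonal Procrustes), applied pair-wise with English as the pivot. This is a simplified sketch in the spirit of the cited method, not the exact procedure used for the submission.

```python
# NumPy sketch of aligning two monolingual embedding spaces with an orthogonal map
# learned from a "pseudo-dictionary" (the word types common to both vocabularies).
import numpy as np

def align(src_vecs, tgt_vecs):
    """src_vecs, tgt_vecs: dict word -> 1-D np.array of the same dimensionality."""
    pivots = sorted(set(src_vecs) & set(tgt_vecs))     # pseudo-dictionary
    X = np.vstack([src_vecs[w] for w in pivots])
    Y = np.vstack([tgt_vecs[w] for w in pivots])
    # Orthogonal Procrustes: with X^T Y = U S V^T, the map W = U V^T gives X W ~ Y.
    u, _, vt = np.linalg.svd(X.T @ Y)
    W = u @ vt
    return {w: v @ W for w, v in src_vecs.items()}

# Usage: project e.g. Spanish vectors into the English space, then repeat
# pair-wise for the other languages with English as the pivot.
```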
[
"As classifier we use a traditional model, a Support Vector Machine (SVM) with linear kernel implemented in scikit-learn BIBREF8 . We tune the regularization parameter INLINEFORM0 on the English development set and keep the parameter fixed for the remaining experiments and all languages ( INLINEFORM1 ).",
"We compared the SVM to fastText BIBREF9 . As we had expected fastText gave consistently lower performance, presumably because of the small amounts of training data. Therefore we did not further explore neural approaches.",
"Our features are character n-grams (3-10 grams, with binary tf-idf) and word embeddings. For the latter we use a simple continuous bag-of-word representation BIBREF10 based on averaging and min-max scaling.",
"Additionally, we experimented with adding Part-Of-Speech (POS) tags to our model. However, to keep in line with our goal to build a single system for all languages we trained a single multilingual POS tagger by exploiting the projected multilingual embeddings. In particular, we trained a state-of-the-art bidirectional LSTM tagger BIBREF11 that uses both word and character representations on the concatenation of language-specific data provided from the Universal Dependencies data (version 1.2 for En, Fr and Es and version 2.0 data for Japanese, as the latter was not available in free-form in the earlier version). The word embeddings module of the tagger is initialized with the multilingual embeddings. We investigated POS n-grams (1 to 3 grams) as additional features."
],
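The feature set and classifier translate almost directly into scikit-learn. The sketch below combines binary tf-idf over character 3-10 grams with a min-max-scaled average of multilingual embeddings and feeds both to a linear SVM. The embedding lookup (a dict of pre-projected vectors) and the char_wb analyzer choice are simplifying assumptions, and C stands in for whatever value was tuned on the English dev set.

```python
# Sketch of the ALL-IN-1 feature set: binary tf-idf over char 3-10 grams plus a
# min-max-scaled continuous bag-of-words embedding average, fed to a linear SVM.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

class MeanEmbedding(BaseEstimator, TransformerMixin):
    def __init__(self, vectors, dim=64):
        self.vectors, self.dim = vectors, dim
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for text in X:
            vecs = [self.vectors[w] for w in text.split() if w in self.vectors]
            rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(self.dim))
        return np.vstack(rows)

def build_model(multilingual_vectors, C=1.0):   # C is tuned on the English dev set
    features = FeatureUnion([
        ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 10), binary=True)),
        ("cbow", make_pipeline(MeanEmbedding(multilingual_vectors), MinMaxScaler())),
    ])
    return make_pipeline(features, LinearSVC(C=C))
```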
[
"We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper BIBREF5 ."
],
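The internal metric corresponds to scikit-learn's support-weighted F1:

```python
# Weighted F1 as used for internal evaluation: per-class F1 averaged with each
# label weighted by its support.
from sklearn.metrics import f1_score

def weighted_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="weighted")
```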
[
"We first present results on the provided development set, then on the official evaluation test set."
],
[
"First of all, we evaluated different feature representations. As shown in Table TABREF31 character n-grams alone prove very effective, outperforming word n-grams and word embeddings alone. Overall simple character n-grams (C) in isolation are often more beneficial than word and character n-grams together, albeit for some languages results are close. The best representation are character n-grams with word embeddings. This representation provides the basis for our multilingual model which relies on multilingual embeddings. The two officially submitted models both use character n-grams (3-10) and word embeddings. Our first official submission, Monolingual is the per-language trained model using this representation.",
"Next we investigated adding more languages to the model, by relying on the multilingual embeddings as bridge. For instance in Table TABREF31 , the model indicated as En+Es is a character and word embedding-based SVM trained using bilingual embeddings created by mapping the two monolingual embeddings onto the same space and using both the English and Spanish training material. As the results show, using multiple languages can improve over the in-language development performance of the character+embedding model. However, the bilingual models are still only able to handle pairs of languages. We therefore mapped all embeddings to a common space and train a single multilingual All-in-1 model on the union of all training data. This is the second model that we submitted to the shared task. As we can see from the development data, on average the multilingual model shows promising, overall (macro average) outperforming the single language-specific models. However, the multilingual model does not consistently fare better than single models, for example on French a monolingual model would be more beneficial.",
"Adding POS tags did not help (cf. Table TABREF31 ), actually dropped performance. We disregard this feature for the final official runs."
],
[
"We trained the final models on the concatenation of Train and Dev data. The results on the test set (using our internally used weighted F1 metric) are given in Table TABREF33 .",
"There are two take-away points from the main results: First, we see a positive transfer for languages with little data, i.e., the single multilingual model outperforms the language-specific models on the two languages (Spanish and Japanese) which have the least amount of training data. Overall results between the monolingual and multilingual model are close, but the advantage of our multilingual All-in-1 approach is that it is a single model that can be applied to all four languages. Second, automatic translation harms, the performance of the EN model on the translated data is substantially lower than the respective in-language model. We could investigate this as the organizers provided us with translations of French, Spanish and Japanese back to English.",
"Averaged over all languages our system ranked first, cf. Table TABREF34 for the results of the top 5 submissions. The multilingual model reaches the overall best exact accuracy, for two languages training a in-language model would be slightly more beneficial at the cost of maintaining a separate model. The similarity-based baseline provided by the organizers is considerably lower.",
"Our system was outperformed on English by three teams, most of which focused only on English. Unfortunately at the time of writing there is no system description available for most other top systems, so that we cannot say whether they used more English-specific features. From the system names of other teams we may infer that most teams used neural approaches, and they score worse than our SVM-based system.",
"The per-label breakdown of our systems on the official test data (using micro F1 as calculated by the organizers) is given in Table TABREF36 . Unsurprisingly less frequent labels are more difficult to predict."
],
[
"We presented a simple model that can effectively handle multiple languages in a single system. The model is based on a traditional SVM, character n-grams and multilingual embeddings. The model ranked first in the shared task of customer feedback analysis, outperforming other approaches that mostly relied on deep neural networks.",
"There are two take-away messages of this work: 1) multilingual embeddings are very promising to build single multilingual models; and 2) it is important to compare deep learning methods to simple traditional baselines; while deep approaches are undoubtedly very attractive (and fun!), we always deem it important to compare deep neural to traditional approaches, as the latter often turn out to be surprisingly effective. Doing so will add to the literature and help to shed more light on understanding why and when this is the case."
],
[
"I would like to thank the organizers, in particular Chao-Hong Liu, for his quick replies. I also thank Rob van der Goot, Héctor Martínez Alonso and Malvina Nissim for valuable comments on earlier drafts of this paper."
]
],
"section_name": [
"Introduction",
"All-In-1: One Model for All",
"Experimental Setup",
"Task Description",
"Data",
"Pre-processing",
"Multilingual Embeddings",
"Model and Features",
"Evaluation",
"Results",
"Results on Development",
"Test Performance",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"c990c7117b62ed815d118c2b01c36b86891367ae"
],
"answer": [
{
"evidence": [
"The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7d9d8749b3f3a378b011669700863626b8f7c03d"
],
"answer": [
{
"evidence": [
"We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper BIBREF5 ."
],
"extractive_spans": [
"weighted F1-score"
],
"free_form_answer": "",
"highlighted_evidence": [
"We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7a5969d4e7f55fc57bb7a24d9d719056762cab74"
],
"answer": [
{
"evidence": [
"The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language."
],
"extractive_spans": [],
"free_form_answer": "The dataset from a joint ADAPT-Microsoft project",
"highlighted_evidence": [
"The data stems from a joint ADAPT-Microsoft project. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"is the dataset balanced across the four languages?",
"what evaluation metrics were used?",
"what dataset was used?"
],
"question_id": [
"97159b8b1ab360c34a1114cd81e8037474bd37db",
"cb20aebfedad1a306e82966d6e9e979129fcd9f9",
"45a2ce68b4a9fd4f04738085865fbefa36dd0727"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Overview of our ALL-IN-1 model.",
"Table 1: Overview of the dataset (instances).",
"Figure 2: Distribution of the labels per language.",
"Table 3: Results on the test data, weighted F1. MONOLING: monolingual models. MULTILING: the multilingual ALL-IN-1 model. TRANS: translated targets to English and classified with EN model.",
"Table 2: Results on the development data, weighted F1. MONOLINGUAL: per-language model; MULTILINGUAL: ALL-IN-1 (with C+Embeds features trained on En+Es+Fr+Jp). ‡ indicates submitted systems.",
"Table 5: Test set results (F1) per category (comment (comm), complaint (compl), request (req), meaningless (ml) and bug), official evaluation.",
"Table 4: Final test set results (Exact accuracy) for top 5 teams (ranked by macro average accuracy). Rankings for micro F1 are similar, we refer to the shared task paper for details. Winning system per language in bold. †: no system description available at the time of writing this description paper."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"4-Table3-1.png",
"4-Table2-1.png",
"5-Table5-1.png",
"5-Table4-1.png"
]
} | [
"what dataset was used?"
] | [
[
"1710.09589-Data-0"
]
] | [
"The dataset from a joint ADAPT-Microsoft project"
] | 700 |
1904.01608 | Structural Scaffolds for Citation Intent Classification in Scientific Publications | Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature. We propose structural scaffolds, a multitask model to incorporate structural information of scientific papers into citations for effective classification of citation intents. Our model achieves a new state-of-the-art on an existing ACL anthology dataset (ACL-ARC) with a 13.3% absolute increase in F1 score, without relying on external linguistic resources or hand-engineered features as done in existing methods. In addition, we introduce a new dataset of citation intents (SciCite) which is more than five times larger and covers multiple scientific domains compared with existing datasets. Our code and data are available at: https://github.com/allenai/scicite. | {
"paragraphs": [
[
"Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work BIBREF0 , BIBREF1 . They are also typically used as the main measure for assessing impact of scientific publications, venues, and researchers BIBREF2 . The nature of citations can be different. Some citations indicate direct use of a method while some others merely serve as acknowledging a prior work. Therefore, identifying the intent of citations (Figure 1 ) is critical in improving automated analysis of academic literature and scientific impact measurement BIBREF1 , BIBREF3 . Other applications of citation intent classification are enhanced research experience BIBREF4 , information retrieval BIBREF5 , summarization BIBREF6 , and studying evolution of scientific fields BIBREF7 .",
"In this work, we approach the problem of citation intent classification by modeling the language expressed in the citation context. A citation context includes text spans in a citing paper describing a referenced work and has been shown to be the primary signal in intent classification BIBREF8 , BIBREF9 , BIBREF7 . Existing models for this problem are feature-based, modeling the citation context with respect to a set of predefined hand-engineered features (such as linguistic patterns or cue phrases) and ignoring other signals that could improve prediction.",
"In this paper we argue that better representations can be obtained directly from data, sidestepping problems associated with external features. To this end, we propose a neural multitask learning framework to incorporate knowledge into citations from the structure of scientific papers. In particular, we propose two auxiliary tasks as Istructural scaffolds to improve citation intent prediction: (1) predicting the section title in which the citation occurs and (2) predicting whether a sentence needs a citation. Unlike the primary task of citation intent prediction, it is easy to collect large amounts of training data for scaffold tasks since the labels naturally occur in the process of writing a paper and thus, there is no need for manual annotation. On two datasets, we show that the proposed neural scaffold model outperforms existing methods by large margins.",
"Our contributions are: (i) we propose a neural scaffold framework for citation intent classification to incorporate into citations knowledge from structure of scientific papers; (ii) we achieve a new state-of-the-art of 67.9% F1 on the ACL-ARC citations benchmark, an absolute 13.3% increase over the previous state-of-the-art BIBREF7 ; and (iii) we introduce SciCite, a new dataset of citation intents which is at least five times as large as existing datasets and covers a variety of scientific domains."
],
[
"We propose a neural multitask learning framework for classification of citation intents. In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers. The auxiliary tasks may not be of interest by themselves but are used to inform the main task. Our model uses a large auxiliary dataset to incorporate this structural information available in scientific documents into the citation intents. The overview of our model is illustrated in Figure 2 .",
"Let $C$ denote the citation and $x̭$ denote the citation context relevant to $C$ . We encode the tokens in the citation context of size $n$ as $x̭=\\lbrace x̭_1, ..., x̭_n\\rbrace $ , where $x̭_i\\in \\mathcal {R}^{d_1}$ is a word vector of size $d_1$ which concatenates non-contextualized word representations BIBREF10 and contextualized embeddings BIBREF11 , i.e.: $x̭_i = \\big [x̭_i^{\\text{GloVe}};x̭_i^{\\text{ELMo}}\\big ]$ ",
"We then use a bidirectional long short-term memory BIBREF12 (BiLSTM) network with hidden size of $d_2$ to obtain a contextual representation of each token vector with respect to the entire sequence: $ h̭_i = \\big [\\overrightarrow{\\mathrm {LSTM}}(x̭, i);\\overleftarrow{\\mathrm {LSTM}}(x̭, i)\\big ],$ ",
"where $ h̭ \\in \\mathcal {R}^{(n, 2d_2)} $ and $\\overrightarrow{\\mathrm {LSTM}}(x̭,i)$ processes $x̭$ from left to write and returns the LSTM hidden state at position $i$ (and vice versa for the backward direction $\\overleftarrow{\\mathrm {LSTM}}$ ). We then use an attention mechanism to get a single vector representing the whole input sequence: $ z̭ = \\sum _{i=1}^n\\alpha _i h̭_i, \\quad \\alpha _i = \\operatorname{softmax}(w̭^\\top h̭_i),$ ",
"where $w̭$ is a parameter served as the query vector for dot-product attention. So far we have obtained the citation representation as a vector $z̭$ . Next, we describe our two proposed structural scaffolds for citation intent prediction."
],
[
"In scientific writing there is a connection between the structure of scientific papers and the intent of citations. To leverage this connection for more effective classification of citation intents, we propose a multitask framework with two structural scaffolds (auxiliary tasks) related to the structure of scientific documents. A key point for our proposed scaffolds is that they do not need any additional manual annotation as labels for these tasks occur naturally in scientific writing. The structural scaffolds in our model are the following:",
"The first scaffold task that we consider is “citation worthiness” of a sentence, indicating whether a sentence needs a citation. The language expressed in citation sentences is likely distinctive from regular sentences in scientific writing, and such information could also be useful for better language modeling of the citation contexts. To this end, using citation markers such as “[12]” or “Lee et al (2010)”, we identify sentences in a paper that include citations and the negative samples are sentences without citation markers. The goal of the model for this task is to predict whether a particular sentence needs a citation.",
"The second scaffold task relates to predicting the section title in which a citation appears. Scientific documents follow a standard structure where the authors typically first introduce the problem, describe methodology, share results, discuss findings and conclude the paper. The intent of a citation could be relevant to the section of the paper in which the citation appears. For example, method-related citations are more likely to appear in the methods section. Therefore, we use the section title prediction as a scaffold for predicting citation intents. Note that this scaffold task is different than simply adding section title as an additional feature in the input. We are using the section titles from a larger set of data than training data for the main task as a proxy to learn linguistic patterns that are helpful for citation intents. In particular, we leverage a large number of scientific papers for which the section information is known for each citation to automatically generate large amounts of training data for this scaffold task.",
"Multitask learning as defined by BIBREF13 is an approach to inductive transfer learning that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It requires the model to have at least some sharable parameters between the tasks. In a general setting in our model, we have a main task $Task^{(1)}$ and $n-1$ auxiliary tasks $Task^{(i)}$ . As shown in Figure 2 , each scaffold task will have its task-specific parameters for effective classification and the parameters for the lower layers of the network are shared across tasks. We use a Multi Layer Perceptron (MLP) for each task and then a softmax layer to obtain prediction probabilites. In particular, given the vector $z̭$ we pass it to $n$ MLPs and obtain $n$ output vectors $y̭^{(i)}$ : $ y̭^{(i)} = \\operatorname{softmax}(\\mathrm {MLP}^{(i)}(z̭)) $ ",
"We are only interested in the output $y̭^{(1)}$ and the rest of outputs $(y̭^{(2)}, ..., y̭^{(n)})$ are regarding the scaffold tasks and only used in training to inform the model of knowledge in the structure of the scientific documents. For each task, we output the class with the highest probability in $y̭$ . An alternative inference method is to sample from the output distribution.",
"0.5pt 1.0pt"
],
[
"Let $\\mathcal {D}_1$ be the labeled dataset for the main task $Task^{(1)}$ , and $\\mathcal {D}_i$ denote the labeled datasets corresponding to the scaffold task $Task^{(i)}$ where $i\\in \\lbrace 2,...,n\\rbrace $ . Similarly, let $\\mathcal {L}_1$ and $\\mathcal {L}_i$ be the main loss and the loss of the auxiliary task $i$ , respectively. The final loss of the model is: ",
"$$\\small \n\\mathcal {L}=\\sum _{(x̭,y̭)\\in \\mathcal {D}_1} \\mathcal {L}_1(x̭,y̭) + \\sum _{i=2}^n \\lambda _i \\sum _{(x̭,y̭)\\in \\mathcal {D}_i} \\mathcal {L}_i(x̭,y̭),$$ (Eq. 15) ",
"where $\\lambda _i$ is a hyper-parameter specifying the sensitivity of the parameters of the model to each specific task. Here we have two scaffold tasks and hence $n{=}3$ . $\\lambda _i$ could be tuned based on performance on validation set (see § \"Experiments\" for details).",
"We train this model jointly across tasks and in an end-to-end fashion. In each training epoch, we construct mini-batches with the same number of instances from each of the $n$ tasks. We compute the total loss for each mini-batch as described in Equation 15 , where $\\mathcal {L}_i{=}0$ for all instances of other tasks $j{\\ne }i$ . We compute the gradient of the loss for each mini-batch and tune model parameters using the AdaDelta optimizer BIBREF14 with gradient clipping threshold of 5.0. We stop training the model when the development macro F1 score does not improve for five consecutive epochs."
],
[
"We compare our results on two datasets from different scientific domains. While there has been a long history of studying citation intents, there are only a few existing publicly available datasets on the task of citation intent classification. We use the most recent and comprehensive (ACL-ARC citations dataset) by BIBREF7 as a benchmark dataset to compare the performance of our model to previous work. In addition, to address the limited scope and size of this dataset, we introduce SciCite, a new dataset of citation intents that addresses multiple scientific domains and is more than five times larger than ACL-ARC. Below is a description of both datasets."
],
[
"ACL-ARC is a dataset of citation intents released by BIBREF7 . The dataset is based on a sample of papers from the ACL Anthology Reference Corpus BIBREF15 and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field. The data was split into three standard stratified sets of train, validation, and test with 85% of data used for training and remaining 15% divided equally for validation and test. Each citation unit includes information about the immediate citation context, surrounding context, as well as information about the citing and cited paper. The data includes six intent categories outlined in Table 2 ."
],
[
"Most existing datasets contain citation categories that are too fine-grained. Some of these intent categories are very rare or not useful in meta analysis of scientific publications. Since some of these fine-grained categories only cover a minimal percentage of all citations, it is difficult to use them to gain insights or draw conclusions on impacts of papers. Furthermore, these datasets are usually domain-specific and are relatively small (less than 2,000 annotated citations).",
"To address these limitations, we introduce SciCite, a new dataset of citation intents that is significantly larger, more coarse-grained and general-domain compared with existing datasets. Through examination of citation intents, we found out many of the categories defined in previous work such as motivation, extension or future work, can be considered as background information providing more context for the current research topic. More interesting intent categories are a direct use of a method or comparison of results. Therefore, our dataset provides a concise annotation scheme that is useful for navigating research topics and machine reading of scientific papers. We consider three intent categories outlined in Table 1 : Background, Method and ResultComparison. Below we describe data collection and annotation details.",
"Citation intent of sentence extractions was labeled through the crowdsourcing platform Figure Eight. We selected a sample of papers from the Semantic Scholar corpus, consisting of papers in general computer science and medicine domains. Citation contexts were extracted using science-parse. The annotators were asked to identify the intent of a citation, and were directed to select among three citation intent options: Method, ResultComparison and Background. The annotation interface also included a dummy option Other which helps improve the quality of annotations of other categories. We later removed instances annotated with the Other option from our dataset (less than 1% of the annotated data), many of which were due to citation contexts which are incomplete or too short for the annotator to infer the citation intent.",
"We used 50 test questions annotated by a domain expert to ensure crowdsource workers were following directions and disqualify annotators with accuracy less than 75%. Furthermore, crowdsource workers were required to remain on the annotation page (five annotations) for at least ten seconds before proceeding to the next page. Annotations were dynamically collected. The annotations were aggregated along with a confidence score describing the level of agreement between multiple crowdsource workers. The confidence score is the agreement on a single instance weighted by a trust score (accuracy of the annotator on the initial 50 test questions).",
"To only collect high quality annotations, instances with confidence score of $\\le $ 0.7 were discarded. In addition, a subset of the dataset with 100 samples was re-annotated by a trained, expert annotator to check for quality, and the agreement rate with crowdsource workers was 86%. Citation contexts were annotated by 850 crowdsource workers who made a total of 29,926 annotations and individually made between 4 and 240 annotations. Each sentence was annotated, on average, 3.74 times. This resulted in a total 9,159 crowdsourced instances which were divided to training and validation sets with 90% of the data used for the training set. In addition to the crowdsourced data, a separate test set of size 1,861 was annotated by a trained, expert annotator to ensure high quality of the dataset."
],
[
"For the first scaffold (citation worthiness), we sample sentences from papers and consider the sentences with citations as positive labels. We also remove the citation markers from those sentences such as numbered citations (e.g., [1]) or name-year combinations (e.g, Lee et al (2012)) to not make the second task artificially easy by only detecting citation markers. For the second scaffold (citation section title), respective to each test dataset, we sample citations from the ACL-ARC corpus and Semantic Scholar corpus and extract the citation context as well as their corresponding sections. We manually define regular expression patterns mappings to normalized section titles: “introduction”, “related work”, “method”, “experiments”, “conclusion”. Section titles which did not map to any of the aforementioned titles were excluded from the dataset. Overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively."
],
[
"We implement our proposed scaffold framework using the AllenNLP library BIBREF16 . For word representations, we use 100-dimensional GloVe vectors BIBREF17 trained on a corpus of 6B tokens from Wikipedia and Gigaword. For contextual representations, we use ELMo vectors released by BIBREF18 with output dimension size of 1,024 which have been trained on a dataset of 5.5B tokens. We use a single-layer BiLSTM with a hidden dimension size of 50 for each direction. For each of scaffold tasks, we use a single-layer MLP with 20 hidden nodes , ReLU BIBREF19 activation and a Dropout rate BIBREF20 of 0.2 between the hidden and input layers. The hyperparameters $\\lambda _i$ are tuned for best performance on the validation set of the respective datasets using a 0.0 to 0.3 grid search. For example, the following hyperparameters are used for the ACL-ARC. Citation worthiness saffold: $\\lambda _2{=}0.08$ , $\\lambda _3{=}0$ , section title scaffold: $\\lambda _3{=}0.09$ , $\\lambda _2{=}0$ ; both scaffolds: $\\lambda _2{=}0.1$ , $\\lambda _3{=}0.05$ . Batch size is 8 for ACL-ARC dataset and 32 for SciCite dataset (recall that SciCite is larger than ACL-ARC). We use Beaker for running the experiments. On the smaller dataset, our best model takes approximately 30 minutes per epoch to train (training time without ELMo is significantly faster). It is known that multiple runs of probabilistic deep learning models can have variance in overall scores BIBREF21 . We control this by setting random-number generator seeds; the reported overall results are average of multiple runs with different random seeds. To facilitate reproducibility, we release our code, data, and trained models."
],
[
"We compare our results to several baselines including the model with state-of-the-art performance on the ACL-ARC dataset.",
"[leftmargin=6pt]",
"BiLSTM Attention (with and without ELMo). This baseline uses a similar architecture to our proposed neural multitask learning framework, except that it only optimizes the network for the main loss regarding the citation intent classification ( $\\mathcal {L}_1$ ) and does not include the structural scaffolds. We experiment with two variants of this model: with and without using the contextualized word vector representations (ELMo) of BIBREF18 . This baseline is useful for evaluating the effect of adding scaffolds in controlled experiments.",
" BIBREF7 . To make sure our results are competitive with state-of-the-art results on this task, we also compare our model to BIBREF7 which has the best reported results on the ACL-ARC dataset. BIBREF7 incorporate a variety of features, ranging from pattern-based features to topic-modeling features, to citation graph features. They also incorporate section titles and relative section position in the paper as features. Our implementation of this model achieves a macro-averaged F1 score of 0.526 using 10-fold cross-validation, which is in line with the highest reported results in BIBREF7 : 0.53 using leave-one-out cross validation. We were not able to use leave-one-out cross validation in our experiments since it is impractical to re-train each variant of our deep learning models thousands of times. Therefore, we opted for a standard setup of stratified train/validation/test data splits with 85% data used for training and the rest equally split between validation and test."
],
[
"Our main results for the ACL-ARC dataset BIBREF7 is shown in Table 3 . We observe that our scaffold-enhanced models achieve clear improvements over the state-of-the-art approach on this task. Starting with the `BiLSTM-Attn' baseline with a macro F1 score of 51.8, adding the first scaffold task in `BiLSTM-Attn + section title scaffold' improves the F1 score to 56.9 ( $\\Delta {=}5.1$ ). Adding the second scaffold in `BiLSTM-Attn + citation worthiness scaffold' also results in similar improvements: 56.3 ( $\\Delta {=}4.5$ ). When both scaffolds are used simultaneously in `BiLSTM-Attn + both scaffolds', the F1 score further improves to 63.1 ( $\\Delta {=}11.3$ ), suggesting that the two tasks provide complementary signal that is useful for citation intent prediction.",
"The best result is achieved when we also add ELMo vectors BIBREF18 to the input representations in `BiLSTM-Attn w/ ELMo + both scaffolds', achieving an F1 of 67.9, a major improvement from the previous state-of-the-art results of BIBREF7 54.6 ( $\\Delta {=}13.3$ ). We note that the scaffold tasks provide major contributions on top of the ELMo-enabled baseline ( $\\Delta {=}$ 13.6), demonstrating the efficacy of using structural scaffolds for citation intent prediction. We note that these results were obtained without using hand-curated features or additional linguistic resources as used in BIBREF7 . We also experimented with adding features used in BIBREF7 to our best model and not only we did not see any improvements, but we observed at least 1.7% decline in performance. This suggests that these additional manual features do not provide the model with any additional useful signals beyond what the model already learns from the data.",
"0.5pt 1.0pt",
"Table 4 shows the main results on SciCite dataset, where we see similar patterns. Each scaffold task improves model performance. Adding both scaffolds results in further improvements. And the best results are obtained by using ELMo representation in addition to both scaffolds. Note that this dataset is more than five times larger in size than the ACL-ARC, therefore the performance numbers are generally higher and the F1 gains are generally smaller since it is easier for the models to learn optimal parameters utilizing the larger annotated data. On this dataset, the best baseline is the neural baseline with addition of ELMo contextual vectors achieving an F1 score of 82.6 followed by BIBREF7 , which is expected because neural models generally achieve higher gains when more training data is available and because BIBREF7 was not designed with the SciCite dataset in mind.",
"The breakdown of results by intent on ACL-ARC and SciCite datasets is respectively shown in Tables 5 and 6 . Generally we observe that results on categories with more number of instances are higher. For example on ACL-ARC, the results on the Background category are the highest as this category is the most common. Conversely, the results on the FutureWork category are the lowest. This category has the fewest data points (see distribution of the categories in Table 2 ) and thus it is harder for the model to learn the optimal parameters for correct classification in this category."
],
[
"To gain more insight into why the scaffolds are helping the model in improved citation intent classification, we examine the attention weights assigned to inputs for our best proposed model (`BiLSTM-Attn w/ ELMo + both scaffolds') compared with the best neural baseline (`BiLSTM-Attn w/ ELMO'). We conduct this analysis for examples from both datasets. Figure 3 shows an example input citation along with the horizontal line and the heatmap of attention weights for this input resulting from our model versus the baseline. For first example ( 3 ) the true label is FutureWork. We observe that our model puts more weight on words surrounding the word “future” which is plausible given the true label. On the other hand, the baseline model attends most to the words “compare” and consequently incorrectly predicts a Compare label. In second example ( 3 ) the true label is ResultComparison. The baseline incorrectly classifies it as a Background, likely due to attending to another part of the sentence (“analyzed seprately”). Our model correctly classifies this instance by putting more attention weights on words that relate to comparison of the results. This suggests that the our model is more successful in learning optimal parameters for representing the citation text and classifying its respective intent compared with the baseline. Note that the only difference between our model and the neural baseline is inclusion of the structural scaffolds. Therefore, suggesting the effectiveness the scaffolds in informing the main task of relevant signals for citation intent classification.",
"0.5pt 1.0pt",
"We next investigate errors made by our best model (Figure 4 plots classification errors). One general error pattern is that the model has more tendency to make false positive errors in the Background category likely due to this category dominating both datasets. It's interesting that for the ACL-ARC dataset some prediction errors are due to the model failing to properly differentiate the Use category with Background. We found out that some of these errors would have been possibly prevented by using additional context. Table 7 shows a sample of such classification errors. For the citation in the first row of the table, the model is likely distracted by “model in (citation)” and “ILP formulation from (citation)” deeming the sentence is referring to the use of another method from a cited paper and it misses the first part of the sentence describing the motivation. This is likely due to the small number of training instances in the Motivation category, preventing the model to learn such nuances. For the examples in the second and third row, it is not clear if it is possible to make the correct prediction without additional context. And similarly in the last row the instance seems ambiguous without accessing to additional context. Similarly as shown in Figure 4 two of FutureWork labels are wrongly classified. One of them is illustrated in the forth row of Table 7 where perhaps additional context could have helped the model in identifying the correct label. One possible way to prevent this type of errors, is to provide the model with an additional input, modeling the extended surrounding context. We experimented with encoding the extended surrounding context using a BiLSTM and concatenating it with the main citation context vector (z), but it resulted in a large decline in overall performance likely due to the overall noise introduced by the additional context. A possible future work is to investigate alternative effective approaches for incorporating the surrounding extended context."
],
[
"There is a large body of work studying the intent of citations and devising categorization systems BIBREF22 , BIBREF4 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF8 , BIBREF26 , BIBREF27 . Most of these efforts provide citation categories that are too fine-grained, some of which rarely occur in papers. Therefore, they are hardly useful for automated analysis of scientific publications. To address these problems and to unify previous efforts, in a recent work, BIBREF7 proposed a six category system for citation intents. In this work, we focus on two schemes: (1) the scheme proposed by BIBREF7 and (2) an additional, more coarse-grained general-purpose category system that we propose (details in § \"Data\" ). Unlike other schemes that are domain-specific, our scheme is general and naturally fits in scientific discourse in multiple domains.",
"Early works in automated citation intent classification were based on rule-based systems (e.g., BIBREF23 , BIBREF28 ). Later, machine learning methods based on linguistic patterns and other hand-engineered features from citation context were found to be effective. For example, BIBREF8 proposed use of “cue phrases”, a set of expressions that talk about the act of presenting research in a paper. BIBREF9 relied on lexical, structural, and syntactic features and a linear SVM for classification. Researchers have also investigated methods of finding cited spans in the cited papers. Examples include feature-based methods BIBREF29 , domain-specific knowledge BIBREF30 , and a recent CNN-based model for joint prediction of cited spans and citation function BIBREF31 . We also experimented with CNNs but found the attention BiLSTM model to work significantly better. BIBREF7 expanded all pre-existing feature-based efforts on citation intent classification by proposing a comprehensive set of engineered features, including boostrapped patterns, topic modeling, dependency-based, and metadata features for the task. We argue that we can capture necessary information from the citation context using a data driven method, without the need for hand-engineered domain-dependent features or external resources. We propose a novel scaffold neural model for citation intent classification to incorporate structural information of scientific discourse into citations, borrowing the “scaffold” terminology from BIBREF32 who use auxiliary syntactic tasks for semantic problems."
],
[
"In this work, we show that structural properties related to scientific discourse can be effectively used to inform citation intent classification. We propose a multitask learning framework with two auxiliary tasks (predicting section titles and citation worthiness) as two scaffolds related to the main task of citation intent prediction. Our model achieves state-of-the-art result (F1 score of 67.9%) on the ACL-ARC dataset with 13.3 absolute increase over the best previous results. We additionally introduce SciCite, a new large dataset of citation intents and also show the effectiveness of our model on this dataset. Our dataset, unlike existing datasets that are designed based on a specific domain, is more general and fits in scientific discourse from multiple scientific domains.",
"We demonstrate that carefully chosen auxiliary tasks that are inherently relevant to a main task can be leveraged to improve the performance on the main task. An interesting line of future work is to explore the design of such tasks or explore the properties or similarities between the auxiliary and the main tasks. Another relevant line of work is adapting our model to other domains containing documents with similar linked structured such as Wikipedia articles. Future work may benefit from replacing ELMo with other types of contextualized representations such as BERT in our scaffold model. For example, at the time of finalizing the camera ready version of this paper, BIBREF33 showed that a BERT contextualized representation model BIBREF34 trained on scientific text can achieve promising results on the SciCite dataset."
],
[
"We thank Kyle Lo, Dan Weld, and Iz Beltagy for helpful discussions, Oren Etzioni for feedback on the paper, David Jurgens for helping us with their ACL-ARC dataset and reproducing their results, and the three anonymous reviewers for their comments and suggestions. Computations on beaker.org were supported in part by credits from Google Cloud."
]
],
"section_name": [
"Introduction",
"Model",
"Structural scaffolds",
"Training",
"Data",
"ACL-ARC citations dataset",
"SciCite dataset",
"Data for scaffold tasks",
"Implementation",
"Baselines",
"Results",
"Analysis",
"Related Work",
"Conclusions and future work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"dd8068de5c948fd6c1193db79d6c2cb0956920fa"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)"
],
"extractive_spans": [],
"free_form_answer": "Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7b6dc8a8a76f19415d774428415fe274a8fd248a"
],
"answer": [
{
"evidence": [
"ACL-ARC is a dataset of citation intents released by BIBREF7 . The dataset is based on a sample of papers from the ACL Anthology Reference Corpus BIBREF15 and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field. The data was split into three standard stratified sets of train, validation, and test with 85% of data used for training and remaining 15% divided equally for validation and test. Each citation unit includes information about the immediate citation context, surrounding context, as well as information about the citing and cited paper. The data includes six intent categories outlined in Table 2 ."
],
"extractive_spans": [
"includes 1,941 citation instances from 186 papers"
],
"free_form_answer": "",
"highlighted_evidence": [
"ACL-ARC is a dataset of citation intents released by BIBREF7 . The dataset is based on a sample of papers from the ACL Anthology Reference Corpus BIBREF15 and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"yes",
"yes"
],
"question": [
"What are the citation intent labels in the datasets?",
"What is the size of ACL-ARC datasets?"
],
"question_id": [
"9349acbfce95cb5d6b4d09ac626b55a9cb90e55e",
"be7f52c4f2bad20e728785a357c383853d885d94"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"citation scaffolds",
"citation scaffolds"
],
"topic_background": [
"research",
"research"
]
} | {
"caption": [
"Figure 1: Example of citations with different intents (BACKGROUND and METHOD).",
"Figure 2: Our proposed scaffold model for identifying citation intents. The main task is predicting the citation intent (top left) and two scaffolds are predicting the section title and predicting if a sentence needs a citation (citation worthiness).",
"Table 1: The definition and examples of citation intent categories in our SciCite.",
"Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)",
"Table 3: Results on the ACL-ARC citations dataset.",
"Table 4: Results on the SciCite dataset.",
"Figure 3: Visualization of attention weights corresponding to our best scaffold model compared with the best baseline neural baseline model without scaffolds.",
"Table 5: Detailed per category classification results on ACL-ARC dataset.",
"Table 6: Detailed per category classification results on the SciCite dataset.",
"Table 7: A sample of model’s classification errors on ACL-ARC dataset",
"Figure 4: Confusion matrix showing classification errors of our best model on two datasets. The diagonal is masked to bring focus only on errors."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure3-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"9-Figure4-1.png"
]
} | [
"What are the citation intent labels in the datasets?"
] | [
[
"1904.01608-4-Table2-1.png"
]
] | [
"Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset."
] | 701 |
1912.03184 | GoodNewsEveryone: A Corpus of News Headlines Annotated with Emotions, Semantic Roles, and Reader Perception | Most research on emotion analysis from text focuses on the task of emotion classification or emotion intensity regression. Fewer works address emotions as structured phenomena, which can be explained by the lack of relevant datasets and methods. We fill this gap by releasing a dataset of 5000 English news headlines annotated via crowdsourcing with their dominant emotions, emotion experiencers and textual cues, emotion causes and targets, as well as the reader's perception and emotion of the headline. We propose a multiphase annotation procedure which leads to high quality annotations on such a task via crowdsourcing. Finally, we develop a baseline for the task of automatic prediction of structures and discuss results. The corpus we release enables further research on emotion classification, emotion intensity prediction, emotion cause detection, and supports further qualitative studies. | {
"paragraphs": [
[
"Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5.",
"There are at least two types of questions which cannot yet be answered by these emotion analysis systems. Firstly, such systems do not often explicitly model the perspective of understanding the written discourse (reader, writer, or the text's point of view). For example, the headline “Djokovic happy to carry on cruising” BIBREF6 contains an explicit mention of joy carried by the word “happy”. However, it may evoke different emotions in a reader (e. g., the reader is a supporter of Roger Federer), and the same applies to the author of the headline. To the best of our knowledge, only one work takes this point into consideration BIBREF7. Secondly, the structure that can be associated with the emotion description in text is not uncovered. Questions like: “Who feels a particular emotion?” or “What causes that emotion?” still remain unaddressed. There has been almost no work in this direction, with only few exceptions in English BIBREF8, BIBREF9 and Mandarin BIBREF10, BIBREF11.",
"With this work, we argue that emotion analysis would benefit from a more fine-grained analysis that considers the full structure of an emotion, similar to the research in aspect-based sentiment analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15.",
"Consider the headline: “A couple infuriated officials by landing their helicopter in the middle of a nature reserve” BIBREF16 depicted on Figure FIGREF1. One could mark “officials” as the experiencer, “a couple” as the target, and “landing their helicopter in the middle of a nature reserve” as the cause of anger. Now let us imagine that the headline starts with “A cheerful couple” instead of “A couple”. A simple approach to emotion detection based on cue words will capture that this sentence contains descriptions of anger (“infuriated”) and joy (“cheerful”). It would, however, fail in attributing correct roles to the couple and the officials, thus, the distinction between their emotion experiences would remain hidden from us.",
"In this study, we focus on an annotation task with the goal of developing a dataset that would enable addressing the issues raised above. Specifically, we introduce the corpus GoodNewsEveryone, a novel dataset of news English headlines collected from 82 different sources analyzed in the Media Bias Chart BIBREF17 annotated for emotion class, emotion intensity, semantic roles (experiencer, cause, target, cue), and reader perspective. We use semantic roles, since identifying who feels what and why is essentially a semantic role labeling task BIBREF18. The roles we consider are a subset of those defined for the semantic frame for “Emotion” in FrameNet BIBREF19.",
"We focus on news headlines due to their brevity and density of contained information. Headlines often appeal to a reader's emotions, and hence are a potential good source for emotion analysis. In addition, news headlines are easy-to-obtain data across many languages, void of data privacy issues associated with social media and microblogging.",
"Our contributions are: (1) we design a two phase annotation procedure for emotion structures via crowdsourcing, (2) present the first resource of news headlines annotated for emotions, cues, intensity, experiencers, causes, targets, and reader emotion, and, (3), provide results of a baseline model to predict such roles in a sequence labeling setting. We provide our annotations at http://www.romanklinger.de/data-sets/GoodNewsEveryone.zip."
],
[
"Our annotation is built upon different tasks and inspired by different existing resources, therefore it combines approaches from each of those. In what follows, we look at related work on each task and specify how it relates to our new corpus."
],
[
"Emotion classification deals with mapping words, sentences, or documents to a set of emotions following psychological models such as those proposed by Ekman1992 (anger, disgust, fear, joy, sadness and surprise) or Plutchik2001; or continuous values of valence, arousal and dominance BIBREF20.",
"One way to create annotated datasets is via expert annotation BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF7. The creators of the ISEAR dataset make use of self-reporting instead, where subjects are asked to describe situations associated with a specific emotion BIBREF25. Crowdsourcing is another popular way to acquire human judgments BIBREF26, BIBREF9, BIBREF9, BIBREF27, BIBREF28. Another recent dataset for emotion recognition reproduces the ISEAR dataset in a crowdsourcing setting for both English and German BIBREF29. Lastly, social network platforms play a central role in data acquisition with distant supervision, because they provide a cheap way to obtain large amounts of noisy data BIBREF26, BIBREF9, BIBREF30, BIBREF31. Table TABREF3 shows an overview of resources. More details could be found in Bostan2018."
],
[
"In emotion intensity prediction, the term intensity refers to the degree an emotion is experienced. For this task, there are only a few datasets available. To our knowledge, the first dataset annotated for emotion intensity is by Aman2007, who ask experts for ratings, followed by the datasets released for the EmoInt shared tasks BIBREF32, BIBREF28, both annotated via crowdsourcing through the best-worst scaling. The annotation task can also be formalized as a classification task, similarly to the emotion classification task, where the goal would be to map some textual input to a class from a set of predefined classes of emotion intensity categories. This approach is used by Aman2007, where they annotate high, moderate, and low."
],
[
"The task of finding a function that segments a textual input and finds the span indicating an emotion category is less researched. Cue or trigger words detection could also be formulated as an emotion classification task for which the set of classes to be predicted is extended to cover other emotion categories with cues. First work that annotated cues was done manually by one expert and three annotators on the domain of blog posts BIBREF21. Mohammad2014 annotates the cues of emotions in a corpus of $4,058$ electoral tweets from US via crowdsourcing. Similar in annotation procedure, Yan2016emocues curate a corpus of 15,553 tweets and annotate it with 28 emotion categories, valence, arousal, and cues.",
"To the best of our knowledge, there is only one work BIBREF8 that leverages the annotations for cues and considers the task of emotion detection where the exact spans that represent the cues need to be predicted."
],
[
"Detecting the cause of an expressed emotion in text received relatively little attention, compared to emotion detection. There are only few works on English that focus on creating resources to tackle this task BIBREF23, BIBREF9, BIBREF8, BIBREF33. The task can be formulated in different ways. One is to define a closed set of potential causes after annotation. Then, cause detection is a classification task BIBREF9. Another setting is to find the cause in the text. This is formulated as segmentation or clause classification BIBREF23, BIBREF8. Finding the cause of an emotion is widely researched on Mandarin in both resource creation and methods. Early works build on rule-based systems BIBREF34, BIBREF35, BIBREF36 which examine correlations between emotions and cause events in terms of linguistic cues. The works that follow up focus on both methods and corpus construction, showing large improvements over the early works BIBREF37, BIBREF38, BIBREF33, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF11. The most recent work on cause extraction is being done on Mandarin and formulates the task jointly with emotion detection BIBREF10, BIBREF44, BIBREF45. With the exception of Mohammad2014 who is annotating via crowdsourcing, all other datasets are manually labeled, usually by using the W3C Emotion Markup Language."
],
[
"Semantic role labeling in the context of emotion analysis deals with extracting who feels (experiencer) which emotion (cue, class), towards whom the emotion is expressed (target), and what is the event that caused the emotion (stimulus). The relations are defined akin to FrameNet's Emotion frame BIBREF19.",
"There are two works that work on annotation of semantic roles in the context of emotion. Firstly, Mohammad2014 annotate a dataset of $4,058$ tweets via crowdsourcing. The tweets were published before the U.S. presidential elections in 2012. The semantic roles considered are the experiencer, the stimulus, and the target. However, in the case of tweets, the experiencer is mostly the author of the tweet. Secondly, Kim2018 annotate and release REMAN (Relational EMotion ANnotation), a corpus of $1,720$ paragraphs based on Project Gutenberg. REMAN was manually annotated for spans which correspond to emotion cues and entities/events in the roles of experiencers, targets, and causes of the emotion. They also provide baseline results for the automatic prediction of these structures and show that their models benefit from joint modeling of emotions with its roles in all subtasks. Our work follows in motivation Kim2018 and in procedure Mohammad2014."
],
[
"Studying the impact of different annotation perspectives is another little explored area. There are few exceptions in sentiment analysis which investigate the relation between sentiment of a blog post and the sentiment of their comments BIBREF46 or model the emotion of a news reader jointly with the emotion of a comment writer BIBREF47.",
"Fewer works exist in the context of emotion analysis. 5286061 deal with writer's and reader's emotions on online blogs and find that positive reader emotions tend to be linked to positive writer emotions. Buechel2017b and buechel-hahn-2017-emobank look into the effects of different perspectives on annotation quality and find that the reader perspective yields better inter-annotator agreement values."
],
[
"We gather the data in three steps: (1) collecting the news and the reactions they elicit in social media, (2) filtering the resulting set to retain relevant items, and (3) sampling the final selection using various metrics.",
"The headlines are then annotated via crowdsourcing in two phases by three annotators in the first phase and by five annotators in the second phase. As a last step, the annotations are adjudicated to form the gold standard. We describe each step in detail below."
],
[
"The first step consists of retrieving news headlines from the news publishers. We further retrieve content related to a news item from social media: tweets mentioning the headlines together with replies and Reddit posts that link to the headlines. We use this additional information for subsampling described later.",
"We manually select all news sources available as RSS feeds (82 out of 124) from the Media Bias Chart BIBREF48, a project that analyzes reliability (from original fact reporting to containing inaccurate/fabricated information) and political bias (from most extreme left to most extreme right) of U.S. news sources.",
"Our news crawler retrieved daily headlines from the feeds, together with the attached metadata (title, link, and summary of the news article) from March 2019 until October 2019. Every day, after the news collection finished, Twitter was queried for 50 valid tweets for each headline. In addition to that, for each collected tweet, we collect all valid replies and counts of being favorited, retweeted and replied to in the first 24 hours after its publication.",
"The last step in the pipeline is aquiring the top (“hot”) submissions in the /r/news, /r/worldnews subreddits, and their metadata, including the number of up and downvotes, upvote ratio, number of comments, and comments themselves."
],
[
"We remove any headlines that have less than 6 tokens (e. g., “Small or nothing”, “But Her Emails”, “Red for Higher Ed”), as well as those starting with certain phrases, such as “Ep.”,“Watch Live:”, “Playlist:”, “Guide to”, and “Ten Things”. We also filter-out headlines that contain a date (e. g., “Headlines for March 15, 2019”) and words from the headlines which refer to visual content, like “video”, “photo”, “image”, “graphic”, “watch”, etc."
],
[
"We stratify the remaining headlines by source (150 headlines from each source) and subsample equally according to the following strategies: 1) randomly select headlines, 2) select headlines with high count of emotion terms, 3) select headlines that contain named entities, and 4) select the headlines with high impact on social media. Table TABREF16 shows how many headlines are selected by each sampling method in relation to the most dominant emotion (see Section SECREF25)."
],
[
"The goal of the first sampling method is to collect a random sample of headlines that is representative and not biased towards any source or content type. Note that the sample produced using this strategy might not be as rich with emotional content as the other samples."
],
[
"For the second sampling strategy we hypothesize that headlines containing emotionally charged words are also likely to contain the structures we aim to annotate. This strategy selects headlines whose words are in the NRC dictionary BIBREF49."
],
[
"We further hypothesize that headlines that mention named entities may also contain experiencers or targets of emotions, and therefore, they are likely to present a complete emotion structure. This sampling method yields headlines that contain at least one entity name, according to the recognition from spaCy that is trained on OntoNotes 5 and on Wikipedia corpus. We consider organization names, persons, nationalities, religious, political groups, buildings, countries, and other locations."
],
[
"The last sampling strategy involves our Twitter and Reddit metadata. This enables us to select and sample headlines based on their impact on social media (under the assumption that this correlates with emotion connotation of the headline). This strategy chooses them equally from the most favorited tweets, most retweeted headlines on Twitter, most replied to tweets on Twitter, as well as most upvoted and most commented on posts on Reddit."
],
[
"Using these sampling and filtering methods, we select $9,932$ headlines. Next, we set up two questionnaires (see Table TABREF17) for the two annotation phases that we describe below. We use Figure Eight."
],
[
"The first questionnaire is meant to determine the dominant emotion of a headline, if that exists, and whether the headline triggers an emotion in a reader. We hypothesize that these two questions help us to retain only relevant headlines for the next, more expensive, annotation phase.",
"During this phase, $9,932$ headlines were annotated by three annotators. The first question of the first phase (P1Q1) is: “Which emotion is most dominant in the given headline?” and annotators are provided a closed list of 15 emotion categories to which the category No emotion was added. The second question (P1Q2) aims to answer whether a given headline would stir up an emotion in most readers and the annotators are provided with only two possible answers (yes or no, see Table TABREF17 and Figure FIGREF1 for details).",
"Our set of 15 emotion categories is an extended set over Plutchik's emotion classes and comprises anger, annoyance, disgust, fear, guilt, joy, love, pessimism, negative surprise, optimism, positive surprise, pride, sadness, shame, and trust. Such a diverse set of emotion labels is meant to provide a more fine-grained analysis and equip the annotators with a wider range of answer choices."
],
[
"The annotations collected during the first phase are automatically ranked and the ranking is used to decide which headlines are further annotated in the second phase. Ranking consists of sorting by agreement on P1Q1, considering P1Q2 in the case of ties.",
"The top $5,000$ ranked headlines are annotated by five annotators for emotion class, intensity, reader emotion, and other emotions in case there is not only a dominant emotion. Along with these closed annotation tasks, the annotators are asked to answer several open questions, namely (1) who is the experiencer of the emotion (if mentioned), (2) what event triggered the annotated emotion (if mentioned), (3) if the emotion had a target, and (4) who or what is the target. The annotators are free to select multiple instances related to the dominant emotion by copy-paste into the answer field. For more details on the exact questions and example of answers, see Table TABREF17. Figure FIGREF1 shows a depiction of the procedure."
],
[
"To control the quality, we ensured that a single annotator annotates maximum 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task.",
"We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95%. The questions were generated based on hand-picked non-ambiguous real headlines through swapping out relevant words from the headline in order to obtain a different annotation, for instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity, the cue “happy” to a different emotion expression.",
"Further, we exclude Phase 1 annotations that were done in less than 10 seconds and Phase 2 annotations that were done in less than 70 seconds.",
"After we collected all annotations, we found unreliable annotators for both phases in the following way: for each annotator and for each question, we compute the probability with which the annotator agrees with the response chosen by the majority. If the computed probability is more than two standard deviations away from the mean we discard all annotations done by that annotator.",
"On average, 310 distinct annotators needed 15 seconds in the first phase. We followed the guidelines of the platform regarding payment and decided to pay for each judgment $$0.02$ (USD) for Phase 1 (total of $$816.00$ USD). For the second phase, 331 distinct annotators needed on average $\\approx $1:17 minutes to perform one judgment. Each judgment was paid with $0.08$$ USD (total $$2,720.00$ USD)."
],
[
"In this section, we describe the adjudication process we undertook to create the gold dataset and the difficulties we faced in creating a gold set out of the collected annotations.",
"The first step was to discard obviously wrong annotations for open questions, such as annotations in other languages than English, or annotations of spans that were not part of the headline. In the next step, we incrementally apply a set of rules to the annotated instances in a one-or-nothing fashion. Specifically, we incrementally test each instance for a number of criteria in such a way that if at least one criteria is satisfied the instance is accepted and its adjudication is finalized. Instances that do not satisfy at least one criterium are adjudicated manually."
],
[
"This filter is applied to all questions regardless of their type. Effectively, whenever an entire annotation is agreed upon by at least two annotators, we use all parts of this annotation as the gold annotation. Given the headline depicted in Figure FIGREF1 with the following target role annotations by different annotators: “A couple”, “None”, “A couple”, “officials”, “their helicopter”. The resulting gold annotation is “A couple” and the adjudication process for the target ends."
],
[
"This rule is only applied to open text questions. It takes the most common smallest string intersection of all annotations. In the headline above, the experiencer annotations “A couple”, “infuriated officials”, “officials”, “officials”, “infuriated officials” would lead to “officials”."
],
[
"This rule is only applied two different intersections are the most common (previous rule), and these two intersect. We then accept the longest common subsequence. Revisiting the example for deciding on the cause role with the annotations “by landing their helicopter in the nature reserve”, “by landing their helicopter”, “landing their helicopter in the nature reserve”, “a couple infuriated officials”, “infuriated” the adjudicated gold is “landing their helicopter in the nature reserve”.",
"Table TABREF27 shows through examples of how each rule works and how many instances are “solved” by each adjudication rule."
],
[
"For the role of experiencer, we accept only the most-common noun-chunk(s).",
"The annotations that are left after being processed by all the rules described above are being adjudicated manually by the authors of the paper. We show examples for all roles in Table TABREF29."
],
[
"We calculate the agreement on the full set of annotations from each phase for the two question types, namely open vs. closed, where the first deal with emotion classification and second with the roles cue, experiencer, cause, and target."
],
[
"We use Fleiss' Kappa ($\\kappa $) to measure the inter-annotator agreement for closed questions BIBREF50, BIBREF51. In addition, we report the average percentage of overlaps between all pairs of annotators (%) and the mean entropy of annotations in bits. Higher agreement correlates with lower entropy. As Table TABREF38 shows, the agreement on the question whether a headline is emotional or not obtains the highest agreement ($0.34$), followed by the question on intensity ($0.22$). The lowest agreement is on the question to find the most dominant emotion ($0.09$).",
"All metrics show comparably low agreement on the closed questions, especially on the question of the most dominant emotion. This is reasonable, given that emotion annotation is an ambiguous, subjective, and difficult task. This aspect lead to the decision of not purely calculating a majority vote label but to consider the diversity in human interpretation of emotion categories and publish the annotations by all annotators.",
"Table TABREF40 shows the counts of annotators agreeing on a particular emotion. We observe that Love, Pride, and Sadness show highest intersubjectivity followed closely by Fear and Joy. Anger and Annoyance show, given their similarity, lower scores. Note that the micro average of the basic emotions (+ love) is $0.21$ for when more than five annotators agree."
],
[
"Table TABREF41 presents the mean of pair-wise inter-annotator agreement for each role. We report average pair-wise Fleiss' $\\kappa $, span-based exact $\\textrm {F}_1$ over the annotated spans, accuracy, proportional token overlap, and the measure of agreement on set-valued items, MASI BIBREF52.",
"We observe a fair agreement on the open annotation tasks. The highest agreement is for the role of the Experiencer, followed by Cue, Cause, and Target.",
"This seems to correlate with the length of the annotated spans (see Table TABREF42). This finding is consistent with Kim2018. Presumably, Experiencers are easier to annotate as they often are noun phrases whereas causes can be convoluted relative clauses."
],
[
"In the following, we report numbers of the adjudicated data set for simplicity of discussion. Please note that we publish all annotations by all annotators and suggest that computational models should consider the distribution of annotations instead of one adjudicated gold. The latter for be a simplification which we consider to not be appropriate.",
"GoodNewsEveryone contains $5,000$ headlines from various news sources described in the Media Bias Chart BIBREF17. Overall, the corpus is composed of $56,612$ words ($354,173$ characters) out of which $17,513$ are unique. The headline length is short with 11 words on average. The shortest headline contains 6 words while the longest headline contains 32 words. The length of a headline in characters ranges from 24 the shortest to 199 the longest.",
"Table TABREF42 presents the total number of adjudicated annotations for each role in relation to the dominant emotion. GoodNewsEveryone consists of $5,000$ headlines, $3,312$ of which have annotated dominant emotion via majority vote. The rest of $1,688$ headlines (up to $5,000$) ended in ties for the most dominant emotion category and were adjudicated manually. The emotion category Negative Surprise has the highest number of annotations, while Love has the lowest number of annotations. In most cases, Cues are single tokens (e. g., “infuriates”, “slams”), Cause has the largest proportion of annotations that span more than seven tokens on average (65% out of all annotations in this category),",
"For the role of Experiencer, we see the lowest number of annotations (19%), which is a very different result to the one presented by Kim2018, where the role Experiencer was the most annotated. We hypothesize that this is the effect of the domain we annotated; it is more likely to encounter explicit experiencers in literature (as literary characters) than in news headlines. As we can see, the cue and the cause relations dominate the dataset (27% each), followed by Target (25%) relations.",
"Table TABREF42 also shows how many times each emotion triggered a certain relation. In this sense, Negative Surprise and Positive Surprise has triggered the most Experiencer, and Cause and Target relations, which due to the prevalence of the annotations for this emotion in the dataset.",
"Further, Figure FIGREF44, shows the distances of the different roles from the cue. The causes and targets are predominantly realized right of the cue, while the experiencer occurs more often left of the cue."
],
[
"As an estimate for the difficulty of the task, we provide baseline results. We formulate the task as sequence labeling of emotion cues, mentions of experiencers, targets, and causes with a bidirectional long short-term memory networks with a CRF layer (biLSTM-CRF) that uses Elmo embeddings as input and an IOB alphabet as output. The results are shown in Table TABREF45."
],
[
"We introduce GoodNewsEveryone, a corpus of $5,000$ headlines annotated for emotion categories, semantic roles, and reader perspective. Such a dataset enables answering instance-based questions, such as, “who is experiencing what emotion and why?” or more general questions, like “what are typical causes of joy in media?”. To annotate the headlines, we employ a two-phase procedure and use crowdsourcing. To obtain a gold dataset, we aggregate the annotations through automatic heuristics.",
"As the evaluation of the inter-annotator agreement and the baseline model results show, the task of annotating structures encompassing emotions with the corresponding roles is a very difficult one.",
"However, we also note that developing such a resource via crowdsourcing has its limitations, due to the subjective nature of emotions, it is very challenging to come up with an annotation methodology that would ensure less dissenting annotations for the domain of headlines.",
"We release the raw dataset, the aggregated gold dataset, the carefully designed questionnaires, and baseline models as a freely available repository (partially only after acceptance of the paper). The released dataset will be useful for social science scholars, since it contains valuable information about the interactions of emotions in news headlines, and gives interesting insights into the language of emotion expression in media. Note that this dataset is also useful since it introduces a new dataset to test on structured prediction models. We are currently investigating the dataset for understanding the interaction between media bias and annotated emotions and roles."
],
[
"This research has been conducted within the CRETA project (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). We thank Enrica Troiano and Jeremy Barnes for fruitful discussions.",
"same"
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Emotion Classification",
"Related Work ::: Emotion Intensity",
"Related Work ::: Cue or Trigger Words",
"Related Work ::: Emotion Cause Detection",
"Related Work ::: Semantic Role Labeling of Emotions",
"Related Work ::: Reader vs. Writer vs. Text Perspective",
"Data Collection & Annotation",
"Data Collection & Annotation ::: Collecting Headlines",
"Data Collection & Annotation ::: Filtering & Postprocessing",
"Data Collection & Annotation ::: Sampling Headlines",
"Data Collection & Annotation ::: Sampling Headlines ::: Random Sampling.",
"Data Collection & Annotation ::: Sampling Headlines ::: Sampling via NRC.",
"Data Collection & Annotation ::: Sampling Headlines ::: Sampling Entities.",
"Data Collection & Annotation ::: Sampling Headlines ::: Sampling based on Reddit & Twitter.",
"Data Collection & Annotation ::: Annotation Procedure",
"Data Collection & Annotation ::: Annotation Procedure ::: Phase 1: Selecting Emotional Headlines",
"Data Collection & Annotation ::: Annotation Procedure ::: Phase 2: Emotion and Role Annotation",
"Data Collection & Annotation ::: Annotation Procedure ::: Quality Control and Results",
"Data Collection & Annotation ::: Adjudication of Annotations",
"Data Collection & Annotation ::: Adjudication of Annotations ::: Relative Majority Rule.",
"Data Collection & Annotation ::: Adjudication of Annotations ::: Most Common Subsequence Rule.",
"Data Collection & Annotation ::: Adjudication of Annotations ::: Longest Common Subsequence Rule.",
"Data Collection & Annotation ::: Adjudication of Annotations ::: Noun Chunks",
"Analysis ::: Inter-Annotator Agreement",
"Analysis ::: Inter-Annotator Agreement ::: Emotion",
"Analysis ::: Inter-Annotator Agreement ::: Roles",
"Analysis ::: General Corpus Statistics",
"Baseline",
"Conclusion & Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"7c1b1571ec673762909a9c4639a43996fd5677b9"
],
"answer": [
{
"evidence": [
"To control the quality, we ensured that a single annotator annotates maximum 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task.",
"We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95%. The questions were generated based on hand-picked non-ambiguous real headlines through swapping out relevant words from the headline in order to obtain a different annotation, for instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity, the cue “happy” to a different emotion expression."
],
"extractive_spans": [],
"free_form_answer": "Annotators went through various phases to make sure their annotations did not deviate from the mean.",
"highlighted_evidence": [
"To control the quality, we ensured that a single annotator annotates maximum 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task.\n\nWe test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"How is quality of annotation measured?"
],
"question_id": [
"4c50f75b1302f749c1351de0782f2d658d4bea70"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Example of an annotated headline from our dataset. Each color represents an annotator.",
"Table 1: Related resources for emotion analysis in English.",
"Table 2: Sampling methods counts per adjudicated emotion.",
"Table 3: Questionnaires for the two annotation phases. Emotions are Anger, Annoyance, Disgust, Fear, Guilt, Joy, Love, Pessimism, Neg. Surprise, Optimism, Negative Surprise, Optimism, Positive Surprise, Pride, Sadness, Shame, and Trust.",
"Table 4: Heuristics used in adjudicating gold corpus in the order of application on the questions of the type open and their counts. wi refers to the the word with the index i in the headline, each set of words represents an annotation.",
"Table 5: Example linguistic realization of entities.",
"Table 6: Agreement statistics on closed questions. Comparing with the questions in Table 3, Emotional/Non-Emotional uses the annotations of Phase 1 Question 1 (P1Q1). In the same way, Reader perception refers to P1Q2, Dominant Emotion is P2Q1, Intensity is linked to P2Q2, Other Emotions to P2Q8, and Reader Emotions to P2Q9.",
"Table 7: Percentage Agreement per emotion category on most dominant emotion (second phase). Each column shows the percentage of emotions for which the # of annotators agreeing is greater than 2, 3, 4, and 5",
"Table 8: Pairwise inter-annotator agreement (mean) for the open questions annotations. We report for each role the following scores: Fleiss’s κ, Accuracy, F1 score, Proportional Token Overlap, MASI and Entropy",
"Table 9: Corpus statistics for role annotations. Columns indicate how frequent the respective emotions are in relation to the annotated role and annotation length.",
"Figure 2: Distances between emotion cues and the other relations: cause, experiencer, and target.",
"Table 10: Results for the baseline experiments."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"7-Table8-1.png",
"8-Table9-1.png",
"8-Figure2-1.png",
"8-Table10-1.png"
]
} | [
"How is quality of annotation measured?"
] | [
[
"1912.03184-Data Collection & Annotation ::: Annotation Procedure ::: Quality Control and Results-1",
"1912.03184-Data Collection & Annotation ::: Annotation Procedure ::: Quality Control and Results-0"
]
] | [
"Annotators went through various phases to make sure their annotations did not deviate from the mean."
] | 704 |
1911.13066 | A Multi-cascaded Deep Model for Bilingual SMS Classification | Most studies on text classification are focused on the English language. However, short texts such as SMS are influenced by regional languages. This makes the automatic text classification task challenging due to the multilingual, informal, and noisy nature of language in the text. In this work, we propose a novel multi-cascaded deep learning model called McM for bilingual SMS classification. McM exploits n-gram level information as well as long-term dependencies of text for learning. Our approach aims to learn a model without any code-switching indication, lexical normalization, language translation, or language transliteration. The model relies entirely upon the text as no external knowledge base is utilized for learning. For this purpose, a 12 class bilingual text dataset is developed from SMS feedbacks of citizens on public services containing mixed Roman Urdu and English languages. Our model achieves high accuracy for classification on this dataset and outperforms the previous model for multilingual text classification, highlighting language independence of McM. | {
"paragraphs": [
[
"Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging.",
"In Natural Language Processing (NLP), deep learning has revolutionized the modeling and understanding of human languages. The richness, expressiveness, ambiguities, and complexity of the natural language can be addressed by deep neural networks without the need to produce complex engineered features BIBREF2. Deep learning models have been successfully used in many NLP tasks involving multilingual text. A Convolutional Neural Network (CNN) based model for sentiment classification of a multilingual dataset was proposed in BIBREF3. However, a particular record in the dataset belonged to one language only. In our case, a record can have either one or two languages. There is very little published work on this specific setting. One way to classify bilingual text is to normalize the different variations of a word to a standard spelling before training the model BIBREF4. However, such normalization requires external resources such as lexical database, and Roman Urdu is under-resourced in this context. Another approach for an under-resourced language is to adapt the resources from resource-rich language BIBREF5. However, such an approach is not generalizable in the case of Roman Urdu text as it is an informal language with no proper grammatical rules and dictionary. More recent approach utilizes code-switching annotations to improve the predictive performance of the model, where each word is annotated with its respective language label. Such an approach is not scalable for large data as annotation task becomes tedious.",
"In this paper, we propose a multi-cascaded deep learning network, called as McM for multi-class classification of bilingual short text. Our goal is to achieve this without any prior knowledge of the language, code-switching indication, language translation, normalizing lexical variations, or language transliteration. In multilingual text classification, previous approaches employ a single deep learning architecture, such as CNN or Long Short Term Memory (LSTM) for feature learning and classification. McM, on the other hand, employs three cascades (aka feature learners) to learn rich textual representations from three perspectives. These representations are then forwarded to a small discriminator network for final prediction. We compare the performance of the proposed model with existing CNN-based model for multilingual text classification BIBREF3. We report a series of experiments using 3 kinds of embedding initialization approaches as well as the effect of attention mechanism BIBREF6.",
"The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research.",
"The rest of the paper is organized as follows. Section SECREF2 defines the dataset acquiring process and provides an explanation of the class labels. In section SECREF3, the architecture of the proposed model, its hyperparameters, and the experimental setup is discussed. We discuss the results in section SECREF4 and finally, concluding remarks are presented in section SECREF5.",
"."
],
[
"The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. Preprocessing of the data is kept minimal. All records having only single word in SMS were removed as cleaning step. To construct the “gold standard\", $313,813$ samples are manually annotated into 12 predefined categories by two annotators in supervision of a domain-expert. Involvement of the domain-expert was to ensure the practicality and quality of the “gold standard\". Finally, stratified sampling method was opted for splitting the data into train and test partitions with $80-20$ ratio (i.e., $80\\%$ records for training and $20\\%$ records for testing). This way, training split has $251,050$ records while testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data along with train and test split is made available . Note that the department names and service availed by the citizens is mapped to an integer identifier for anonymity.",
"Class label ratios, corresponding labels, and it's description are presented in Table TABREF1."
],
[
"The proposed model, named McM, is mainly inspired by the findings by Reimers, N., & Gurevych (2017) , who concluded that deeper model have minimal effect on the predictive performance of the model BIBREF7. McM manifests a wider model, which employ three feature learners (cascades) that are trained for classification independently (in parallel).",
"The input text is first mapped to embedding matrix of size $l \\times d$ where $l$ denotes the number of words in the text while $d$ is dimensions of the embedding vector for each of these words. More formally, let $\\mathcal {T} \\in \\lbrace w_1, w_2, ..., w_l\\rbrace $ be the input text with $l$ words, embedding matrix is defined by ${X} \\in \\mathbb {R}^{l \\times d}$. This representation is then fed to three feature learners, which are trained with local supervision. The learned features are then forwarded to discriminator network for final prediction as shown in Fig. FIGREF3. Each of these components are discussed in subsequent subsections."
],
[
"CNN learner is employed to learn $n$-gram features for identification of relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \\in \\mathbb {R}^{k \\times d}$ for the convolution function. The word vectors starting from the position $j$ to the position $j + k -1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as:",
"Where, the $\\oplus $ represents the concatenation of word vectors. The number of filters are usually decided empirically. Each filter convolves with one window at a time to generate a feature map $f_j$ for that specific window as:",
"Where, the $\\odot $ represents convolution operation, $b$ is a bias term, and $\\sigma $ is a nonlinear transformation function ReLU, which is defined as $\\sigma (x) = max(x,0)$. The feature maps of each window are concatenated across all filters to get a high level vector representation and fed as input to next CNN layer. Output of second CNN layer is followed by (i) global max-pooling to remove low activation information from feature maps of all filters, and (ii) global average-pooling to get average activation across all the $n$-grams.",
"These two outputs are then concatenated and forwarded to a small feedforward network having two fully-connected layers, followed by a softmax layer for prediction of this particular learner. Dropout and batch-normalization layers are repeatedly used between both fully-connected layers to avoid features co-adaptation BIBREF8, BIBREF9."
],
[
"The traditional methods in deep learning do not account for previous information while processing current input. LSTM, however, is able to memorize past information and correlate it with current information BIBREF10. LSTM structure has memory cells (aka LSTM cells) that store the information selectively. Each word is treated as one time step and is fed to LSTM in a sequential manner. While processing the input at current time step $X_t$, LSTM also takes into account the previous hidden state $h_{t-1}$. The LSTM represents each time step with an input, a memory, and an output gate, denoted as $i_t, f_t$ and $o_t$ respectively. The hidden state $h_t$ of input $X_t$ for each time step $t$ is given by:",
"Where, the $*$ is element-wise multiplication and $\\sigma $ is sigmoid activation function.",
"Stacked-LSTM learner is comprised of two LSTM layers. Let ${H_1}$ be a matrix consisting of output vectors $\\lbrace h_1, h_2, ..., h_l\\rbrace $ that the first LSTM layer produced, denoting output at each time steps. This matrix is fed to second LSTM layer. Similarly, second layer produces another output matrix $H_2$ which is used to apply global max-pooling and global-average pooling. These two outputs are concatenated and forwarded to a two layered feedforward network for intermediate supervision (prediction), identical to previously described stacked-CNN learner."
],
[
"LSTM learner is employed to learn long-term dependencies of the text as described in BIBREF10. This learner encodes complete input text recursively. It takes one word vector at a time as input and outputs a single vector. The dimensions of the output vector are equal to the number of LSTM units deployed. This encoded text representation is then forwarded to a small feedforward network, identical to aforementioned two learners, for intermediate supervision in order to learn features. This learner differs from stacked-LSTM learner as it learns sentence features, and not average and max features of all time steps (input words)."
],
[
"The objective of discriminator network is to aggregate features learned by each of above described three learners and squash them into a small network for final prediction. The discriminator employs two fully-connected layers with batch-normalization and dropout layer along with ReLU activation function for non-linearity. The softmax activation function with categorical cross-entropy loss is used on the final prediction layer to get probabilities of each class. The class label is assigned based on maximum probability. This is treated as final prediction of the proposed model. The complete architecture, along with dimensions of each output is shown in Fig. FIGREF3."
],
[
"Pre-trained word embeddings on massive data, such as GloVe BIBREF11, give boost to predictive performance for multi-class classification BIBREF12. However, such embeddings are limited to English language only with no equivalence for Roman Urdu. Therefore, in this study, we avoid using any word-based pre-trained embeddings to give equal treatment to words of each language. We perform three kinds of experiments. (1) Embedding matrix is constructed using ELMo embeddings BIBREF13, which utilizes characters to form word vectors and produces a word vector with $d = 1024$. We call this variation of the model McM$_\\textsubscript {E}$. (2) Embedding matrix is initialized randomly for each word with word vector of size $d = 300$. We refer this particular model as McM$_\\textsubscript {R}$. (3) We train domain specific embeddings using word2vec with word vector of size $d = 300$ as suggested in original study BIBREF14. We refer to this particular model as McM$_\\textsubscript {D}$.",
"Furthermore, we also introduce soft-attention BIBREF6 between two layers of CNN and LSTM (in respective feature learner) to evaluate effect of attention on bilingual text classification. Attention mechanism “highlights\" (assigns more weight) a particular word that contributes more towards correct classification. We refer to attention based experiments with subscript $A$ for all three embedding initializations. This way, a total of 6 experiments are performed with different variations of the proposed model. To mitigate effect of random initialization of network weights, we fix the random seed across all experiments. We train each model for 20 epochs and create a checkpoint at epoch with best predictive performance on test split.",
"We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. Rest of the hyperparameters were selected by performing grid search using $20\\%$ stratified validation set from training set on McM$_\\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set."
],
[
"We employed the standard metrics that are widely adapted in the literature for measuring multi-class classification performance. These metrics are accuracy, precision, recall, and F1-score, where latter three can be computed using micro-average or macro-average strategies BIBREF15. In micro-average strategy, each instance holds equal weight and outcomes are aggregated across all classes to compute a particular metric. This essentially means that the outcome would be influenced by the frequent class, if class distribution is skewed. In macro-average however, metrics for each class are calculated separately and then averaged, irrespective of their class label occurrence ratio. This gives each class equal weight instead of each instance, consequently favoring the under-represented classes.",
"In our particular dataset, it is more plausible to favor smaller classes (i.e., other than “Appreciation\" and “Satisfied\") to detect potential complaints. Therefore, we choose to report macro-average values for precision, recall, and F1-score which are defined by (DISPLAY_FORM20), (DISPLAY_FORM21), and (DISPLAY_FORM22) respectively."
],
[
"Before evaluating the McM, we first tested the baseline model on our dataset. Table TABREF23 presents results of baseline and all variations of our experiments. We focus our discussion on F1-score as accuracy is often misleading for dataset with unbalanced class distribution. However, for completeness sake, all measures are reported.",
"It is observed from the results that baseline model performs worst among all the experiments. The reason behind this degradation in performance can be traced back to the nature of the texts in the datasets (i.e., datasets used in original paper of baseline model BIBREF3 and in our study). The approach in base model measure the performance of the model on multilingual dataset in which there is no code-switching involved. The complete text belongs to either one language or the other. However, in our case, the SMS text can have code-switching between two language, variation of spelling, or non-standard grammar. Baseline model is simple 1 layered CNN model that is unable to tackle such challenges. On the other hand, McM learns the features from multiple perspectives, hence feature representations are richer, which consequently leads to a superior predictive performance. As every learner in McM is also supervised, all 4 components of the proposed model (i.e., stacked-CNN learner, stacked-LSTM learner, LSTM-learner, and discriminator) can also be compared with each other.",
"In our experiments, the best performing variation of the proposed model is McM$_\\textsubscript {D}$. On this particular setting, discriminator is able to achieve an F1-score of $0.69$ with precision and recall values of $0.72$ and $0.68$ respectively. Other components of McM also show the highest stats for all performance measures. However, for McM$_\\textsubscript {DA}$, a significant reduction in performance is observed, although, attention-based models have been proven to show improvement in performance BIBREF6. Investigating the reason behind this drop in performance is beyond the scope of this study. The model variations trained on ELMo embedding have second highest performance. Discriminator of McM$_\\textsubscript {E}$ achieves an F1-score of $0.66$, beating other learners in this experiment. However, reduction in performance is persistent when attention is used for McM$_\\textsubscript {EA}$.",
"Regarding the experiments with random embedding initialization, McM$_\\textsubscript {R}$ shows similar performance to McM$_\\textsubscript {EA}$, while McM$_\\textsubscript {RA}$ performs the worst. It is worth noting that in each experiment, discriminator network stays on top or performs equally as compared to other components in terms of F1-score. This is indication that discriminator network is able to learn richer representations of text as compared to methods where only single feature learner is deployed.",
"Furthermore, the results for testing error for each component (i.e., 3 learners and a discriminator network) for all 4 variations of the proposed model are presented in Fig. FIGREF24. It is evident that the least error across all components is achieved by McM$_\\textsubscript {D}$ model. Turning now to individual component performance, in ELMo embeddings based two models, lowest error is achieved by discriminator network, closely followed by stacked LSTM learner and stacked-CNN learner, while LSTM learner has the highest error. As far as model variations with random embeddings initializations are concerned, most interesting results are observed. As shown in subplot (c) and (d) in Fig. FIGREF24, McM$_\\textsubscript {R}$ and McM$_\\textsubscript {RA}$ tend to overfit. After second epoch, the error rate for all components of these two variations tend to increase drastically. However, it shows minimum error for discriminator in both variations, again proving that the features learned through multiple cascades are more robust and hold greater discriminative power. Note that in all 6 variations of experiments, the error of discriminator network is the lowest as compared to other components of McM. Hence it can be deduced that learning features through multiple perspectives and aggregating them for final prediction is more fruitful as compared to single method of learning."
],
[
"In this work, a new large-scale dataset and a novel deep learning architecture for multi-class classification of bilingual (English-Roman Urdu) text with code-switching is presented. The dataset is intended for enhancement of petty corruption detection in public offices and provides grounds for future research in this direction. While deep learning architecture is proposed for multi-class classification of bilingual SMS without utilizing any external resource. Three word embedding initialization techniques and soft-attention mechanism is also investigated. The observations from extensive experimentation led us to conclude that: (1) word embeddings vectors generated through characters tend to favor bilingual text classification as compared to random embedding initialization, (2) the attention mechanism tend to decrease the predictive performance of the model, irrespective of embedding types used, (3) using features learned through single perspective yield poor performance for bilingual text with code-switching, (4) training domain specific embeddings on a large corpus and using them to train the model achieves the highest performance.",
"With regards to future work, we intend to investigate the reason behind degradation of model performance with soft-attention."
]
],
"section_name": [
"Introduction",
"Dataset Acquisition and Description",
"Proposed Model and Experimentation",
"Proposed Model and Experimentation ::: Stacked-CNN Learner",
"Proposed Model and Experimentation ::: Stacked-LSTM Learner",
"Proposed Model and Experimentation ::: LSTM Learner",
"Proposed Model and Experimentation ::: Discriminator Network",
"Proposed Model and Experimentation ::: Experimental Setup",
"Proposed Model and Experimentation ::: Evaluation Metrics",
"Results and Discussion",
"Concluding Remarks"
]
} | {
"answers": [
{
"annotation_id": [
"97699596d6edde902e1177fb0522e10ad5d36585"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. Performance evaluation of variations of the proposed model and baseline. Showing highest scores in boldface."
],
"extractive_spans": [],
"free_form_answer": "the best performing model obtained an accuracy of 0.86",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. Performance evaluation of variations of the proposed model and baseline. Showing highest scores in boldface."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7e503e315e1cd62f2a0ec5b3037875d29fb3f89d"
],
"answer": [
{
"evidence": [
"We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. Rest of the hyperparameters were selected by performing grid search using $20\\%$ stratified validation set from training set on McM$_\\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set."
],
"extractive_spans": [
"the model proposed in BIBREF3"
],
"free_form_answer": "",
"highlighted_evidence": [
"We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"81767610ba00ed9b488ef4f121dc43c25ad04731"
],
"answer": [
{
"evidence": [
"The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research."
],
"extractive_spans": [
"$0.3$ million records"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset consists of more than $0.3$ million records and has been made available for future research."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"810157d29af3d7a04ebd48683d9cddff2040396c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1. Description of class label along with distribution of each class (in %) in the acquired dataset"
],
"extractive_spans": [],
"free_form_answer": "Appreciation, Satisfied, Peripheral complaint, Demanded inquiry, Corruption, Lagged response, Unresponsive, Medicine payment, Adverse behavior, Grievance ascribed and Obnoxious/irrelevant",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. Description of class label along with distribution of each class (in %) in the acquired dataset"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What accuracy score do they obtain?",
"What is their baseline model?",
"What is the size of the dataset?",
"What is the 12 class bilingual text?"
],
"question_id": [
"160e6d2fc6e04bb0b4ee8d59c06715355dec4a17",
"2c88b46c7e3a632cfa10b7574276d84ecec7a0af",
"6ff240d985bbe96b9d5042c9b372b4e8f498f264",
"30dad5d9b4a03e56fa31f932c879aa56e11ed15b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Description of class label along with distribution of each class (in %) in the acquired dataset",
"Fig. 1. Multi-cascaded model (McM) for bilingual short text classification (figure best seen in color)",
"Table 2. Hyperparameter tuning, the selection range, and final choice",
"Table 3. Performance evaluation of variations of the proposed model and baseline. Showing highest scores in boldface.",
"Fig. 2. Test error for all three feature learners and discriminator network over the epochs for all 4 variations of the model, showing lowest error for domain specific embeddings while highest for random embedding initialization."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"7-Table2-1.png",
"9-Table3-1.png",
"10-Figure2-1.png"
]
} | [
"What accuracy score do they obtain?",
"What is the 12 class bilingual text?"
] | [
[
"1911.13066-9-Table3-1.png"
],
[
"1911.13066-3-Table1-1.png"
]
] | [
"the best performing model obtained an accuracy of 0.86",
"Appreciation, Satisfied, Peripheral complaint, Demanded inquiry, Corruption, Lagged response, Unresponsive, Medicine payment, Adverse behavior, Grievance ascribed and Obnoxious/irrelevant"
] | 706 |
1908.05969 | Simplify the Usage of Lexicon in Chinese NER | Recently, many works have tried to utilizing word lexicon to augment the performance of Chinese named entity recognition (NER). As a representative work in this line, Lattice-LSTM \cite{zhang2018chinese} has achieved new state-of-the-art performance on several benchmark Chinese NER datasets. However, Lattice-LSTM suffers from a complicated model architecture, resulting in low computational efficiency. This will heavily limit its application in many industrial areas, which require real-time NER response. In this work, we ask the question: if we can simplify the usage of lexicon and, at the same time, achieve comparative performance with Lattice-LSTM for Chinese NER? ::: Started with this question and motivated by the idea of Lattice-LSTM, we propose a concise but effective method to incorporate the lexicon information into the vector representations of characters. This way, our method can avoid introducing a complicated sequence modeling architecture to model the lexicon information. Instead, it only needs to subtly adjust the character representation layer of the neural sequence model. Experimental study on four benchmark Chinese NER datasets shows that our method can achieve much faster inference speed, comparative or better performance over Lattice-LSTM and its follwees. It also shows that our method can be easily transferred across difference neural architectures. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4.",
"Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)\" as an example, where “/\" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)\", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)\" and “长江大桥 (Yangtze River Bridge)\" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)\" as a location entity and predict “江大桥 (Daqiao Jiang)\" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0.",
"A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed to incorporating word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for the character if it matches multiple words of the lexicon, they proposed to preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparative or better performance on Chinese NER over existing methods.",
"Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets.",
"In this work, we aim to find a easier way to achieve the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence to the character-based NER model. The first principle of our method design is to achieve a fast inference speed. To this end, we propose to encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement. It can avoid complicated model architecture design thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architectures without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER."
],
[
"In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below."
],
[
"For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\\lbrace c_1, \\cdots , c_n\\rbrace \\in \\mathcal {V}_c$, where $\\mathcal {V}_c$ is the character vocabulary. Each character $c_i$ is represented using a dense vector (embedding):",
"where $\\mathbf {e}^{c}$ denotes the character embedding lookup table."
],
[
"In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings:",
"where $\\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\\oplus $ denotes the concatenation operation. The sequence of character representations $\\mathbf {\\mathrm {x}}_i^c$ form the matrix representation $\\mathbf {\\mathrm {x}}^s=\\lbrace \\mathbf {\\mathrm {x}}_1^c, \\cdots , \\mathbf {\\mathrm {x}}_n^c\\rbrace $ of $s$."
],
[
"The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based."
],
[
"The bidirectional long-short term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. Here, we precisely show the definition of the forward LSTM:",
"where $\\sigma $ is the element-wise sigmoid function and $\\odot $ represents element-wise product. $\\mathbf {\\mathrm {\\mathrm {W}}} \\in {\\mathbf {\\mathrm {\\mathbb {R}}}^{4k_h\\times (k_h+k_w)}}$ and $\\mathbf {\\mathrm {\\mathrm {b}}}\\in {\\mathbf {\\mathrm {\\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\\mathbf {\\mathrm {h}}_i=[\\overrightarrow{\\mathbf {\\mathrm {h}}}_i \\oplus \\overleftarrow{\\mathbf {\\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$."
],
[
"Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\\mathbf {\\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\\mathbf {\\mathrm {h}}_i^0=\\mathbf {\\mathrm {x}}^c_i$, and $\\mathbf {\\mathrm {F}}^l \\in \\mathbb {R}^{k_l \\times k_c \\times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\\mathbf {\\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\\mathbf {\\mathrm {F}}^l$ over the 3-gram representation:",
"where $\\mathbf {\\mathrm {h}}^l_{<i-1, i+1>} = [\\mathbf {\\mathrm {h}}^l_{i-1}; \\mathbf {\\mathrm {h}}^l_{i}; \\mathbf {\\mathrm {h}}^l_{i+1}]$ and $\\langle A,B \\rangle _i=\\mbox{Tr}(AB[i, :, :]^T)$. This operation applies $L$ times, obtaining the final context-dependent representation, $\\mathbf {\\mathrm {h}}_i = \\mathbf {\\mathrm {h}}_i^L$, of $c_i$."
],
[
"Transformer BIBREF15 is originally proposed for sequence transduction, on which it has shown several advantages over the recurrent or convolutional neural networks. Intrinsically, it can also be applied to the sequence labeling task using only its encoder part.",
"In similar, let $\\mathbf {\\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\\mathbf {\\mathrm {h}}_i^0=\\mathbf {\\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\\mathbf {\\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\\mathbf {\\mathrm {h}}^l$:",
"where $d^l$ is the dimension of $\\mathbf {\\mathrm {h}}^l_i$. This process applies $L$ times, obtaining $\\mathbf {\\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\\mathbf {\\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\\mathbf {\\mathrm {h}}_i$:",
"where $PE_i=sin(i/1000^{2j/d^L}+j\\%2\\cdot \\pi /2)$. We recommend you to refer to the excellent guides “The Annotated Transformer.” for more implementation detail of this architecture."
],
[
"On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole:",
"where $\\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\\phi _{t}({y}^\\prime , {y}|\\mathbf {\\mathrm {s}})=\\exp (\\mathbf {w}^T_{{y}^\\prime , {y}} \\mathbf {\\mathrm {h}}_t + b_{{y}^\\prime , {y}})$, where $\\mathbf {w}_{{y}^\\prime , {y}}$ and $ b_{{y}^\\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\\prime , {y})$, and $\\mathbf {\\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\\mathbf {\\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$:",
"which can be efficiently solved using the Viterbi algorithm BIBREF17."
],
[
"Lattice-LSTM designs to incorporate word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It will add an directed edge from $c_i$ to $c_j$, if the sub-sequence $\\lbrace c_i, \\cdots , c_j\\rbrace $ of the sentence matches a word of the lexicon for $i < j$. And it preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\\lbrace c_1, c_2, c_3, c_4, c_5\\rbrace $, if both its sub-sequences $\\lbrace c_1, c_2, c_3, c_4\\rbrace $ and $\\lbrace c_2, c_3, c_4\\rbrace $ match a word of the lexicon, it will add a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice will turn the input form of the sentence from a chained sequence into a graph.",
"To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\\mathbf {\\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\\lbrace \\mathbf {\\mathrm {h}}_i, \\forall s_{<i, j>} \\in s_{<*, j>}\\rbrace $, and $\\mathbf {\\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\\lbrace \\mathbf {\\mathrm {c}}_i, \\forall s_{<i, j>} \\in s_{<*, j>}\\rbrace $. In Lattice-LSTM, the hidden state $\\mathbf {\\mathrm {h}}_j$ and memory cell $\\mathbf {\\mathrm {c}}_j$ of $c_j$ are now updated by:",
"where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contains current step character representation $\\mathbf {\\mathrm {x}}_j^c$, last step hidden state $\\mathbf {\\mathrm {h}}_{j-1}$ and memory cell $\\mathbf {\\mathrm {c}}_{j-1}$, and lexicon matched sub-sequences $s_{<*, j>}$ and their corresponding hidden state and memory cell lists, $\\mathbf {\\mathrm {h}}_{<*, j>}$ and $\\mathbf {\\mathrm {c}}_{<*, j>}$. We refer you to the paper of Lattice-LSTM BIBREF0 for more detail of the implementation of $f$.",
"A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\\mathbf {\\mathrm {h}}_{<*, j>}$, and $\\mathbf {\\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model."
],
[
"In this section, we introduce our method, which aims to keep the merit of Lattice-LSTM and at the same time, make the computation efficient. We will start the description of our method from our thinking on Lattice-LSTM.",
"From our view, the advance of Lattice-LSTM comes from two points. The first point is that it preserve all possible matching words for each character. This can avoid the error propagation introduced by heuristically choosing a matching result of the character to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which bring great help to the final performance. While the disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph. This will greatly increase the computational cost for sentence modeling. Therefore, the design of our method should try to keep the chained input form of the sentence and at the same time, achieve the above two advanced points of Lattice-LSTM.",
"With this in mind, our method design was firstly motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. Precisely, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label:",
"Here, $seg(c_j) \\in \\mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\\mathcal {Y}_{seg}=\\lbrace \\text{B}, \\text{M}, \\text{E}, \\text{S}\\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word.",
"The first idea we come out based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\\lbrace c_1, c_2, c_3, c_4, c_5\\rbrace $ as an example. If both its sub-sequences $\\lbrace c_1, c_2, c_3, c_4\\rbrace $ and $\\lbrace c_3, c_4\\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\\lbrace \\lbrace \\text{B}\\rbrace , \\lbrace \\text{M}\\rbrace , \\lbrace \\text{B}, \\text{M}\\rbrace , \\lbrace \\text{E}\\rbrace , \\lbrace \\text{O}\\rbrace \\rbrace $. Here, $segs(s)_1=\\lbrace \\text{B}\\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\\lbrace \\text{B}, \\text{M}\\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\\lbrace \\text{O}\\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by:",
"where $\\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\\lbrace \\text{B, M, E, S, O\\rbrace }$. We call this method as ExSoftword in the following.",
"However, through the analysis of ExSoftword, we can find out that the ExSoftword method cannot fully inherit the two merits of Lattice-LSTM. Firstly, it cannot not introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses lots of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case that in the sentence $s=\\lbrace c_1, c_2, c_3, c_4\\rbrace $, $\\lbrace c_1, c_2, c_3\\rbrace $ and $\\lbrace c_2, c_3, c_4\\rbrace $ match the lexicon. In this case, $segs(s) = \\lbrace \\lbrace \\text{B}\\rbrace , \\lbrace \\text{B}, \\text{M}\\rbrace , \\lbrace \\text{M}, \\text{E}\\rbrace , \\lbrace \\text{E}\\rbrace \\rbrace $. However, based on $segs(s)$ and $s$, we cannot say that it is $\\lbrace c_1, c_2, c_3\\rbrace $ and $\\lbrace c_2, c_3, c_4\\rbrace $ matching the lexicon since we will obtain the same segmentation label sequence when $\\lbrace c_1, c_2, c_3, c_4\\rbrace $ and $\\lbrace c_2,c_3\\rbrace $ match the lexicon.",
"To this end, we propose to preserving not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES\". The word set $\\rm {B}(c)$ consists of all lexicon matched words on $s$ that begin with $c$. Similarly, $\\rm {M}(c)$ consists of all lexicon matched words in the middle of which $c$ occurs, $\\rm {E}(c)$ consists of all lexicon matched words that end with $c$, and $\\rm {S}(c)$ is the single-character word comprised of $c$. And if a word set is empty, we will add a special word “NONE\" to it to indicate this situation. Consider the sentence $s=\\lbrace c_1, \\cdots , c_5\\rbrace $ and suppose that $\\lbrace c_1, c_2\\rbrace $, $\\lbrace c_1, c_2, c_3\\rbrace $, $\\lbrace c_2, c_3, c_4\\rbrace $, and $\\lbrace c_2, c_3, c_4, c_5\\rbrace $ match the lexicon. Then, for $c_2$, $\\rm {B}(c_2)=\\lbrace \\lbrace c_2, c_3, c_4\\rbrace , \\lbrace c_2, c_3, c_4, c_5\\rbrace \\rbrace $, $\\rm {M}(c_2)=\\lbrace \\lbrace c_1, c_2, c_3\\rbrace \\rbrace $, $\\rm {E}(c_2)=\\lbrace \\lbrace c_1, c_2\\rbrace \\rbrace $, and $\\rm {S}(c_2)=\\lbrace NONE\\rbrace $. In this way, we can now introduce the pre-trained word embeddings and moreover, we can exactly restore the matching results from the word sets of each character.",
"The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain information as much as possible, we choose to concatenate the representations of the four word sets to represent them as a whole and add it to the character representation:",
"Here, $\\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector.",
"This also means that we should map each word set into a fixed-dimensional vector. To achieve this purpose, we first tried the mean-pooling algorithm to get the vector representation of a word set $\\mathcal {S}$:",
"Here, $\\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm performs not so well . Through the comparison with Lattice-LSTM, we find out that in Lattice-LSTM, it applies a dynamic attention algorithm to weigh each matched word related to a single character. Motivated by this practice, we propose to weighing the representation of each word in the word set to get the pooling representation of the word set. However, considering the computational efficiency, we do not want to apply a dynamical weighing algorithm, like attention, to get the weight of each word. With this in mind, we propose to using the frequency of the word as an indication of its weight. The basic idea beneath this algorithm is that the more times a character sequence occurs in the data, the more likely it is a word. Note that, the frequency of a word is a static value and can be obtained offline. This can greatly accelerate the calculation of the weight of each word (e.g., using a lookup table).",
"Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency of $w_c$ occurring in the statistic data set (in this work, we combine training and testing data of a task to construct the statistic data set. Of course, if we have unlabelled data for the task, we can take the unlabeled data as the statistic data set). Note that, we do not add the frequency of $w$ if $w_c$ is covered by that of another word of the lexicon in the sentence. For example, suppose that the lexicon contains both “南京 (Nanjing)\" and “南京市 (Nanjing City)\". Then, when counting word frequency on the sequence “南京市长江大桥\", we will not add the frequency of “南京\" since it is covered by “南京市\" in the sequence. This can avoid the situation that the frequency of “南京\" is definitely higher than “南京市\". Finally, we get the weighted representation of the word set $\\mathcal {S}$ by:",
"where",
"Here, we perform weight normalization on all words of the four word sets to allow them compete with each other across sets.",
"Further, we have tried to introducing a smoothing to the weight of each word to increase the weights of infrequent words. Specifically, we add a constant $c$ into the frequency of each word and re-define $\\mathbf {v}^s$ by:",
"where",
"We set $c$ to the value that there are 10% of training words occurring less than $c$ times within the statistic data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four 'BMES' word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistic data set. Thirdly, we obtain the vector representation of the four word sets of each character according to Eq. (DISPLAY_FORM22), and add it to the character representation according to Eq. (DISPLAY_FORM20). Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, like LSTM-based sequence modeling layer + CRF label inference layer."
],
[
"Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. Decision made in this step will be applied to the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, which is a followee of Lattice-LSTM for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER data sets. Finally, we verified the applicability of our method to different sequence labeling models."
],
[
"Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-completed, we concisely illustrate some primary settings of this work."
],
[
"The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table TABREF26 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words."
],
[
"When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number. The hidden size was set to 100 for Weibo and 256 for the rest three datasets. The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA with Adamax BIBREF25.",
"When applying the CNN- and transformer- based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for transformer-based model was set to 2 with h=4 parallel attention layers. Kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers."
],
[
"In this experiment, we compared the implementations of $\\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we study whether or not character bigrams can bring improvement to our method.",
"Table TABREF31 shows performance of three implementations of $\\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm performs generally better than the other two implementations. Of course, we may obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first one is to guarantee the generality of our system for unexplored tasks. The second one is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we in default applied the weighted pooling algorithm to implement $\\mathbf {v}^s$.",
"Figure FIGREF32 shows the F1-score of our method against the number of training iterations when using character bigram or not. From the figure, we can see that additionally introducing character bigrams cannot bring considerable improvement to our method. A possible explanation of this phenomenon is that the introduced word information by our proposed method has covered the bichar information. Therefore, in the following experiments, we did not use bichar in our method."
],
[
"Table TABREF34 shows the inference speed of our method when implementing the sequnece modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used an CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed than those with the LSTM-based and Transformer-based sequence model layer."
],
[
"Table TABREF37$-$TABREF43 show the performance of method with the LSTM-based sequence modeling layer compared with Lattice-LSTM and other comparative baselines."
],
[
"Table TABREF37 shows results on OntoNotes, which has gold segmentation for both training and testing data. The methods of the “Gold seg\" and \"Auto seg\" group are word-based that build on the gold word segmentation results and the automatic segmentation results, respectively. The automatic segmentation results were generated by the segmenter trained on training data of OntoNotes. Methods of the \"No seg\" group are character-based. From the table, we can obtain several informative observations. First, by replacing the gold segmentation with the automatically generated segmentation, the F1-score of the Word-based (LSTM) + char + bichar model decreased from 75.77% to 71.70%. This shows the problem of the practice that treats the predicted word segmentation result as the true one for the word-based Chinese NER. Second, the Char-based (LSTM)+bichar+ExSoftword model achieved a 71.89% to 72.40% improvement over the Char-based (LSTM)+bichar+softword baseline on the F1-score. This indicates the feasibility of the naive extension of ExSoftword to softword. However, it still greatly underperformed Lattice-LSTM, showing its deficiency in utilizing word information. Finally, our proposed method, which is a further extension of Exsoftword, obtained a statistically significant improvement over Lattice-LSTM and even performed similarly to those word-based methods with gold segmentation, verifying its effectiveness on this data set."
],
[
"Table TABREF40 shows results on MSRA. The word-based methods were built on the automatic segmentation results generated by the segmenter trained on training data of MSRA. Compared methods included the best statistical models on this data set, which leveraged rich handcrafted features BIBREF28, BIBREF29, BIBREF30, character embedding features BIBREF31, and radical features BIBREF32. From the table, we observe that our method obtained a statistically significant improvement over Lattice-LSTM and other comparative baselines on the recall and F1-score, verifying the effectiveness of our method on this data set."
],
[
"Table TABREF42 shows results on Weibo NER, where NE, NM, and Overall denote F1-scores for named entities, nominal entities (excluding named entities) and both, respectively. The existing state-of-the-art system BIBREF19 explored rich embedding features, cross-domain data, and semi-supervised data. From the table, we can see that our proposed method achieved considerable improvement over the compared baselines on this data set. Table TABREF43 shows results on Resume. Consistent with observations on the other three tested data sets, our proposed method significantly outperformed Lattice-LSTM and the other comparable methods on this data set."
],
[
"Table TABREF46 shows performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer- based architectures. In addition, our methods with different sequence modeling layers consistently outperformed their corresponding ExSoftword baselines. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information."
],
[
"In this work, we address the computational efficiency for utilizing word lexicon in Chinese NER. To achieve a high-performing NER system with fast inference speed, we proposed to adding lexicon information into the character representation and keeping the input form of a sentence as a chained sequence. Experimental study on four benchmark Chinese NER datasets shows that our method can obtain faster inference speed than the comparative methods and at the same time, achieve high performance. It also shows that our methods can apply to different neural sequence labeling models for Chinese NER."
]
],
"section_name": [
"Introduction",
"Generic Character-based Neural Architecture for Chinese NER",
"Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer",
"Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer ::: Char + bichar.",
"Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer",
"Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: LSTM-based",
"Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: CNN-based",
"Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: Transformer-based",
"Generic Character-based Neural Architecture for Chinese NER ::: Label Inference Layer",
"Lattice-LSTM for Chinese NER",
"Proposed Method",
"Experiments ::: Experiment Design",
"Experiments ::: Experiment Setup",
"Experiments ::: Experiment Setup ::: Datasets",
"Experiments ::: Experiment Setup ::: Implementation Detail",
"Experiments ::: Development Experiments",
"Experiments ::: Computational Efficiency Study",
"Experiments ::: Effectiveness Study",
"Experiments ::: Effectiveness Study ::: OntoNotes.",
"Experiments ::: Effectiveness Study ::: MSRA.",
"Experiments ::: Effectiveness Study ::: Weibo/Resume.",
"Experiments ::: Transferability Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"bda4b01fa2dd21394e8f3d43120de698a2745560"
],
"answer": [
{
"evidence": [
"The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based.",
"Table TABREF46 shows performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer- based architectures. In addition, our methods with different sequence modeling layers consistently outperformed their corresponding ExSoftword baselines. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information."
],
"extractive_spans": [],
"free_form_answer": "The sequence model architectures which this method is transferred to are: LSTM and Transformer-based models",
"highlighted_evidence": [
"In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based.",
"Table TABREF46 shows performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer- based architectures. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"7ef344319ddcefe92ba5091bcade93be7b1fdc6e"
],
"answer": [
{
"evidence": [
"Table TABREF34 shows the inference speed of our method when implementing the sequnece modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used an CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed than those with the LSTM-based and Transformer-based sequence model layer."
],
"extractive_spans": [],
"free_form_answer": "Across 4 datasets, the best performing proposed model (CNN) achieved an average of 363% improvement over the state of the art method (LR-CNN)",
"highlighted_evidence": [
"Table TABREF34 shows the inference speed of our method when implementing the sequnece modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. ",
"From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used an CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed than those with the LSTM-based and Transformer-based sequence model layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat"
],
"question": [
"Which are the sequence model architectures this method can be transferred across?",
" What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods?"
],
"question_id": [
"54c9147ffd57f1f7238917b013444a9743f0deb8",
"16f71391335a5d574f01235a9c37631893cd3bb0"
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"search_query": [
"lexicon",
"lexicon"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of datasets.",
"Table 2: F1-score of our method with different implementations of vs. MP denotes the mean-pooling algorithm depicted in Eq. (13), WP denotes the frequency weighted pooling algorithm depicted in Eq. (14), and SWP denotes the smoothed weighted pooling algorithm depicted in Eq. (15).",
"Figure 1: F1 of our proposed method against the number of training iterations on OntoNotes when using bichar or not.",
"Table 3: Inference speed (average sentences per second, the larger the better) of our method with different implementations of the sequence modeling layer compared with Lattice-LSTM and LR-CNN.",
"Table 4: Performance on OntoNotes. A method followed by (LSTM) (e.g., Proposed (LSTM)) indicates that its sequence modeling layer is LSTM-based.",
"Table 5: Performance on MSRA.",
"Table 7: Performance on Resume.",
"Table 8: F1-score with different implementations of the sequence modeling layer. ExSoftword is the shorthand of Char-based+bichar+ExSoftword.",
"Table 6: Performance on Weibo. NE, NM and Overall denote F1-scores for named entities, nominal entities (excluding named entities) and both, respectively."
],
"file": [
"6-Table1-1.png",
"7-Table2-1.png",
"7-Figure1-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table7-1.png",
"9-Table8-1.png",
"9-Table6-1.png"
]
} | [
"Which are the sequence model architectures this method can be transferred across?",
" What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods?"
] | [
[
"1908.05969-Experiments ::: Transferability Study-0",
"1908.05969-Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer-0"
],
[
"1908.05969-Experiments ::: Computational Efficiency Study-0"
]
] | [
"The sequence model architectures which this method is transferred to are: LSTM and Transformer-based models",
"Across 4 datasets, the best performing proposed model (CNN) achieved an average of 363% improvement over the state of the art method (LR-CNN)"
] | 708 |
1804.11297 | Sampling strategies in Siamese Networks for unsupervised speech representation learning | Recent studies have investigated siamese network architectures for learning invariant speech representations using same-different side information at the word level. Here we investigate systematically an often ignored component of siamese networks: the sampling procedure (how pairs of same vs. different tokens are selected). We show that sampling strategies taking into account Zipf's Law, the distribution of speakers and the proportions of same and different pairs of words significantly impact the performance of the network. In particular, we show that word frequency compression improves learning across a large range of variations in number of training pairs. This effect does not apply to the same extent to the fully unsupervised setting, where the pairs of same-different words are obtained by spoken term discovery. We apply these results to pairs of words discovered using an unsupervised algorithm and show an improvement on state-of-the-art in unsupervised representation learning using siamese networks. | {
"paragraphs": [
[
"Current speech and language technologies based on Deep Neural Networks (DNNs) BIBREF0 require large quantities of transcribed data and additional linguistic resources (phonetic dictionary, transcribed data). Yet, for many languages in the world, such resources are not available and gathering them would be very difficult due to a lack of stable and widespread orthography BIBREF1 .",
"The goal of Zero-resource technologies is to build speech and language systems in an unknown language by using only raw speech data BIBREF2 . The Zero Resource challenges (2015 and 2017) focused on discovering invariant sub-word representations (Track 1) and audio terms (Track 2) in an unsupervised fashion. Several teams have proposed to use terms discovered in Track 2 to provide DNNs with pairs of same versus different words as a form of weak or self supervision for Track 1: correspondence auto-encoders BIBREF3 , BIBREF4 , siamese networks BIBREF5 , BIBREF6 .",
"This paper extends and complements the ABnet Siamese network architecture proposed by BIBREF7 , BIBREF5 for the sub-word modelling task. DNN contributions typically focus on novel architectures or objective functions. Here, we study an often overlooked component of Siamese networks: the sampling procedure which chooses the set of pairs of same versus different tokens. To assess how each parameter contributes to the algorithm performance, we conduct a comprehensive set of experiments with a large range of variations in one parameter, holding constant the quantity of available data and the other parameters. We find that frequency compression of the word types has a particularly important effect. This is congruent with other frequency compression techniques used in NLP, for instance in the computation of word embeddings (word2vec BIBREF8 ). Besides, Levy et al. BIBREF9 reveals that the performance differences between word-embedding algorithms are due more to the choice of the hyper-parameters, than to the embedding algorithms themselves.",
"In this study, we first show that, using gold word-level annotations on the Buckeye corpus, a flattened frequency range gives the best results on phonetic learning in a Siamese network. Then, we show that the hyper-parameters that worked best with gold annotations yield improvements in the zero-resource scenario (unsupervised pairs) as well. Specifically, they improve on the state-of-the-art obtained with siamese and auto-encoder architectures."
],
[
"We developed a new package abnet3 using the pytorch framework BIBREF10 . The code is open-sourced (BSD 3-clause) and available on github, as is the code for the experiments for this paper."
],
[
"For the weakly-supervised study, we use 4 subsets of the Buckeye BIBREF11 dataset from the ZeroSpeech 2015 challenge BIBREF2 with, respectively, 1%, 10%, 50%, and 100% of the original data (see Table 1 ). The original dataset is composed of American English casual conversations recorded in the laboratory, with no overlap, no speech noises, separated in two splits: 12 speakers for training and 2 speakers for test. A Voice Activity Detection file indicates the onset and offset of each utterance and enables to discard silence portions of each file. We use the orthographic transcription from word-level annotations to determine same and different pairs to train the siamese networks.",
"In the fully unsupervised setting, we obtain pairs of same and different words from the Track 2 baseline of the 2015 ZeroSpeech challenge BIBREF2 : the Spoken Term Discovery system from BIBREF12 . We use both the original files from the baseline, and a rerun of the algorithm with systematic variations on its similarity threshold parameter.",
"For the speech signal pre-processing, frames are taken every 10ms and each one is encoded by a 40 log-energy Mel-scale filterbank representing 25ms of speech (Hamming windowed), without deltas or delta-delta coefficients. The input to the Siamese network is a stack of 7 successive filterbank frames. The features are mean-variance normalized per file, using the VAD information."
],
[
"A Siamese network is a type of neural network architecture that is used for representation learning, initially introduced for signature verification BIBREF13 . It contains 2 subnetworks sharing the same architecture and weights. In our case, to obtain the training information, we use the lexicon of words to learn an embedding of speech sounds which is more representative of the linguistic properties of the signal at the sub-word level (phoneme structure) and invariant to non-linguistic ones (speaker ID, channel, etc). A token $t$ is from a specific word type $w$ (ex: “the”,“process” etc.) pronounced by a specific speaker $s$ . The input to the network during training is a pair of stacked frames of filterbank features $x_1$ and $x_2$ and we use as label $y = {1}(\\lbrace w_1 = w_2\\rbrace )$ . For pairs of identical words, we realign them at the frame level using the Dynamic Time Warping (DTW) algorithm BIBREF14 . Based on the alignment paths from the DTW algorithm, the sequences of the stacked frames are then presented as the entries of the siamese network. Dissimilar pairs are aligned along the shortest word, e.g. the longest word is trimmed. With these notions of similarity, we can learn a representation where the distance between the two outputs of the siamese network $e(x_1)$ and $e(x_2)$ try to respect as much as possible the local constraints between $x_1$ and $x_2$ . To do so, ABnet is trained with the margin cosine loss function: $w$0 ",
"For a clear and fair comparison between the sampling procedures we fixed the network architecture and loss function as in BIBREF5 . The subnetwork is composed of 2 hidden layers with 500 units, with the Sigmoid as non-linearity and a final embedding layer of 100 units. For regularization, we use the Batch Normalization technique BIBREF15 , with a loss margin $\\gamma =0.5$ . All the experiments are carried using the Adam training procedure BIBREF16 and early-stopping on a held-out validation set of $30\\%$ of spoken words. We sample the validation set in the same way as the training set."
],
[
"The sampling strategy refers to the way pairs of tokens are fed to the Siamese network. Sampling every possible pairs of tokens becomes quickly intractable as the dataset grows (cf. Table 1 ).",
"There are four different possible configurations for a pair of word tokens $(t_1,t_2) $ : whether, or not, the tokens are from the same word type, $w_1 = w_2$ . and whether, or not, the tokens are pronounced by the same speaker, $s_1 = s_2$ .",
"Each specific word type $w$ is characterized by the total number of occurrences $n_w$ it has been spoken in the whole corpus. Then, is deduced the frequency of appearances $f_w \\propto n_w$ , and $r_w$ its frequency rank in the given corpus. We want to sample a pair of word tokens, in our framework we sample independently these 2 tokens. We define the probability to sample a specific token word type $w$ as a function of $n_w$ . We introduce the function $\\phi $ as the sampling compression function: ",
"$$\\mathbb {P}(w) = \\frac{\\phi (n_w)}{\\sum \\limits _{\\forall w^{\\prime }}\\phi (n_{w^{\\prime }})}$$ (Eq. 7) ",
"When a specific word type $w$ is selected according to these probabilities, a token $t$ is selected randomly from the specific word type $w$ . The usual strategy to select pairs to train siamese networks is to randomly pick two tokens from the whole list of training tokens examples BIBREF13 , BIBREF17 , BIBREF5 . In this framework, the sampling function corresponds $\\phi : n \\rightarrow n$ . Yet, there is a puzzling phenomenon in human language, there exists an empirical law for the distribution of words, also known as the Zipf's law BIBREF18 . Words types appear following a power law relationship between the frequency $f_w$ and the corresponding rank $r_w$ : a few very high-frequency types account for almost all tokens in a natural corpus (most of them are function words such as “the”,“a”,“it”, etc.) and there are many word types with a low frequency of appearances (“magret”,“duck”,“hectagon”). The frequency $f_t$ of type $t$ scales with its corresponding $r_t$ following a power law, with a parameter $\\alpha $ depending on the language: $t$0 ",
"One main effect on the training is the oversampling of word types with high frequency, and this is accentuated with the sampling of two tokens for the siamese. These frequent, usually monosyllabic, word types do not carry the necessary phonetic diversity to learn an embedding robust to rarer co-articulations, and rarer phones. To study and minimize this empirical linguistic trend, we will examine 4 other possibilities for the $\\phi $ function that compress the word frequency type: : n [2]n, : n [3]n",
" : n (1+n), : n 1",
"The first two options minimize the effect of the Zipf's Law on the frequency, but the power law is kept. The $\\log $ option removes the power law distribution, yet it keeps a linear weighting as a function of the rank of the types. Finally with the last configuration, the word types are sampled uniformly.",
"Another important variation factor in speech realizations is the speaker identity. We expect that the learning of speech representations to take advantage of word pairs from different speakers, to generalize better to new ones, and improve the ABX performance. $\nP^s_{-} = \\frac{\\# \\text{Sampled pairs pronounced by different speakers}}{\\# \\text{Sampled pairs}}\n$ ",
"Given the natural statistics of the dataset, the number of possible \"different\" pairs exceeds by a large margin the number of possible \"same\" pairs ( $\\sim 1\\%$ of all token pairs for the Buckeye-100%). The siamese loss is such that \"Same\" pairs are brought together in embedding space, and \"Different\" pairs are pulled apart. Should we reflect this statistic during the training, or eliminate it by presenting same and different pairs equally? We manipulate systematically the proportion of pairs from different word types fed to the network: $\nP^w_{-} = \\frac{\\# \\text{Sampled pairs with non-matching word types}}{\\# \\text{Sampled pairs}}\n$ "
],
[
"To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 . It only requires to define a dissimilarity function $d$ between speech tokens, no external training algorithm is needed. We define the ABX-discriminability of category $x$ from category $y$ as the probability that $A$ and $X$ are further apart than $B$ and $X$ when $A$ and $X$ are from category $x$ and $x$0 is from category $x$1 , according to a dissimilarity function $x$2 . Here, we focus on phone triplet minimal pairs: sequences of 3 phonemes that differ only in the central one (“beg”-“bag”, “api”-“ati”, etc.). For the within-speaker task, all the phones triplets belong to the same speaker (e.g. $x$3 ) Finally the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $x$4 and $x$5 belong to the same speaker, and $x$6 to a different one (e.g. $x$7 ). The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts over all pairs of central phones and converted to an error rate."
],
[
"We first analyze the results for the sampling compression function $\\phi $ Figure 1 . For all training datasets, we observe a similar pattern for the performances on both tasks: the word frequency compression improves the learning and generalization. The result show that, compared to the raw filterbank features baseline, all the trained ABnet networks improve the scores on the phoneme discrimination tasks, even in the $1\\%$ scenario. Yet, the improvement with the usual sampling scenario $\\phi : n \\rightarrow n$ is small in all 4 training datasets. The optimal function for the within and across speaker task on all training configuration is the uniform function $\\phi : n \\rightarrow 1$ . It yields substantial improvements over the raw filterbanks for ABX task across-speaker ( $5.6 $ absolute points and $16.8 \\%$ relative improvement for the $1\\%$ -Buckeye training). The addition of data for these experiments improves the performance of the network, but not in a substantial way: the improvements from $1\\%$ -Buckeye to $100\\%$ -Buckeye, for $\\phi : n \\rightarrow 1$ , is $1\\%$0 absolute points and $1\\%$1 relative. These results show that using frequency compression is clearly beneficial, and surprisingly adding more data is still advantageous but not as much as the choice of $1\\%$2 . Renshaw et al. BIBREF4 , found similar results with a correspondence auto-encoder, training with more training data did not yield improvements for their system.",
"We now look at the effect on the ABX performances of the proportion of pairs of words pronounced by two different speakers Figure 2 . We start from our best sampling function configuration so far $\\phi : n \\rightarrow 1$ . We report on the graph only the two extreme training settings. The variations for the 4 different training splits are similar, and still witness a positive effect with additional data on the siamese network performances. Counter-intuitively, the performances on the ABX tasks does not take advantage of pairs from different speakers. It even shows a tendency to increase the ABX error rate: for the $100\\%$ -Buckeye we witness an augmentation of the ABX error-rate (2.9 points and $11.6\\%$ relative) between $P_{-}^s=0$ and $P_{-}^s=1$ . One of our hypothesis on this surprising effect, might be the poor performance of the DTW alignment algorithm directly on raw filterbanks features of tokens from 2 different speakers.",
"We next study the influence of the proportion of pairs from different word types $P^w_{-}$ Figure 3 . In all training scenarios, to privilege either only the positive or the negative examples is not the solution. For the different training splits, the optimal number for $P_{-}^w$ is either $0.7$ or $0.8$ in the within and across speaker ABX task. We do not observe a symmetric influence of the positive and negative examples, but it is necessary to keep the same and different pairs. The results collapsed, if the siamese network is provided only with positive labels to match: the network will tend to map all speech tokens to the same vector point and the discriminability is at chance level."
],
[
"Now, we transfer the findings about sampling from the weakly supervised setting, to the fully unsupervised setting. We report in Table 2 our results for the two ZeroSpeech 2015 BIBREF2 corpus: the same subset of the Buckeye Corpus as earlier and a subset of the NCHLT corpus of Xitsonga BIBREF21 . To train our siamese networks, we use as BIBREF5 , the top-down information from the baseline for the Track 2 (Spoken Term Discovery) of the ZeroSpeech 2015 challenge from BIBREF12 . The resulting clusters are not perfect, whereas we had perfect clusters in our previous analysis.",
"In Thiolière et al. BIBREF5 the sampling is done with : $P^w_{-} = P^s_{-} = 0.5$ , and $\\phi = n \\rightarrow n$ . This gives us a baseline to compare our sampling method improvements with our own implementation of siamese networks.",
"First, the “discovered” clusters – obtained from spoken term discovery system – don't follow the Zipf's law like the gold clusters. This difference of distributions diminishes the impact of the sampling compression function $\\phi $ .",
"We matched state-of-the-art for this challenge only on the ABX task within-speaker for the Buckeye, otherwise the modified DPGMM algorithm proposed by Heck et al. stays the best submissions for the 2015 ZeroSpeech challenge.",
"Finally, we study the influence of the DTW-threshold $\\delta $ used in the spoken discovery system on the phonetic discriminability of siamese networks. We start again from our best finding from weakly supervised learning. The clusters found by the Jansen et al. BIBREF12 system are very sensitive to this parameter with a trade-off between the Coverage and the Normalized Edit Distance (NED) introduced by BIBREF24 .",
"We find that ABnet is getting good results across the various outputs of the STD system shown in Table 3 and improves over the filterbanks results in all cases. Obtaining more data with the STD system involves a loss in words quality. In contrast with the weakly supervised setting, there is an optimal trade-off between the amount and quality of discovered words for the sub-word modelling task with siamese networks."
],
[
"We presented a systematic study of the sampling component in siamese networks. In the weakly-supervised setting, we established that the word frequency compression had an important impact on the discriminability performances. We also found that optimal proportions of pairs with different types and speakers are not the ones usually used in siamese networks. We transferred the best parameters to the unsupervised setting to compare our results to the 2015 Zero Resource challenge submissions. It lead to improvements over the previous neural networks architectures, yet the Gaussian mixture methods (DPGMM) remain the state-of-the-art in the phonetic discriminability task. In the future, we will study in the same systematic way the influence of sampling in the fully unsupervised setting. We will then try to leverage the better discriminability of our representations obtained with ABnet to improve the spoken term discovery, which relies on frame-level discrimination to find pairs of similar words. Besides, power law distributions are endemic in natural language tasks. It would be interesting to extend this principle to other tasks (for instance, language modeling)."
],
[
"The team's project is funded by the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL* ), Almerys (industrial chair Data Science and Security), Facebook AI Research (Doctoral research contract), Microsoft Research (joint MSR-INRIA center) and a Google Award Grant."
]
],
"section_name": [
"Introduction",
"Methods",
"Data preparation",
"ABnet",
"Sampling",
"Evaluation with ABX tasks",
"Weakly supervised Learning",
"Applications to fully unsupervised setting",
"Conclusions and Future work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"801ef29897febeafe1995127bebceb0e84e7644f"
],
"answer": [
{
"evidence": [
"To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 . It only requires to define a dissimilarity function $d$ between speech tokens, no external training algorithm is needed. We define the ABX-discriminability of category $x$ from category $y$ as the probability that $A$ and $X$ are further apart than $B$ and $X$ when $A$ and $X$ are from category $x$ and $x$0 is from category $x$1 , according to a dissimilarity function $x$2 . Here, we focus on phone triplet minimal pairs: sequences of 3 phonemes that differ only in the central one (“beg”-“bag”, “api”-“ati”, etc.). For the within-speaker task, all the phones triplets belong to the same speaker (e.g. $x$3 ) Finally the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $x$4 and $x$5 belong to the same speaker, and $x$6 to a different one (e.g. $x$7 ). The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts over all pairs of central phones and converted to an error rate."
],
"extractive_spans": [],
"free_form_answer": "error rate in a minimal pair ABX discrimination task",
"highlighted_evidence": [
"To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 .",
"For the within-speaker task, all the phones triplets belong to the same speaker (e.g. $x$3 ) Finally the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $x$4 and $x$5 belong to the same speaker, and $x$6 to a different one (e.g. $x$7 ). The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts over all pairs of central phones and converted to an error rate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"What is the metric that is measures in this paper?"
],
"question_id": [
"33f72c8da22dd7d1378d004cbd8d2dcd814a5291"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Statistics for the 4 Buckeye splits used for the weakly supervised training, the duration in minutes expressed the total amount of speech for training",
"Figure 1: ABX across-speaker error rates on test set with various sampling compression functions φ for the 4 different Buckeye splits used for weakly supervised training. Here, the proportions of pairs with different speakers P s− and with different word types Pw− are kept fixed: P s − = 0.5, P w − = 0.5",
"Figure 2: Average ABX error rates across-speaker with various proportion pairs of different speakers P s−, with φ : n → 1 and Pw− = 0.5.",
"Figure 3: Average ABX error rates across-speaker with various proportion pairs with different word types Pw− , where φ : n→ 1 and P s− = 0.5",
"Table 3: Number of found clusters, NED, Coverage, ABX discriminability results with our ABnet with Pw− = 0.7, P s − = 0, φ : n → 1, for the ZeroSpeech2015 Buckeye for various DTW-thresholds δ in the Jansen et al. [13] STD system. The best results for each metric are in bold.",
"Table 2: ABX discriminability results for the ZeroSpeech2015 datasets. The best error rates for each conditions for siamese architectures are in bold. The best error rates for each conditions overall are underlined."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table3-1.png",
"5-Table2-1.png"
]
} | [
"What is the metric that is measures in this paper?"
] | [
[
"1804.11297-Evaluation with ABX tasks-0"
]
] | [
"error rate in a minimal pair ABX discrimination task"
] | 709 |
1910.13215 | Transformer-based Cascaded Multimodal Speech Translation | This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis. | {
"paragraphs": [
[
"The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.",
"MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14.",
"Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated."
],
[
"In this section, we briefly describe the proposed multimodal speech translation system and its components."
],
[
"The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states."
],
[
"A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\\mathcal {H} = \\lbrace h_1, h_2,\\dots ,h_{N}\\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\\lbrace \\hat{s}_1,\\hat{s}_2,\\dots ,\\hat{s}_{M}\\rbrace $ produced by the decoder leads to a first pass translation $\\lbrace \\hat{y}_1,\\hat{y}_2,\\dots , \\hat{y}_{M}\\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\\mathcal {H}$ and the concatenated state vectors $\\lbrace [\\hat{s}_1;\\hat{y}_1], [\\hat{s}_2; \\hat{y}_2],\\dots ,[\\hat{s}_{M}; \\hat{y}_{M}]\\rbrace $. Two context vectors are produced as a result, and they are joint inputs with $s_{t-1}$ (previous hidden state of ) and $y_{t-1}$ (previous output of ) to to yield $s_t$ and then $y_t$.",
"A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\\mathcal {H}$ and the first-pass pre-softmax hidden states $\\mathcal {\\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\\hat{Y}$ is not used for the second-pass attention."
],
[
"We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video.",
"In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, has a $7 \\times 7 \\times 2048$ dimensionality.",
"Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. We call this representation ten-hot action category embeddings ()."
],
[
"Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model.",
"Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \\times 64$ matrix."
],
[
"The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches."
],
[
"Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -.",
"Additive () & Cascade () Textual Deliberation",
"In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise.",
"For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities.",
"It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7.",
"Additive Visual Conditioning (-) & Visual Attention (-)",
"-and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5.",
"For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -.",
"For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information."
],
[
"We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus.",
"The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with beam size of 10. The pre-processing pipeline for the ASR is different from the MT pipeline in the sense that the punctuations are removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER."
],
[
"We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -.",
"We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7."
],
[
"We report tokenised results obtained using the multeval toolkit BIBREF32. We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging.",
"The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output.",
"For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations.",
"Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline.",
"Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations."
],
[
"To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19.",
"Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\\downarrow $ 0.5 for –and $\\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -.",
"Therefore, the determination can be made that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation.",
"Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result.",
"The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder."
],
[
"We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance."
],
[
"This work was supported by the MultiMT (H2020 ERC Starting Grant No. 678017) and MMVC (Newton Fund Institutional Links Grant, ID 352343575) projects."
]
],
"section_name": [
"Introduction",
"Methods",
"Methods ::: Automatic Speech Recognition",
"Methods ::: Deliberation-based NMT",
"Methods ::: Multimodality ::: Visual Features",
"Methods ::: Multimodality ::: Integration Approaches",
"Methods ::: Multimodality ::: Multimodal Transformers",
"Methods ::: Multimodality ::: Multimodal Deliberation",
"Experiments ::: Dataset",
"Experiments ::: Training",
"Results & Analysis ::: Quantitative Results",
"Results & Analysis ::: Incongruence Analysis",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"a51f30c454ac4b3b94510e6b93f2ab7ae7b14998"
],
"answer": [
{
"evidence": [
"We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7.",
"FLOAT SELECTED: Table 1: BLEU scores for the test set: bold highlights our best results. † indicates a system is significantly different from its text-only counterpart (p-value ≤ 0.05)."
],
"extractive_spans": [],
"free_form_answer": "BLEU scores",
"highlighted_evidence": [
"After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set.",
"FLOAT SELECTED: Table 1: BLEU scores for the test set: bold highlights our best results. † indicates a system is significantly different from its text-only counterpart (p-value ≤ 0.05)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"80de776672302aa64bed13c1448a02c6b31fead3"
],
"answer": [
{
"evidence": [
"The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system."
],
"extractive_spans": [
"How2"
],
"free_form_answer": "",
"highlighted_evidence": [
"Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Was evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?",
"What dataset was used in this work?"
],
"question_id": [
"98eb245c727c0bd050d7686d133fa7cd9d25a0fb",
"537a786794604ecc473fb3ef6222e0c3cb81f772"
],
"question_writer": [
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Unimodal and multimodal transformers: Trans-Cond and the Trans-Attn extend the text-only Trans-Baseline with dashed green- and blue-arrow routes, respectively. Each multimodal model activates either the dashed green-arrow route for Trans-Cond or one of the three dashed blue-arrow routes (i.e. VideoSum, Action Category Embeddings or Convolutional Layer Output, as shown) for Trans-Attn.",
"Figure 2: Unimodal and multimodal additive deliberation: Delib-Cond and Delib-Attn extend the text-only DelibBaseline with dashed green- and blue-arrow routes, respectively. Each multimodal model activates either the dashed greenarrow route for Delib-Cond or one of the three dashed blue-arrow routes (i.e. VideoSum, Action Category Embeddings or Convolutional Layer Output, as shown) for Delib-Attn.",
"Figure 3: Unimodal and multimodal cascade deliberation: Delib-Cond and Delib-Attn extend the text-only DelibBaseline with dashed green- and blue-arrow routes, respectively. Each multimodal model activates either the dashed greenarrow route for Delib-Cond or one of the three dashed blue-arrow routes (i.e. VideoSum, Action Category Embeddings or Convolutional Layer Output, as shown) for Delib-Attn.",
"Table 1: BLEU scores for the test set: bold highlights our best results. † indicates a system is significantly different from its text-only counterpart (p-value ≤ 0.05).",
"Table 2: Incongruent decoding results for the test set: BLEU changes are w.r.t the congruent counterparts from Table 1. † marks incongruent decoding results that are significantly different (p-value ≤ 0.05) from congruent counterparts."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png"
]
} | [
"Was evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?"
] | [
[
"1910.13215-Experiments ::: Training-1",
"1910.13215-5-Table1-1.png"
]
] | [
"BLEU scores"
] | 711 |
1809.02731 | Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning | The encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation. However, parameters learnt in the decoder also contain useful information about language. In order to utilise the decoder after learning, we present two types of decoding functions whose inverse can be easily derived without expensive inverse calculation. Therefore, the inverse of the decoding function serves as another encoder that produces sentence representations. We show that, with careful design of the decoding functions, the model learns good sentence representations, and the ensemble of the representations produced from the encoder and the inverse of the decoder demonstrate even better generalisation ability and solid transferability. | {
"paragraphs": [
[
"Learning sentence representations from unlabelled data is becoming increasingly prevalent in both the machine learning and natural language processing research communities, as it efficiently and cheaply allows knowledge extraction that can successfully transfer to downstream tasks. Methods built upon the distributional hypothesis BIBREF0 and distributional similarity BIBREF1 can be roughly categorised into two types:",
"Word-prediction Objective: The objective pushes the system to make better predictions of words in a given sentence. As the nature of the objective is to predict words, these are also called generative models. In one of the two classes of models of this type, an encoder-decoder model is learnt using a corpus of contiguous sentences BIBREF2 , BIBREF3 , BIBREF4 to make predictions of the words in the next sentence given the words in the current one. After training, the decoder is usually discarded as it is only needed during training and is not designed to produce sentence representations. In the other class of models of this type, a large language model is learnt BIBREF5 , BIBREF6 , BIBREF7 on unlabelled corpora, which could be an autoregressive model or a masked language model, which gives extremely powerful language encoders but requires massive computing resources and training time.",
"Similarity-based Objective: The objective here relies on a predefined similarity function to enforce the model to produce more similar representations for adjacent sentences than those that are not BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Therefore, the inductive biases introduced by the two key components, the differential similarity function and the context window, in the objective crucially determine the quality of learnt representations and what information of sentences can be encoded in them.",
"To avoid tuning the inductive biases in the similarity-based objective, we follow the word-prediction objective with an encoder and a decoder, and we are particularly interested in exploiting invertible decoding functions, which can then be used as additional encoders during testing. The contribution of our work is summarised as follows:"
],
[
"Learning vector representations for words with a word embedding matrix as the encoder and a context word embedding matrix as the decoder BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 can be considered as a word-level example of our approach, as the models learn to predict the surrounding words in the context given the current word, and the context word embeddings can also be utilised to augment the word embeddings BIBREF14 , BIBREF16 . We are thus motivated to explore the use of sentence decoders after learning instead of ignoring them as most sentence encoder-decoder models do.",
"Our approach is to invert the decoding function in order to use it as another encoder to assist the original encoder. In order to make computation of the inverse function well-posed and tractable, careful design of the decoder is needed. A simple instance of an invertible decoder is a linear projection with an orthonormal square matrix, whose transpose is its inverse. A family of bijective transformations with non-linear functions BIBREF17 , BIBREF18 , BIBREF19 can also be considered as it empowers the decoder to learn a complex data distribution.",
"In our paper, we exploit two types of plausible decoding functions, including linear projection and bijective functions with neural networks BIBREF17 , and with proper design, the inverse of each of the decoding functions can be derived without expensive inverse calculation after learning. Thus, the decoder function can be utilised along with the encoder for building sentence representations. We show that the ensemble of the encoder and the inverse of the decoder outperforms each of them."
],
[
"Our model has similar structure to that of skip-thought BIBREF2 and, given the neighbourhood hypothesis BIBREF20 , learns to decode the next sentence given the current one instead of predicting both the previous sentence and the next one at the same time."
],
[
"Given the finding BIBREF4 that neither an autoregressive nor an RNN decoder is necessary for learning sentence representations that excel on downstream tasks as the autoregressive decoders are slow to train and the quality of the generated sequences is not highly correlated with that of the representations of the sentences, our model only learns to predict words in the next sentence in a non-autoregressive fashion.",
"Suppose that the $i$ -th sentence $S_i=\\lbrace w_1,w_2,...,w_{N_i}\\rbrace $ has $N_i$ words, and $S_{i+1}$ has $N_{i+1}$ words. The learning objective is to maximise the averaged log-likelihood for all sentence pairs: ",
"$$\\ell _{S_{i+i}|S_i}(\\phi ,\\theta )=\\frac{1}{N_{i+1}}\\sum _{w_j\\in S_{i+1}}\\log P(w_j|S_i) \\nonumber $$ (Eq. 5) ",
" where $\\theta $ and $\\phi $ contain the parameters in the encoder $f_\\text{en}(S_i;\\theta )$ and the decoder $f_\\text{de}(_i;\\phi )$ respectively. The forward computation of our model for a given sentence pair $\\lbrace S_i, S_{i+1}\\rbrace $ , in which the words in $S_i$ are the input to the learning system and the words in $S_{i+1}$ are targets is defined as: ",
"$$_i &= f_\\text{en}(S_i;\\theta ) \\nonumber \\\\\n_i &= f_\\text{de}(_i;\\phi ) \\nonumber $$ (Eq. 6) ",
" where $_i$ is the vector representation of $S_i$ , and $_i$ is the vector output of the decoder which will be compared with the vector representations of words in the next sentence $S_{i+1}$ . Since calculating the likelihood of generating each word involves a computationally demanding softmax function, the negative sampling method BIBREF12 is applied to replace the softmax, and $\\log P(w_j|s_i)$ is calculated as: ",
"$$\\log \\sigma (_i^\\top _{w_j}) + \\sum _{k=1}^{K}\\mathbb {E}_{w_k\\sim P_e(w)}\\log \\sigma (-_i^\\top _{w_k}) \\nonumber $$ (Eq. 7) ",
" where $_{w_k}\\in ^{d_}$ is the pretrained vector representation for $w_k$ , the empirical distribution $P_e(w)$ is the unigram distribution of words in the training corpus raised to power 0.75 as suggested in the prior work BIBREF21 , and $K$ is the number of negative samples. In this case, we enforce the output of the decoder $_i$ to have the same dimensionality as the pretrained word vectors $_{w_j}$ . The loss function is summed over all contiguous sentence pairs in the training corpus. For simplicity, we omit the subscription for indexing the sentences in the following sections."
],
[
"The encoder $f_\\text{en}(S;\\theta )$ is a bi-directional Gated Recurrent Unit BIBREF22 with $d$ -dimensions in each direction. It processes word vectors in an input sentence $\\lbrace _{w_1},_{w_2},...,_{w_{N}}\\rbrace $ sequentially according to the temporal order of the words, and generates a sequence of hidden states. During learning, in order to reduce the computation load, only the last hidden state serves as the sentence representation $\\in ^{d_}$ , where $d_=2d$ ."
],
[
"As the goal is to reuse the decoding function $f_{\\text{de}}()$ as another plausible encoder for building sentence representations after learning rather than ignoring it, one possible solution is to find the inverse function of the decoder function during testing, which is noted as $f^{-1}_{\\text{de}}()$ . In order to reduce the complexity and the running time during both training and testing, the decoding function $f_{\\text{de}}()$ needs to be easily invertible. Here, two types of decoding functions are considered and explored.",
"In this case, the decoding function is a linear projection, which is $= f_{\\text{de}}()=+ $ , where $\\in ^{d_\\times d_}$ is a trainable weight matrix and $\\in ^{d_\\times 1}$ is the bias term.",
"As $f_\\text{de}$ is a linear projection, the simplest situation is when $$ is an orthogonal matrix and its inverse is equal to its transpose. Often, as the dimensionality of vector $$ doesn't necessarily need to match that of word vectors $$ , $$ is not a square matrix . To enforce invertibility on $$ , a row-wise orthonormal regularisation on $$ is applied during learning, which leads to $^\\top =$ , where $$0 is the identity matrix, thus the inverse function is simply $$1 , which is easily computed. The regularisation formula is $$2 , where $$3 is the Frobenius norm. Specifically, the update rule BIBREF23 for the regularisation is: ",
"$$:=(1+\\beta )-\\beta (^\\top )\\nonumber $$ (Eq. 12) ",
" The usage of the decoder during training and testing is defined as follows: ",
"$$\\text{Training:} \\hspace{2.84544pt} & = f_{\\text{de}}()=+ \\nonumber \\\\\n\\text{Testing:} \\hspace{2.84544pt} & = f_\\text{de}^{-1}()=^\\top (- ) \\nonumber $$ (Eq. 13) ",
" Therefore, the decoder is also utilised after learning to serve as a linear encoder in addition to the RNN encoder.",
"A general case is to use a bijective function as the decoder, as the bijective functions are naturally invertible. However, the inverse of a bijective function could be hard to find and its calculation could also be computationally intense.",
"A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\\rightarrow ^D$ and its inverse $f^{-1}$ is defined as: ",
"$$h: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \\nonumber \\\\\nh^{-1}: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \\nonumber $$ (Eq. 15) ",
" where $_1$ is a $d$ -dimensional partition of the input $\\in ^D$ , and $m:^d\\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 : ",
"$$h: \\hspace{5.69046pt} _1 &= _1, & _2 &= _2 \\odot \\text{exp}(s(_1)) + t(_1) \\nonumber \\\\\nh^{-1}: \\hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \\odot \\text{exp}(-s(_1)) \\nonumber $$ (Eq. 16) ",
" where $s:^d\\rightarrow ^{D-d}$ and $t:^d\\rightarrow ^{D-d}$ are both neural networks with linear output units.",
"The requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly. In our case, the output $\\in ^{d_}$ of the decoding function $f_{\\text{de}}$ has lower dimensionality than the input $\\in ^{d_}$ does. Our solution is to add an orthonormal regularised linear projection before the bijective function to transform the vector representation of a sentence to the desired dimension.",
"The usage of the decoder that is composed of a bijective function and a regularised linear projection during training and testing is defined as: ",
"$$\\text{Training:} \\hspace{2.84544pt} & = f_{\\text{de}}() = h(+ ) \\nonumber \\\\\n\\text{Testing:} \\hspace{2.84544pt} & = f_\\text{de}^{-1}() = ^\\top (h^{-1}() - ) \\nonumber $$ (Eq. 17) "
],
[
"As the decoder is easily invertible, it is also used to produce vector representations. The post-processing step BIBREF25 that removes the top principal component is applied on the representations from $f_\\text{en}$ and $f^{-1}_\\text{de}$ individually. In the following sections, $_\\text{en}$ denotes the post-processed representation from $f_\\text{en}$ , and $_\\text{de}$ from $f^{-1}_\\text{de}$ . Since $f_\\text{en}$ and $f^{-1}_\\text{de}$ naturally process sentences in distinctive ways, it is reasonable to expect that the ensemble of $_\\text{en}$ and $_\\text{de}$ will outperform each of them."
],
[
"Experiments are conducted in PyTorch BIBREF26 , with evaluation using the SentEval package BIBREF27 with modifications to include the post-processing step. Word vectors $_{w_j}$ are initialised with FastText BIBREF15 , and fixed during learning."
],
[
"Two unlabelled corpora, including BookCorpus BIBREF28 and UMBC News Corpus BIBREF29 , are used to train models with invertible decoders. These corpora are referred as B, and U in Table 3 and 5 . The UMBC News Corpus is roughly twice as large as the BookCorpus, and the details are shown in Table 1 ."
],
[
"The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .",
"The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset."
],
[
"It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 .",
"In these tasks, MR, CR, SST, SUBJ, MPQA and MRPC are binary classification tasks, TREC is a multi-class classification task. SICK and MRPC require the same feature engineering method BIBREF44 in order to compose a vector from vector representations of two sentences to indicate the difference between them."
],
[
"The hyperparameters are tuned on the averaged scores on STS14 of the model trained on BookCorpus, thus it is marked with a $^\\star $ in tables to indicate potential overfitting.",
"The hyperparameter setting for our model is summarised as follows: the batch size $N=512$ , the dimension of sentence vectors $d_=2048$ , the dimension of word vectors $d_{_{w_j}}=300$ , the number of negative samples $K=5$ , and the initial learning rate is $5\\times 10^{-4}$ which is kept fixed during learning. The Adam optimiser BIBREF45 with gradient clipping BIBREF46 is applied for stable learning. Each model in our experiment is only trained for one epoch on the given training corpus.",
" $\\beta $ in the invertible constraint of the linear projection is set to be $0.01$ , and after learning, all 300 eigenvalues are close to 1. For the bijective transformation, in order to make sure that each output unit is influenced by all input units, we stack four affine coupling layers in the bijective transformation BIBREF17 . The non-linear mappings $s$ and $t$ are both neural networks with one hidden layer with the rectified linear activation function."
],
[
"Various pooling functions are applied to produce vector representations for input sentences.",
"For unsupervised evaluation tasks, as recommended in previous studies BIBREF14 , BIBREF50 , BIBREF51 , a global mean-pooling function is applied on both the output of the RNN encoder $f_\\text{en}$ to produce a vector representation $_\\text{en}$ and the inverse of the decoder $f_\\text{de}^{-1}$ to produce $_\\text{de}$ .",
"For supervised evaluation tasks, three pooling functions, including global max-, min-, and mean-pooling, are applied on top of the encoder and the outputs from three pooling functions are concatenated to serve as a vector representation for a given sentence. The same representation pooling strategy is applied on the inverse of the decoder.",
"The reason for applying different representation pooling strategies for two categories of tasks is:",
"(1) cosine similarity of two vector representations is directly calculated in unsupervised evaluation tasks to determine the textual similarity of two sentences, and it suffers from the curse-of-dimensionality BIBREF52 , which leads to more equidistantly distributed representations for higher dimensional vector representations decreasing the difference among similarity scores.",
"(2) given Cover's theorem BIBREF53 and the blessings-of-dimensionality property, it is more likely for the data points to be linearly separable when they are presented in high dimensional space, and in the supervised evaluation tasks, high dimensional vector representations are preferred as a linear classifier will be learnt to evaluate how likely the produced sentence representations are linearly separable;",
"(3) in our case, both the encoder and the inverse of the decoder are capable of producing a vector representation per time step in a given sentence, although during training, only the last one is regarded as the sentence representation for the fast training speed, it is more reasonable to make use of all representations at all time steps with various pooling functions to compute a vector representations to produce high-quality sentence representations that excel the downstream tasks."
],
[
"It is worth discussing the motivation of the model design and the observations in our experiments. As mentioned as one of the take-away messages BIBREF54 , to demonstrate the effectiveness of the invertible constraint, the comparison of our model with the constraint and its own variants use the same word embeddings from FastText BIBREF15 and have the same dimensionaility of sentence representations during learning, and use the same classifier on top of the produced representations with the same hyperparameter settings.",
"Overall, given the performance of the inverse of each decoder presented in Table 3 and 5 , it is reasonable to state that the inverse of the decoder provides high-quality sentence representations as well as the encoder does. However, there is no significant difference between the two decoders in terms of the performance on the downstream tasks. In this section, observations and thoughts are presented based on the analyses of our model with the invertible constraint."
],
[
"The motivation of enforcing the invertible constraint on the decoder during learning is to make it usable and potentially helpful during testing in terms of boosting the performance of the lone RNN encoder in the encoder-decoder models (instead of ignoring the decoder part after learning). Therefore, it is important to check the necessity of the invertible constraint on the decoders.",
"A model with the same hyperparameter settings but without the invertible constraint is trained as the baseline model, and macro-averaged results that summarise the same type of tasks are presented in Table 2 .",
"As noted in the prior work BIBREF55 , there exists significant inconsistency between the group of unsupervised tasks and the group of supervised ones, it is possible for a model to excel on one group of tasks but fail on the other one. As presented in our table, the inverse of the decoder tends to perform better than the encoder on unsupervised tasks, and the situation reverses when it comes to the supervised ones.",
"In our model, the invertible constraint helps the RNN encoder $f_\\text{en}$ to perform better on the unsupervised evaluation tasks, and helps the inverse of the decoder $f_\\text{de}^{-1}$ to provide better results on single sentence classification tasks. An interesting observation is that, by enforcing the invertible constraint, the model learns to sacrifice the performance of $f_\\text{de}^{-1}$ and improve the performance of $f_\\text{en}$ on unsupervised tasks to mitigate the gap between the two encoding functions, which leads to more aligned vector representations between $f_\\text{en}$ and $f_\\text{de}^{-1}$ ."
],
[
"Although encouraging the invertible constraint leads to slightly poorer performance of $f_\\text{de}^{-1}$ on unsupervised tasks, it generally leads to better sentence representations when the ensemble of the encoder $f_\\text{en}$ and the inverse of the decoder $f_\\text{de}^{-1}$ is considered. Specifically, for unsupervised tasks, the ensemble is an average of two vector representations produced from two encoding functions during the testing time, and for supervised tasks, the concatenation of two representations is regarded as the representation of a given sentence. The ensemble method is recommended in prior work BIBREF14 , BIBREF16 , BIBREF51 , BIBREF56 , BIBREF4 , BIBREF54 .",
"As presented in Table 2 , on unsupervised evaluation tasks (STS12-16 and SICK14), the ensemble of two encoding functions is averaging, which benefits from aligning representations from $f_\\text{en}$ and $f_\\text{de}^{-1}$ by enforcing the invertible constraint. While in the learning system without the invertible constraint, the ensemble of two encoding functions provides worse performance than $f_\\text{de}^{-1}$ .",
"On supervised evaluation tasks, as the ensemble method is concatenation and a linear model is applied on top of the concatenated representations, as long as the two encoding functions process sentences distinctively, the linear classifier is capable of picking relevant feature dimensions from both encoding functions to make good predictions, thus there is no significant difference between our model with and without invertible constraint."
],
[
"Recent research BIBREF54 showed that the improvement on the supervised evaluation tasks led by learning from labelled or unlabelled corpora is rather insignificant compared to random initialised projections on top of pretrained word vectors. Another interesting direction of research that utilises probabilistic random walk models on the unit sphere BIBREF57 , BIBREF25 , BIBREF58 derived several simple yet effective post-processing methods that operate on pretrained word vectors and are able to boost the performance of the averaged word vectors as the sentence representation on unsupervised tasks. While these papers reveal interesting aspects of the downstream tasks and question the need for optimising a learning objective, our results show that learning on unlabelled corpora helps.",
"On unsupervised evaluation tasks, in order to show that learning from an unlabelled corpus helps, the performance of our learnt representations should be directly compared with the pretrained word vectors, FastText in our system, at the same dimensionality with the same post-processing BIBREF25 . The word vectors are scattered in the 300-dimensional space, and our model has a decoder that is learnt to project a sentence representation $\\in ^{d_}$ to $=f_\\text{de}(;\\phi )\\in ^{300}$ . The results of our learnt representations and averaged word vectors with the same postprocessing are presented in Table 4 .",
"As shown in the Table 4 , the performance of our learnt system is better than FastText at the same dimensionality. It is worth mentioning that, in our system, the final representation is an average of postprocessed word vectors and the learnt representations $$ , and the invertible constraint guarantees that the ensemble of both gives better performance. Otherwise, as discussed in the previous section, an ensemble of postprocessed word vectors and some random encoders won't necessarily lead to stronger results. Table 3 also provides evidence for the effectiveness of learning on the unsupervised evaluation tasks.",
"On supervised evaluation tasks, we agree that higher dimensional vector representations give better results on the downstream tasks. Compared to random projections with $4096\\times 6$ output dimensions, learning from unlabelled corpora leverages the distributional similarity BIBREF1 at the sentence-level into the learnt representations and potentially helps capture the meaning of a sentence. In our system, the raw representations are in 2400-dimensional space, and the use of various pooling functions expands it to $2048\\times 6$ dimensions, which is half as large as the random projection dimension and still yields better performance. Both our models and random projections with no training are presented in Table 5 .",
"The evidence from both sets of downstream tasks support our argument that learning from unlabelled corpora helps the representations capture meaning of sentences. However, current ways of incorporating the distributional hypothesis only utilise it as a weak and noisy supervision, which might limit the quality of the learnt sentence representations."
],
[
"Two types of decoders, including an orthonormal regularised linear projection and a bijective transformation, whose inverses can be derived effortlessly, are presented in order to utilise the decoder as another encoder in the testing phase. The experiments and comparisons are conducted on two large unlabelled corpora, and the performance on the downstream tasks shows the high usability and generalisation ability of the decoders in testing.",
"Analyses show that the invertible constraint enforced on the decoder encourages each one to learn from the other one during learning, and provides improved encoding functions after learning. Ensemble of the encoder and the inverse of the decoder gives even better performance when the invertible constraint is applied on the decoder side. Furthermore, by comparing with prior work, we argue that learning from unlabelled corpora indeed helps to improve the sentence representations, although the current way of utilising corpora might not be optimal.",
"We view this as unifying the generative and discriminative objectives for unsupervised sentence representation learning, as it is trained with a generative objective which when inverted can be seen as creating a discriminative target.",
"Our proposed method in our implementation doesn't provide extremely good performance on the downstream tasks, but we see our method as an opportunity to fuse all possible components in a model, even a usually discarded decoder, to produce sentence representations. Future work could potentially expand our work into end-to-end invertible model that is able to produce high-quality representations by omnidirectional computations."
],
[
"Many Thanks to Andrew Ying for helpful clarifications on several concepts."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Design",
"Training Objective",
"Encoder",
"Decoder",
"Using Decoder in the Test Phase",
"Experimental Design",
"Unlabelled Corpora",
"Unsupervised Evaluation",
"Supervised Evaluation",
"Hyperparameter Tuning",
"Representation Pooling",
"Discussion",
"Effect of Invertible Constraint",
"Effect on Ensemble",
"Effect of Learning",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"80f5bd5751b66c73fb477d96e955b58388d61c85"
],
"answer": [
{
"evidence": [
"Unsupervised Evaluation",
"The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .",
"The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset.",
"Supervised Evaluation",
"It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 ."
],
"extractive_spans": [
"The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .\n\nThe cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset.",
"Supervised Evaluation\nIt includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 ."
],
"free_form_answer": "",
"highlighted_evidence": [
"Unsupervised Evaluation\nThe unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .\n\nThe cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset.\n\nSupervised Evaluation\nIt includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"d81a91ef38c8fd6d74d7c9ef18cd79fb6d21ab30"
],
"answer": [
{
"evidence": [
"In this case, the decoding function is a linear projection, which is $= f_{\\text{de}}()=+ $ , where $\\in ^{d_\\times d_}$ is a trainable weight matrix and $\\in ^{d_\\times 1}$ is the bias term.",
"A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\\rightarrow ^D$ and its inverse $f^{-1}$ is defined as:",
"$$h: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \\nonumber \\\\ h^{-1}: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \\nonumber $$ (Eq. 15)",
"where $_1$ is a $d$ -dimensional partition of the input $\\in ^D$ , and $m:^d\\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 :",
"$$h: \\hspace{5.69046pt} _1 &= _1, & _2 &= _2 \\odot \\text{exp}(s(_1)) + t(_1) \\nonumber \\\\ h^{-1}: \\hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \\odot \\text{exp}(-s(_1)) \\nonumber $$ (Eq. 16)",
"where $s:^d\\rightarrow ^{D-d}$ and $t:^d\\rightarrow ^{D-d}$ are both neural networks with linear output units.",
"The requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly. In our case, the output $\\in ^{d_}$ of the decoding function $f_{\\text{de}}$ has lower dimensionality than the input $\\in ^{d_}$ does. Our solution is to add an orthonormal regularised linear projection before the bijective function to transform the vector representation of a sentence to the desired dimension."
],
"extractive_spans": [],
"free_form_answer": "a linear projection and a bijective function with continuous transformation though ‘affine coupling layer’ of (Dinh et al.,2016). ",
"highlighted_evidence": [
"In this case, the decoding function is a linear projection, which is $= f_{\\text{de}}()=+ $ , where $\\in ^{d_\\times d_}$ is a trainable weight matrix and $\\in ^{d_\\times 1}$ is the bias term.",
"A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\\rightarrow ^D$ and its inverse $f^{-1}$ is defined as:\n\n$$h: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \\nonumber \\\\ h^{-1}: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \\nonumber $$ (Eq. 15)\n\nwhere $_1$ is a $d$ -dimensional partition of the input $\\in ^D$ , and $m:^d\\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 :\n\n$$h: \\hspace{5.69046pt} _1 &= _1, & _2 &= _2 \\odot \\text{exp}(s(_1)) + t(_1) \\nonumber \\\\ h^{-1}: \\hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \\odot \\text{exp}(-s(_1)) \\nonumber $$ (Eq. 16)\n\nwhere $s:^d\\rightarrow ^{D-d}$ and $t:^d\\rightarrow ^{D-d}$ are both neural networks with linear output units.\n\nThe requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they evaluate the sentence representations?",
"What are the two decoding functions?"
],
"question_id": [
"dc5ff2adbe1a504122e3800c9ca1d348de391c94",
"04b43deab0fd753e3419ed8741c10f652b893f02"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Summary statistics of the two corpora used. For simplicity, the two corpora are referred to as B and U in the following tables respectively.",
"Table 2: The effect of the invertible constraint on linear projection. The arrow and its associated value of a representation is the relative performance gain or loss compared to its comparison partner with the invertible constraint. As shown, the invertible constraint does help improve each representation, and ensures the ensemble of two encoding functions gives better performance. Better view in colour.",
"Table 3: Results on unsupervised evaluation tasks (Pearson’s r × 100) . Bold numbers are the best results among unsupervised transfer models, and underlined numbers are the best ones among all models. ‘WR’ refers to the post-processing step that removes the top principal component.",
"Table 4: Comparison of the learnt representations in our system with the same dimensionality as the average of the same pretrained word vectors on unsupervised evaluation tasks. The encoding function that is learnt to compose a sentence representation from pretrained word vectors outperforms averaging the same word vectors, which supports our argument that learning helps to produce higher-quality sentence representations.",
"Table 5: Results on supervised evaluation tasks. Bold numbers are the best results among unsupervised transfer models with ordered sentences, and underlined numbers are the best ones among all models."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png"
]
} | [
"What are the two decoding functions?"
] | [
[
"1809.02731-Decoder-13",
"1809.02731-Decoder-1"
]
] | [
"a linear projection and a bijective function with continuous transformation though ‘affine coupling layer’ of (Dinh et al.,2016). "
] | 712 |
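The invertible 'affine coupling layer' quoted in the evidence of this record can be illustrated with a short numerical sketch. The code below is not taken from the paper; it is a minimal NumPy illustration, under the assumptions of a fixed half-and-half partition of the input and arbitrary stand-in functions s and t, of why the forward and inverse maps of the affine coupling recover each other exactly.

```python
import numpy as np

def affine_coupling_forward(x, s, t):
    # Split the input in half: the first half passes through unchanged,
    # the second half is scaled by exp(s(x1)) and shifted by t(x1).
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 * np.exp(s(x1)) + t(x1)], axis=-1)

def affine_coupling_inverse(y, s, t):
    # Because y1 == x1, s and t can be re-evaluated on y1 at inversion time,
    # so the map is bijective no matter how complex s and t are.
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, (y2 - t(y1)) * np.exp(-s(y1))], axis=-1)

# Illustrative stand-ins for the neural networks s and t (assumptions, not the paper's networks).
s = lambda h: np.tanh(h)
t = lambda h: 0.5 * h

x = np.random.randn(4, 6)            # batch of 4 vectors, dimension 6 (split 3 + 3)
y = affine_coupling_forward(x, s, t)
assert np.allclose(affine_coupling_inverse(y, s, t), x)   # round trip recovers the input
```

Because the first half of the input passes through unchanged, s(x1) and t(x1) can be recomputed exactly at inversion time, which is what makes the coupling bijective by construction.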
1909.05855 | Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset | Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting. | {
"paragraphs": [
[
"Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, has resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications.",
"Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2.",
"However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large BIBREF3, constantly increasing number of services over a large number of domains. In comparison, existing public datasets cover few domains. Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world.",
"To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.",
"We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets."
],
[
"Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories:",
"Wizard-of-Oz This setup BIBREF12 connects two crowd workers playing the roles of the user and the system. The user is provided a goal to satisfy, and the system accesses a database of entities, which it queries as per the user's preferences. WOZ2.0, FRAMES and MultiWOZ, among others, have utilized such methods.",
"Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.",
"As virtual assistants incorporate diverse domains, recent work has focused on zero-shot modeling BIBREF13, BIBREF14, BIBREF15, domain adaptation and transfer learning techniques BIBREF16. Deep-learning based approaches have achieved state of the art performance on dialogue state tracking tasks. Popular approaches on small-scale datasets estimate the dialogue state as a distribution over all possible slot-values BIBREF17, BIBREF11 or individually score all slot-value combinations BIBREF18, BIBREF19. Such approaches are not practical for deployment in virtual assistants operating over real-world services having a very large and dynamic set of possible values. Addressing these concerns, approaches utilizing a dynamic vocabulary of slot values have been proposed BIBREF20, BIBREF21, BIBREF22."
],
[
"An important goal of this work is to create a benchmark dataset highlighting the challenges associated with building large-scale virtual assistants. Table TABREF2 compares our dataset with other public datasets. Our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.",
"The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset."
],
[
"We define the schema for a service as a combination of intents and slots with additional constraints, with an example in Figure FIGREF7. We implement all services using a SQL engine. For constructing the underlying tables, we sample a set of entities from Freebase and obtain the values for slots defined in the schema from the appropriate attribute in Freebase. We decided to use Freebase to sample real-world entities instead of synthetic ones since entity attributes are often correlated (e.g, a restaurant's name is indicative of the cuisine served). Some slots like event dates/times and available ticket counts, which are not present in Freebase, are synthetically sampled.",
"To reflect the constraints present in real-world services and APIs, we impose a few other restrictions. First, our dataset does not expose the set of all possible slot values for some slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Our dataset specifically identifies such slots as non-categorical and does not provide a set of all possible values for these. We also ensure that the evaluation sets have a considerable fraction of slot values not present in the training set to evaluate the models in the presence of new values. Some slots like gender, number of people, day of the week etc. are defined as categorical and we specify the set of all possible values taken by them. However, these values are not assumed to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot.",
"Second, real-world services can only be invoked with a limited number of slot combinations: e.g. restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. However, existing datasets simplistically allow service calls with any given combination of slot values, thus giving rise to flows unsupported by actual services or APIs. As in Figure FIGREF7, the different service calls supported by a service are listed as intents. Each intent specifies a set of required slots and the system is not allowed to call this intent without specifying values for these required slots. Each intent also lists a set of optional slots with default values, which the user can override."
],
[
"The dialogue simulator interacts with the services to generate dialogue outlines. Figure FIGREF9 shows the overall architecture of our dialogue simulator framework. It consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. These dialogue acts can take a slot or a slot-value pair as argument. Figure FIGREF13 shows all dialogue acts supported by the agents.",
"At the start of a conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. We identified over 200 distinct scenarios for the training set, each comprising up to 5 intents. For multi-domain dialogues, we also identify combinations of slots whose values may be transferred when switching intents e.g. the 'address' slot value in a restaurant service could be transferred to the 'destination' slot for a taxi service invoked right after.",
"The user agent then generates the dialogue acts to be output in the next turn. It may retrieve arguments i.e. slot values for some of the generated acts by accessing either the service schema or the raw SQL backend. The acts, combined with the respective parameters yield the corresponding user actions. Next, the system agent generates the next set of actions using a similar procedure. Unlike the user agent, however, the system agent has restricted access to the services (denoted by dashed line), e.g. it can only query the services by supplying values for all required slots for some service call. This helps us ensure that all generated flows are valid.",
"After an intent is fulfilled through a series of user and system actions, the user agent queries the scenario to proceed to the next intent. Alternatively, the system may suggest related intents e.g. reserving a table after searching for a restaurant. The simulator also allows for multiple intents to be active during a given turn. While we skip many implementation details for brevity, it is worth noting that we do not include any domain-specific constraints in the simulation automaton. All domain-specific constraints are encoded in the schema and scenario, allowing us to conveniently use the simulator across a wide variety of domains and services."
],
[
"The dialogue paraphrasing framework converts the outlines generated by the simulator into a natural conversation. Figure FIGREF11a shows a snippet of the dialogue outline generated by the simulator, containing a sequence of user and system actions. The slot values present in these actions are in a canonical form because they obtained directly from the service. However, users may refer to these values in various different ways during the conversation, e.g., “los angeles\" may be referred to as “LA\" or “LAX\". To introduce these natural variations in the slot values, we replace different slot values with a randomly selected variation (kept consistent across user turns in a dialogue) as shown in Figure FIGREF11b.",
"Next we define a set of action templates for converting each action into a utterance. A few examples of such templates are shown below. These templates are used to convert each action into a natural language utterance, and the resulting utterances for the different actions in a turn are concatenated together as shown in Figure FIGREF11c. The dialogue transformed by these steps is then sent to the crowd workers. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence.",
"In our paraphrasing task, the crowd workers are instructed to exactly repeat the slot values in their paraphrases. This not only helps us verify the correctness of the paraphrases, but also lets us automatically obtain slot spans in the generated utterances by string search. This automatic slot span generation greatly reduced the annotation effort required, with little impact on dialogue naturalness, thus allowing us to collect more data with the same resources. Furthermore, it is important to note that this entire procedure preserves all other annotations obtained from the simulator including the dialogue state. Hence, no further annotation is needed."
],
[
"With over 16000 dialogues in the training set, the Schema-Guided Dialogue dataset is the largest publicly available annotated task-oriented dialogue dataset. The annotations include the active intents and dialogue states for each user utterance and the system actions for every system utterance. We have a few other annotations like the user actions but we withhold them from the public release. These annotations enable our dataset to be used as benchmark for tasks like intent detection, dialogue state tracking, imitation learning of dialogue policy, dialogue act to text generation etc. The schemas contain semantic information about the schema and the constituent intents and slots, in the form of natural language descriptions and other details (example in Figure FIGREF7).",
"The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on an average. These numbers are also reflected in Figure FIGREF13 showing the histogram of dialogue lengths on the training set. Table TABREF5 shows the distribution of dialogues across the different domains. We note that the dataset is largely balanced in terms of the domains and services covered, with the exception of Alarm domain, which is only present in the development set. Figure FIGREF13 shows the frequency of dialogue acts contained in the dataset. Note that all dialogue acts except INFORM, REQUEST and GOODBYE are specific to either the user or the system."
],
[
"Virtual assistants aim to support a large number of services available on the web. One possible approach is to define a large unified schema for the assistant, to which different service providers can integrate with. However, it is difficult to come up with a common schema covering all use cases. Having a common schema also complicates integration of tail services with limited developer support. We propose the schema-guided approach as an alternative to allow easy integration of new services and APIs.",
"Under our proposed approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF7 shows an example). These descriptions are used to obtain a semantic representation of these schema elements. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. For example, Figure FIGREF14 shows how dialogue state representation for the same dialogue can vary for two different services. Here, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.",
"There are many advantages to this approach. First, using a single model facilitates representation and transfer of common knowledge across related services. Second, since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. Third, it is robust to changes like addition of new intents or slots to the service."
],
[
"Models in the schema-guided setting can condition on the pertinent services' schemas using descriptions of intents and slots. These models, however, also need access to representations for potentially unseen inputs from new services. Recent pretrained models like ELMo BIBREF23 and BERT BIBREF4 can help, since they are trained on very large corpora. Building upon these, we present our zero-shot schema-guided dialogue state tracking model."
],
[
"We use a single model, shared among all services and domains, to make these predictions. We first encode all the intents, slots and slot values for categorical slots present in the schema into an embedded representation. Since different schemas can have differing numbers of intents or slots, predictions are made over dynamic sets of schema elements by conditioning them on the corresponding schema embeddings. This is in contrast to existing models which make predictions over a static schema and are hence unable to share knowledge across domains and services. They are also not robust to changes in schema and require the model to be retrained with new annotated data upon addition of a new intent, slot, or in some cases, a slot value to a service."
],
[
"This component obtains the embedded representations of intents, slots and categorical slot values in each service schema. Table TABREF18 shows the sequence pairs used for embedding each schema element. These sequence pairs are fed to a pretrained BERT encoder shown in Figure FIGREF20 and the output $\\mathbf {u}_{\\texttt {CLS}}$ is used as the schema embedding.",
"For a given service with $I$ intents and $S$ slots, let $\\lbrace \\mathbf {i}_j\\rbrace $, ${1 \\le j \\le I}$ and $\\lbrace \\mathbf {s}_j\\rbrace $, ${1 \\le j \\le S}$ be the embeddings of all intents and slots respectively. As a special case, we let $\\lbrace \\mathbf {s}^n_j\\rbrace $, ${1 \\le j \\le N \\le S}$ denote the embeddings for the $N$ non-categorical slots in the service. Also, let $\\lbrace \\textbf {v}_j^k\\rbrace $, $1 \\le j \\le V^k$ denote the embeddings for all possible values taken by the $k^{\\text{th}}$ categorical slot, $1 \\le k \\le C$, with $C$ being the number of categorical slots and $N + C = S$. All these embeddings are collectively called schema embeddings."
],
[
"Like BIBREF24, we use BERT to encode the user utterance and the preceding system utterance to obtain utterance pair embedding $\\mathbf {u} = \\mathbf {u}_{\\texttt {CLS}}$ and token level representations $\\mathbf {t}_1, \\mathbf {t}_2 \\cdots \\mathbf {t}_M$, $M$ being the total number of tokens in the two utterances. The utterance and schema embeddings are used together to obtain model predictions using a set of projections (defined below)."
],
[
"Let $\\mathbf {x}, \\mathbf {y} \\in \\mathbb {R}^d$. For a task $K$, we define $\\mathbf {l} = \\mathcal {F}_K(\\mathbf {x}, \\mathbf {y}, p)$ as a projection transforming $\\mathbf {x}$ and $\\mathbf {y}$ into the vector $\\mathbf {l} \\in \\mathbb {R}^p$ using Equations DISPLAY_FORM22-. Here, $\\mathbf {h_1},\\mathbf {h_2} \\in \\mathbb {R}^d$, $W^K_i$ and $b^K_i$ for $1 \\le i \\le 3$ are trainable parameters of suitable dimensions and $A$ is the activation function. We use $\\texttt {gelu}$ BIBREF25 activation as in BERT."
],
[
"For a given service, the active intent denotes the intent requested by the user and currently being fulfilled by the system. It takes the value “NONE\" if no intent for the service is currently being processed. Let $\\mathbf {i}_0$ be a trainable parameter in $\\mathbb {R}^d$ for the “NONE\" intent. We define the intent network as below.",
"The logits $l^{j}_{\\text{int}}$ are normalized using softmax to yield a distribution over all $I$ intents and the “NONE\" intent. During inference, we predict the highest probability intent as active."
],
[
"These are the slots whose values are requested by the user in the current utterance. Projection $\\mathcal {F}_{\\text{req}}$ predicts logit $l^j_{\\text{req}}$ for the $j^{\\text{th}}$ slot. Obtained logits are normalized using sigmoid to get a score in $[0,1]$. During inference, all slots with $\\text{score} > 0.5$ are predicted as requested."
],
[
"We define the user goal as the user constraints specified over the dialogue context till the current user utterance. Instead of predicting the entire user goal after each user utterance, we predict the difference between the user goal for the current turn and preceding user turn. During inference, the predicted user goal updates are accumulated to yield the predicted user goal. We predict the user goal updates in two stages. First, for each slot, a distribution of size 3 denoting the slot status and taking values none, dontcare and active is obtained by normalizing the logits obtained in equation DISPLAY_FORM28 using softmax. If the status of a slot is predicted to be none, its assigned value is assumed to be unchanged. If the prediction is dontcare, then the special dontcare value is assigned to it. Otherwise, a slot value is predicted and assigned to it in the second stage.",
"In the second stage, equation is used to obtain a logit for each value taken by each categorical slot. Logits for a given categorical slot are normalized using softmax to get a distribution over all possible values. The value with the maximum mass is assigned to the slot. For each non-categorical slot, logits obtained using equations and are normalized using softmax to yield two distributions over all tokens. These two distributions respectively correspond to the start and end index of the span corresponding to the slot. The indices $p \\le q$ maximizing $start[p] + end[q]$ are predicted to be the span boundary and the corresponding value is assigned to the slot."
],
[
"We consider the following metrics for evaluation of the dialogue state tracking task:",
"Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.",
"Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.",
"Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. The slots which have a non-empty assignment in the ground truth dialogue state are considered for accuracy. This is the average accuracy of predicting the value of a slot correctly. A fuzzy matching score is used for non-categorical slots to reward partial matches with the ground truth.",
"Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a turn correctly. For non-categorical slots a fuzzy matching score is used."
],
[
"We evaluate our model on public datasets WOZ2.0, MultiWOZ 2.0 and the updated MultiWOZ 2.1 BIBREF5. As results in Table TABREF37 show, our model performs competitively on all these datasets. Furthermore, we obtain state-of-the-art joint goal accuracies of 0.516 on MultiWOZ 2.0 and 0.489 on MultiWOZ 2.1 test sets respectively, exceeding the best-known results of 0.486 and 0.456 on these datasets as reported in BIBREF5."
],
[
"The model performs well for Active Intent Accuracy and Requested Slots F1 across both seen and unseen services, shown in Table TABREF37. For joint goal and average goal accuracy, the model performs better on seen services compared to unseen ones (Figure FIGREF38). The main reason for this performance difference is a significantly higher OOV rate for slot values of unseen services."
],
[
"The model performance also varies across various domains. The performance for the different domains is shown in (Table TABREF39) below. We observe that one of the factors affecting the performance across domains is still the presence of the service in the training data (seen services). Among the seen services, those in the `Events' domain have a very low OOV rate for slot values and the largest number of training examples which might be contributing to the high joint goal accuracy. For unseen services, we notice that the `Services' domain has a lower joint goal accuracy because of higher OOV rate and higher average turns per dialogue. For `Services' and `Flights' domains, the difference between joint goal accuracy and average accuracy indicates a possible skew in performance across slots where the performance on a few of the slots is much worse compared to all the other slots, thus considerably degrading the joint goal accuracy. The `RideSharing' domain also exhibits poor performance, since it possesses the largest number of the possible slot values across the dataset. We also notice that for categorical slots, with similar slot values (e.g. “Psychologist\" and “Psychiatrist\"), there is a very weak signal for the model to distinguish between the different classes, resulting in inferior performance."
],
[
"It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below.",
"Fewer Annotation Errors: All annotations are automatically generated, so these errors are rare. In contrast, BIBREF5 reported annotation errors in 40% of turns in MultiWOZ 2.0 which utilized a Wizard-of-Oz setup.",
"Simpler Task: The crowd worker task of paraphrasing a readable utterance for each turn is simple. The error-prone annotation task requiring skilled workers is not needed.",
"Low Cost: The simplicity of the crowd worker task and lack of an annotation task greatly cut data collection costs.",
"Better Coverage: A wide variety of dialogue flows can be collected and specific usecases can be targeted."
],
[
"We presented the Schema-Guided Dialogue dataset to encourage scalable modeling approaches for virtual assistants. We also introduced the schema-guided paradigm for task-oriented dialogue that simplifies the integration of new services and APIs with large scale virtual assistants. Building upon this paradigm, we present a scalable zero-shot dialogue state tracking model achieving state-of-the-art results."
],
[
"The authors thank Guan-Lin Chao for help with model design and implementation, and Amir Fayazi and Maria Wang for help with data collection."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Schema-Guided Dialogue Dataset",
"The Schema-Guided Dialogue Dataset ::: Services and APIs",
"The Schema-Guided Dialogue Dataset ::: Dialogue Simulator Framework",
"The Schema-Guided Dialogue Dataset ::: Dialogue Paraphrasing",
"The Schema-Guided Dialogue Dataset ::: Dataset Analysis",
"The Schema-Guided Approach",
"Zero-Shot Dialogue State Tracking",
"Zero-Shot Dialogue State Tracking ::: Model",
"Zero-Shot Dialogue State Tracking ::: Model ::: Schema Embedding",
"Zero-Shot Dialogue State Tracking ::: Model ::: Utterance Encoding",
"Zero-Shot Dialogue State Tracking ::: Model ::: Projection",
"Zero-Shot Dialogue State Tracking ::: Model ::: Active Intent",
"Zero-Shot Dialogue State Tracking ::: Model ::: Requested Slots",
"Zero-Shot Dialogue State Tracking ::: Model ::: User Goal",
"Zero-Shot Dialogue State Tracking ::: Evaluation",
"Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on other datasets",
"Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on SGD",
"Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on different domains (SGD)",
"Discussion",
"Conclusions",
"Conclusions ::: Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"839b2e1fbb9347472547d4f92503d9653859f204"
],
"answer": [
{
"evidence": [
"Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.",
"It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below."
],
"extractive_spans": [
"simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers "
],
"free_form_answer": "",
"highlighted_evidence": [
"Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.",
"It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"c1ecf5504bc59eaddc3dd172d610cdb26c36b323"
],
"answer": [
{
"evidence": [
"The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset.",
"FLOAT SELECTED: Table 2: The number of intents (services in parentheses) and dialogues for each domain in the train and dev sets. Multidomain dialogues contribute to counts of each domain. The domain Service includes salons, dentists, doctors etc."
],
"extractive_spans": [],
"free_form_answer": "Alarm\nBank\nBus\nCalendar\nEvent\nFlight\nHome\nHotel\nMedia\nMovie\nMusic\nRentalCar\nRestaurant\nRideShare\nService\nTravel\nWeather",
"highlighted_evidence": [
"The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. ",
"FLOAT SELECTED: Table 2: The number of intents (services in parentheses) and dialogues for each domain in the train and dev sets. Multidomain dialogues contribute to counts of each domain. The domain Service includes salons, dentists, doctors etc."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"no",
"no"
],
"question": [
"How did they gather the data?",
"What are the domains covered in the dataset?"
],
"question_id": [
"3ee721c3531bf1b9a1356a40205d088c9a7a44fc",
"6dcbe941a3b0d5193f950acbdc574f1cfb007845"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison of our SGD dataset to existing related datasets for task-oriented dialogue. Note that the numbers reported are for the training portions for all datasets except FRAMES, where the numbers for the complete dataset are reported.",
"Table 2: The number of intents (services in parentheses) and dialogues for each domain in the train and dev sets. Multidomain dialogues contribute to counts of each domain. The domain Service includes salons, dentists, doctors etc.",
"Figure 1: Example schema for a digital wallet service.",
"Figure 2: The overall architecture of the dialogue simulation framework for generating dialogue outlines.",
"Figure 3: Steps for obtaining paraphrased conversations. To increase the presence of relative dates like tomorrow, next Monday, the current date is assumed to be March 1, 2019.",
"Figure 4: Detailed statistics of the SGD dataset.",
"Table 3: Input sequences for the pretrained BERT model to obtain embeddings of different schema elements.",
"Figure 5: The predicted dialogue state (shown with dashed edges) for the first two user turns for an example dialogue, showing the active intent and slot assignments, with two related annotation schemas. Note that the dialogue state representation is conditioned on the schema under consideration, which is provided as input, as are the user and system utterances.",
"Figure 6: BERT encoder, taking in two sequences p and q as input and outputs an embedded sequence pair representation uCLS and token level representations {t1 · · · tn+m}. We use BERT to obtain schema element embeddings and encode system and user utterances for dialogue state tracking.",
"Figure 7: Performance of the model on all services, services seen in training data, services not seen in training data.",
"Table 5: Model performance per domain (GA: goal accuracy). Domains marked with ’*’ are those for which the service in the dev set is not present in the training set. Hotels domain marked with ’**’ has one unseen and one seen service. For other domains, the service in the dev set was also seen in the training set. We see that the model generally performs better for domains containing services seen during training.",
"Table 4: Model performance on test sets of the respective datasets (except SGD variants, where dev sets were used). SGD-Single model is trained and evaluated on singledomain dialogues only whereas SGD-All model is trained and evaluated on the entire dataset. We also report results on MultiWOZ 2.0, the updated MultiWOZ 2.1, and WOZ2.0. N.A. indicates tasks not available for those datasets.",
"Figure A.3: An example multi-domain dialogue from our dataset covering “Movies”, “Restaurants” and “RideSharing” domains.",
"Figure A.5: Interface of the dialogue paraphrasing task where the crowd workers are asked to rephrase the dialogue outlines to a more natural expression. The actual interface shows the entire conversation, but only a few utterances have been shown in this figure. All non-categorical slot values are highlighted in blue. The task cannot be submitted unless all highlighted values in the outline are also present in the conversational dialogue."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Table3-1.png",
"6-Figure5-1.png",
"6-Figure6-1.png",
"7-Figure7-1.png",
"7-Table5-1.png",
"7-Table4-1.png",
"10-FigureA.3-1.png",
"11-FigureA.5-1.png"
]
} | [
"What are the domains covered in the dataset?"
] | [
[
"1909.05855-3-Table2-1.png",
"1909.05855-The Schema-Guided Dialogue Dataset-1"
]
] | [
"Alarm\nBank\nBus\nCalendar\nEvent\nFlight\nHome\nHotel\nMedia\nMovie\nMusic\nRentalCar\nRestaurant\nRideShare\nService\nTravel\nWeather"
] | 716 |
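To make the difference between the average and joint goal accuracy metrics described in the evaluation section of this record concrete, the sketch below scores a toy two-turn dialogue. It is an illustrative, assumption-laden example (exact string matching only, standing in for the fuzzy matching the paper applies to non-categorical slots), not the authors' evaluation code.

```python
def goal_accuracies(gold_turns, pred_turns):
    """Toy computation of average and joint goal accuracy over turns.

    gold_turns / pred_turns: lists of dicts mapping slot name -> value.
    Only slots with a non-empty gold assignment are scored; exact match
    stands in for the fuzzy matching used for non-categorical slots.
    """
    slot_hits, slot_total, joint_hits = 0, 0, 0
    for gold, pred in zip(gold_turns, pred_turns):
        turn_correct = True
        for slot, value in gold.items():
            slot_total += 1
            if pred.get(slot) == value:
                slot_hits += 1
            else:
                turn_correct = False
        joint_hits += int(turn_correct)
    return slot_hits / slot_total, joint_hits / len(gold_turns)

gold = [{"origin": "LA", "destination": "SF"},
        {"origin": "LA", "destination": "SF", "date": "March 1"}]
pred = [{"origin": "LA", "destination": "SF"},
        {"origin": "LA", "destination": "SF", "date": "March 2"}]
avg_acc, joint_acc = goal_accuracies(gold, pred)
print(avg_acc, joint_acc)  # 4/5 slots correct (0.8), but only 1/2 turns fully correct (0.5)
```

A single wrong slot leaves average goal accuracy high while halving joint goal accuracy, which is consistent with the paper's observation that a gap between the two metrics indicates skewed performance across slots.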
1905.01962 | Harvey Mudd College at SemEval-2019 Task 4: The Clint Buchanan Hyperpartisan News Detector | We investigate the recently developed Bidirectional Encoder Representations from Transformers (BERT) model for the hyperpartisan news detection task. Using a subset of hand-labeled articles from SemEval as a validation set, we test the performance of different parameters for BERT models. We find that accuracy from two different BERT models using different proportions of the articles is consistently high, with our best-performing model on the validation set achieving 85% accuracy and the best-performing model on the test set achieving 77%. We further determined that our model exhibits strong consistency, labeling independent slices of the same article identically. Finally, we find that randomizing the order of word pieces dramatically reduces validation accuracy (to approximately 60%), but that shuffling groups of four or more word pieces maintains an accuracy of about 80%, indicating the model mainly gains value from local context. | {
"paragraphs": [
[
"SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2 .",
"We approach the problem through transfer learning to fine-tune a model for the document classification task. We use the BERT model based on the implementation of the github repository pytorch-pretrained-bert on some of the data provided by Task 4 of SemEval. BERT has been used to learn useful representations for a variety of natural language tasks, achieving state of the art performance in these tasks after being fine-tuned BIBREF0 . It is a language representation model that is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. Thus, it may be able to adequately account for complex characteristics as such blind, prejudiced reasoning and extreme bias that are important to reliably identifying hyperpartisanship in articles.",
"We show that BERT performs well on hyperpartisan sentiment classification. We use unsupervised learning on the set of 600,000 source-labeled articles provided as part of the task, then train using supervised learning for the 645 hand-labeled articles. We believe that learning on source-labeled articles would bias our model to learn the partisanship of a source, instead of the article. Additionally, the accuracy of the model on validation data labeled by article differs heavily when the articles are labeled by publisher. Thus, we decided to use a small subset of the hand-labeled articles as our validation set for all of our experiments. As the articles are too large for the model to be trained on the full text each time, we consider the number of word-pieces that the model uses from each article a hyperparameter.",
"A second major issue we explore is what information the model is using to make decisions. This is particularly important for BERT because neural models are often viewed like black boxes. This view is problematic for a task like hyperpartisan news detection where users may reasonably want explanations as to why an article was flagged. We specifically explore how much of the article is needed by the model, how consistent the model behaves on an article, and whether the model focuses on individual words and phrases or if it uses more global understanding. We find that the model only needs a short amount of context (100 word pieces), is very consistent throughout an article, and most of the model's accuracy arises from locally examining the article.",
"In this paper, we demonstrate the effectiveness of BERT models for the hyperpartisan news classification task, with validation accuracy as high as 85% and test accuracy as high as 77% . We also make significant investigations into the importance of different factors relating to the articles and training in BERT's success. The remainder of this paper is organized as follows. Section SECREF2 describes previous work on the BERT model and semi-supervised learning. Section SECREF3 outlines our model, data, and experiments. Our results are presented in Section SECREF4 , with their ramifications discussed in Section SECREF5 . We close with an introduction to our system's namesake, fictional journalist Clint Buchanan, in Section SECREF6 ."
],
[
"We build upon the Bidirectional Encoder Representations from Transformers (BERT) model. BERT is a deep bidirectional transformer that has been successfully tuned to a variety of tasks BIBREF0 . BERT functions as a language model over character sequences, with tokenization as described by BIBREF3 . The transformer architecture BIBREF4 is based upon relying on self-attention layers to encode a sequence. To allow the language model to be trained in a bidirectional manner instead of predicting tokens autoregressively, BERT was pre-trained to fill in the blanks for a piece of text, also known as the Cloze task BIBREF5 .",
"Due to the small size of our training data, it was necessary to explore techniques from semi-supervised learning. BIBREF6 found pre-training a model as a language model on a larger corpus to be beneficial for a variety of experiments. We also investigated the use of self-training BIBREF7 to increase our effective training dataset size. Lastly, the motivation of examining the effective context of our classification model was based on BIBREF8 . It was found that much higher performance than expected was achieved on the ImageNet dataset BIBREF9 by aggregating predictions from local patches. This revealed that typical ImageNet models could acquire most of their performance from local decisions."
],
[
"Next, we describe the variations of the BERT model used in our experiments, the data we used, and details of the setup of each of our experiments."
],
[
"We adjust the standard BERT model for the hyperpartisan news task, evaluating its performance both on a validation set we construct and on the test set provided by Task 4 at SemEval. The training of the model follows the methodology of the original BERT paper.",
"We choose to experiment with the use of the two different pre-trained versions of the BERT model, BERT-LARGE and BERT-BASE. The two differ in the number of layers and hidden sizes in the underlying model. BERT-BASE consists of 12 layers and 110 million parameters, while BERT-LARGE consists of 24 layers and 340 million parameters."
],
[
"We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level.",
"Due to an intrinsic limitation of the BERT model, we are unable to consider sequences of longer than 512 word pieces for classification problems. These word pieces refer to the byte-pair encoding that BERT relies on for tokenization. These can be actual words, but less common words may be split into subword pieces BIBREF3 . The longest article in the training set contains around 6500 word pieces. To accommodate this model limitation, we work with truncated versions of the articles.",
"We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model."
],
[
"We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces.",
"We also explore whether and how the BERT models we use classify different parts of each individual article. Since the model can only consider a limited number of word pieces and not a full article, we test how the model judges different sections of the same article. Here, we are interested in the extent to which the same class will be assigned to each segment of an article. Finally, we test whether the model's behavior varies if we randomly shuffle word-pieces from the articles during training. Our goal in this experiment is to understand whether the model focuses on individual words and phrases or if it achieves more global understanding. We alter the the size of the chunks to be shuffled ( INLINEFORM0 ) in each iteration of this experiment, from shuffling individual word-pieces ( INLINEFORM1 ) to shuffling larger multiword chunks."
],
[
"Our results are primarily based on a validation set we constructed using the last 20% of the hand-labeled articles. It is important to note that our validation set was fairly unbalanced. About 72% of articles were not hyperpartisan and this mainly arose because we were not provided with a balanced set of hand-labeled articles. The small validation split ended up increasing the imbalance in exchange for training on a more balanced set. The test accuracies we report were done on SemEval Task 4's balanced test dataset."
],
[
"Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model.",
"We evaluated this model on the SemEval 2019 Task 4: Hyperpartisan News Detection competition's pan19-hyperpartisan-news-detection-by-article-test-dataset-2018-12-07 dataset using TIRA BIBREF10 . Our model, with a maximium sequence length of 250, had an accuracy of INLINEFORM0 . It had higher precision ( INLINEFORM1 ) than recall ( INLINEFORM2 ), for an overall F1-score of INLINEFORM3 ."
],
[
"Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate.",
"Table TABREF9 shows that the model consistently performed best at a sequence length of 100. This is a discrepancy from BERT-BASE indicating that the larger model struggled more with training on a small amount of long sequences. For our best trained BERT-LARGE, we submitted the model for evaluation on TIRA. Surprisingly, the test performance (75.1%) of the larger model was worse than the base model. The experiments in BIBREF0 consistently found improvements when using the large model. The main distinction here is a smaller training dataset than in their tasks. The experiments in the remaining sections use the same hyperparameters as the optimal BERT-LARGE."
],
[
"Due to the small training dataset, we tried self-training to increase our effective training set. We trained the model for 40 epochs. For the remaining 60 epochs, after each epoch we had the model make predictions on five slices of 500 unlabeled articles. If an article had the same prediction for more than four slices, we added it to the labeled training data. The model always added every article to the training set, though, since it always made the same prediction for all 5 slices. This caused self-training to be ineffective, but also revealed that the model's predictions were very consistent across segments of a single article."
],
[
"Finally, we investigate whether the model's accuracy primarily arose from examining words or short phrases, or if the decisions were more global. We permuted the word pieces in the article at various levels of granularity. At the finest level (permute_ngrams = 1), we permuted every single word piece, forcing the model to process a bag of word pieces. At coarser levels, ngrams were permuted. As the sequence length for these experiments was 100, permute_ngrams = 100 corresponds to no permutation. The results can be found in TABREF13 .",
"Accuracy drops a lot with only a bag of word pieces, but still reaches 67.4%. Also, most of the accuracy of the model (within 2%) is achieved with only 4-grams of word pieces, so the model is not getting much of a boost from global content."
],
[
"Our successful results demonstrate the adaptability of the BERT model to different tasks. With a relatively small training set of articles, we were able to train models with high accuracy on both the validation set and the test set.",
"Our models classified different parts of a given article identically, demonstrating that the overall hyperpartisan aspects were similar across an article. In addition, the model had significantly lower accuracy when word pieces were shuffled around, but that accuracy was almost entirely restored when shuffling around chunks of four or more word pieces, suggesting that most of the important features can already be extracted at this level.",
"In future work, we we would like to make use of the entire article. Naively, running this over each chunk would be computationally infeasible, so it may be worth doing a full pass on a few chunks and cheaper computations on other chunks."
],
[
"Our system is named after Clint Buchanan, a fictional journalist on the soap opera One Life to Live. Following the unbelievable stories of Clint and his associates may be one of the few tasks more difficult than identifying hyperpartisan news."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Model",
"Training and Test Sets",
"Experiments",
"Results",
"Importance of Pre-training",
"Importance of Sequence Length",
"Model Consistency",
"Effective Model Context",
"Discussion",
"Namesake"
]
} | {
"answers": [
{
"annotation_id": [
"fd88a4ecb2faccb5bd274d7eb650adb5f627c6a8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"84763150ebf679160c917edc816a926508d7e5d4"
],
"answer": [
{
"evidence": [
"We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model.",
"We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces.",
"Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate."
],
"extractive_spans": [],
"free_form_answer": "They pre-train the models using 600000 articles as an unsupervised dataset and then fine-tune the models on small training set.",
"highlighted_evidence": [
"We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model.",
"We first investigate the impact of pre-training on BERT-BASE's performance. ",
"On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c91fb32946aaf42e9067a48a1f7b7b0c8f94a002"
],
"answer": [
{
"evidence": [
"We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level.",
"Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model."
],
"extractive_spans": [],
"free_form_answer": "645, 600000",
"highlighted_evidence": [
"We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. ",
"We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use the cased or uncased BERT model?",
"How are the two different models trained?",
"How long is the dataset?"
],
"question_id": [
"f887d5b7cf2bcc1412ef63bff4146f7208818184",
"ace60950ccd6076bf13e12ee2717e50bc038a175",
"2e1660405bde64fb6c211e8753e52299e269998f"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Validation accuracy for BERT-base with and without Unsupervised Pre-training (UP).",
"Table 2: Validation Accuracy on BERT-LARGE across sequence length and learning rate.",
"Table 3: BERT-LARGE across permute ngrams.",
"Figure 1: Jerry verDorn as Clint Buchanan."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure1-1.png"
]
} | [
"How are the two different models trained?",
"How long is the dataset?"
] | [
[
"1905.01962-Training and Test Sets-2",
"1905.01962-Experiments-0",
"1905.01962-Importance of Sequence Length-0"
],
[
"1905.01962-Importance of Pre-training-0",
"1905.01962-Training and Test Sets-0"
]
] | [
"They pre-train the models using 600000 articles as an unsupervised dataset and then fine-tune the models on small training set.",
"645, 600000"
] | 718 |
1909.06434 | Adaptive Scheduling for Multi-Task Learning | To train neural machine translation models simultaneously on multiple tasks (languages), it is common to sample each task uniformly or in proportion to dataset sizes. As these methods offer little control over performance trade-offs, we explore different task scheduling approaches. We first consider existing non-adaptive techniques, then move on to adaptive schedules that over-sample tasks with poorer results compared to their respective baseline. As explicit schedules can be inefficient, especially if one task is highly over-sampled, we also consider implicit schedules, learning to scale learning rates or gradients of individual tasks instead. These techniques allow training multilingual models that perform better for low-resource language pairs (tasks with small amount of data), while minimizing negative effects on high-resource tasks. | {
"paragraphs": [
[
"Multiple tasks may often benefit from others by leveraging more available data. For natural language tasks, a simple approach is to pre-train embeddings BIBREF0, BIBREF1 or a language model BIBREF2, BIBREF3 over a large corpus. The learnt representations may then be used for upstream tasks such as part-of-speech tagging or parsing, for which there is less annotated data. Alternatively, multiple tasks may be trained simultaneously with either a single model or by sharing some model components. In addition to potentially benefit from multiple data sources, this approach also reduces the memory use. However, multi-task models of similar size as single-task baselines often under-perform because of their limited capacity. The underlying multi-task model learns to improve on harder tasks, but may hit a plateau, while simpler (or data poor) tasks can be over-trained (over-fitted). Regardless of data complexity, some tasks may be forgotten if the schedule is improper, also known as catastrophic forgetting BIBREF4.",
"In this paper, we consider multilingual neural machine translation (NMT), where both of the above pathological learning behaviors are observed, sub-optimal accuracy on high-resource, and forgetting on low-resource language pairs. Multilingual NMT models are generally trained by mixing language pairs in a predetermined fashion, such as sampling from each task uniformly BIBREF5 or in proportion to dataset sizes BIBREF6. While results are generally acceptable with a fixed schedule, it leaves little control over the performance of each task. We instead consider adaptive schedules that modify the importance of each task based on their validation set performance. The task schedule may be modified explicitly by controlling the probability of each task being sampled. Alternatively, the schedule may be fixed, with the impact of each task controlled by scaling the gradients or the learning rates. In this case, we highlight important subtleties that arise with adaptive learning rate optimizers such as Adam BIBREF7. Our proposed approach improves the low-resource pair accuracy while keeping the high resource accuracy intact within the same multi-task model."
],
[
"A common approach for multi-task learning is to train on each task uniformly BIBREF5. Alternatively, each task may be sampled following a fixed non-uniform schedule, often favoring either a specific task of interest or tasks with larger amounts of data BIBREF6, BIBREF8. Kipperwasser and Ballesteros BIBREF8 also propose variable schedules that increasingly favor some tasks over time. As all these schedules are pre-defined (as a function of the training step or amount of available training data), they offer limited control over the performance of all tasks. As such, we consider adaptive schedules that vary based on the validation performance of each task during training.",
"To do so, we assume that the baseline validation performance of each task, if trained individually, is known in advance. When training a multi-task model, validation scores are continually recorded in order to adjust task sampling probabilities. The unnormalized score $w_i$ of task $i$ is given by",
"where $s_i$ is the latest validation BLEU score and $b_i$ is the (approximate) baseline performance. Tasks that perform poorly relative to their baseline will be over-sampled, and vice-versa for language pairs with good performance. The hyper-parameter $\\alpha $ controls how agressive oversampling is, while $\\epsilon $ prevents numerical errors and slightly smooths out the distribution. Final probabilities are simply obtained by dividing the raw scores by their sum."
],
[
"Explicit schedules may possibly be too restrictive in some circumstances, such as models trained on a very high number of tasks, or when one task is sampled much more often than others. Instead of explicitly varying task schedules, a similar impact may be achieved through learning rate or gradient manipulation. For example, the GradNorm BIBREF9 algorithm scales task gradients based on the magnitude of the gradients as well as on the training losses.",
"As the training loss is not always a good proxy for validation and test performance, especially compared to a single-task baseline, we continue using validation set performance to guide gradient scaling factors. Here, instead of the previous weighting schemes, we consider one that satisfies the following desiderata. In addition to favoring tasks with low relative validation performance, we specify that task weights are close to uniform early on, when performance is still low on all tasks. We also as set a minimum task weight to avoid catastrophic forgetting.",
"Task weights $w_i, i=1,...,N$, follow",
"where $S_i = \\frac{s_i}{b_i}$ and $\\overline{S}$ is the average relative score $(\\sum _{j=1}^N S_j)/N$. $\\gamma $ sets the floor to prevent catastrophic forgetting, $\\alpha $ adjusts how quickly and strongly the schedule may deviate from uniform, while a small $\\beta $ emphasizes deviations from the mean score. With two tasks, the task weights already sum up to two, as in GradNorm BIBREF9. With more tasks, the weights may be adjusted so their their sum matches the number of tasks."
],
[
"Scaling either the gradients $g_t$ or the per-task learning rates $\\alpha $ is equivalent with standard stochastic gradient descent, but not with adaptive optimizers such as Adam BIBREF7, whose update rule is given in Eq. DISPLAY_FORM5.",
"Moreover, sharing or not the optimizer accumulators (eg. running average of 1st and 2nd moment $\\hat{m}_t$ and $\\hat{v}_t$ of the gradients) is also impactful. Using separate optimizers and simultaneously scaling the gradients of individual tasks is ineffective. Indeed, Adam is scale-insensitive because the updates are divided by the square root of the second moment estimate $\\hat{v}_t$. The opposite scenario, a shared optimizer across tasks with scaled learning rates, is also problematic as the momentum effect ($\\hat{m}_t$) will blur all tasks together at every update. All experiments we present use distinct optimizers, with scaled learning rates. The converse, a shared optimizer with scaled gradients, could also potentially be employed."
],
[
"We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10."
],
[
"All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. As the source language pair is the same for both tasks, in subsequent experiments, only the encoder is shared BIBREF5. For En-Fr, 10% dropout is applied as in BIBREF11. After observing severe overfitting on En-De in early experiments, the rate is increased to 25% for this lower-resource task. All models are trained on 16 GPUs, using Adam optimizer with a learning rate schedule (inverse square root BIBREF11) and warmup."
],
[
"The main results are summarized in Table TABREF10. Considering the amount of training data, we trained single task baselines for 400K and 600K steps for En-De and En-Fr respectively, where multi-task models are trained for 900K steps after training. All reported scores are the average of the last 20 checkpoints. Within each general schedule type, model selection was performed by maximizing the average development BLEU score between the two tasks.",
"With uniform sampling, results improve by more than 1 BLEU point on En-De, but there is a significant degradation on En-Fr. Sampling En-Fr with a 75% probability gives similar results on En-De, but the En-Fr performance is now comparable to the baseline. Explicit adaptive scheduling behaves similarly on En-De and somewhat trails the En-Fr baseline.",
"For implicit schedules, GradNorm performs reasonably strongly on En-De, but suffers on En-Fr, although slightly less than with uniform sampling. Implicit validation-based scheduling still improves upon the En-De baseline, but less than the other approaches. On En-Fr, this approach performs about as well as the baseline and the multilingual model with a fixed 75% En-Fr sampling probability.",
"Overall, adaptive approaches satisfy our desiderata, satisfactory performance on both tasks, but an hyper-parameter search over constant schedules led to slightly better results. One main appeal of adaptive models is their potential ability to scale much better to a very large number of tasks, where a large hyper-parameter search would prove prohibitively expensive.",
"Additional results are presented in the appendix."
],
[
"To train multi-task vision models, Liu et al. BIBREF13 propose a similar dynamic weight average approach. Task weights are controlled by the ratio between a recent training loss and the loss at a previous time step, so that tasks that progress faster will be downweighted, while straggling ones will be upweighted. This approach contrasts with the curriculum learning framework proposed by Matiisen et al. BIBREF14, where tasks with faster progress are preferred. Loss progress, and well as a few other signals, were also employed by Graves et al. BIBREF15, which formulated curriculum learning as a multi-armed bandit problem. One advantage of using progress as a signal is that the final baseline losses are not needed. Dynamic weight average could also be adapted to employ a validation metric as opposed to the training loss. Alternatively, uncertainty may be used to adjust multi-task weights BIBREF16.",
"Sener and Volkun BIBREF17 discuss multi-task learning as a multi-objective optimization. Their objective tries to achieve Pareto optimality, so that a solution to a multi-task problem cannot improve on one task without hurting another. Their approach is learning-based, and contrarily to ours, doesn't require a somewhat ad-hoc mapping between task performance (or progress) and task weights. However, Pareto optimality of the training losses does not guarantee Pareto optimality of the evaluation metrics. Xu et al. present AutoLoss BIBREF18, which uses reinforcement learning to train a controller that determines the optimization schedule. In particular, they apply their framework to (single language pair) NMT with auxiliary tasks.",
"With implicit scheduling approaches, the effective learning rates are still dominated by the underlying predefined learning rate schedule. For single tasks, hypergradient descent BIBREF19 adjusts the global learning rate by considering the direction of the gradient and of the previous update. This technique could likely be adapted for multi-task learning, as long as the tasks are sampled randomly.",
"Tangentially, adaptive approaches may behave poorly if validation performance varies much faster than the rate at which it is computed. Figure FIGREF36 (appendix) illustrates a scenario, with an alternative parameter sharing scheme, where BLEU scores and task probabilities oscillate wildly. As one task is favored, the other is catastrophically forgotten. When new validation scores are computed, the sampling weights change drastically, and the first task now begins to be forgotten."
],
[
"We have presented adaptive schedules for multilingual machine translation, where task weights are controlled by validation BLEU scores. The schedules may either be explicit, directly changing how task are sampled, or implicit by adjusting the optimization process. Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on high-resource En-Fr task.",
"For future work, in order to increase the utility of adaptive schedulers, it would be beneficial to explore their use on a much larger number of simultaneous tasks. In this scenario, they may prove more useful as hyper-parameter search over fixed schedules would become cumbersome."
],
[
"In this appendix, we present the impact of various hyper-parameters for the different schedule types.",
"Figure FIGREF11 illustrates the effect of sampling ratios in explicit constant scheduling. We vary the sampling ratio for a task from 10% to 90% and evaluated the development and test BLEU scores by using this fixed schedule throughout the training. Considering the disproportional dataset sizes between two tasks (1/40), oversampling high-resource task yields better overall performance for both tasks. While a uniform sampling ratio favors the low-resource task (50%-50%), more balanced results are obtained with a 75% - 25% split favoring the high-resource task.",
"Explicit Dev-Based schedule results are illustrated in Figure FIGREF16 below, where we explored varying $\\alpha $ and $\\epsilon $ parameters, to control oversampling and forgetting."
],
[
"We here present how the task weights, learning rates and validation BLEU scores are modified over time with an implicit schedule. For the implicit schedule hyper-parameters, we set $\\alpha =16$, $\\beta =0.1$, $\\gamma =0.05$ with baselines $b_i$ being 24 and 35 for En-De and En-Fr respectively. For the best performing model, we used inverse-square root learning rate schedule BIBREF11 with a learning rate of 1.5 and 40K warm-up steps.",
"Task weights are adaptively changed by the scheduler during training (Figure FIGREF31 top-left), and predicted weights are used to adjust the learning rates for each task (Figure FIGREF31 top-right). Following Eq. DISPLAY_FORM3, computed relative scores for each task, $S_j$, are illustrated in Figure FIGREF31 bottom-left. Finally, progression of the validation set BLEU scores with their corresponding baselines (as solid horizontal lines) are given in in Figure FIGREF31 bottom-right."
],
[
"This appendix presents a failed experiment with wildly varying oscillations. All encoder parameters were tied, as well as the first four layers of the decoder and the softmax. An explicit schedule was employed."
]
],
"section_name": [
"Introduction",
"Explicit schedules",
"Implicit schedules",
"Implicit schedules ::: Optimization details",
"Experiments ::: Data",
"Experiments ::: Models",
"Experiments ::: Results",
"Discussion and other related work",
"Conclusion",
"Impact of hyper-parameters",
"Implicit validation-based scheduling progress",
"Possible training instabilities"
]
} | {
"answers": [
{
"annotation_id": [
"c54787f2763a97d8f0abdd51dd6e3506a1b3e22d"
],
"answer": [
{
"evidence": [
"We have presented adaptive schedules for multilingual machine translation, where task weights are controlled by validation BLEU scores. The schedules may either be explicit, directly changing how task are sampled, or implicit by adjusting the optimization process. Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on high-resource En-Fr task."
],
"extractive_spans": [],
"free_form_answer": "The negative effects were insignificant.",
"highlighted_evidence": [
"Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on high-resource En-Fr task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"84cfcaa01ce396f4f30b6be44eed85ac69989ef8"
],
"answer": [
{
"evidence": [
"We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10."
],
"extractive_spans": [
"the WMT'14 English-French (En-Fr) and English-German (En-De) datasets."
],
"free_form_answer": "",
"highlighted_evidence": [
"We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f472571d9ef86d1ab24a66b7a93f13e49c7374dd"
],
"answer": [
{
"evidence": [
"We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10."
],
"extractive_spans": [],
"free_form_answer": "English to French and English to German",
"highlighted_evidence": [
"We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"912897751ba2315480979a8de3faf2b5c72f3245"
],
"answer": [
{
"evidence": [
"All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. As the source language pair is the same for both tasks, in subsequent experiments, only the encoder is shared BIBREF5. For En-Fr, 10% dropout is applied as in BIBREF11. After observing severe overfitting on En-De in early experiments, the rate is increased to 25% for this lower-resource task. All models are trained on 16 GPUs, using Adam optimizer with a learning rate schedule (inverse square root BIBREF11) and warmup."
],
"extractive_spans": [
"Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers"
],
"free_form_answer": "",
"highlighted_evidence": [
"All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big are negative effects of proposed techniques on high-resource tasks?",
"What datasets are used for experiments?",
"Are this techniques used in training multilingual models, on what languages?",
"What baselines non-adaptive baselines are used?"
],
"question_id": [
"82a28c1ed7988513d5984f6dcacecb7e90f64792",
"d4a6f5034345036dbc2d4e634a8504f79d42ca69",
"54fa5196d0e6d5e84955548f4ef51bfd9b707a32",
"a997fc1a62442fd80d1873cd29a9092043f025ad"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison of scheduling methods, measured by BLEU scores. Best results in bold.",
"Figure 1: BLEU Score Results for Explicit Constant Schedules. Higher scores are color coded with darker colors and indicate better accuracy.",
"Figure 2: Explicit Dev-Based Schedules",
"Figure 3: Implicit GradNorm Schedules",
"Figure 4: Implicit Dev-Based Schedules",
"Figure 5: Implicit Validation-Based Scheduling Progress.",
"Figure 6: Wild oscillations"
],
"file": [
"3-Table1-1.png",
"7-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"8-Figure4-1.png",
"9-Figure5-1.png",
"10-Figure6-1.png"
]
} | [
"How big are negative effects of proposed techniques on high-resource tasks?",
"Are this techniques used in training multilingual models, on what languages?"
] | [
[
"1909.06434-Conclusion-0"
],
[
"1909.06434-Experiments ::: Data-0"
]
] | [
"The negative effects were insignificant.",
"English to French and English to German"
] | 719 |
1703.02507 | Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features | The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings. | {
"paragraphs": [
[
" Improving unsupervised learning is of key importance for advancing machine learning methods, as to unlock access to almost unlimited amounts of data to be used as training resources. The majority of recent success stories of deep learning does not fall into this category but instead relied on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.",
" While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents. Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.",
"Currently, two contrary research trends have emerged in text representation learning: On one hand, a strong trend in deep-learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures. While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets. On the other end of the spectrum, simpler “shallow” models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.",
" Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see BIBREF3 for plain averaging, and BIBREF4 for weighted averaging). This example shows potential in exploiting the trade-off between model complexity and ability to process huge amounts of text using scalable algorithms, towards the simpler side. In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings. Our proposed model can be seen as an extension of the C-BOW BIBREF0 , BIBREF1 training objective to train sentence instead of word embeddings. We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods BIBREF3 , BIBREF4 , thereby also putting the work by BIBREF4 in perspective.",
" Contributions. The main contributions in this work can be summarized as follows:"
],
[
"Our model is inspired by simple matrix factor models (bilinear models) such as recently very successfully used in unsupervised learning of word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF5 as well as supervised of sentence classification BIBREF6 . More precisely, these models can all be formalized as an optimization problem of the form DISPLAYFORM0 ",
"for two parameter matrices INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 denotes the vocabulary. Here, the columns of the matrix INLINEFORM3 represent the learnt source word vectors whereas those of INLINEFORM4 represent the target word vectors. For a given sentence INLINEFORM5 , which can be of arbitrary length, the indicator vector INLINEFORM6 is a binary vector encoding INLINEFORM7 (bag of words encoding).",
"Fixed-length context windows INLINEFORM0 running over the corpus are used in word embedding methods as in C-BOW BIBREF0 , BIBREF1 and GloVe BIBREF2 . Here we have INLINEFORM1 and each cost function INLINEFORM2 only depends on a single row of its input, describing the observed target word for the given fixed-length context INLINEFORM3 . In contrast, for sentence embeddings which are the focus of our paper here, INLINEFORM4 will be entire sentences or documents (therefore variable length). This property is shared with the supervised FastText classifier BIBREF6 , which however uses soft-max with INLINEFORM5 being the number of class labels."
],
[
"We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.",
"Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0 ",
" where INLINEFORM0 is the list of n-grams (including unigrams) present in sentence INLINEFORM1 . In order to predict a missing word from the context, our objective models the softmax output approximated by negative sampling following BIBREF0 . For the large number of output classes INLINEFORM2 to be predicted, negative sampling is known to significantly improve training efficiency, see also BIBREF7 . Given the binary logistic loss function INLINEFORM3 coupled with negative sampling, our unsupervised training objective is formulated as follows: INLINEFORM4 ",
" where INLINEFORM0 corresponds to the current sentence and INLINEFORM1 is the set of words sampled negatively for the word INLINEFORM2 . The negatives are sampled following a multinomial distribution where each word INLINEFORM5 is associated with a probability INLINEFORM6 , where INLINEFORM7 is the normalized frequency of INLINEFORM8 in the corpus.",
"To select the possible target unigrams (positives), we use subsampling as in BIBREF6 , BIBREF5 , each word INLINEFORM0 being discarded with probability INLINEFORM1 where INLINEFORM2 . Where INLINEFORM3 is the subsampling hyper-parameter. Subsampling prevents very frequent words of having too much influence in the learning as they would introduce strong biases in the prediction task. With positives subsampling and respecting the negative sampling distribution, the precise training objective function becomes DISPLAYFORM0 "
],
[
"In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training. Given a sentence INLINEFORM0 and a trained model, computing the sentence representation INLINEFORM1 only requires INLINEFORM2 floating point operations (or INLINEFORM3 to be precise for the n-gram case, see ( EQREF8 )), where INLINEFORM4 is the embedding dimension. The same holds for the cost of training with SGD on the objective ( EQREF10 ), per sentence seen in the training corpus. Due to the simplicity of the model, parallel training is straight-forward using parallelized or distributed SGD.",
"Also, in order to store higher-order n-grams efficiently, we use the standard hashing trick, see e.g. BIBREF8 , with the same hashing function as used in FastText BIBREF6 , BIBREF5 ."
],
[
"C-BOW BIBREF0 , BIBREF1 aims to predict a chosen target word given its fixed-size context window, the context being defined by the average of the vectors associated with the words at a distance less than the window size hyper-parameter INLINEFORM0 . If our system, when restricted to unigram features, can be seen as an extension of C-BOW where the context window includes the entire sentence, in practice there are few important differences as C-BOW uses important tricks to facilitate the learning of word embeddings. C-BOW first uses frequent word subsampling on the sentences, deciding to discard each token INLINEFORM1 with probability INLINEFORM2 or alike (small variations exist across implementations). Subsampling prevents the generation of n-grams features, and deprives the sentence of an important part of its syntactical features. It also shortens the distance between subsampled words, implicitly increasing the span of the context window. A second trick consists of using dynamic context windows: for each subsampled word INLINEFORM3 , the size of its associated context window is sampled uniformly between 1 and INLINEFORM4 . Using dynamic context windows is equivalent to weighing by the distance from the focus word INLINEFORM5 divided by the window size BIBREF9 . This makes the prediction task local, and go against our objective of creating sentence embeddings as we want to learn how to compose all n-gram features present in a sentence. In the results section, we report a significant improvement of our method over C-BOW."
],
[
"Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.",
"Also, to prevent overfitting, for each sentence we use dropout on its list of n-grams INLINEFORM0 , where INLINEFORM1 is the set of all unigrams contained in sentence INLINEFORM2 . After empirically trying multiple dropout schemes, we find that dropping INLINEFORM3 n-grams ( INLINEFORM4 ) for each sentence is giving superior results compared to dropping each token with some fixed probability. This dropout mechanism would negatively impact shorter sentences. The regularization can be pushed further by applying L1 regularization to the word vectors. Encouraging sparsity in the embedding vectors is particularly beneficial for high dimension INLINEFORM5 . The additional soft thresholding in every SGD step adds negligible computational cost. See also Appendix SECREF8 . We train two models on each dataset, one with unigrams only and one with unigrams and bigrams. All training parameters for the models are provided in Table TABREF25 in the supplementary material. Our C++ implementation builds upon the FastText library BIBREF6 , BIBREF5 . We will make our code and pre-trained models available open-source."
],
[
"We discuss existing models which have been proposed to construct sentence embeddings. While there is a large body of works in this direction – several among these using e.g. labelled datasets of paraphrase pairs to obtain sentence embeddings in a supervised manner BIBREF12 , BIBREF3 , BIBREF13 to learn sentence embeddings – we here focus on unsupervised, task-independent models. While some methods require ordered raw text i.e., a coherent corpus where the next sentence is a logical continuation of the previous sentence, others rely only on raw text i.e., an unordered collection of sentences. Finally, we also discuss alternative models built from structured data sources."
],
[
"The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.",
" BIBREF15 also presented an early approach to obtain compositional embeddings from word vectors. They use different compositional techniques including static averaging or Fisher vectors of a multivariate Gaussian to obtain sentence embeddings from word2vec models.",
" BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.",
" BIBREF4 propose a model in which sentences are represented as a weighted average of fixed (pre-trained) word vectors, followed by post-processing step of subtracting the principal component. Using the generative model of BIBREF17 , words are generated conditioned on a sentence “discourse” vector INLINEFORM0 : INLINEFORM1 ",
" where INLINEFORM0 and INLINEFORM1 and INLINEFORM2 , INLINEFORM3 are scalars. INLINEFORM4 is the common discourse vector, representing a shared component among all discourses, mainly related to syntax. It allows the model to better generate syntactical features. The INLINEFORM5 term is here to enable the model to generate some frequent words even if their matching with the discourse vector INLINEFORM6 is low.",
"Therefore, this model tries to generate sentences as a mixture of three type of words: words matching the sentence discourse vector INLINEFORM0 , syntactical words matching INLINEFORM1 , and words with high INLINEFORM2 . BIBREF4 demonstrated that for this model, the MLE of INLINEFORM3 can be approximated by INLINEFORM4 , where INLINEFORM5 is a scalar. The sentence discourse vector can hence be obtained by subtracting INLINEFORM6 estimated by the first principal component of INLINEFORM7 's on a set of sentences. In other words, the sentence embeddings are obtained by a weighted average of the word vectors stripping away the syntax by subtracting the common discourse vector and down-weighting frequent tokens. They generate sentence embeddings from diverse pre-trained word embeddings among which are unsupervised word embeddings such as GloVe BIBREF2 as well as supervised word embeddings such as paragram-SL999 (PSL) BIBREF18 trained on the Paraphrase Database BIBREF19 .",
"In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.",
" BIBREF21 show that single layer CNNs can be modeled using a tensor decomposition approach. While building on an unsupervised objective, the employed dictionary learning step for obtaining phrase templates is task-specific (for each use-case), not resulting in general-purpose embeddings."
],
[
"The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .",
"FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.",
"Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.",
"Note that on the character sequence level instead of word sequences, FastText BIBREF5 uses the same conceptual model to obtain better word embeddings. This is most similar to our proposed model, with two key differences: Firstly, we predict from source word sequences to target words, as opposed to character sequences to target words, and secondly, our model is averaging the source embeddings instead of summing them."
],
[
"DictRep BIBREF24 is trained to map dictionary definitions of the words to the pre-trained word embeddings of these words. They use two different architectures, namely BOW and RNN (LSTM) with the choice of learning the input word embeddings or using them pre-trained. A similar architecture is used by the CaptionRep variant, but here the task is the mapping of given image captions to a pre-trained vector representation of these images."
],
[
"We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.",
"Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.",
"Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images."
],
[
"In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. Table TABREF21 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.",
"Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material.",
"Downstream Supervised Evaluation Results. On running supervised evaluations and observing the results in Table TABREF18 , we find that on an average our models are second only to SkipThought vectors. Also, both our models achieve state of the art results on the CR task. We also observe that on half of the supervised tasks, our unigrams + bigram model is the best model after SkipThought. Our models are weaker on the MSRP task (which consists of the identification of labelled paraphrases) compared to state-of-the-art methods. However, we observe that the models which perform very strongly on this task end up faring very poorly on the other tasks, indicating a lack of generalizability. On rest of the tasks, our models perform extremely well. The SkipThought model is able to outperform our models on most of the tasks as it is trained to predict the previous and next sentences and a lot of tasks are able to make use of this contextual information missing in our Sent2Vec models. For example, the TREC task is a poor measure of how one predicts the content of the sentence (the question) but a good measure of how the next sentence in the sequence (the answer) is predicted.",
"Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information. Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method.",
"For the Siamese C-BOW model trained on the Toronto corpus, supervised evaluation as well as similarity evaluation results on the SICK 2014 dataset are unavailable.",
"Macro Average. To summarize our contributions on both supervised and unsupervised tasks, in Table TABREF21 we present the results in terms of the macro average over the averages of both supervised and unsupervised tasks along with the training times of the models. For unsupervised tasks, averages are taken over both Spearman and Pearson scores. The comparison includes the best performing unsupervised and semi-supervised methods described in Section SECREF3 . For models trained on the Toronto books dataset, we report a 3.8 INLINEFORM0 points improvement over the state of the art. Considering all supervised, semi-supervised methods and all datasets compared in BIBREF16 , we report a 2.2 INLINEFORM1 points improvement.",
"We also see a noticeable improvement in accuracy as we use larger datasets like Twitter and Wikipedia. We furthermore see that the Sent2Vec models are faster to train when compared to methods like SkipThought and DictRep, owing to the SGD optimizer allowing a high degree of parallelizability.",
"We can clearly see Sent2Vec outperforming other unsupervised and even semi-supervised methods. This can be attributed to the superior generalizability of our model across supervised and unsupervised tasks.",
"Comparison with BIBREF4 . We also compare our work with BIBREF4 who also use additive compositionality to obtain sentence embeddings. However, in contrast to our model, they use fixed, pre-trained word embeddings to build a weighted average of these embeddings using unigram probabilities. While we couldn't find pre-trained state of the art word embeddings trained on the Toronto books corpus, we evaluated their method using GloVe embeddings obtained from the larger Common Crawl Corpus, which is 42 times larger than our twitter corpus, greatly favoring their method over ours.",
"In Table TABREF22 , we report an experimental comparison to their model on unsupervised tasks. In the table, the suffix W indicates that their down-weighting scheme has been used, while the suffix R indicates the removal of the first principal component. They report values of INLINEFORM0 as giving the best results and used INLINEFORM1 for all their experiments. We observe that our results are competitive with the embeddings of BIBREF4 for purely unsupervised methods. It is important to note that the scores obtained from supervised task-specific PSL embeddings trained for the purpose of semantic similarity outperform our method on both SICK and average STS 2014, which is expected as our model is trained purely unsupervised.",
"In order to facilitate a more detailed comparison, we also evaluated the unsupervised Glove + WR embeddings on downstream supervised tasks and compared them to our twitter models. To use BIBREF4 's method in a supervised setup, we precomputed and stored the common discourse vector INLINEFORM0 using 2 million random Wikipedia sentences. On an average, our models outperform their unsupervised models by a significant margin, this despite the fact that they used GloVe embeddings trained on larger corpora than ours (42 times larger). Our models also outperform their semi-supervised PSL + WR model. This indicates our model learns a more precise weighing scheme than the static one proposed by BIBREF4 .",
"",
"The effect of datasets and n-grams. Despite being trained on three very different datasets, all of our models generalize well to sometimes very specific domains. Models trained on Toronto Corpus are the state-of-the-art on the STS 2014 images dataset even beating the supervised CaptionRep model trained on images. We also see that addition of bigrams to our models doesn't help much when it comes to unsupervised evaluations but gives a significant boost-up in accuracy on supervised tasks. We attribute this phenomenon to the ability of bigrams models to capture some non-compositional features missed by unigrams models. Having a single representation for “not good\" or “very bad\" can boost the supervised model's ability to infer relevant features for the corresponding classifier. For semantic similarity tasks however, the relative uniqueness of bigrams results in pushing sentence representations further apart, which can explain the average drop of scores for bigrams models on those tasks.",
"On learning the importance and the direction of the word vectors. Our model – by learning how to generate and compose word vectors – has to learn both the direction of the word embeddings as well as their norm. Considering the norms of the used word vectors as by our averaging over the sentence, we observe an interesting distribution of the “importance” of each word. In Figure FIGREF24 we show the profile of the INLINEFORM0 -norm as a function of INLINEFORM1 for each INLINEFORM2 , and compare it to the static down-weighting mechanism of BIBREF4 . We can observe that our model is learning to down-weight frequent tokens by itself. It is also down-weighting rare tokens and the INLINEFORM3 profile seems to roughly follow Luhn's hypothesis BIBREF36 , a well known information retrieval paradigm, stating that mid-rank terms are the most significant to discriminate content.",
""
],
[
"",
"In this paper, we introduce a novel, computationally efficient, unsupervised, C-BOW-inspired method to train and infer sentence embeddings. On supervised evaluations, our method, on an average, achieves better performance than all other unsupervised competitors with the exception of SkipThought. However, SkipThought vectors show a very poor performance on sentence similarity tasks while our model is state-of-the-art for these evaluations on average. Also, our model is generalizable, extremely fast to train, simple to understand and easily interpretable, showing the relevance of simple and well-grounded representation models in contrast to the models using deep architectures. Future work could focus on augmenting the model to exploit data with ordered sentences. Furthermore, we would like to investigate the model's ability to use pre-trained embeddings for downstream transfer learning tasks."
],
[
"Optionally, our model can be additionally improved by adding an L1 regularizer term in the objective function, leading to slightly better generalization performance. Additionally, encouraging sparsity in the embedding vectors is beneficial for memory reasons, allowing higher embedding dimensions INLINEFORM0 .",
"We propose to apply L1 regularization individually to each word (and n-gram) vector (both source and target vectors). Formally, the training objective function ( EQREF10 ) then becomes DISPLAYFORM0 ",
" where INLINEFORM0 is the regularization parameter.",
"Now, in order to minimize a function of the form INLINEFORM0 where INLINEFORM1 is not differentiable over the domain, we can use the basic proximal-gradient scheme. In this iterative method, after doing a gradient descent step on INLINEFORM2 with learning rate INLINEFORM3 , we update INLINEFORM4 as DISPLAYFORM0 ",
"where INLINEFORM0 is called the proximal function BIBREF37 of INLINEFORM1 with INLINEFORM2 being the proximal parameter and INLINEFORM3 is the value of INLINEFORM4 after a gradient (or SGD) step on INLINEFORM5 .",
"In our case, INLINEFORM0 and the corresponding proximal operator is given by DISPLAYFORM0 ",
" where INLINEFORM0 corresponds to element-wise product.",
"Similar to the proximal-gradient scheme, in our case we can optionally use the thresholding operator on the updated word and n-gram vectors after an SGD step. The soft thresholding parameter used for this update is INLINEFORM0 and INLINEFORM1 for the source and target vectors respectively where INLINEFORM2 is the current learning rate, INLINEFORM3 is the INLINEFORM4 regularization parameter and INLINEFORM5 is the sentence on which SGD is being run.",
"We observe that INLINEFORM0 regularization using the proximal step gives our models a small boost in performance. Also, applying the thresholding operator takes only INLINEFORM1 floating point operations for the updating the word vectors corresponding to the sentence and INLINEFORM2 for updating the target as well as the negative word vectors, where INLINEFORM3 is the number of negatives sampled and INLINEFORM4 is the embedding dimension. Thus, performing INLINEFORM5 regularization using soft-thresholding operator comes with a small computational overhead.",
"We set INLINEFORM0 to be 0.0005 for both the Wikipedia and the Toronto Book Corpus unigrams + bigrams models."
]
],
"section_name": [
"Introduction",
"Model",
"Proposed Unsupervised Model",
"Computational Efficiency",
"Comparison to C-BOW",
"Model Training",
"Related Work",
"Unsupervised Models Independent of Sentence Ordering",
"Unsupervised Models Depending on Sentence Ordering",
"Models requiring structured data",
"Evaluation Tasks",
"Results and Discussion",
"Conclusion",
"L1 regularization of models"
]
} | {
"answers": [
{
"annotation_id": [
"be4dcf7ef4d966f75d9671684a5ebbd975b3e08d"
],
"answer": [
{
"evidence": [
"Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.",
"Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information. Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. ",
"Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"c999a16360d5dfcfd59344a1df3c78d46ec29e00"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of the performance of different models on different supervised evaluation tasks. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of accuracy for each category (For MSRP, we take the accuracy). )",
"We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.",
"Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.",
"We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.",
"The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.",
"BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.",
"The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .",
"FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.",
"In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.",
"Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.",
"FLOAT SELECTED: Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of entries for each correlation measure.",
"In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. Table TABREF21 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.",
"Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material."
],
"extractive_spans": [
"Sequential (Denoising) Autoencoder",
"TF-IDF BOW",
"SkipThought",
"FastSent",
"Siamese C-BOW",
"C-BOW",
"C-PHRASE",
"ParagraphVector"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of the performance of different models on different supervised evaluation tasks. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of accuracy for each category (For MSRP, we take the accuracy). )",
"We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16",
"Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 .",
"We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. ",
"The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. ",
"They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.",
"BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE.",
"The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. ",
"FastSent BIBREF16 is a sentence-level log-linear bag-of-words model.",
"In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.",
"Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. ",
"FLOAT SELECTED: Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of entries for each correlation measure.",
"In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models.",
"Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies",
"In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"981f5049fb610cda08b0c1c41c2365722ec67428"
],
"answer": [
{
"evidence": [
"We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.",
"Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.",
"Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images."
],
"extractive_spans": [],
"free_form_answer": "Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks",
"highlighted_evidence": [
"We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16",
"Sentence embeddings are evaluated for various supervised classification tasks as follows.",
"The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. ",
"We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"8790b9851110185847e05d267a8ab81e08143c6d"
],
"answer": [
{
"evidence": [
"We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.",
"Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0"
],
"extractive_spans": [
"by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words"
],
"free_form_answer": "",
"highlighted_evidence": [
"We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings",
"Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"Which other unsupervised models are used for comparison?",
"What metric is used to measure performance?",
"How do the n-gram features incorporate compositionality?"
],
"question_id": [
"09a993756d2781a89f7ec5d7992f812d60e24232",
"37eba8c3cfe23778498d95a7dfddf8dfb725f8e2",
"cdf1bf4b202576c39e063921f6b63dc9e4d6b1ff",
"03f4e5ac5a9010191098d6d66ed9bbdfafcbd013"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Comparison of the performance of different models on different supervised evaluation tasks. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of accuracy for each category (For MSRP, we take the accuracy). )",
"Table 2: Unsupervised Evaluation Tasks: Comparison of the performance of different models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset. Top 3 performances in each data category are shown in bold. The average is calculated as the average of entries for each correlation measure.",
"Table 3: Best unsupervised and semi-supervised methods ranked by macro average along with their training times. ** indicates trained on GPU. * indicates trained on a single node using 30 threads. Training times for non-Sent2Vec models are due to Hill et al. (2016a). For CPU based competing methods, we were able to reproduce all published timings (+-10%) using our same hardware as for training Sent2Vec.",
"Table 4: Comparison of the performance of the unsupervised and semi-supervised sentence embeddings by (Arora et al., 2017) with our models. Unsupervised comparisons are in terms of Pearson’s correlation, while comparisons on supervised tasks are stating the average described in Table 1.",
"Figure 1: Left figure: the profile of the word vector L2norms as a function of log(fw) for each vocabulary word w, as learnt by our unigram model trained on Toronto books. Right figure: down-weighting scheme proposed by Arora et al. (2017): weight(w) = a a+fw .",
"Table 5: Training parameters for the Sent2Vec models",
"Table 6: Comparison of the performance of different Sent2Vec models with different semisupervised/supervised models on different downstream supervised evaluation tasks. An underline indicates the best performance for the dataset and Sent2Vec model performances are bold if they perform as well or better than all other non-Sent2Vec models, including those presented in Table 1.",
"Table 7: Unsupervised Evaluation: Comparison of the performance of different Sent2Vec models with semi-supervised/supervised models on Spearman/Pearson correlation measures. An underline indicates the best performance for the dataset and Sent2Vec model performances are bold if they perform as well or better than all other non-Sent2Vec models, including those presented in Table 2.",
"Table 8: Average sentence lengths for the datasets used in the comparison."
],
"file": [
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"8-Figure1-1.png",
"12-Table5-1.png",
"13-Table6-1.png",
"13-Table7-1.png",
"13-Table8-1.png"
]
} | [
"What metric is used to measure performance?"
] | [
[
"1703.02507-Evaluation Tasks-2",
"1703.02507-Evaluation Tasks-1",
"1703.02507-Evaluation Tasks-0"
]
] | [
"Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks"
] | 721 |
1911.08915 | Universal and non-universal text statistics: Clustering coefficient for language identification | Abstract In this work we analyze statistical properties of 91 relatively small texts in 7 different languages (Spanish, English, French, German, Turkish, Russian, Icelandic) as well as texts with randomly inserted spaces. Despite the size (around 11260 different words), the well known universal statistical laws -namely Zipf and Herdan-Heap’s laws- are confirmed, and are in close agreement with results obtained elsewhere. We also construct a word co-occurrence network of each text. While the degree distribution is again universal, we note that the distribution of clustering coefficients, which depend strongly on the local structure of networks, can be used to differentiate between languages, as well as to distinguish natural languages from random texts. | {
"paragraphs": [
[
"Statistical characterization of languages has been a field of study for decadesBIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptogramsBIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kind of universal results have long piqued the interest of physicists and mathematicians, as well as linguistsBIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to try to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary -that is, the set of different words- grows with the size of a text, the number of which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence Zipf's law. BIBREF20, BIBREF21",
"A different tool used to characterize texts is the adjacency (or co-ocurrence) network BIBREF22, BIBREF23, BIBREF24, BIBREF25. The nodes in this network represent the words in the text, and a link is placed between nodes if the corresponding words are adjacent in the text. These links can be directed -according to the order in which the words appear-, or undirected. In this work we study properties of the adjacency network of various texts in several languages, using undirected links. The advantage of representing the text as a network is that we can describe properties of the text using the tools of network theory BIBREF26. The simplest characterization of a network is its degree distribution, that is, the fraction of nodes with a given number of links, and we will see that this distribution is also a universal power law for all languages. As we argue ahead, this may follow from the fact that Zipf's law is satisfied.",
"Another interesting use for text statistics is to distinguish texts and languages. In particular, as occurs with letter frequencies, other more subtle statistics may be used to distinguish different languages, and beyond that, provide a metric to group languages into different families BIBREF27, BIBREF28, BIBREF29. In this paper we use the clustering coefficient BIBREF26 to show that even though the degree distribution of the adjacency matrices is common to all languages, the statistics of their clustering coefficients, while approximately similar for various texts in each language, appears to be different from one language to another.",
"We use different texts (see Appendix (SECREF8)) instead of a large single corpus for each language because clustering coefficients typically decrease as a function of the size of the networkBIBREF30. Actually, we must compare the statistics of the clustering coefficient in texts with adjacency networks of comparable sizes. In the following section we present the rank vs frequency distribution for these texts. We also measure how the vocabulary increases with text size, as well as the respective degree distributions of the networks corresponding to every text, and compare them with a null \"random\" hypothesis. This null hypothesis consists of a set of texts constructed as follows: we select a text and remove all the spaces between words, then we reintroduce the spaces at random with the restriction that there cannot be a space next to another. We identify as words all strings of letters between consecutive spaces (the restriction avoids the possibility of having empty words). The reason we build the null hypothesis this way instead of the usual independent random letters with random spaces most commonly used BIBREF17, BIBREF31, is that consecutive letters are not independent: they are correlated to ensure word pronunciability, as well as due to spelling rules. Our method for constructing these random texts conserves most of the correlations between consecutive letters in a given language.",
"Next, we calculate the distribution of the clustering coefficients of the nodes of the adjacency network for each text. These distribution functions are more or less similar for all the texts of the same language, provided the networks are of the same size. However, it is apparent that the distributions are different between different languages. We also compare the clustering coefficient distributions with those of the null hypothesis. The data show that the strongest differences between languages occur for the fractions of nodes with clustering coefficients 0 and 1. We build a scatter plot for these fractions for all the texts in each language. Though there is overlap between some languages, other languages are clearly differentiated in the plot. We fit correlated bivariate gaussian distributions to the data of each language, which allows us to estimate a likelihood that a text is in a given language."
],
[
"We analyzed 91 texts written in 7 languages: Spanish, English, German, French, Turkish, Russian and Icelandic. We also considered as null texts, 12 realizations of a randomized version the Portrait of Dorian Gray book, twice for each language analyzed here (except Icelandic). As mentioned above, the process for randomizing the text is as follows: first we remove the spaces in the original text. Then, we take the first letter, and with a probability of $1/2$ we add the next letter in the sequence, or the next letter in the sequence and a space. We advance to the last symbol added, and repeat the process until we reach the end of the text. This way we destroy the grammar of the original language, keeping the letter frequencies as well as most of the correlations between consecutive letters. The set of documents we used in this work are shown in Appendix SECREF8.",
"All texts were intervened to remove punctuation marks, numbers, parenthesis and other uncommon symbols, and all the letters were turned into lower case, so a word appearing with different case letters would not be counted as two different words. Also, we do not transliterate the texts, instead, we use the original symbols of the texts (Cyrillic alphabet for Russian texts or the special characters in Icelandic) using the UTF-8 encoding.",
"Also, since clustering coefficients depend non trivially on the size of the networks, we cut the texts so they all have essentially the same vocabulary size ($\\simeq 11260$).",
"In table TABREF1 we summarize for each language, the averages of the length, vocabulary size, maximum frequency and number of hapax legomena (i.e. words that appear only once in a document or corpus) of the texts studied here. It is important to note that for different languages, very different text lengths are required to achieve the same vocabulary size. We also note that in all cases, hapax legomena represent approximately half of the vocabulary in each text.",
"In figure (FIGREF2) we show Zipf plots for some of the texts, including the random texts constructed as described previously. It is clear that all the texts reproduce convincingly Zipf's law: $f(n)\\sim 1/n^\\alpha $ where $n=1,2,...N_{tot}$ is the word rank, $N_{tot}$ is the size of the vocabulary and $f(n)$ is its frequency. This is in contrast to previous work in which it is argued that there are differences between the Zipf plots of texts and random sequencesBIBREF32, this might be due to the fact that our random text construction preserves correlations between letters, whereas the letters in BIBREF32 were placed independently. Our findings are summarized in Appendix (SECREF7).",
"Figure (FIGREF2) is the typical rank vs frequency plot for a randomly chosen text in each language. From the figure, we see that $\\alpha \\simeq 1$, obtained by least squares fits to the plot, describes very well all the texts. Therefore, given that $n/N_{tot}$ is the fraction of words with frequencies greater or equal to $f(n)$, then",
"where $p(f) \\simeq 1/f^{\\alpha _z}$ is the frequency distribution of the vocabulary. Now, if $f(n)\\sim 1/n^\\alpha $, then $p(f)\\sim 1/f^{1+1/\\alpha }$, i.e. $\\alpha _z=1+1/\\alpha $. Substituting $\\alpha =1$, we have $\\alpha _Z = 2$, which is in close agreement with what we observe. See figure (FIGREF5) and the tables in Appendix (SECREF7)",
"Figure (FIGREF6) shows the size of the vocabulary $V(L)$, as a function of the length $L$ of the text considered. Once again, all the texts, including the random texts, follow the Heaps-Herdan law $V(L)\\sim L^{\\beta }$ reasonably well. Again, the parameters describing the various texts are given in Appendix(SECREF7)",
"Continuing with the universal laws describing texts, in figure (FIGREF7) we show an example of the degree distribution for the adjacency network of the texts studied in this work. It is clear that except for the low odd degrees ($k=1,3,5,7$, see inset in fig.(FIGREF7)), the distribution is well described by a power law. The parameters corresponding to the texts are given in Appendix(SECREF7). As mentioned previously, this asymptotic behavior is a consequence of Zipf's law. If we assume that each time a word appears, the input degree $k_{in}$ (alternatively, the output degree $k_{out}$) of the corresponding node increases approximately by one, then the input degree could be expected to grow proportional to the frequency of each word. Further, in general we can expect that the total degree of a node to be $k\\approx k_{in}+k_{out}\\approx 2k_{in}$ (clearly this is not always true: for example, a word can appear twice, being preceded both times by the same word and followed by different words each time, leading to a degree $k=3$). Then, up to multiplicative factors, we can apply the same argument as in Equation DISPLAY_FORM4 for $\\mathrm {p}(k)$, the degree distribution of the network, instead of $p(f)$ From this equation it again follows that if $f(n)\\sim 1/n^\\alpha $, then $\\mathrm {p}(k)\\sim 1/k^{1+1/\\alpha }$, which is again in close agreement with what we observe."
],
[
"Thus far, our results confirm that the all our texts exhibit the expected universal statistics observed in natural languages. Actually, it could be argued that these laws may be \"too universal\", not being able to clearly distinguish texts written in real languages from our random texts. Further, all these laws appear to be consequence of Zipf's law, and this law reflects only the frequency of words, not their order. Thus, all three laws would still hold if the words of the texts were randomly shuffled. Clearly, shuffling the words destroys whatever relations may exist between successive words in a text, depending on the language in which it was written. This relation between successive words is what conveys meaning to a text. Thus, we expect that the clustering coefficient BIBREF26 of the adjacency network of each text,(constructed using words as nodes and linking those that are adjacent in the text), which depends strongly on the local structure, will distinguish between random texts and real texts, and even between texts in different languages.",
"The clustering coefficient $C_i(k_i)$ of node $i$ with degree $k_i$ is defined as the ratio of the number of links between node $i$'s neighbors over the total number of links that would be possible for this node $k_i(k_i-1)/2$. Thus, clearly, $0\\le C_i(k_i)\\le 1$. Hapax legomena, for example, mostly correspond to nodes with degree $k=2$, thus their clustering coefficient can only take the values 0 and 1 (degree $k=1$ is possible if the hapax appears followed and preceded by the same word, but these are rare occurrences). In general terms, the actual values of the clustering coefficients vary as a function of the size of the network BIBREF30, thus, in order to compare the clustering coefficients of networks corresponding to different texts, we have trimmed our texts so they all have approximately the same vocabulary size ($\\simeq 11260$). In figure (FIGREF8) we show an example of the clustering coefficient as a function of $k$. There are many values $C(k)$ for each $k$ corresponding to the diverse nodes with the same degree. The red points in the graph denote the average clustering coefficient for each $k$, and the solid black line is the log-binning of this average."
],
[
"In order to quantify differences between languages, for each text we define the quantity $\\nu (C)$ as",
"In figure (FIGREF10) we show $\\nu (C)$ vs $C$ for Don Quixote in six different languages. From the graph it is clear that $\\nu (0)$ and $\\nu (1)$ show the largest degree of variation between the various languages, thus, we propose to focus on these two numbers to characterize the various languages.",
"In figure (FIGREF11) we show a scatter plot of $\\nu (1)$ vs $\\nu (0)$ for the texts in every language presented here. Using maximum likelihood estimators, we fit correlated bi-variate Gaussian distributions to the scatter plots of each language, the contour plots of which are also shown in the graph. First and most importantly, we can see in the figure that there is a clear distinction between languages and random texts. Also, we can see that languages tend to cluster in a way that is consistent with the known relationships among the languages. For example, in the figure we note that the contours corresponding to French and Spanish show a strong overlap, which might have been expected as they are closely related languages BIBREF35. On the other hand, Russian is far from French and Spanish. This suggest that these curves may be used as a quantitative aid for the classification of languages into families. For example, French and Spanish which are both Romance languages, appear closer to each other than to Russian and Turkish, which have different origins.",
"In order to test the validity our results, we calculate $\\nu (0)$ and $\\nu (1)$ for another set of books, (see tables in the appendix (SECREF8)) and using the fitted Gaussian distributions for each language, we calculated the probability that a text in each language would have those values, which allows us to assign a likelihood that a text is written in one or another language.",
"In table TABREF12 we can see, for example, that it is most likely that Smásögur I (Short stories in Icelandic) are written in Icelandic than in any of the languages analyzed, or that they are a random text.",
"Not surprisingly, it is not so easy to tell if Voltaire in French, is really written in French or in Spanish, likewise, it is not easy to tell if Moby Dick in Spanish is written in Spanish or French, and in both cases the maximum likelihood prediction fails. Nevertheless, it is clear that these books are not written in any of the other languages presented here, nor do they correspond to a random text. On the other hand, Twenty thousand leagues under the sea in Spanish and Les Miserables in French, are correctly identified, as well as all the other texts analyzed, including the random texts.",
"To try to pinpoint the origin of the differentiation between different languages, we note that an inspection of the nodes with $C=0$ and 1 reveals that they mainly consist of hapax legomena (as noted before, hapax legomena only have $C$ values of 0 and 1). To measure the relative importance of these words, we calculate the ratio of hapax legomena to the total number of words with $C=0$ and 1, we call this number $\\nu ^{\\prime }_{H}(C)$.",
"In Table TABREF13, we show the fraction of hapax legomena of the words with $C=0,1$ for several texts in English. A value close to 1 indicates that most of the nodes that contribute to $\\nu ^{\\prime }_H(C)$ are words that appear only once in the document. This indicates that the local structure around those words, i.e, the way that they relate in the adjacency network, is particular to each language, and seems to be a key for language differentiation.",
"In the Table TABREF14 we see the average of $\\nu ^{\\prime }_H(C)$ for each of the languages studied here. Note that for example the values are clearly different for Spanish and Turkish, similar for Spanish and French, and very different for all languages and random."
],
[
"Zipf's law is one of the most universal statistics of natural languages. However, it may be too universal. While it may not strictly apply to sequences of independent random symbols with random spacings BIBREF32, it appears to describe random texts that conserve most of the correlations between successive symbols, as accurately as it describes texts written in real languages. Further, Heaps-Herdan law and the degree distribution of the adjacency network, appear to be consequences of Zipf's law, and are, thus, as universal.",
"In this work we studied 91 texts in seven different languages, as well as random texts constructed by randomizing the spacings between words without altering the order of the letters in the text. We find that they are all well described by the universal laws. However, we also found that the distribution of clustering coefficients of the networks of each text appears to vary from one language to another, and to distinguish random texts from real languages. The nodes that vary the most among the distributions of $C(k)$ are those for which $C(k)$ is equal to 0 or 1. We fit the scatter plot of these nodes to bivariate Gaussian distributions, which allows us to define the likelihood that a text is written in each given language. This method was very successful identifying the languages in which test were written, only failing to distinguish a couple of texts, confusing texts french and spanish, which have a strong overlap. In Table (TABREF12) we present the evidence that we can use the statistics of clustering coefficient to measure a sort of distance between languages.",
"Though hapax legomena account for most of the value $\\nu (C)$ for $C=0$ and 1, we found that the fraction $\\nu ^{\\prime }_H(C)$ of hapax to other words is similar for French and Spanish, and different for Spanish and, say, Turkish. Further, $\\nu ^{\\prime }_H(C)$ is different between random texts and the languages we study. These observations might give some clue to the mechanism by which the clustering coefficient, and in particular the local structure around hapax legomena, helps to differentiate languages.",
"Unlike the work presented by Gamallo et. al BIBREF27, which is Corpus-based, our work uses a relatively small amount of texts. Also as we can see in tables presented in Appendix (SECREF7), the length of the texts we use is not necessarily the length of the complete work. Texts were cut at the appropriate length for all of them to have approximately the same vocabulary ($\\simeq 11260$). Thus, actual lengths ranged from 368076 words for the Jane Austen books in English, to 26347 words for the text we called Turkish I. This is important not only for computational reasons, it may also be important for studies of the relation between languages for which large corpora do not exist, something very common in the linguistic studies of the indigenous languages. The method proposed in this work can be useful in such cases, as small texts trimmed to fill some appropriate vocabulary size is the only necessary ingredient."
],
[
"Diego Espitia acknowledges financial support through a doctoral scholarship from Consejo Nacional de Ciencia y Tecnología (CONACyT)."
],
[
"In this appendix we present tables of results for the data analyzed in this work. Here $\\alpha _k$ and $\\sigma _k$ represent the exponent and standard error of the power law for the degree distribution of the co-occurrence networks $p(k) \\propto 1/k^{\\alpha _k} $, for $k> k_{min}$, where $k_{min}$ is the smallest degree for which the power law holds. Similarly, $\\alpha _Z$ and $\\sigma _z$ represents the exponent and standard error of the distribution of frequencies $p(f)\\propto 1/f^{\\alpha _z}$; for $f > f_{min}$ where now $f_{min}$ is the smallest frequency for which the power law is satisfied. The values of the Heap's law $\\beta $ and $\\sigma _h$ were obtained via least square fitting.",
"For the estimation of the parameters we use the Maximum Likelihood Estimation (MLE) method for discerning and quantifying power-law behavior in empirical data BIBREF36. The MLE works as follows: assuming that the data fits a power law, we estimate $\\alpha $ via",
"where $x_i > x_{min}^*$ for $i=1,...N$ and using as $x^*_{min}$ each element of the data set $\\lbrace x\\rbrace $. Then, using the Kolmogorov–Smirnov test we find the distance $D$ between the cumulative distribution of the data set and the cumulative distribution $P_{(x^*_{min},\\alpha ^*)}(x)$. From these set of distances, we find the value which minimizes $D$, this $x_{min}$, is the smallest data for which the power law holds, and can be used to determine the parameter of the power law $\\hat{\\alpha }$. In order to perform a goodness of the fit test, we construct 1000 synthetic data, using the previous $\\hat{\\alpha }$ and $x_{min}$. Now we can count the fraction of the synthetic distances that are larger than the distance obtained from the data. This fraction is known as p-value If this p-value$>0.1$, then the difference between the data set and the model can be attributed to statistical fluctuations alone; if it is small, the model is not a plausible fit to the data.BIBREF36",
"Spanish",
"English",
"French",
"German",
"Turkish",
"Russian",
"Icelandic",
"Random"
],
[
"Here we present the text used in this work. The vast majority of the texts were obtained from the Gutemberg project, except for the texts in Russian, Turkish and Icelandic, which were obtained from other sources.",
"@ll@ 2cIcelandic",
"Torfhildi Hólm Brynjólfur Biskup Sveinsson",
"Sagas I",
"",
"Sagas II",
"",
"Sagas III",
"",
"Sagas IV",
"",
"Sagas V",
"",
"Sagas VI",
"",
"Sagas VII",
"",
"Jón Trausti",
"",
"Jón Thoroddsen Maður Og Kona",
"Þorgils Gjallanda",
"",
"Smásögur I",
"",
"Smásögur II",
"",
"",
"Source: All sagas were obtained from https://sagadb.org/. The other texts were obtained from https://www.snerpa.is/net/index.html"
]
],
"section_name": [
"Introduction",
"Texts and Universal laws",
"Clustering coefficient",
"Language differentiation",
"Conclusions",
"Acknowledgments",
"Tables and Results",
"Texts used"
]
} | {
"answers": [
{
"annotation_id": [
"883485cf14777d2979f41faf5386f9cc24d333a1"
],
"answer": [
{
"evidence": [
"Statistical characterization of languages has been a field of study for decadesBIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptogramsBIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kind of universal results have long piqued the interest of physicists and mathematicians, as well as linguistsBIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to try to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary -that is, the set of different words- grows with the size of a text, the number of which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence Zipf's law. BIBREF20, BIBREF21"
],
"extractive_spans": [],
"free_form_answer": "Zipf's law describes change of word frequency rate, while Heaps-Herdan describes different word number in large texts (assumed that Hepas-Herdan is consequence of Zipf's)",
"highlighted_evidence": [
"However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11.",
"Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary -that is, the set of different words- grows with the size of a text, the number of which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence Zipf's law. BIBREF20, BIBREF21"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"How do Zipf and Herdan-Heap's laws differ?"
],
"question_id": [
"3103502cf07726d3eeda34f31c0bdf1fc0ae964e"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"Spanish"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Averages for each language of the length, vocabulary size, maximum word frequency and number of hapax legomena of the texts studied in this work. Notice the large variations in text length required to achieve the same vocabulary size Ntot from one language to another.",
"Figure 1: Word frequency f(n) versus rank n illustrating Zipf’s Law f(n) ∼ 1/nα for single randomly chosen texts in each language: English, Russian, Turkish, French, German, Spanish and Icelandic, using Log-binned data (colored symbols). Black dots represent the random texts constructed as described in the text. The dashed line corresponds to α = 1. In the inset we show an example of rank vs frequency plot without log-binning. Note that words with f = 1 (hapax legomena) represent a large fraction of the vocabulary of the text",
"Figure 2: Cumulative of the frequency distribution P (f) ≡ ∫ ∞ f",
"Figure 3: The Herdan-Heap’s Law V (L) ∼ Lβ for single randomly chosen texts in each language: English, Russian, French, German, Spanish, Icelandic and Turkish. Black dots represent random texts. The dashed line corresponds to a power law with exponent β = 0.8, which is the average over all the texts we studied.",
"Figure 4: Cumulative of the degree distribution P(f) ≡ ∫ ∞ f p(ζ)dζ for single randomly chosen texts in each language: Spanish, English, French, German, Turkish, Russian, and Icelandic (colored symbols). As in figure 2, black dots represent a random text, and the dashed line corresponds to the behavior when the exponent in Zipf’s law is α = 1. Inset: Degree distribution of the Don Quixote in French, note that the first few odd degrees k = 1, 3, 5, 7 deviate from the power law behavior.",
"Figure 5: Clustering coefficient as function of the degree for Don Quixote (Spanish). The gray dots represents the C(k) for each node. Red circles are the average of C(k). The black line is the logarithmic binning of the average.",
"Figure 6: Fraction of nodes with same Clustering Coefficient for Don Quixote in English, Spanish, Turkish, Russian, French and German. Note that nodes with C = 0 and 1 present the largest variability between different languages",
"Figure 7: Bi-variate normal distribution for ν(0) and ν(1) for the different texts and random sequences. Note that differences in the distributions are clear for languages that are known to be part of different linguistic families, for example Turkish and English. Languages that belongs to the same family (Spanish and French) are essentially indistinguishable.",
"Table 2: Probability density function for different texts written in several languages, and random texts. Values less than 1× 10−8 are neglected.",
"Table 3: Fraction of hapax legomena with clustering coefficient equal to 0 or 1 for English texts",
"Table 4: Average values of ν′H(C) for Spanish, English, French, German, Turkish, Icelandic and Random texts.",
"Table 5: Source: Gutemberg Project",
"Table 8: Source: https://www.e-reading.club",
"Table 6: Source: Gutemberg Project",
"Table 10: Source: Gutemberg Project",
"Table 11: Source: All sagas were obtained from https://sagadb.org/. The other texts were obtained from https://www.snerpa.is/net/index.html"
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Figure5-1.png",
"6-Figure6-1.png",
"7-Figure7-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"13-Table5-1.png",
"13-Table8-1.png",
"13-Table6-1.png",
"14-Table10-1.png",
"15-Table11-1.png"
]
} | [
"How do Zipf and Herdan-Heap's laws differ?"
] | [
[
"1911.08915-Introduction-0"
]
] | [
"Zipf's law describes change of word frequency rate, while Heaps-Herdan describes different word number in large texts (assumed that Hepas-Herdan is consequence of Zipf's)"
] | 723 |
2004.04696 | BLEURT: Learning Robust Metrics for Text Generation | Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution. | {
"paragraphs": [
[
"In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7 dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.",
"Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment.",
"The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17.",
"Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., as token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammatically, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.",
"And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.",
"Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.",
"To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain."
],
[
"Define $= (x_1,..,x_{r})$ to be the reference sentence of length $r$ where each $x_i$ is a token and let $\\tilde{} = (\\tilde{x}_1,..,\\tilde{x}_{p})$ be a prediction sentence of length $p$. Let $\\lbrace (_i, \\tilde{}_i, y_i)\\rbrace _{n=1}^{N}$ be a training dataset of size $N$ where $y_i \\in [0, 1]$ is the human rating that indicates how good $\\tilde{}_i$ is with respect to $_i$. Given the training data, our goal is to learn a function $: (, \\tilde{}) \\rightarrow y$ that predicts the human rating."
],
[
"Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text. Given $$ and $\\tilde{}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors:",
"where $_{\\mathrm {[CLS]}}$ is the representation for the special $\\mathrm {[CLS]}$ token. As described by devlin2018bert, we add a linear layer on top of the $\\mathrm {[CLS]}$ vector to predict the rating:",
"where $$ and $$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data which typically numbers in a few thousand examples. We use the regression loss $\\ell _{\\textrm {supervised}} = \\frac{1}{N} \\sum _{n=1}^{N} \\Vert y_i - \\hat{y} \\Vert ^2 $.",
"Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift."
],
[
"The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of of synthetic reference-candidate pairs $(, \\tilde{})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data.",
"Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach."
],
[
"One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(, \\tilde{})$ by randomly perturbing 1.8 million segments $$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\\tilde{}$. Let us describe those techniques."
],
[
"BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix."
],
[
"We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserves semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations."
],
[
"We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors or NLG systems, e.g., void predictions, or sentence truncation."
],
[
"The next step is to augment each sentence pair $(, \\tilde{})$ with a set of pre-training signals $\\lbrace {\\tau }_k\\rbrace $, where ${\\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix."
],
[
"We create three signals ${\\tau _{\\text{BLEU}}}$, ${\\tau _{\\text{ROUGE}}}$, and ${\\tau _{\\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two)."
],
[
"The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(, \\tilde{})$, this training signal measures the probability that $\\tilde{}$ is a backtranslation of $$, $P(\\tilde{} | )$, normalized by the length of $\\tilde{}$. Let $P_{\\texttt {en}\\rightarrow \\texttt {fr}}(_{\\texttt {fr}} | )$ be a translation model that assigns probabilities to French sentences $_{\\texttt {fr}}$ conditioned on English sentences $$ and let $P_{\\texttt {fr}\\rightarrow \\texttt {en}}(| _{\\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given french sentences. If $|\\tilde{}|$ is the number of tokens in $\\tilde{}$, we define our score as $ {\\tau }_{\\text{en-fr}, \\tilde{} \\mid } = \\frac{\\log P(\\tilde{} | )}{|\\tilde{}|}$, with:",
"Because computing the summation over all possible French sentences is intractable, we approximate the sum using $_{\\texttt {fr}}^\\ast = P_{\\texttt {en}\\rightarrow \\texttt {fr}} (_{\\texttt {fr}} | )$ and we assume that $P_{\\texttt {en}\\rightarrow \\texttt {fr}}(_{\\texttt {fr}}^\\ast | ) \\approx 1$:",
"We can trivially reverse the procedure to compute $P(| \\tilde{})$, thus we create 4 pre-training signals ${\\tau }_{\\text{en-fr}, \\mid \\tilde{}}$, ${\\tau }_{\\text{en-fr}, \\tilde{} \\mid }$, ${\\tau }_{\\text{en-de}, \\mid \\tilde{}}$, ${\\tau }_{\\text{en-de}, \\tilde{} \\mid }$ with two pairs of languages ($\\texttt {en}\\leftrightarrow \\texttt {de}$ and $\\texttt {en}\\leftrightarrow \\texttt {fr}$) in both directions."
],
[
"The signal ${\\tau }_\\text{entail}$ expresses whether $$ entails or contradicts $\\tilde{}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23."
],
[
"The signal ${\\tau }_\\text{backtran\\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling."
],
[
"For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum.",
"Let ${\\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If ${\\tau }_k$ is a regression task, then the loss used is the $\\ell _2$ loss i.e. $\\ell _k = \\Vert {\\tau }_k - \\hat{{\\tau }}_k \\Vert _2^2 / |{\\tau }_k|$ where $|{\\tau }_k|$ is the dimension of ${\\tau }_k$ and $\\hat{{\\tau }}_k$ is computed by using a task-specific linear layer on top of the $\\textrm {[CLS]}$ embedding: $\\hat{{\\tau }}_k = _{\\tau _k} \\tilde{}_{\\textrm {[CLS]}} + _{\\tau _k}$. If ${\\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\\hat{{\\tau }}_{kc} = _{\\tau _{kc}} \\tilde{}_{\\textrm {[CLS]}} + _{\\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows: pre-training = 1M m=1M k=1K k k(km, km) where ${\\tau }_k^m$ is the target vector for example $m$, $M$ is number of synthetic examples, and $\\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix)."
],
[
"In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments."
],
[
"Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix."
],
[
"We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.",
"We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables."
],
[
"We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors."
],
[
"Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.",
"Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the art performance for all years of the WMT Metrics Shared task."
],
[
"We assess our claim that pre-training makes Bleurt robust to quality drifts, by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable."
],
[
"We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\\alpha $, that measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\\alpha $ increases: in the most extreme case ($\\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.",
"We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore."
],
[
"Figure FIGREF25 presents Bleurt's performance as we vary the train and test skew independently. Our first observation is that the agreements fall for all metrics as we increase the test skew. This effect was already described is the 2019 WMT Metrics report BIBREF11. A common explanation is that the task gets more difficult as the ratings get closer—it is easier to discriminate between “good” and “bad” systems than to rank “good” systems.",
"Training skew has a disastrous effect on Bleurt without pre-training: it is below BERTscore for $\\alpha =1.0$, and it falls under sentBLEU for $\\alpha \\ge 1.5$. Pre-trained Bleurt is much more robust: the only case in which it falls under the baselines is $\\alpha =3.0$, the most extreme drift, for which incorrect translations are used for train while excellent ones for test."
],
[
"Pre-training makes BLEURT significantly more robust to quality drifts."
],
[
"In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data."
],
[
"The WebNLG challenge benchmarks systems that produce natural language description of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on both the evaluated systems or the RDF inputs in order to test different generalization regimes."
],
[
"BLEURT -pre -wmt, is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmtwas first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28.",
"We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison."
],
[
"Figure FIGREF26 presents the correlation of the metrics with human assessments as we vary the share of data allocated to training. The more pre-trained Bleurt is, the quicker it adapts. The vanilla BERT approach BLEURT -pre -wmt requires about a third of the WebNLG data to dominate the baselines on the majority of tasks, and it still lags behind on semantics (split by system). In contrast, BLEURT -wmt is competitive with as little as 836 records, and Bleurt is comparable with BERTscore with zero fine-tuning."
],
[
"Thanks to pre-training, Bleurt can quickly adapt to the new tasks. Bleurt fine-tuned twice (first on synthetic data, then on WMT data) provides acceptable results on all tasks without training data."
],
[
"Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablating them degrades Bleurt). Oppositely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model."
],
[
"The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work.",
"There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre-training and extrapolation in the context of NLG.",
"Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research."
],
[
"We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metrics robust particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers."
],
[
"Thanks to Eunsol Choi, Nicholas FitzGerald, Jacob Devlin, and to the members of the Google AI Language team for the proof-reading, feedback, and suggestions. We also thank Madhavan Kidambi and Ming-Wei Chang, who implemented blank-filling with BERT."
],
[
"This section provides implementation details for some of the pre-training techniques described in the main paper."
],
[
"We use two masking strategies. The first strategy samples random words in the sentence and it replaces them with masks (one for each token). Thus, the masks are scattered across the sentence. The second strategy creates contiguous sequences: it samples a start position $s$, a length $l$ (uniformly distributed), and it masks all the tokens spanned by words between positions $s$ and $s+l$. In both cases, we use up to 15 masks per sentence. Instead of running the language model once and picking the most likely token at each position, we use beam search (the beam size 8 by default). This enforces consistency and avoids repeated sequences, e.g., “,,,”."
],
[
"Consider English and French. Given a forward translation model $P_{\\texttt {en}\\rightarrow \\texttt {fr}}(z_{\\texttt {fr}} | z_{\\texttt {en}})$ and backward translation model $P_{\\texttt {fr}\\rightarrow \\texttt {en}}(z_{\\texttt {en}} | z_{\\texttt {fr}})$, we generate $\\tilde{}$ as follows: = zen (Pfren(zen | zfr) ) where $z_{\\texttt {fr}}^\\ast = _{z_{\\texttt {fr}}} \\left( P_{\\texttt {fr}\\rightarrow \\texttt {en}}(z_{\\texttt {fr}} | z ) \\right)$. For the translations, we use a Transformer model BIBREF21, trained on English-German with the tensor2tensor framework."
],
[
"Given a synthetic example $(, \\tilde{})$ we generate a pair $(, \\tilde{}^{\\prime })$, by randomly dropping words from $\\tilde{}$. We draw the number of words to drop uniformly, up to the length of the sentence. We apply this transformation on about 30% of the data generated with the previous method."
],
[
"We set the weights $\\gamma _k$ with grid search, optimizing Bleurt's performance on WMT 17's validation set. To reduce the size of the grid, we make groups of pre-training tasks that share the same weights: $({\\tau }_{\\text{BLEU}}, {\\tau }_{\\text{ROUGE}}, {\\tau }_{\\text{BERTscore}})$, $({\\tau }_{\\text{en-fr}, z \\mid \\tilde{z}}, {\\tau }_{\\text{en-fr}, \\tilde{z} \\mid z}, {\\tau }_{\\text{en-de}, z \\mid \\tilde{z}}, {\\tau }_{\\text{en-de}, \\tilde{z} \\mid z})$, and $({\\tau }_{\\text{entail}}, {\\tau }_{\\text{backtran\\_flag}})$."
],
[
"We now provide additional details on the signals we uses for pre-training."
],
[
"As shown in the table, we use three types of signals: BLEU, ROUGE, and BERTscore. For BLEU, we used the original Moses sentenceBLEU implementation, using the Moses tokenizer and the default parameters. For ROUGE, we used the seq2seq implementation of ROUGE-N. We used a custom implementation of BERTscore, based on BERT-large uncased. ROUGE and BERTscore return three scores: precision, recall, and F-score. We use all three quantities."
],
[
"We compute all the losses using custom Transformer model BIBREF21, trained on two language pairs (English-French and English-German) with the tensor2tensor framework."
],
[
"We user BERT's public checkpoints with Adam (the default optimizer), learning rate 1e-5, and batch size 32. Unless specified otherwise, we use 800,00 training steps for pre-training and 40,000 steps for fine-tuning. We run training and evaluation in parallel: we run the evaluation every 1,500 steps and store the checkpoint that performs best on a held-out validation set (more details on the data splits and our choice of metrics in the following sections). We use Google Cloud TPUs v2 for learning, and Nvidia Tesla V100 accelerators for evaluation and test. Our code uses Tensorflow 1.15 and Python 2.7."
],
[
"The metrics used to compare the evaluation systems vary across the years. The organizers use Pearson's correlation on standardized human judgments across all segments in 2017, and a custom variant of Kendall's Tau named “DARR” on raw human judgments in 2018 and 2019. The latter metrics operates as follows. The organizers gather all the translations for the same reference segment, they enumerate all the possible pairs $(\\text{translation}_1, \\text{translation}_2)$, and they discard all the pairs which have a “similar” score (less than 25 points away on a 100 points scale). For each remaining pair, they then determine which translation is the best according both human judgment and the candidate metric. Let $|\\text{Concordant}|$ be the number of pairs on which the NLG metrics agree and $|\\text{Discordant}|$ be those on which they disagree, then the score is computed as follows:",
"The idea behind the 25 points filter is to make the evaluation more robust, since the judgments collected for WMT 2018 and 2019 are noisy. Kendall's Tau is identical, but it does not use the filter."
],
[
"To separate training and validation data, we set aside a fixed ratio of records in such a way that there is no “leak” between the datasets (i.e., train and validation records that share the same source). We use 10% of the data for validation for years 2017 and 2018, and 5% for year 2019. We report results for the models that yield the highest Kendall Tau across all records on validation data. The weights associated to each pretraining task (see our Modeling section) are set with grid search, using the train/validation setup of WMT 2017."
],
[
"we use three metrics: the Moses implementation of sentenceBLEU, BERTscore, and MoverScore, which are all available online. We run the Moses tokenizer on the reference and candidate segments before computing sentenceBLEU."
],
[
"We sample the training and test separately, as follows. We split the data in 10 bins of equal size. We then sample each record in the dataset with probabilities $\\frac{1}{B^\\alpha }$ and $\\frac{1}{(11-B)^\\alpha }$ for train and test respectively, where $B$ is the bin index of the record between 1 and 10, and $\\alpha $ is a predefined skew factor. The skew factor $\\alpha $ controls the drift: a value of 0 has no effect (the ratings are centered around 0), and value of 3.0 yields extreme differences. Note that the sizes of the datasets decrease as $\\alpha $ increases: we use 50.7%, 30.3%, 20.4%, and 11.9% of the original 5,344 training records for $\\alpha =0.5$, $1.0$, $1.5$, and $3.0$ respectively."
],
[
"To understand the relationship between pre-training time and downstream accuracy, we pre-train several versions of BLEURT and we fine-tune them on WMT17 data, varying the number of pre-training steps. Figure FIGREF60 presents the results. Most gains are obtained during the first 400,000 steps, that is, after about 2 epochs over our synthetic dataset."
]
],
"section_name": [
"Introduction",
"Preliminaries",
"Fine-Tuning BERT for Quality Evaluation",
"Pre-Training on Synthetic Data",
"Pre-Training on Synthetic Data ::: Generating Sentence Pairs",
"Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Mask-filling with BERT:",
"Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Backtranslation:",
"Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Dropping words:",
"Pre-Training on Synthetic Data ::: Pre-Training Signals",
"Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Automatic Metrics:",
"Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Backtranslation Likelihood:",
"Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Textual Entailment:",
"Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Backtranslation flag:",
"Pre-Training on Synthetic Data ::: Modeling",
"Experiments",
"Experiments ::: Our Models:",
"Experiments ::: WMT Metrics Shared Task ::: Datasets and Metrics:",
"Experiments ::: WMT Metrics Shared Task ::: Models:",
"Experiments ::: WMT Metrics Shared Task ::: Results:",
"Experiments ::: Robustness to Quality Drift",
"Experiments ::: Robustness to Quality Drift ::: Methodology:",
"Experiments ::: Robustness to Quality Drift ::: Results:",
"Experiments ::: Robustness to Quality Drift ::: Takeaways:",
"Experiments ::: WebNLG Experiments",
"Experiments ::: WebNLG Experiments ::: Dataset and Evaluation Tasks:",
"Experiments ::: WebNLG Experiments ::: Systems and Baselines:",
"Experiments ::: WebNLG Experiments ::: Results:",
"Experiments ::: WebNLG Experiments ::: Takeaways:",
"Experiments ::: Ablation Experiments",
"Related Work",
"Conclusion",
"Acknowledgments",
"Implementation Details of the Pre-Training Phase",
"Implementation Details of the Pre-Training Phase ::: Data Generation ::: Random Masking:",
"Implementation Details of the Pre-Training Phase ::: Data Generation ::: Backtranslation:",
"Implementation Details of the Pre-Training Phase ::: Data Generation ::: Word dropping:",
"Implementation Details of the Pre-Training Phase ::: Modeling ::: Setting the weights of the pre-training tasks:",
"Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks",
"Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks ::: Automatic Metrics:",
"Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks ::: Backtranslation Likelihood:",
"Experiments–Supplementary Material ::: Training Setup for All Experiments",
"Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Metrics.",
"Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Training setup.",
"Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Baselines.",
"Experiments–Supplementary Material ::: Robustness to Quality Drift ::: Data Re-sampling Methodology:",
"Experiments–Supplementary Material ::: Ablation Experiment–How Much Pre-Training Time is Necessary?"
]
} | {
"answers": [
{
"annotation_id": [
"887a218ad420aefc9707e6fd0633ee41599b5309"
],
"answer": [
{
"evidence": [
"One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(, \\tilde{})$ by randomly perturbing 1.8 million segments $$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\\tilde{}$. Let us describe those techniques."
],
"extractive_spans": [],
"free_form_answer": "Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation and randomly drop out",
"highlighted_evidence": [
" We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(, \\tilde{})$ by randomly perturbing 1.8 million segments $$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"How are the synthetic examples generated?"
],
"question_id": [
"3f5f74c39a560b5d916496e05641783c58af2c5d"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Our pre-training signals.",
"Table 2: Agreement with human ratings on the WMT17 Metrics Shared Task. The metrics are Kendall Tau (τ ) and the Pearson correlation (r, the official metric of the shared task), divided by 100.",
"Table 3: Agreement with human ratings on the WMT18 Metrics Shared Task. The metrics are Kendall Tau (τ ) and WMT’s Direct Assessment metrics divided by 100. The star * indicates results that are more than 0.2 percentage points away from the official WMT results (up to 0.4 percentage points away).",
"Table 4: Agreement with human ratings on the WMT19 Metrics Shared Task. The metrics are Kendall Tau (τ ) and WMT’s Direct Assessment metrics divided by 100. All the values reported for Yisi1 SRL and ESIM fall within 0.2 percentage of the official WMT results.",
"Figure 1: Distribution of the human ratings in the train/validation and test datasets for different skew factors.",
"Figure 2: Agreement between BLEURT and human ratings for different skew factors in train and test.",
"Figure 3: Absolute Kendall Tau of BLEU, Meteor, and BLEURT with human judgements on the WebNLG dataset, varying the size of the data used for training and validation.",
"Figure 4: Improvement in Kendall Tau on WMT 17 varying the pre-training tasks.",
"Figure 5: Improvement in Kendall Tau accuracy on all language pairs of the WMT Metrics Shared Task 2017, varying the number of pre-training steps. 0 steps corresponds to 0.555 Kendall Tau for BLEURTbase and 0.580 for BLEURT."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"12-Figure5-1.png"
]
} | [
"How are the synthetic examples generated?"
] | [
[
"2004.04696-Pre-Training on Synthetic Data ::: Generating Sentence Pairs-0"
]
] | [
"Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation and randomly drop out"
] | 725 |
1710.09340 | Non-Projective Dependency Parsing with Non-Local Transitions | We present a novel transition system, based on the Covington non-projective parser, introducing non-local transitions that can directly create arcs involving nodes to the left of the current focus positions. This avoids the need for long sequences of No-Arc transitions to create long-distance arcs, thus alleviating error propagation. The resulting parser outperforms the original version and achieves the best accuracy on the Stanford Dependencies conversion of the Penn Treebank among greedy transition-based algorithms. | {
"paragraphs": [
[
"Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency. They syntactically analyze a sentence by greedily applying transitions, which read it from left to right and produce a dependency tree.",
"However, this greedy process is prone to error propagation: one wrong choice of transition can lead the parser to an erroneous state, causing more incorrect decisions. This is especially crucial for long attachments requiring a larger number of transitions. In addition, transition-based parsers traditionally focus on only two words of the sentence and their local context to choose the next transition. The lack of a global perspective favors the presence of errors when creating arcs involving multiple transitions. As expected, transition-based parsers build short arcs more accurately than long ones BIBREF0 .",
"Previous research such as BIBREF1 and BIBREF2 proves that the widely-used projective arc-eager transition-based parser of Nivre2003 benefits from shortening the length of transition sequences by creating non-local attachments. In particular, they augmented the original transition system with new actions whose behavior entails more than one arc-eager transition and involves a context beyond the traditional two focus words. attardi06 and sartorio13 also extended the arc-standard transition-based algorithm BIBREF3 with the same success.",
"In the same vein, we present a novel unrestricted non-projective transition system based on the well-known algorithm by covington01fundamental that shortens the transition sequence necessary to parse a given sentence by the original algorithm, which becomes linear instead of quadratic with respect to sentence length. To achieve that, we propose new transitions that affect non-local words and are equivalent to one or more Covington actions, in a similar way to the transitions defined by Qi2017 based on the arc-eager parser. Experiments show that this novel variant significantly outperforms the original one in all datasets tested, and achieves the best reported accuracy for a greedy dependency parser on the Stanford Dependencies conversion of the WSJ Penn Treebank."
],
[
"The original non-projective parser defined by covington01fundamental was modelled under the transition-based parsing framework by Nivre2008. We only sketch this transition system briefly for space reasons, and refer to BIBREF4 for details.",
"Parser configurations have the form INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are lists of partially processed words, INLINEFORM3 a list (called buffer) of unprocessed words, and INLINEFORM4 the set of dependency arcs built so far. Given an input string INLINEFORM5 , the parser starts at the initial configuration INLINEFORM6 and runs transitions until a terminal configuration of the form INLINEFORM7 is reached: at that point, INLINEFORM8 contains the dependency graph for the input.",
"The set of transitions is shown in the top half of Figure FIGREF1 . Their logic can be summarized as follows: when in a configuration of the form INLINEFORM0 , the parser has the chance to create a dependency involving words INLINEFORM1 and INLINEFORM2 , which we will call left and right focus words of that configuration. The INLINEFORM3 and INLINEFORM4 transitions are used to create a leftward ( INLINEFORM5 ) or rightward arc ( INLINEFORM6 ), respectively, between these words, and also move INLINEFORM7 from INLINEFORM8 to the first position of INLINEFORM9 , effectively moving the focus to INLINEFORM10 and INLINEFORM11 . If no dependency is desired between the focus words, the INLINEFORM12 transition makes the same modification of INLINEFORM13 and INLINEFORM14 , but without building any arc. Finally, the INLINEFORM15 transition moves the whole content of the list INLINEFORM16 plus INLINEFORM17 to INLINEFORM18 when no more attachments are pending between INLINEFORM19 and the words of INLINEFORM20 , thus reading a new input word and placing the focus on INLINEFORM21 and INLINEFORM22 . Transitions that create arcs are disallowed in configurations where this would violate the single-head or acyclicity constraints (cycles and nodes with multiple heads are not allowed in the dependency graph). Figure FIGREF4 shows the transition sequence in the Covington transition system which derives the dependency graph in Figure FIGREF3 .",
"The resulting parser can generate arbitrary non-projective trees, and its complexity is INLINEFORM0 ."
],
[
"The original logic described by covington01fundamental parses a sentence by systematically traversing every pair of words. The INLINEFORM0 transition, introduced by Nivre2008 in the transition-based version, is an optimization that avoids the need to apply a sequence of INLINEFORM1 transitions to empty the list INLINEFORM2 before reading a new input word.",
"However, there are still situations where sequences of INLINEFORM0 transitions are needed. For example, if we are in a configuration INLINEFORM1 with focus words INLINEFORM2 and INLINEFORM3 and the next arc we need to create goes from INLINEFORM4 to INLINEFORM5 INLINEFORM6 , then we will need INLINEFORM7 consecutive INLINEFORM8 transitions to move the left focus word to INLINEFORM9 and then apply INLINEFORM10 . This could be avoided if a non-local INLINEFORM11 transition could be undertaken directly at INLINEFORM12 , creating the required arc and moving INLINEFORM13 words to INLINEFORM14 at once. The advantage of such approach would be twofold: (1) less risk of making a mistake at INLINEFORM15 due to considering a limited local context, and (2) shorter transition sequence, alleviating error propagation.",
"We present a novel transition system called NL-Covington (for “non-local Covington”), described in the bottom half of Figure FIGREF1 . It consists in a modification of the non-projective Covington algorithm where: (1) the INLINEFORM0 and INLINEFORM1 transitions are parameterized with INLINEFORM2 , allowing the immediate creation of any attachment between INLINEFORM3 and the INLINEFORM4 th leftmost word in INLINEFORM5 and moving INLINEFORM6 words to INLINEFORM7 at once, and (2) the INLINEFORM8 transition is removed since it is no longer necessary.",
"This new transition system can use some restricted global information to build non-local dependencies and, consequently, reduce the number of transitions needed to parse the input. For instance, as presented in Figure FIGREF5 , the NL-Covington parser will need 9 transitions, instead of 12 traditional Covington actions, to analyze the sentence in Figure FIGREF3 .",
"In fact, while in the standard Covington algorithm a transition sequence for a sentence of length INLINEFORM0 has length INLINEFORM1 in the worst case (if all nodes are connected to the first node, then we need to traverse every node to the left of each right focus word); for NL-Covington the sequence length is always INLINEFORM2 : one INLINEFORM3 transition for each of the INLINEFORM4 words, plus one arc-building transition for each of the INLINEFORM5 arcs in the dependency tree. Note, however, that this does not affect the parser's time complexity, which is still quadratic as in the original Covington parser. This is because the algorithm has INLINEFORM6 possible transitions to be scored at each configuration, while the original Covington has INLINEFORM7 transitions due to being limited to creating local leftward/rightward arcs between the focus words.",
"The completeness and soundness of NL-Covington can easily be proved as there is a mapping between transition sequences of both parsers, where a sequence of INLINEFORM0 INLINEFORM1 and one arc transition in Covington is equivalent to a INLINEFORM2 or INLINEFORM3 in NL-Covington."
],
[
"We use 9 datasets from the CoNLL-X BIBREF5 and all datasets from the CoNLL-XI shared task BIBREF6 . To compare our system to the current state-of-the-art transition-based parsers, we also evaluate it on the Stanford Dependencies BIBREF7 conversion (using the Stanford parser v3.3.0) of the WSJ Penn Treebank BIBREF8 , hereinafter PT-SD, with standard splits. Labelled and Unlabelled Attachment Scores (LAS and UAS) are computed excluding punctuation only on the PT-SD, for comparability. We repeat each experiment with three independent random initializations and report the average accuracy. Statistical significance is assessed by a paired test with 10,000 bootstrap samples."
],
[
"To implement our approach we take advantage of the model architecture described in Qi2017 for the arc-swift parser, which extends the architecture of Kiperwasser2016 by applying a biaffine combination during the featurization process. We implement both the Covington and NL-Covington parsers under this architecture, adapt the featurization process with biaffine combination of Qi2017 to these parsers, and use their same training setup. More details about these model parameters are provided in Appendix SECREF6 .",
"Since this architecture uses batch training, we train with a static oracle. The NL-Covington algorithm has no spurious ambiguity at all, so there is only one possible static oracle: canonical transition sequences are generated by choosing the transition that builds the shortest pending gold arc involving the current right focus word INLINEFORM0 , or INLINEFORM1 if there are no unbuilt gold arcs involving INLINEFORM2 .",
"We note that a dynamic oracle can be obtained for the NL-Covington parser by adapting the one for standard Covington of GomFerACL2015. As NL-Covington transitions are concatenations of Covington ones, their loss calculation algorithm is compatible with NL-Covington. Apart from error exploration, this also opens the way to incorporating non-monotonicity BIBREF9 . While these approaches have shown to improve accuracy under online training settings, here we prioritize homogeneous comparability to BIBREF2 , so we use batch training and a static oracle, and still obtain state-of-the-art accuracy for a greedy parser."
],
[
"Table TABREF10 presents a comparison between the Covington parser and the novel variant developed here. The NL-Covington parser outperforms the original version in all datasets tested, with all improvements statistically significant ( INLINEFORM0 ).",
"Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).",
"We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word INLINEFORM0 and any word in INLINEFORM1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, ( INLINEFORM2 ), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.",
"We also compare NL-Covington to the arc-swift parser on the CoNLL datasets (Table TABREF15 ). For fairness of comparison, we projectivize (via maltparser) all training datasets, instead of filtering non-projective sentences, as some of the languages are significantly non-projective. Even doing that, the NL-Covington parser improves over the arc-swift system in terms of UAS in 14 out of 19 datasets, obtaining statistically significant improvements in accuracy on 7 of them, and statistically significant decreases in just one.",
"Finally, we analyze how our approach reduces the length of the transition sequence consumed by the original Covington parser. In Table TABREF16 we report the transition sequence length per sentence used by the Covington and the NL-Covington algorithms to analyze each dataset from the same benchmark used for evaluating parsing accuracy. As seen in the table, NL-Covington produces notably shorter transition sequences than Covington, with a reduction close to 50% on average."
],
[
"We present a novel variant of the non-projective Covington transition-based parser by incorporating non-local transitions, reducing the length of transition sequences from INLINEFORM0 to INLINEFORM1 . This system clearly outperforms the original Covington parser and achieves the highest accuracy on the WSJ Penn Treebank (Stanford Dependencies) obtained to date with greedy dependency parsing."
],
[
"This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC (FFI2014-51978-C2-2-R) and ANSWER-ASAP (TIN2017-85160-C2-1-R) projects from MINECO, and from Xunta de Galicia (ED431B 2017/01)."
],
[
"We provide more details of the neural network architecture used in this paper, which is taken from Qi2017.",
"The model consists of two blocks of 2-layered bidirectional long short-term memory (BiLSTM) networks BIBREF23 with 400 hidden units in each direction. The first block is used for POS tagging and the second one, for parsing. As the input of the tagging block, we use words represented as word embeddings, and BiLSTMs are employed to perform feature extraction. The resulting output is fed into a multi-layer perceptron (MLP), with a hidden layer of 100 rectified linear units (ReLU), that provides a POS tag for each input token in a 32-dimensional representation. Word embeddings concatenated to these POS tag embeddings serve as input of the second block of BiLSTMs to undertake the parsing stage. Then, the output of the parsing block is fed into a MLP with two separate ReLU hidden layers (one for deriving the representation of the head, and the other for the dependency label) that, after being merged and by means of a softmax function, score all the feasible transitions, allowing to greedily choose and apply the highest-scoring one.",
"Moreover, we adapt the featurization process with biaffine combination described in Qi2017 for the arc-swift system to be used on the original Covington and NL-Covington parsers. In particular, arc transitions are featurized by the concatenation of the representation of the head and dependent words of the arc to be created, the INLINEFORM0 transition is featurized by the rightmost word in INLINEFORM1 and the leftmost word in the buffer INLINEFORM2 and, finally, for the INLINEFORM3 transition only the leftmost word in INLINEFORM4 is used. Unlike Qi2017 do for baseline parsers, we do not use the featurization method detailed in Kiperwasser2016 for the original Covington parser, as we observed that this results in lower scores and then the comparison would be unfair in our case. We implement both systems under the same framework, with the original Covington parser represented as the NL-Covington system plus the INLINEFORM5 transition and with INLINEFORM6 limited to 1. A thorough description of the model architecture and featurization mechanism can be found in Qi2017.",
"Our training setup is exactly the same used by Qi2017, training the models during 10 epochs for large datasets and 30 for small ones. In addition, we initialize word embeddings with 100-dimensional GloVe vectors BIBREF25 for English and use 300-dimensional Facebook vectors BIBREF20 for other languages. The other parameters of the neural network keep the same values.",
"The parser's source code is freely available at https://github.com/danifg/Non-Local-Covington."
]
],
"section_name": [
"Introduction",
"Non-Projective Covington Parser",
"Non-Projective NL-Covington Parser",
"Data and Evaluation",
"Model",
"Results",
"Conclusion",
"Acknowledgments",
"Model Details"
]
} | {
"answers": [
{
"annotation_id": [
"892adb2dc3fc6a68bc3144aaf4c20c068294207c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cd9882f929370639c2ee526c360ac89edf9b736c"
],
"answer": [
{
"evidence": [
"Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).",
"We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word INLINEFORM0 and any word in INLINEFORM1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, ( INLINEFORM2 ), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.",
"FLOAT SELECTED: Table 2: Accuracy comparison of state-of-theart transition-based dependency parsers on PT-SD. The “Type” column shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies."
],
"extractive_spans": [],
"free_form_answer": "Proposed method achieves 94.5 UAS and 92.4 LAS compared to 94.3 and 92.2 of best state-of-the -art greedy based parser. Best state-of-the art parser overall achieves 95.8 UAS and 94.6 LAS.",
"highlighted_evidence": [
"Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).\n\nWe even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead.",
"FLOAT SELECTED: Table 2: Accuracy comparison of state-of-theart transition-based dependency parsers on PT-SD. The “Type” column shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Do they measure the number of created No-Arc long sequences?",
"By how much does the new parser outperform the current state-of-the-art?"
],
"question_id": [
"07f5e360e91b99aa2ed0284d7d6688335ed53778",
"11dde2be9a69a025f2fc29ce647201fb5a4df580"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Transitions of the non-projective Covington (top) and NL-Covington (bottom) dependency parsers. The notation i→∗ j ∈ A means that there is a (possibly empty) directed path from i to j in A.",
"Figure 2: Dependency tree for an input sentence.",
"Figure 3: Transition sequence for parsing the sentence in Figure 2 using the Covington parser (LA=LEFT-ARC, RA=RIGHT-ARC, NA=NO-ARC, SH=SHIFT).",
"Figure 4: Transition sequence for parsing the sentence in Figure 2 using the NL-Covington parser (LA=LEFT-ARC, RA=RIGHT-ARC, SH=SHIFT).",
"Table 2: Accuracy comparison of state-of-theart transition-based dependency parsers on PT-SD. The “Type” column shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies.",
"Table 1: Parsing accuracy (UAS and LAS, including punctuation) of the Covington and NLCovington non-projective parsers on CoNLL-XI (first block) and CoNLL-X (second block) datasets. Best results for each language are shown in bold. All improvements in this table are statistically significant (α = .05).",
"Table 3: Parsing accuracy (UAS and LAS, with punctuation) of the arc-swift and NL-Covington parsers on CoNLL-XI (1st block) and CoNLL-X (2nd block) datasets. Best results for each language are in bold. * indicates statistically significant improvements (α = .05).",
"Table 4: Average transitions executed per sentence (trans./sent.) when analyzing each dataset by the original Covington and NL-Covington algorithms."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"2-Figure3-1.png",
"3-Figure4-1.png",
"4-Table2-1.png",
"4-Table1-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"By how much does the new parser outperform the current state-of-the-art?"
] | [
[
"1710.09340-Results-2",
"1710.09340-Results-1",
"1710.09340-4-Table2-1.png"
]
] | [
"Proposed method achieves 94.5 UAS and 92.4 LAS compared to 94.3 and 92.2 of best state-of-the -art greedy based parser. Best state-of-the art parser overall achieves 95.8 UAS and 94.6 LAS."
] | 726 |
2003.04967 | KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments | Cryptocurrencies, such as Bitcoin, are becoming increasingly popular, having been widely used as an exchange medium in areas such as financial transaction and asset transfer verification. However, there has been a lack of solutions that can support real-time price prediction to cope with high currency volatility, handle massive heterogeneous data volumes, including social media sentiments, while supporting fault tolerance and persistence in real time, and provide real-time adaptation of learning algorithms to cope with new price and sentiment data. In this paper we introduce KryptoOracle, a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform is based on (i) a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way; (ii) an approach that supports sentiment analysis which can respond to large amounts of natural language processing queries in real time; and (iii) a predictive method grounded on online learning in which a model adapts its weights to cope with new prices and sentiments. Besides providing an architectural design, the paper also describes the KryptoOracle platform implementation and experimental evaluation. Overall, the proposed platform can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety. | {
"paragraphs": [
[
"A cryptocurrency is a digital currency designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets. They are based on decentralized systems built on block-chain technology, a distributed ledger enforced by a disparate network of computers BIBREF0. The first decentralized cryptocurrency, Bitcoin, was released as open-source software in 2009. After this release, approximately 4000 altcoins (other cryptocurrencies) have been released. As of August 2019, the total market capitalization of cryptocurrencies is $258 billion, where Bitcoin alone has a market capitalization of $179 billion BIBREF1.",
"Considering the huge market value of these currencies, they have attracted significant attention, where some people consider them as actual currencies and others as investment opportunities. This has resulted in large fluctuations in their prices. For instance in 2017 the value of Bitcoin increased approximately 2000% from $863 on January 9, 2017 to a high of $17,900 on December 15, 2017. However, eight weeks later, on February 5, 2018, the price had been more than halved to a value of just $6200 BIBREF2.",
"This high volatility in the value of cryptocurrencies means there is uncertainty for both investors, and for people who intend to use them as an actual currency. Cryptocurrency prices do not behave as traditional currencies and, therefore, it is difficult to determine what leads to this volatility. This in turn makes it a challenge to correctly predict the future prices of any cryptocurrency. To predict these prices, huge heterogeneous data volumes need to be collected from various sources such as blogs, IRC channels and social media. Especially, tweets from highly influential people and mass has significant effects on the price of cryptocurrency BIBREF3. However, tweets need to be filtered and their sentiments need to be calculated in a timely fashion to help predict cryptocurrency prices in real time. Furthermore, real-time prediction also calls for real-time updating of learning algorithms, which introduces an additional difficulty. These challenges call for learning platforms based on big data architectures that can not only handle heterogeneous volumes of data but also be fault tolerant and persistent in real time.",
"In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety.",
"The rest of the paper is organized as follows. Section 2 discusses the related work proposed in the literature. Section 3 discusses the design and implementation of KryptoOracle in detail and includes the description of all of its sub-components. Section 4 presents an experimental evaluation, including experimental data, setup and results. Finally, section 5 concludes the paper and describes future work."
],
[
"In this section we present a brief review of the state of the art related to cryptocurrency price prediction. Related works can be divided into three main categories: (i) social media sentiments and financial markets (including cryptocurrency markets); (ii) machine learning for cryptocurrency price prediction; and (iii) big data platforms for financial market prediction.",
"The `prospect theory' framed by Daniel Kahneman and Amos Tversky presents that financial decisions are significantly influenced by risk and emotions, and not just the value alone BIBREF4. This is further reinforced by other works in economic psychology and decision making such as BIBREF5 which show that variations in feelings that are widely experienced by people, influence investor decision-making and, consequently, lead to predictable patterns in equity pricing. These insights, therefore, open the possibility to leverage techniques such as sentiment analysis to identify patterns that could affect the price of an entity.",
"Considering the emergence and ubiquity of media, especially social media, further works have explored how it effects user sentiment and therefore financial markets. Paul Tetlock in BIBREF6, explains how high media pessimism predicts downward pressure on market prices, and unusually high or low pessimism predicts high trading volume. Moreover, Gartner found in a study that majority of consumers use social networks to inform buying decisions BIBREF7. This insight has given rise to several research materials which have attempted to find correlations between media sentiments and different financial markets.",
"The authors in BIBREF8 retrieve, extract, and analyze the effects of news sentiments on the stock market. They develop a sentiment analysis dictionary for the financial sector leading to a dictionary-based sentiment analysis model. With this model trained only on news sentiments, the paper achieved a directional accuracy of 70.59% in predicting the trends in short-term stock price movement. The authors in BIBREF9 use the sentiment of message board comments to predict the stock movement. Unlike other approaches where the overall moods or sentiments are considered, this paper extracts the ‘topic-sentiment’ feature, which represents the sentiments of the specific topics of the company and uses that for stock forecasting. Using this method the accuracy average over 18 stocks in one year transactions, achieved 2.07% better performance than the model using historical prices only. Similarly, Alan Dennis and Lingyao Yuan collected valence scores on tweets about the companies in the S&P 500 and found that they correlated with stock prices BIBREF10. The authors in BIBREF11 used a self-organizing fuzzy neural network, with Twitter mood from sentiment as an input, to predict price changes in the DOW Jones Industrial average and achieved a 86.7% accuracy.",
"With the recent emergence of cryptocurrencies and the widespread investment in them, has motivated researchers to try to predict their price variations. The authors in BIBREF2 predict price fluctuations for three cryptocurrencies: Bitcoin, Litecoin and Ethereum. The news and social media data was labeled based on actual price changes one day in the future for each coin, rather than on positive or negative sentiment. By taking this approach, the model was able to directly predict price fluctuations instead of needing to first predict sentiment. Logistic regression worked best for Bitcoin predictions and the model was able to predict 43.9% of price increases and 61.9% of price decreases correctly. A work by Abhraham et al. uses Twitter sentiment and google trends data to predict the price of Bitcoin and Ethereum BIBREF12. The paper uses the tweet volume in addition to the Twitter sentiment to establish a correlation with cryptocurrency price.",
"KryptoOracle draws greatest inspiration from BIBREF13 and BIBREF14. Both works use Twitter sentiments to find correlation with Bitcoin prices. The tweets are cleaned of non-alphanumeric symbols and then processed with VADER (Valence Aware Dictionary and sEntiment Reasoner) to analyze the sentiment of each tweet and classify it as negative, neutral, or positive. The compound sentiment score is then used to establish correlation with the Bitcoin prices over different lag intervals. KryptoOracle builds on what has been discussed above but goes beyond to construct a prediction engine that forecasts Bitcoin prices at specified intervals.",
"Machine learning has also been employed directly for cryptocurrency price prediction. For instance, the authors in BIBREF15 contribute to the Bitcoin forecasting literature by testing auto-regressive integrated moving average (ARIMA) and neural network auto-regression (NNAR) models to forecast the daily price movement based only on the historical price points. Similarly the author in BIBREF16 presents a Neural Network framework to provide a deep machine learning solution to the cryptocurrency price prediction problem. The framework is realized in three instants with a Multi-layer Perceptron (MLP), a simple Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM), which can learn long dependencies. In contrast our prediction model in addition to considering the social media influence, also employs online learning to continuously learn from its mistakes and improve itself in the process.",
"Since our engine is designed to run for an indefinite amount of time and it continuously obtains real-time data, it is inevitable that this will lead to data storage concerns in the long run. Therefore, we treat our objective as a big data problem and employ big data tools to ensure scalability and performance. We take inspiration from BIBREF17 which uses Apache Spark and Hadoop HDFS to forecast stock market trends based on social media sentiment and historical price. Similarly, we leverage the performance of Apache Spark RDDs and the persistence of Apache Hive to build a solution that is fast, accurate and fault-tolerant. To our knowledge KryptoOracle is the first of its kind solution that provides an out of box solution for real-time cryptocurrency price forecasting based on Twitter sentiments while ensuring that the data volume does not become a bottle neck to its performance."
],
[
"KryptoOracle is an engine that aims at predicting the trends of any cryptocurrency based on the sentiment of the crowd. It does so by learning the correlation between the sentiments of relevant tweets and the real time price of the cryptocurrency. The engine bootstraps itself by first learning from the history given to it and starts predicting based on the previous correlation. KryptoOracle is also capable of reinforcing itself by the mistakes it makes and tries to improve itself at prediction. In addition, the engine supports trend visualization over time based on records of both incoming data and intermediate results. This engine has been built keeping in mind the increasing data volume, velocity and variety that has been made available and is therefore able to scale and manage high volumes of heterogeneous data.",
"KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark applications run as separate processes on different clusters and are coordinated by the Spark object also referred to as the SparkContext. This element is the main driver of the program which connects with the cluster manager and helps acquire executors on different nodes to allocate resource across applications. Spark is highly scalable, being 100x faster than Hadoop on large datasets, and provides out of the box libraries for both streaming and machine learning."
],
[
"The growth of the volume of data inspired us to opt for a big data architecture which can not only handle the prediction algorithms but also the streaming and increasing volume of data in a fault tolerant way.",
"Figure FIGREF2 gives an overview of the architecture design. Central to this design is Apache Spark which acts as an in-memory data store and allows us to perform computations in a scalable manner. This data is the input to our machine learning model for making predictions. To bootstrap our model, we first gather a few days of data and store that in Apache Spark RDDs. Next, we perform computations to construct features from the raw data. All these computations are performed on data that is distributed across multiple Spark clusters and therefore will scale as the data grows continuously.",
"Once the machine learning model has been bootstrapped, we commence data streaming to get real-time data related to both the social media (in our case, Twitter) and the cryptocurrency. Similar computations are performed on this data to calculate the features and then this new data-point is used to get a future prediction from the model. This computed data-point is then appended to the already existing data in Spark RDDs, obtained from the bootstrap data. Therefore, in addition to making predictions we also keep expanding our data store which allows us to extract holistic visualizations from the data regarding the cryptocurrency market trend and how our own predictions capture that. Moreover, as we discuss later the new data-points are also used to retrain our model.",
"An important property of this architecture is the persistence of the data and the model. The machine learning model persists itself by storing its weights to disk and loading from it while retraining or reinforcing itself to learn from mistakes. The tweets and cryptocurrency training data is also stored in Apache Hive which provides data warehousing support to read, write and manage distributed datasets directly from disk. This persistence technique helps the whole platform to reset itself without omissions in real time.",
"Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state."
],
[
"In KryptoOracle we focus on sentiment analysis on a document level where each tweet is considered as a single document and we intend to determine its sentiment score. In general, there are primarily two main approaches for sentiment analysis: machine learning-based and lexicon-based. Machine learning-based approaches use classification techniques to classify text, while lexicon-based methods use a sentiment dictionary with opinion words and match them with the data to determine polarity. They assign sentiment scores to the opinion words describing how positive or negative the words contained in the dictionary are BIBREF18. Machine learning-based approaches are inherently supervised and require an adequately large training set for the model to learn the differentiating characteristics of the text corpus. In this paper we choose to forego this training aspect in favour of using a lexicon-based approach. This is because our objective is not to innovate in the natural language processing domain but instead to establish a scalable architecture that is able to capture the relationship between social media sources and financial markets, specifically in the context of the cryptocurrency market.",
"To measure the sentiment of each tweet VADER (Valence Aware Dictionary and sEntiment Reasoner) is used BIBREF19. VADER is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media. When given a text corpus, VADER outputs three valence scores for each sentiment i.e. positive, negative and neutral. A fourth compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized to be between -1 (extreme negative) and +1 (extreme positive). To summarize, it is a normalized, weighted composite score. This is the most useful metric for us since it provides a single uni-dimensional measure of sentiment for a given tweet. Therefore, we capture the sentiment of each tweet using the compound score.",
"However, this score is not the final metric that we use to build our machine learning model. It is quite intuitive that tweets belonging to influential personalities should be assigned more weight since they will have a more significant impact on the price of any cryptocurrency. To capture this relationship the compound score is multiplied by the poster's follower count, the number of likes on the tweet and the retweet count. The final score is calculated with the following equation:",
"The +1 to both the RetweetCount and Likes ensures that the final score does not become zero if there are no likes or re-tweets for the tweet in subject. UserFollowerCount does not have +1 to filter out the numerous bots on Twitter which flood crytocurrency forums. We further normalize the score by taking the root of the final score and multiplying by -1 if the score is negative. This final score belongs to a single tweet and since our prediction scope is for a certain time frame, we sum up all the normalized scores for the different tweets received during that time frame. This summed up score is then used as one of the features for our model to predict the cryptocurrency price for the future time frame."
],
[
"An important element of our architecture is the machine learning model, trained to capture the correlation between social media sentiment and a certain metric of the financial market, in our case, the price of cryptocurrency. An essential characteristic of the model is that it should be able to continuously evolve and adjust its weights according to the ever-changing social media sentiments and the volatile cryptocurrency market. We discuss later how we incorporate this in our model design. However, it is worth mentioning that our problem deals with structured data with features related to the social media sentiments and primitive or computed metrics of the cryptocurrency market.",
"In prediction problems involving unstructured data, ANNs (Artificial Neural Networks) tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured/tabular data like in our case, decision tree based algorithms are currently considered best-in-class. Therefore, we experimented with a few techniques but then ultimately decided to use XGBoost BIBREF20 owing to its speed, performance and the quality of being easily re-trainable. XGBoost is under development and will be released to work in PySpark. Therefore, at this moment we choose to deploy the model outside of our Spark framework. For bootstrapping the model, historical data points are exported outside the Spark framework and used to train the model initially. After this, as new real-time data arrives it is processed to create a new data-point of the required features. This data-point is then also exported outside Spark and fed to the machine learning model to obtain a prediction for the future price.",
"To continuously improve the model we employ online learning. The model is saved to disk and after every prediction we wait for the actual price value to arrive. This actual price value is then used to retrain the model as shown in Figure FIGREF5, so that it can learn from the error between the value it had predicted earlier and the actual value that arrived later. In this way the model keeps readjusting its weights to stay up to date with the market trends."
],
[
"We used PySpark v2.3 in Jupyter notebooks with Python 2.7 kernels to code KryptoOracle. The entire source code was tested on a server instance on the SOSCIP cloud with 32 GB RAM, 8 CPUs and 120 GB HDD running on Ubuntu 18.04 over a period of 30 days. The data extraction and correlation codes were taken from “Correlation of Twitter sentiments with the evolution of cryptocurrencies,\" which is publicly availableBIBREF14. The data collected for this experiment was for the Bitcoin cryptocurrency."
],
[
"The data fed into KryptoOracle is primarily of two types, Twitter data which consists of tweets related to the cryptocurrency and the minutely cryptocurrency value.",
"Twitter data: We used the Twitter API to scrap tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minute and historical data up to 7 days. Throughout our project we collect data for almost 30 days. Bitcoin had about 25000 tweets per day amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, creation date and time was also stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags to feed in to the algorithm.",
"Cryptocurrency data: To obtain cryptocurrency data, the Cryptocompare API BIBREF21 was used. It provides a free API that provides the 7 day minutely values of any cryptocurrency. The data has several fields: time, open, close, high and low that correspond to the opening, closing, high and low values of the cryptocurrency in that particular time frame in USD.",
"After collecting all the data, we aligned all tweets and cryptocurrecy data by defined time windows of one minute and stored the resulting data into a training data RDD. This training data RDD was further processed as described in the later subsections and then fed into the machine learning algorithm. The same API and structure was also used to stream in real time to KryptoOracle."
],
[
"We started by collecting Twitter data with hashtags #Bitcoin and #BTC for a period of 14 days using Twython, a python library which uses Twitter API to extract tweets using relevant queries. The real time price of Bitcoin was also simultaneously collected using the crytocompare API. The Twitter data was cleaned to remove any hashtags, links, images and videos from the tweets. The sentiment score of each tweet was collected to get the scores as described in the previous section.",
"To analyze the data, we calculated the Spearman and Pearson correlation between the tweet scores and the Bitcoin prices as shown in Figure FIGREF13. The y-axis of the graphs denote the lag in minutes to see if there was any lag between the arrival of tweets and the Bitcoin prices. The trend of the tweet scores and the corresponding Bitcoin prices is captured in Figure FIGREF6. The hourly summed up Twitter sentiments and their corresponding mean bitcoin price for the hour have been plotted in the graph. It can be seen in the figure that some spikes in sentiment scores correspond directly or with some lag with the Bitcoin price. We also noticed that the volume of incoming streaming tweets in the time of a radical change increases, which results in higher cumulative score for the hour.",
"The bitcoin price and Twitter sentiment features were not enough to predict the next minute price as they did not capture the ongoing trend. It was therefore important that the historical price of the cryptocurrency was also incorporated in the features so as to get a better prediction for the future. We, therefore, performed some time series manipulation to engineer two new features for our model. The first feature was the Previous Close Price that captured the close price of the cryptocurrency in the previous time frame. The next feature was the Moving Average of Close Price. This feature was a rolling average of the last 100 time frame close prices and aimed to capture the pattern with which the price was constrained to change. A similar new third feature called Moving Average of Scores was designed to capture the rolling average of the last 100 scores. This new feature captured the past sentiment information. With these three additional features combined with the final sentiment score computed in the previous subsection, we got the final training data as shown in Figure FIGREF14.",
"Once the historical data was stored, all information was fed to the machine learning model. In our experiment, we stored historical data for a month but this can be easily extended as per user requirements.",
"Once the KryptoOracle engine was bootstrapped with historical data, the real time streamer was started. The real-time tweets scores were calculated in the same way as the historical data and summed up for a minute and sent to the machine learning model with the Bitcoin price in the previous minute and the rolling average price. It predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself to predict with better understanding the next value. All the calculated values were then stored back to the Spark training RDD for storage. The RDD persisted all the data while training and check-pointed itself to the Hive database after certain period of time.",
"We ran the engine for one day and got an overall root mean square (RMS) error of 10$ between the actual and the predicted price of Bitcoin. The results for RMS values can be seen below.",
"Figure FIGREF15 shows the RMS error (in USD) for a period of 5 hours at the end of our experiment. The visualization graph at the end of KryptoOracle can be seen in Figure FIGREF12 which captures the actual price of Bitcoin and the predicted price by KryptoOracle over the same period of 5 hours. The graph shows clearly how KryptoOracle has been able to correctly predict the bitcoin price ahead of 1 minute time. The engine clearly learns from the errors it makes and rewires itself to predict in real-time which can be seen from the adaptive nature of the predicted price graph."
],
[
"In this paper, we present a novel big data platform that can learn, predict and update itself in real time. We tested the engine on Twitter sentiments and cryptocurrency prices. We envision that this engine can be generalized to work on any real time changing market trend such as stock prices, loyalty towards product/company or even election results. Sentiments in real world can be extracted from not only tweets but also chats from IRC channels, news and other sources such as images and videos from YouTube or TV channels. This implies that the platform can be customized for tasks where the objective is to make predictions based on social media sentiments. In future, we plan to create a front-end for this system which can be used to visually capture the trend and also show historical aggregated data as per user input. Such a front-end could also allow the time window for prediction to be tweaked to predict prices for further ahead in time.",
"We understand that crytocurrency prices are influenced by a lot of factors which cannot be captured by Twitter sentiments. Supply and demand of the coin and interest of major investors are two major factors BIBREF22. To capture these factors one has to add more features to the training data with inferences from multiple sources such as news, political reforms and macro-financial external factors such as stocks, gold rates and exchange rates. While we performed our experiments, the crytocurrency values did not go through any major changes and thus this engine also needs to be tested with more adverse fluctuations. One way to capture fluctuations can be to trace back to the features that have gone through the major changes and adaptively assign them more weights while training the machine learning model.",
"There is also future work related to the machine learning part of the engine. The state of the art time series machine learning algorithms include the modern deep learning algorithms such as RNNs and LSTMs BIBREF23, but unfortunately Spark does not provide deep learning libraries yet. There are some plugins, such as Sparkflow, that facilitate neural network support, but work is also under way to provide Spark with such in-built deep learning support. Currently, Spark also does not have much streaming machine learning support, other than linear regression and linear classification. However, the advent of additional streaming algorithm support in Spark will certainly benefit engines such as KryptoOracle.",
""
]
],
"section_name": [
"Introduction",
"Related Work",
"KryptoOracle",
"KryptoOracle ::: Architecture",
"KryptoOracle ::: Sentiment Analysis",
"KryptoOracle ::: Machine Learning",
"Experimental Evaluation",
"Experimental Evaluation ::: Data",
"Experimental Evaluation ::: Procedure and Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"e8746427f0ecb2acd04fbcfe1dc7442966e8354a"
],
"answer": [
{
"evidence": [
"Twitter data: We used the Twitter API to scrap tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minute and historical data up to 7 days. Throughout our project we collect data for almost 30 days. Bitcoin had about 25000 tweets per day amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, creation date and time was also stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags to feed in to the algorithm."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Twitter data: We used the Twitter API to scrap tweets with hashtags.",
"All non-English tweets were filtered out by the API."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8a77341f4c3ed1067599e918ac3d1272ece52fb9"
],
"answer": [
{
"evidence": [
"Once the KryptoOracle engine was bootstrapped with historical data, the real time streamer was started. The real-time tweets scores were calculated in the same way as the historical data and summed up for a minute and sent to the machine learning model with the Bitcoin price in the previous minute and the rolling average price. It predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself to predict with better understanding the next value. All the calculated values were then stored back to the Spark training RDD for storage. The RDD persisted all the data while training and check-pointed itself to the Hive database after certain period of time."
],
"extractive_spans": [],
"free_form_answer": "root mean square error between the actual and the predicted price of Bitcoin for every minute",
"highlighted_evidence": [
"The real-time tweets scores were calculated in the same way as the historical data and summed up for a minute and sent to the machine learning model with the Bitcoin price in the previous minute and the rolling average price. It predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself to predict with better understanding the next value."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ccac06c32ae36445d24b4289738e8994d085bf62"
],
"answer": [
{
"evidence": [
"KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark applications run as separate processes on different clusters and are coordinated by the Spark object also referred to as the SparkContext. This element is the main driver of the program which connects with the cluster manager and helps acquire executors on different nodes to allocate resource across applications. Spark is highly scalable, being 100x faster than Hadoop on large datasets, and provides out of the box libraries for both streaming and machine learning.",
"Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state."
],
"extractive_spans": [],
"free_form_answer": "By using Apache Spark which stores all executions in a lineage graph and recovers to the previous steady state from any fault",
"highlighted_evidence": [
"KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. ",
"Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ff61799f0c73ca451fa1e74bb9a9f6c77748b0f8"
],
"answer": [
{
"evidence": [
"In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety."
],
"extractive_spans": [],
"free_form_answer": "handling large volume incoming data, sentiment analysis on tweets and predictive online learning",
"highlighted_evidence": [
"n this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"What experimental evaluation is used?",
"How is the architecture fault-tolerant?",
"Which elements of the platform are modular?"
],
"question_id": [
"bcce5eef9ddc345177b3c39c469b4f8934700f80",
"d3092f78bdbe7e741932e3ddf997e8db42fa044c",
"e2427f182d7cda24eb7197f7998a02bc80550f15",
"0457242fb2ec33446799de229ff37eaad9932f2a"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. KryptoOracle Architecture",
"Fig. 2. The XGBoost model is retrained on each iteration of the real time stream",
"Fig. 3. Sentiment scores and Bitcoin prices",
"Fig. 4. Number of tweets collected per day",
"Fig. 5. KryptoOracle’s predictions",
"Fig. 7. Machine learning Features",
"Fig. 6. Correlation graphs",
"Fig. 8. Error in the predicted and actual price measured over 5 hours"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"6-Figure7-1.png",
"6-Figure6-1.png",
"7-Figure8-1.png"
]
} | [
"What experimental evaluation is used?",
"How is the architecture fault-tolerant?",
"Which elements of the platform are modular?"
] | [
[
"2003.04967-Experimental Evaluation ::: Procedure and Results-4"
],
[
"2003.04967-KryptoOracle ::: Architecture-4",
"2003.04967-KryptoOracle-1"
],
[
"2003.04967-Introduction-3"
]
] | [
"root mean square error between the actual and the predicted price of Bitcoin for every minute",
"By using Apache Spark which stores all executions in a lineage graph and recovers to the previous steady state from any fault",
"handling large volume incoming data, sentiment analysis on tweets and predictive online learning"
] | 727 |
1906.05474 | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research in the development of pre-training language representations in the biomedicine domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts with different dataset sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and codes publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark. | {
"paragraphs": [
[
"With the growing amount of biomedical information available in textual form, there have been significant advances in the development of pre-training language representations that can be applied to a range of different tasks in the biomedical domain, such as pre-trained word embeddings, sentence embeddings, and contextual representations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .",
"In the general domain, we have recently observed that the General Language Understanding Evaluation (GLUE) benchmark BIBREF5 has been successfully promoting the development of language representations of general purpose BIBREF2 , BIBREF6 , BIBREF7 . To the best of our knowledge, however, there is no publicly available benchmarking in the biomedicine domain.",
"To facilitate research on language representations in the biomedicine domain, we present the Biomedical Language Understanding Evaluation (BLUE) benchmark, which consists of five different biomedicine text-mining tasks with ten corpora. Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks BIBREF8 . These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges. We expect that the models that perform better on all or most tasks in BLUE will address other biomedicine tasks more robustly.",
"To better understand the challenge posed by BLUE, we conduct experiments with two baselines: One makes use of the BERT model BIBREF7 and one makes use of ELMo BIBREF2 . Both are state-of-the-art language representation models and demonstrate promising results in NLP tasks of general purpose. We find that the BERT model pre-trained on PubMed abstracts BIBREF9 and MIMIC-III clinical notes BIBREF10 achieves the best results, and is significantly superior to other models in the clinical domain. This demonstrates the importance of pre-training among different text genres.",
"In summary, we offer: (i) five tasks with ten biomedical and clinical text-mining corpora with different sizes and levels of difficulty, (ii) codes for data construction and model evaluation for fair comparisons, (iii) pretrained BERT models on PubMed abstracts and MIMIC-III, and (iv) baseline results."
],
[
"There is a long history of using shared language representations to capture text semantics in biomedical text and data mining research. Such research utilizes a technique, termed transfer learning, whereby the language representations are pre-trained on large corpora and fine-tuned in a variety of downstream tasks, such as named entity recognition and relation extraction.",
"One established trend is a form of word embeddings that represent the semantic, using high dimensional vectors BIBREF0 , BIBREF11 , BIBREF12 . Similar methods also have been derived to improve embeddings of word sequences by introducing sentence embeddings BIBREF1 . They always, however, require complicated neural networks to be effectively used in downstream applications.",
"Another popular trend, especially in recent years, is the context-dependent representation. Different from word embeddings, it allows the meaning of a word to change according to the context in which it is used BIBREF13 , BIBREF2 , BIBREF7 , BIBREF14 . In the scientific domain, BIBREF15 released SciBERT which is trained on scientific text. In the biomedical domain, BioBERT BIBREF3 and BioELMo BIBREF16 were pre-trained and applied to several specific tasks. In the clinical domain, BIBREF17 released a clinical BERT base model trained on the MIMIC-III database. Most of these works, however, were evaluated on either different datasets or the same dataset with slightly different sizes of examples. This makes it challenging to fairly compare various language models.",
"Based on these reasons, a standard benchmarking is urgently required. Parallel to our work, BIBREF3 introduced three tasks: named entity recognition, relation extraction, and QA, while BIBREF16 introduced NLI in addition to named entity recognition. To this end, we deem that BLUE is different in three ways. First, BLUE is selected to cover a diverse range of text genres, including both biomedical and clinical domains. Second, BLUE goes beyond sentence or sentence pairs by including document classification tasks. Third, BLUE provides a comprehensive suite of codes to reconstruct dataset from scratch without removing any instances."
],
[
"BLUE contains five tasks with ten corpora that cover a broad range of data quantities and difficulties (Table 1 ). Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks."
],
[
"The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.",
"BIOSSES is a corpus of sentence pairs selected from the Biomedical Summarization Track Training Dataset in the biomedical domain BIBREF18 . To develop BIOSSES, five curators judged their similarity, using scores that ranged from 0 (no relation) to 4 (equivalent). Here, we randomly select 80% for training and 20% for testing because there is no standard splits in the released data.",
"MedSTS is a corpus of sentence pairs selected from Mayo Clinic’s clinical data warehouse BIBREF19 . To develop MedSTS, two medical experts graded the sentence's semantic similarity scores from 0 to 5 (low to high similarity). We use the standard training and testing sets in the shared task."
],
[
"The aim of the named entity recognition task is to predict mention spans given in the text BIBREF20 . The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans also must be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.",
"BC5CDR is a collection of 1,500 PubMed titles and abstracts selected from the CTD-Pfizer corpus and was used in the BioCreative V chemical-disease relation task BIBREF21 . The diseases and chemicals mentioned in the articles were annotated independently by two human experts with medical training and curation experience. We use the standard training and test set in the BC5CDR shared task BIBREF22 .",
"ShARe/CLEF eHealth Task 1 Corpus is a collection of 299 deidentified clinical free-text notes from the MIMIC II database BIBREF23 . The disorders mentioned in the clinical notes were annotated by two professionally trained annotators, followed by an adjudication step, resulting in high inter-annotator agreement. We use the standard training and test set in the ShARe/CLEF eHealth Tasks 1."
],
[
"The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The relations with types were compared to annotated data. We use the standard micro-average precision, recall, and F1-score metrics.",
"DDI extraction 2013 corpus is a collection of 792 texts selected from the DrugBank database and other 233 Medline abstracts BIBREF24 . The drug-drug interactions, including both pharmacokinetic and pharmacodynamic interactions, were annotated by two expert pharmacists with a substantial background in pharmacovigilance. In our benchmark, we use 624 train files and 191 test files to evaluate the performance and report the micro-average F1-score of the four DDI types.",
"ChemProt consists of 1,820 PubMed abstracts with chemical-protein interactions annotated by domain experts and was used in the BioCreative VI text mining chemical-protein interactions shared task BIBREF25 . We use the standard training and test sets in the ChemProt shared task and evaluate the same five classes: CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9.",
"i2b2 2010 shared task collection consists of 170 documents for training and 256 documents for testing, which is the subset of the original dataset BIBREF26 . The dataset was collected from three different hospitals and was annotated by medical practitioners for eight types of relations between problems and treatments."
],
[
"The multilabel classification task predicts multiple labels from the texts.",
"HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 ."
],
[
"The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.",
"MedNLI is a collection of sentence pairs selected from MIMIC-III BIBREF30 . Given a premise sentence and a hypothesis sentence, two board-certified radiologists graded whether the task predicted whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). We use the same training, development, and test sets in Romanov and Shivade BIBREF30 ."
],
[
"Following the practice in BIBREF5 and BIBREF3 , we use a macro-average of F1-scores and Pearson scores to determine a system's position."
],
[
"For baselines, we evaluate several pre-training models as described below. The original code for the baselines is available at https://github.com/ncbi-nlp/NCBI_BERT."
],
[
"BERT BIBREF7 is a contextualized word representation model that is pre-trained based on a masked language model, using bidirectional Transformers BIBREF31 .",
"In this paper, we pre-trained our own model BERT on PubMed abstracts and clinical notes (MIMIC-III). The statistics of the text corpora on which BERT was pre-trained are shown in Table 2 .",
"We initialized BERT with pre-trained BERT provided by BIBREF7 . We then continue to pre-train the model, using the listed corpora.",
"We released our BERT-Base and BERT-Large models, using the same vocabulary, sequence length, and other configurations provided by BIBREF7 . Both models were trained with 5M steps on the PubMed corpus and 0.2M steps on the MIMIC-III corpus.",
"BERT is applied to various downstream text-mining tasks while requiring only minimal architecture modification.",
"For sentence similarity tasks, we packed the sentence pairs together into a single sequence, as suggested in BIBREF7 .",
"For named entity recognition, we used the BIO tags for each token in the sentence. We considered the tasks similar to machine translation, as predicting the sequence of BIO tags from the input sentence.",
"We treated the relation extraction task as a sentence classification by replacing two named entity mentions of interest in the sentence with pre-defined tags (e.g., @GENE$, @DRUG$) BIBREF3 . For example, we used “@CHEMICAL$ protected against the RTI-76-induced inhibition of @GENE$ binding.” to replace the original sentence “Citalopram protected against the RTI-76-induced inhibition of SERT binding.” in which “citalopram” and “SERT” has a chemical-gene relation.",
"For multi-label tasks, we fine-tuned the model to predict multi-labels for each sentence in the document. We then combine the labels in one document and compare them with the gold-standard.",
"Like BERT, we provided sources code for fine-tuning, prediction, and evaluation to make it straightforward to follow those examples to use our BERT pre-trained models for all tasks."
]
],
"section_name": [
"Introduction",
"Related work",
"Tasks",
"Sentence similarity",
"Named entity recognition",
"Relation extraction",
"Document multilabel classification",
"Inference task",
"Total score",
"Baselines",
"BERT"
]
} | {
"answers": [
{
"annotation_id": [
"8d4cbaeae73751ae0cde74d9d3d6ff8066ae0871"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: BLUE tasks",
"The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.",
"The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The relations with types were compared to annotated data. We use the standard micro-average precision, recall, and F1-score metrics.",
"The aim of the named entity recognition task is to predict mention spans given in the text BIBREF20 . The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans also must be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.",
"The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.",
"HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 ."
],
"extractive_spans": [],
"free_form_answer": "BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: BLUE tasks",
"We use the standard overall accuracy to evaluate the performance",
"We use the standard micro-average precision, recall, and F1-score metrics",
"We evaluate the results by using the strict version of precision, recall, and F1-score.",
"Following common practice, we evaluate similarity by using Pearson correlation coefficients.",
"we followed the common practice and reported the example-based F1-score on the abstract level",
"we followed the common practice and reported the example-based F1-score on the abstract level"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"8b05ab7aa89207c13179d19dc3490526b4733b06"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: BLUE tasks"
],
"extractive_spans": [
"Inference task\nThe aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence",
"Document multilabel classification\nThe multilabel classification task predicts multiple labels from the texts.",
"Relation extraction\nThe aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences.",
"Named entity recognition\nThe aim of the named entity recognition task is to predict mention spans given in the text ",
"Sentence similarity\nThe sentence similarity task is to predict similarity scores based on sentence pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: BLUE tasks"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"Could you tell me more about the metrics used for performance evaluation?",
"which tasks are used in BLUE benchmark?"
],
"question_id": [
"b540cd4fe9dc4394f64d5b76b0eaa4d9e30fb728",
"41173179efa6186eef17c96f7cbd8acb29105b0e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"elmo",
"elmo"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: BLUE tasks",
"Table 2: Corpora",
"Table 3: Baseline performance on the BLUE task test sets."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"Could you tell me more about the metrics used for performance evaluation?"
] | [
[
"1906.05474-Named entity recognition-0",
"1906.05474-3-Table1-1.png",
"1906.05474-Document multilabel classification-1",
"1906.05474-Relation extraction-0",
"1906.05474-Sentence similarity-0",
"1906.05474-Inference task-0"
]
] | [
"BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy"
] | 731 |
1808.08780 | Improving Cross-Lingual Word Embeddings by Meeting in the Middle | Cross-lingual word embeddings are becoming increasingly important in multilingual NLP. Recently, it has been shown that these embeddings can be effectively learned by aligning two disjoint monolingual vector spaces through linear transformations, using no more than a small bilingual dictionary as supervision. In this work, we propose to apply an additional transformation after the initial alignment step, which moves cross-lingual synonyms towards a middle point between them. By applying this transformation our aim is to obtain a better cross-lingual integration of the vector spaces. In addition, and perhaps surprisingly, the monolingual spaces also improve by this transformation. This is in contrast to the original alignment, which is typically learned such that the structure of the monolingual spaces is preserved. Our experiments confirm that the resulting cross-lingual embeddings outperform state-of-the-art models in both monolingual and cross-lingual evaluation tasks. | {
"paragraphs": [
[
"Word embeddings are one of the most widely used resources in NLP, as they have proven to be of enormous importance for modeling linguistic phenomena in both supervised and unsupervised settings. In particular, the representation of words in cross-lingual vector spaces (henceforth, cross-lingual word embeddings) is quickly gaining in popularity. One of the main reasons is that they play a crucial role in transferring knowledge from one language to another, specifically in downstream tasks such as information retrieval BIBREF0 , entity linking BIBREF1 and text classification BIBREF2 , while at the same time providing improvements in multilingual NLP problems such as machine translation BIBREF3 .",
"There exist different approaches for obtaining these cross-lingual embeddings. One of the most successful methodological directions, which constitutes the main focus of this paper, attempts to learn bilingual embeddings via a two-step process: first, word embeddings are trained on monolingual corpora and then the resulting monolingual spaces are aligned by taking advantage of bilingual dictionaries BIBREF4 , BIBREF5 , BIBREF6 .",
"These alignments are generally modeled as linear transformations, which are constrained such that the structure of the initial monolingual spaces is left unchanged. This can be achieved by imposing an orthogonality constraint on the linear transformation BIBREF6 , BIBREF7 . Our hypothesis in this paper is that such approaches can be further improved, as they rely on the assumption that the internal structure of the two monolingual spaces is identical. In reality, however, this structure is influenced by language-specific phenomena, e.g., the fact that Spanish distinguishes between masculine and feminine nouns BIBREF8 as well as the specific biases of the different corpora from which the monolingual spaces were learned. Because of this, monolingual embedding spaces are not isomorphic BIBREF9 , BIBREF10 . On the other hand, simply dropping the orthogonality constraints leads to overfitting, and is thus not effective in practice.",
"The solution we propose is to start with existing state-of-the-art alignment models BIBREF11 , BIBREF12 , and to apply a further transformation to the resulting initial alignment. For each word $w$ with translation $w^{\\prime }$ , this additional transformation aims to map the vector representations of both $w$ and $w^{\\prime }$ onto their average, thereby creating a cross-lingual vector space which intuitively corresponds to the average of the two aligned monolingual vector spaces. Similar to the initial alignment, this mapping is learned from a small bilingual lexicon.",
"Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.",
"Code and pre-trained embeddings to reproduce our experiments and to apply our model to any given cross-lingual embeddings are available at https://github.com/yeraidm/meemi."
],
[
"Bilingual word embeddings have been extensively studied in the literature in recent years. Their nature varies with respect to the supervision signals used for training BIBREF13 , BIBREF14 . Some common signals to learn bilingual embeddings come from parallel BIBREF15 , BIBREF16 , BIBREF17 or comparable corpora BIBREF18 , BIBREF19 , BIBREF20 , or lexical resources such as WordNet, ConceptNet or BabelNet BIBREF21 , BIBREF22 , BIBREF23 . However, these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs.",
"Another branch of research exploits pre-trained monolingual embeddings with weak signals such as bilingual lexicons for learning bilingual embeddings BIBREF4 , BIBREF5 , BIBREF24 , BIBREF7 . mikolov2013exploiting was one of the first attempts into this line of research, applying a linear transformation in order to map the embeddings from one monolingual space into another. They also noted that more sophisticated approaches, such as using multilayer perceptrons, do not improve with respect to their linear counterparts. xing2015normalized built upon this work by normalizing word embeddings during training and adding an orthogonality constraint. In a complementary direction, faruqui2014improving put forward a technique based on canonical correlation analysis to obtain linear mappings for both monolingual embedding spaces into a new shared space. artetxe2016learning proposed a similar linear mapping to mikolov2013exploiting, generalizing it and providing theoretical justifications which also served to reinterpret the methods of faruqui2014improving and xing2015normalized. smith2017offline further showed how orthogonality was required to improve the consistency of bilingual mappings, making them more robust to noise. Finally, a more complete generalization providing further insights on the linear transformations used in all these models can be found in artetxe2018generalizing.",
"These approaches generally require large bilingual lexicons to effectively learn multilingual embeddings BIBREF11 . Recently, however, alternatives which only need very small dictionaries, or even none at all, have been proposed to learn high-quality embeddings via linear mappings BIBREF11 , BIBREF12 . More details on the specifics of these two approaches can be found in Section \"Aligning monolingual spaces\" . These models have in turn paved the way for the development of machine translation systems which do not require any parallel corpora BIBREF25 , BIBREF26 . Moreover, the fact that such approaches only need monolingual embeddings, instead of parallel or comparable corpora, makes them easily adaptable to different domains (e.g., social media or web corpora).",
"In this paper we build upon these state-of-the-art approaches by applying an additional transformation, which aims to map each word and its translation onto the average of their vector representations. This strategy bears some resemblance with the idea of learning meta-embeddings BIBREF27 . Meta-embeddings are vector space representations which aggregate several pre-trained word embeddings from a given language (e.g., trained using different corpora and/or different word embedding models). Empirically it was found that such meta-embeddings can often outperform the individual word embeddings from which they were obtained. In particular, it was recently argued that word vector averaging can be a highly effective approach for learning such meta-embeddings BIBREF28 . The main difference between such approaches and our work is that because we rely on a small dictionary, we cannot simply average word vectors, since for most words we do not know the corresponding translation. Instead, we train a regression model to predict this average word vector from the vector representation of the given word only, i.e., without using the vector representation of its translation."
],
[
"Our approach for improving cross-lingual embeddings consists of three main steps, where the first two steps are the same as in existing methods. In particular, given two monolingual corpora, a word vector space is first learned independently for each language. This can be achieved with common word embedding models, e.g., Word2vec BIBREF29 , GloVe BIBREF30 or FastText BIBREF31 . Second, a linear alignment strategy is used to map the monolingual embeddings to a common bilingual vector space (Section \"Aligning monolingual spaces\" ). Third, a final transformation is applied on the aligned embeddings so the word vectors from both languages are refined and further integrated with each other (Section \"Conclusions and Future Work\" ). This third step is the main contribution of our paper."
],
[
"Once the monolingual word embeddings have been obtained, a linear transformation is applied in order to integrate them into the same vector space. This linear transformation is generally carried out using a supervision signal, typically in the form of a bilingual dictionary. In the following we explain two state-of-the-art models performing this linear transformation.",
"VecMap uses an orthogonal transformation over normalized word embeddings. An iterative two-step procedure is also implemented in order to avoid the need of starting with a large seed dictionary (e.g., in the original paper it was tested with a very small bilingual dictionary of just 25 pairs). In this procedure, first, the linear mapping is estimated using a small bilingual dictionary, and then, this dictionary is augmented by applying the learned transformation to new words from the source language. Lastly, the process is repeated until some convergence criterion is met.",
"In this case, the transformation matrix is learned through an iterative Procrustes alignment BIBREF32 . The anchor points needed for this alignment can be obtained either through a supplied bilingual dictionary or through an unsupervised model. This unsupervised model is trained using adversarial learning to obtain an initial alignment of the two monolingual spaces, which is then refined by the Procrustes alignment using the most frequent words as anchor points. A new distance metric for the embedding space, referred to as cross-domain similarity local scaling, is also introduced. This metric, which takes into account the nearest neighbors of both source and target words, was shown to better handle high-density regions of the space, thus alleviating the hubness problem of word embedding models BIBREF33 , BIBREF34 , which arises when a few points (known as hubs) become the nearest neighbors of many other points in the embedding space."
],
[
"After the initial alignment of the monolingual word embeddings, our proposed method leverages an additional linear model to refine the resulting bilingual word embeddings. This is because the methods presented in the previous section apply constraints to ensure that the structure of the monolingual embeddings is largely preserved. As already mentioned in the introduction, conceptually this may not be optimal, as embeddings for different languages and trained from different corpora can be expected to be structured somewhat differently. Empirically, as we will see in the evaluation, after applying methods such as VecMap and MUSE there still tend to be significant gaps between the vector representations of words and their translations. Our method directly attempts to reduce these gaps by moving each word vector towards the middle point between its current representation and the representation of its translation. In this way, by bringing the two monolingual fragments of the space closer to each other, we can expect to see an improved performance on cross-lingual evaluation tasks such as bilingual dictionary induction. Importantly, the internal structure of the two monolingual fragments themselves is also affected by this step. By averaging between the representations obtained from different languages, we hypothesize that the impact of language-specific phenomena and corpus specific biases will be reduced, thereby ending up with more “neutral” monolingual embeddings.",
"In the following, we detail our methodological approach. First, we leverage the same bilingual dictionary that was used to obtain the initial alignment (Section \"Aligning monolingual spaces\" ). Specifically, let $D=\\lbrace (w,w^{\\prime })\\rbrace $ be the given bilingual dictionary, where $w \\in V$ and $w^{\\prime } \\in V^{\\prime }$ , with $V$ and $V^{\\prime }$ representing the vocabulary of the first and second language, respectively. For pairs $(w,w^{\\prime }) \\in D$ , we can simply compute the corresponding average vector $\\vec{\\mu }_{w,w^{\\prime }}=\\frac{\\vec{v}_w+\\vec{v}_{w^{\\prime }}}{2}$ . Then, using the pairs in $D$ as training data, we learn a linear mapping $X$ such that $X \\vec{v}_w \\approx \\vec{\\mu }_{w,w^{\\prime }}$ for all $w \\in V$0 . This mapping $w \\in V$1 can then be used to predict the averages for words outside the given dictionary. To find the mapping $w \\in V$2 , we solve the following least squares linear regression problem: ",
"$$E=\\sum _{(w,w^{\\prime }) \\in D} \\Vert X\\vec{w}-\\vec{\\mu }_ {w,w^{\\prime }}\\Vert ^2$$ (Eq. 6) ",
"Similarly, for the other language, we separately learn a mapping $X^{\\prime }$ such that $X^{\\prime } \\vec{v}_{w^{\\prime }} \\approx \\vec{\\mu }_{w,w^{\\prime }}$ .",
"It is worth pointing out that we experimented with several variants of this linear regression formulation. For example, we also tried using a multilayer perceptron to learn non-linear mappings, and we experimented with several regularization terms to penalize mappings that deviate too much from the identity mapping. None of these variants, however, were found to improve on the much simpler formulation in ( 6 ), which can be solved exactly and efficiently. Furthermore, one may wonder whether the initial alignment is actually needed, since e.g., coates2018frustratingly obtained high-quality meta-embeddings without such an alignment set. However, when applying our approach directly to the initial monolingual non-aligned embedding spaces, we obtained results which were competitive but slightly below the two considered alignment strategies."
],
[
"We test our bilingual embedding refinement approach on both intrinsic and extrinsic tasks. In Section \"Cross-lingual embeddings training\" we describe the common training setup for all experiments and language pairs. The languages we considered are English, Spanish, Italian, German and Finnish. Throughout all the experiments we use publicly available resources in order to make comparisons and reproducibility of our experiments easier."
],
[
"Corpora. In our experiments we make use of web-extracted corpora. For English we use the 3B-word UMBC WebBase Corpus BIBREF35 , while we chose the Spanish Billion Words Corpus BIBREF36 for Spanish. For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project BIBREF37 , containing 2 and 0.8 billion words, respectively. Lastly, for Finnish, we use the Common Crawl monolingual corpus from the Machine Translation of News Shared Task 2016, composed of 2.8B words. All corpora are tokenized and lowercased.",
"Monolingual embeddings. The monolingual word embeddings are trained with the Skipgram model from FastText BIBREF31 on the corpora described above. The dimensionality of the vectors was set to 300, with the default FastText hyperparameters.",
"Bilingual dictionaries. We use the bilingual dictionaries packaged together by artetxe-labaka-agirre:2017:Long, each one conformed by 5000 word translations. They are used both for the initial bilingual mappings and then again for our linear transformation.",
"Initial mapping. Following previous works, for the purpose of obtaining the initial alignment, English is considered as source language and the remaining languages are used as target. We make use of the open-source implementations of VecMap BIBREF11 and MUSE BIBREF12 , which constitute strong baselines for our experiments (cf. Section \"Aligning monolingual spaces\" ). Both of them were used with the recommended parameters and in their supervised setting, using the aforementioned bilingual dictionaries.",
"Meeting in the Middle. Then, once the initial cross-lingual embeddings are trained, and as explained in Section \"Conclusions and Future Work\" , we obtain our linear transformation by using the exact solution to the least squares linear regression problem. To this end, we use the same bilingual dictionaries as in the previous step. Henceforth, we will refer to our transformed models as VecMap $_\\mu $ and MUSE $_\\mu $ , depending on the initial mapping."
],
[
"We test our cross-lingual word embeddings in two intrinsic tasks, i.e., bilingual dictionary induction (Section UID14 ) and word similarity (Section UID20 ), and an extrinsic task, i.e., cross-lingual hypernym discovery (Section UID31 ).",
"The dictionary induction task consists in automatically generating a bilingual dictionary from a source to a target language, using as input a list of words in the source language.",
"For this task, and following previous works, we use the English-Italian test set released by dinu2015improving and those released by artetxe-labaka-agirre:2017:Long for the remaining language pairs. These test sets have no overlap with respect to the training and development sets, and contain around 1900 entries each. Given an input word from the source language, word translations are retrieved through a nearest-neighbor search of words in the target language, using cosine distance. Note that this gives us a ranked list of candidates for each word from the source language. Accordingly, the performance of the embeddings is evaluated with the precision at $k$ ( $P@k$ ) metric, which evaluates for what percentage of test pairs, the correct answer is among the $k$ highest ranked candidates.",
"As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in $P@5$ and $P@10$ are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.",
"When analyzing the source of errors in $P@1$ , we came to similar conclusions as artetxe-labaka-agirre:2017:Long. Several source words are translated to words that are closely related to the one in the gold reference in the target language; e.g., for the English word essentially we obtain básicamente (basically) instead of fundamentalmente (fundamentally) in Spanish, both of them closely related, or the closest neighbor for dirt being mugre (dirt) instead of suciedad (dirt), which in fact was among the five closest neighbors. We can also find multiple examples of the higher performance of our models compared to the baselines. For instance, in the English-Spanish cross-lingual models, after the initial alignment, we can find that seconds has minutos (minutes) as nearest neighbour, but after applying our additional transformation, seconds becomes closest to segundos (seconds). Similarly, paint initially has tintado (tinted) as the closest Spanish word, and then pintura (paint).",
"We perform experiments on both monolingual and cross-lingual word similarity. In monolingual similarity, models are tested in their ability to determine the similarity between two words in the same language, whereas in cross-lingual similarity the words belong to different languages. While in the monolingual setting the main objective is to test the quality of the monolingual subsets of the bilingual vector space, the cross-lingual setting constitutes a straightforward benchmark to test the quality of bilingual embeddings.",
"For monolingual word similarity we use the English SimLex-999 BIBREF38 , and the language-specific versions of SemEval-17 BIBREF39 , WordSim-353 BIBREF40 , and RG-65 BIBREF41 . The corresponding cross-lingual datasets from SemEval-18, WordSim-353 and RG-65 were considered for the cross-lingual word similarity evaluation. Cosine similarity is again used as comparison measure.",
"Tables 2 and 3 show the monolingual and cross-lingual word similarity results, respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting.",
"In order to further understand the movements of the space with respect to the original VecMap and MUSE spaces, Figure 1 displays the average similarity values on the SemEval cross-lingual datasets (the largest among all benchmarks) of each model. As expected, the figure clearly shows how our model consistently brings the words from both languages closer on all language pairs. Furthermore, this movement is performed smoothly across all pairs, i.e., our model does not make large changes to specific words but rather small changes overall. This can be verified by inspecting the standard deviation of the difference in similarity after applying our transformation. These standard deviation scores range from 0.031 (English-Spanish for VecMap) to 0.039 (English-Italian for MUSE), which are relatively small given that the cosine similarity scale ranges from -1 to 1.",
"As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.",
"Modeling hypernymy is a crucial task in NLP, with direct applications in diverse areas such as semantic search BIBREF43 , BIBREF44 , question answering BIBREF45 , BIBREF46 or textual entailment BIBREF47 . Hypernyms, in addition, are the backbone of lexical ontologies BIBREF48 , which are in turn useful for organizing, navigating and retrieving online content BIBREF49 . Thus, we propose to evaluate the contribution of cross-lingual embeddings towards the task of hypernym discovery, i.e., given an input word (e.g., cat), retrieve or discover its most likely (set of) valid hypernyms (e.g., animal, mammal, feline, and so on). Intuitively, by leveraging a bilingual vector space condensing the semantics of two languages, one of them being English, the need for large amounts of training data in the target language may be reduced.",
"We follow EspinosaEMNLP2016 and learn a (cross-lingual) linear transformation matrix between the hyponym and hypernym spaces, which is afterwards used to predict the most likely (set of) hypernyms, given an unseen hyponym. Training and evaluation data come from the SemEval 2018 Shared Task on Hypernym Discovery BIBREF50 . Note that current state-of-the-art systems aimed at modeling hypernymy BIBREF51 , BIBREF52 combine large amounts of annotated data along with language-specific rules and cue phrases such as Hearst Patterns BIBREF53 , both of which are generally scarcely (if at all) available for languages other than English. Therefore, we report experiments with training data only from English (11,779 hyponym-hypernym pairs), and “enriched” models informed with relatively few training pairs (500, 1k and 2k) from the target languages. Evaluation is conducted with the same metrics as in the original SemEval task, i.e., Mean Reciprocal Rank (MRR), Mean Average Precision (MAP) and Precision at 5 (P@5). These measures explain a model's behavior from complementary prisms, namely how often at least one valid hypernym was highly ranked (MRR), and in cases where there is more than one correct hypernym, to what extent they were all correctly retrieved (MAP and P@5). Finally, as in the previous experiments, we report comparative results between our proposed models and the two competing baselines (VecMap and MUSE). As an additional informative baseline, we include the highest scoring unsupervised system at the SemEval task for both Spanish and Italian (BestUns), which is based on the distributional models described in shwartz2017hypernymy.",
"The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available.",
"A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for example, VecMap and our model. It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k). Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space. In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport. Let us mention, however, that the considered baselines perform remarkably well in some cases. For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Macià (a Spanish politician and soldier): politician, ruler, leader and person. These were missing from the prediction of our model in all configurations until the most informed one (EN+2k)."
],
[
"We have shown how to refine bilingual word embeddings by applying a simple transformation which moves cross-lingual synonyms closer towards their average representation. Before applying this strategy, we start by aligning the monolingual embeddings of the two languages of interest. For this initial alignment, we have considered two state-of-the-art methods from the literature, namely VecMap BIBREF11 and MUSE BIBREF12 , which also served as our baselines. Our approach is motivated by the fact that these alignment methods do not change the structure of the individual monolingual spaces. However, the internal structure of embeddings is, at least to some extent, language-specific, and is moreover affected by biases of the corpus from which they are trained, meaning that after the initial alignment significant gaps remain between the representations of cross-lingual synonyms. We tested our approach on a wide array of datasets from different tasks (i.e., bilingual dictionary induction, word similarity and cross-lingual hypernym discovery) with state-of-the-art results.",
"This paper opens up several promising avenues for future work. First, even though both languages are currently being treated symmetrically, the initial monolingual embedding of one of the languages may be more reliable than that of the other. In such cases, it may be of interest to replace the vectors $\\vec{\\mu }_ {w,w^{\\prime }}$ by a weighted average of the monolingual word vectors. Second, while we have only considered bilingual scenarios in this paper, our approach can naturally be applied to scenarios involving more languages. In this case, we would first choose a single target language, and obtain alignments between all the other languages and this target language. To apply our model, we can then simply learn mappings to predict averaged word vectors across all languages. Finally, it would also be interesting to use the obtained embeddings in downstream applications such as language identification or cross-lingual sentiment analysis, and extend our analysis to other languages, with a particular focus on morphologically-rich languages (after seeing our success with Finnish), for which the bilingual induction task has proved more challenging for standard cross-lingual embedding models BIBREF9 ."
],
[
"Yerai Doval is funded by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) through project FFI2014-51978-C2-2-R, and by the Spanish State Secretariat for Research, Development and Innovation (which belongs to MINECO) and the European Social Fund (ESF) under a FPI fellowship (BES-2015-073768) associated to project FFI2014-51978-C2-1-R. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert have been supported by ERC Starting Grant 637277."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Aligning monolingual spaces",
"Meeting in the middle",
"Evaluation",
"Cross-lingual embeddings training",
"Experiments",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8c34becdc69526b85016cce334d59317850ee70f"
],
"answer": [
{
"evidence": [
"Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.",
"Tables 2 and 3 show the monolingual and cross-lingual word similarity results, respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting.",
"As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in $P@5$ and $P@10$ are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.",
"The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available."
],
"extractive_spans": [],
"free_form_answer": "bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery",
"highlighted_evidence": [
"In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.",
"For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. ",
"As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics.",
"First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"a27e515fd55020328ac00f0086e91e5e5ac78888"
],
"answer": [
{
"evidence": [
"As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice."
],
"extractive_spans": [],
"free_form_answer": "because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space",
"highlighted_evidence": [
"We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space.",
"More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are the tasks that this method has shown improvements?",
"Why does the model improve in monolingual spaces as well? "
],
"question_id": [
"0bd683c51a87a110b68b377e9a06f0a3e12c8da0",
"a979749e59e6e300a453d8a8b1627f97101799de"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Bilingual dictionary induction results. Precision at k (P@K) performance for Spanish (ES), Italian (IT), German (DE) and Finnish (FI), using English (EN) as source language.",
"Table 2: Monolingual word similarity results. Pearson (r) and Spearman (ρ) correlation.",
"Figure 1: Comparative average similarity between VecMap and MUSE (blue) and our proposed model (red) on the SemEval cross-lingual similarity datasets.",
"Table 3: Cross-lingual word similarity results. Pearson (r) and Spearman (ρ) correlation.",
"Table 4: Results on the hypernym discovery task."
],
"file": [
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure1-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} | [
"What are the tasks that this method has shown improvements?",
"Why does the model improve in monolingual spaces as well? "
] | [
[
"1808.08780-Introduction-4",
"1808.08780-Experiments-7",
"1808.08780-Experiments-3",
"1808.08780-Experiments-12"
],
[
"1808.08780-Experiments-9"
]
] | [
"bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery",
"because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space"
] | 732 |
1704.04539 | Cross-lingual Abstract Meaning Representation Parsing | Abstract Meaning Representation (AMR) annotation efforts have mostly focused on English. In order to train parsers on other languages, we propose a method based on annotation projection, which involves exploiting annotations in a source language and a parallel corpus of the source language and a target language. Using English as the source language, we show promising results for Italian, Spanish, German and Chinese as target languages. Besides evaluating the target parsers on non-gold datasets, we further propose an evaluation method that exploits the English gold annotations and does not require access to gold annotations for the target languages. This is achieved by inverting the projection process: a new English parser is learned from the target language parser and evaluated on the existing English gold standard. | {
"paragraphs": [
[
"Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations BIBREF0 . An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them. Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs.",
"The cross-lingual properties of AMR across languages has been the subject of preliminary discussions. The AMR guidelines state that AMR is not an interlingua BIBREF0 and bojar2014comparing categorizes different kinds of divergences in the annotation between English AMRs and Czech AMRs. xue2014not show that structurally aligning English AMRs with Czech and Chinese AMRs is not always possible but that refined annotation guidelines suffice to resolve some of these cases. We extend this line of research by exploring whether divergences among languages can be overcome, i.e., we investigate whether it is possible to maintain the AMR annotated for English as a semantic representation for sentences written in other languages, as in Figure 1 .",
"We implement AMR parsers for Italian, Spanish, German and Chinese using annotation projection, where existing annotations are projected from a source language (English) to a target language through a parallel corpus BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . By evaluating the parsers and manually analyzing their output, we show that the parsers are able to recover the AMR structures even when there exist structural differences between the languages, i.e., although AMR is not an interlingua it can act as one. This method also provides a quick way to prototype multilingual AMR parsers, assuming that Part-of-speech (POS) taggers, Named Entity Recognition (NER) taggers and dependency parsers are available for the target languages. We also propose an alternative approach, where Machine Translation (MT) is used to translate the input sentences into English so that an available English AMR parser can be employed. This method is an even quicker solution which only requires translation models between the target languages and English.",
"Due to the lack of gold standard in the target languages, we exploit the English data to evaluate the parsers for the target languages. (Henceforth, we will use the term target parser to indicate a parser for a target language.) We achieve this by first learning the target parser from the gold standard English parser, and then inverting this process to learn a new English parser from the target parser. We then evaluate the resulting English parser against the gold standard. We call this “full-cycle” evaluation.",
"Similarly to evangcross, we also directly evaluate the target parser on “silver” data, obtained by parsing the English side of a parallel corpus.",
"In order to assess the reliability of these evaluation methods, we collected gold standard datasets for Italian, Spanish, German and Chinese by acquiring professional translations of the AMR gold standard data to these languages. We hypothesize that the full-cycle score can be used as a more reliable proxy than the silver score for evaluating the target parser. We provide evidence to this claim by comparing the three evaluation procedures (silver, full-cycle, and gold) across languages and parsers.",
"Our main contributions are:"
],
[
"AMR is a semantic representation heavily biased towards English, where labels for nodes and edges are either English words or Propbank frames BIBREF5 . The goal of AMR is to abstract away from the syntactic realization of the original sentences while maintaining its underlying meaning. As a consequence, different phrasings of one sentence are expected to provide identical AMR representations. This canonicalization does not always hold across languages: two sentences that express the same meaning in two different languages are not guaranteed to produce identical AMR structures BIBREF6 , BIBREF7 . However, xue2014not show that in many cases the unlabeled AMRs are in fact shared across languages. We are encouraged by this finding and argue that it should be possible to develop algorithms that account for some of these differences when they arise. We therefore introduce a new problem, which we call cross-lingual AMR parsing: given a sentence in any language, the goal is to recover the AMR graph that was originally devised for its English translation. This task is harder than traditional AMR parsing as it requires to recover English labels as well as to deal with structural differences between languages, usually referred as translation divergence. We propose two initial solutions to this problem: by annotation projection and by machine translation."
],
[
"AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language. We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English. However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments). We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.",
"Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR aligned with an AMR node, then the target word is also aligned to that AMR node. More formally, let $S = s_1 \\dots s_{\\vert s \\vert }$ be the source language sentence and $T = t_1 \\dots t_{\\vert t \\vert }$ be the target language sentence; $A_s(\\cdot )$ be the AMR alignment mapping word tokens in $S$ to the set of AMR nodes that are triggered by it; $A_t(\\cdot )$ be the same function for $T$ ; $v$ be a node in the AMR graph; and finally, $W(\\cdot )$ be an alignment that maps a word in $S$ to a subset of words in $T$ . Then, the AMR projection assumption is: $T = t_1 \\dots t_{\\vert t \\vert }$0 ",
"In the example of Figure 1 , Questa is word-aligned with This and therefore AMR-aligned with the node this, and the same logic applies to the other aligned words. The words is, the and of do not generate any AMR nodes, so we ignore their word alignments. We apply this method to project existing AMR annotations to other languages, which are then used to train the target parsers."
],
[
"We invoke an MT system to translate the sentence into English so that we can use an available English parser to obtain its AMR graph. Naturally, the quality of the output graph depends on the quality of the translations. If the automatic translation is close to the reference translation, then the predicted AMR graph will be close to the reference AMR graph. It is therefore evident that this method is not informative in terms of the cross-lingual properties of AMR. However, its simplicity makes it a compelling engineering solution for parsing other languages."
],
[
"We now turn to the problem of evaluation. Let us assume that we trained a parser for a target language, for example using the annotation projection method discussed in Section \"Related Work\" . In line with rapid development of new parsers, we assume that the only gold AMR dataset available is the one released for English.",
"We can generate a silver test set by running an automatic (English) AMR parser on the English side of a parallel corpus and use the output AMRs as references. However, the silver test set is affected by mistakes made by the English AMR parser, therefore it may not be reliable.",
"In order to perform the evaluation on a gold test set, we propose full-cycle evaluation: after learning the target parser from the English parser, we invert this process to learn a new English parser from the target parser, in the same way that we learned the target parser from the English parser. The resulting English parser is then evaluated against the (English) AMR gold standard. We hypothesize that the score of the new English parser can be used as a proxy to the score of the target parser.",
"To show whether the evaluation methods proposed can be used reliably, we also generated gold test AMR datasets for four target languages (Italian, Spanish, German and Chinese). In order to do so, we collected professional translations for the English sentences in the AMR test set. We were then able to create pairs of human-produced sentences with human-produced AMR graphs.",
"A diagram summarizing the different evaluation stages is shown in Figure 2 . In the case of MT-based systems, the full-cycle corresponds to first translating from English to the target language and then back to English (back-translation), and only then parsing the sentences with the English AMR parser. At the end of this process, a noisy version of the original sentence will be returned and its parsed graph will be a noisy version of the graph parsed from the original sentence."
],
[
"We run experiments on four languages: Italian, Spanish, German and Chinese. We use Europarl BIBREF8 as the parallel corpus for Italian, Spanish and German, containing around 1.9M sentences for each language pair. For Chinese, we use the first 2M sentences from the United Nations Parallel Corpus BIBREF9 . For each target language we extract two parallel datasets of 20,000/2,000/2,000 (train/dev/test) sentences for the two step of the annotation projection (English $\\rightarrow $ target and target $\\rightarrow $ English). These are used to train the AMR parsers. The projection approach also requires training the word alignments, for which we use all the remaining sentences from the parallel corpora (Europarl for Spanish/German/Italian and UN Parallel Corpus for Chinese). These are also the sentences we use to train the MT models. The gold AMR dataset is LDC2015E86, containing 16,833 training sentences, 1,368 development sentences, and 1,371 testing sentences.",
"Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 . AMREager BIBREF12 was chosen as the pre-existing English AMR parser. AMREager is an open-source AMR parser that needs only minor modifications for re-use with other languages. Our multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual. It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP BIBREF13 . We use Freeling BIBREF14 for Spanish, as CoreNLP does not provide dependency parsing for this language. Italian is not supported in CoreNLP: we use Tint BIBREF15 , a CoreNLP-compatible NLP pipeline for Italian.",
"In order to experiment with the approach of Section \"Conclusions\" , we experimented with translations from Google Translate. As Google Translate has access to a much larger training corpus, we also trained baseline MT models using Moses BIBREF16 and Nematus BIBREF17 , with the same training data we use for the projection method and default hyper-parameters.",
"Smatch BIBREF18 is used to evaluate AMR parsers. It looks for the best alignment between the predicted AMR and the reference AMR and it then computes precision, recall and $F_1$ of their edges. The original English parser achieves 65% Smatch score on the test split of LDC2015E86. Full-cycle and gold evaluations use the same dataset, while silver evaluation is performed on the split of the parallel corpora we reserved for testing. Results are shown in Table 1 . The Google Translate system outperforms all other systems, but is not directly comparable to them, as it has the unfair advantage of being trained on a much larger dataset. Due to noisy JAMR alignments and silver training data involved in the annotation projection approach, the MT-based systems give in general better parsing results. The BLEU scores of all translation systems are shown in Table 2 .",
"There are several sources of noise in the annotation projection method, which affect the parsing results: 1) the parsers are trained on silver data obtained by an automatic parser for English; 2) the projection uses noisy word alignments; 3) the AMR alignments on the source side are also noisy; 4) translation divergences exist between the languages, making it sometimes difficult to project the annotation without loss of information."
],
[
"Figure 3 shows examples of output parses for all languages, including the AMR alignments by-product of the parsing process, that we use to discuss the mistakes made by the parsers.",
"In the Italian example, the only evident error is that Infine (Lastly) should be ignored. In the Spanish example, the word medida (measure) is wrongly ignored: it should be used to generate a child of the node impact-01. Some of the :ARG roles are also not correct. In the German example, meines (my) should reflect the fact that the speaker is talking about his own country. Finally, in the Chinese example, there are several mistakes including yet another concept identification mistake: intend-01 is erroneously triggered.",
"Most mistakes involve concept identification. In particular, relevant words are often erroneously ignored by the parser. This is directly related to the problem of noisy word alignments in annotation projection: the parser learns what words are likely to trigger a node (or a set of nodes) in the AMR by looking at their AMR alignments (which are induced by the word alignments). If an important word consistently remains unaligned, the parser will erroneously learn to discard it. More accurate alignments are therefore crucial in order to achieve better parsing results. We computed the percentage of words in the training data that are learned to be non-content-bearing in each parser and we found that the Chinese parser, which is our least accurate parser, is the one that most suffer from this, with 33% non-content-bearing words. On the other hand, in the German parser, which is the highest scoring, only 26% of the words are non-content-bearing, which is the lowest percentage amongst all parsers."
],
[
"In order to investigate the hypothesis that AMR can be shared across these languages, we now look at translational divergence and discuss how it affects parsing, following the classification used in previous work BIBREF19 , BIBREF20 , which identifies classes of divergences for several languages. sulem2015conceptual also follow the same categorization for French.",
"Figure 4 shows six sentences displaying these divergences. The aim of this analysis is to assess how the parsers deal with the different kind of translational divergences, regardless of the overall quality of the output.",
"This divergence happens when two languages use different POS tags to express the same meaning. For example, the English sentence I am jealous of you is translated into Spanish as Tengo envidia de ti (I have jealousy of you). The English adjective jealous is translated in the Spanish noun envidia. In Figure 4 a we note that the categorical divergence does not create problems since the parsers correctly recognized that envidia (jealousy/envy) should be used as the predicate, regardless of its POS.",
"This divergence happens when verbs expressed in a language with a single word can be expressed with more words in another language. Two subtypes are distinguished: manner and light verb. Manner refers to a manner verb that is mapped to a motion verb plus a manner-bearing word. For example, We will answer is translated in the Italian sentence Noi daremo una riposta (We will give an answer), where to answer is translated as daremo una risposta (will give an answer). Figure 4 b shows that the Italian parser generates a sensible output for this sentence by creating a single node labeled answer-01 for the expression dare una riposta.",
"In a light verb conflational divergence, a verb is mapped to a light verb plus an additional meaning unit, such as when I fear is translated as Io ho paura (I have fear) in Italian: to fear is mapped to the light verb ho (have) plus the noun paura (fear). Figure 4 e shows that also this divergence is dealt properly by the Italian parser: ho paura correctly triggers the root fear-01.",
"This divergence happens when verb arguments result in different syntactic configurations, for example, due to an additional PP attachment. When translating He entered the house with Lui è entrato nella casa (He entered in the house), the Italian translation has an additional in preposition. Also this parsed graph, in Figure 4 c, is structurally correct. The missing node he is due to pronoun-dropping, which is frequent in Italian.",
"This divergence occurs when the direction of the dependency between two words is inverted. For example, I like eating, where like is head of eating, becomes Ich esse gern (I eat likingly) in German, where the dependency is inverted. Unlike all other examples, in this case, the German parser does not cope well with this divergence: it is unable to recognize like-01 as the main concept in the sentence, as shown in Figure 4 d.",
"Finally, the parse of Figure 4 f has to deal with a thematic divergence, which happens when the semantic roles of a predicate are inverted. In the sentence I like grapes, translated to Spanish as Me gustan uvas, I is the subject in English while Me is the object in Spanish. Even though we note an erroneous reentrant edge between grape and I, the thematic divergence does not create problems: the parser correctly recognizes the :ARG0 relationship between like-01 and I and the :ARG1 relationship between like-01 and grape. In this case, the edge labels are important, as this type of divergence is concerned with the semantic roles."
],
[
"AMR parsing for languages other than English has made only a few steps forward. In previous work BIBREF22 , BIBREF7 , BIBREF6 , nodes of the target graph were labeled with either English words or with words in the target language. We instead use the AMR annotation used for English for the target language as well, without translating any word. To the best of our knowledge, the only previous work that attempts to automatically parse AMR graphs for non-English sentences is by vanderwende2015amr. Sentences in several languages (French, German, Spanish and Japanese) are parsed into a logical representation, which is then converted to AMR using a small set of rules. A comparison with this work is difficult, as the authors do not report results for the parsers (due to the lack of an annotated corpus) or release their code.",
"Besides AMR, other semantic parsing frameworks for non-English languages have been investigated BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . evangcross is the most closely related to our work as it uses a projection mechanism similar to ours for CCG. A crucial difference is that, in order to project CCG parse trees to the target languages, they only make use of literal translation. Previous work has also focused on assessing the stability across languages of semantic frameworks such as AMR BIBREF7 , BIBREF6 , UCCA BIBREF27 and Propbank BIBREF28 .",
"Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English. The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 . Another common thread of cross-lingual work is model transfer, where parameters are shared across languages BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 ."
],
[
"We introduced the problem of parsing AMR structures, annotated for English, from sentences written in other languages as a way to test the cross-lingual properties of AMR. We provided evidence that AMR can be indeed shared across the languages tested and that it is possible to overcome translational divergences. We further proposed a novel way to evaluate the target parsers that does not require manual annotations of the target language. The full-cycle procedure is not limited to AMR parsing and could be used for other cross-lingual problems in NLP. The results of the projection-based AMR parsers indicate that there is a vast room for improvements, especially in terms of generating better alignments. We encourage further work in this direction by releasing professional translations of the AMR test set into four languages."
],
[
"The authors would like to thank the three anonymous reviewers and Sameer Bansal, Gozde Gul Sahin, Sorcha Gilroy, Ida Szubert, Esma Balkir, Nikos Papasarantopoulos, Joana Ribeiro, Shashi Narayan, Toms Bergmanis, Clara Vania, Yang Liu and Adam Lopez for their helpful comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139."
]
],
"section_name": [
"Introduction",
"Cross-lingual AMR parsing",
"Method 1: Annotation Projection",
"Method 2: Machine Translation",
"Evaluation",
"Experiments",
"Qualitative Analysis",
"Translational Divergence",
"Related Work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"ceee7f9c97946e3aedd9890cd2fee1e6780a6dee"
],
"answer": [
{
"evidence": [
"Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English. The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 . Another common thread of cross-lingual work is model transfer, where parameters are shared across languages BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 ."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"8e865b88762f910bccb60d97637d4fe2e896a6f2"
],
"answer": [
{
"evidence": [
"AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language. We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English. However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments). We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.",
"Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR aligned with an AMR node, then the target word is also aligned to that AMR node. More formally, let $S = s_1 \\dots s_{\\vert s \\vert }$ be the source language sentence and $T = t_1 \\dots t_{\\vert t \\vert }$ be the target language sentence; $A_s(\\cdot )$ be the AMR alignment mapping word tokens in $S$ to the set of AMR nodes that are triggered by it; $A_t(\\cdot )$ be the same function for $T$ ; $v$ be a node in the AMR graph; and finally, $W(\\cdot )$ be an alignment that maps a word in $S$ to a subset of words in $T$ . Then, the AMR projection assumption is: $T = t_1 \\dots t_{\\vert t \\vert }$0",
"Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 . AMREager BIBREF12 was chosen as the pre-existing English AMR parser. AMREager is an open-source AMR parser that needs only minor modifications for re-use with other languages. Our multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual. It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP BIBREF13 . We use Freeling BIBREF14 for Spanish, as CoreNLP does not provide dependency parsing for this language. Italian is not supported in CoreNLP: we use Tint BIBREF15 , a CoreNLP-compatible NLP pipeline for Italian."
],
"extractive_spans": [],
"free_form_answer": "Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments.",
"highlighted_evidence": [
"We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.",
"Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR aligned with an AMR node, then the target word is also aligned to that AMR node.",
"Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"no",
"no"
],
"question": [
"Do the authors test their annotation projection techniques on tasks other than AMR?",
"How is annotation projection done when languages have different word order?"
],
"question_id": [
"8fa7011e7beaa9fb4083bf7dd75d1216f9c7b2eb",
"e0b7acf4292b71725b140f089c6850aebf2828d2"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"representation",
"representation"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: AMR alignments for a English sentence and its Italian translation.",
"Figure 2: Description of SILVER, FULL-CYCLE and GOLD evaluations. e stands for English and f stands for the target (foreign) language. Dashed lines represent the process of transferring learning across languages (e.g. with annotation projection). SILVER uses a parsed parallel corpus as reference (“Ref”), FULL-CYCLE uses the English gold standard (Gold e) and GOLD uses the target language gold standard we collected (Silver f ).",
"Table 1: Silver, gold and full-cycle Smatch scores for projection-based and MT-based systems.",
"Table 2: BLEU scores for Moses, Nematus and Google Translate (GT) on the (out-of-domain) LDC2015E86 test set",
"Figure 3: Parsed AMR graph and alignments (dashed lines) for an Italian sentence, a Spanish sentence, a German sentences and a Chinese sentence.",
"Figure 4: Parsing examples in several languages involving common translational divergence phenomena: (a) contains a categorical divergence, (b) and (e) conflational divergences, (c) a structural divergence, (d) an head swapping and (f) a thematic divergence.",
"Figure 5: Linear regression lines for silver and fullcycle."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png"
]
} | [
"How is annotation projection done when languages have different word order?"
] | [
[
"1704.04539-Method 1: Annotation Projection-0",
"1704.04539-Experiments-1"
]
] | [
"Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments."
] | 734 |
1906.11180 | Canonicalizing Knowledge Base Literals | Abstract. Ontology-based knowledge bases (KBs) like DBpedia are very valuable resources, but their usefulness and usability is limited by various quality issues. One such issue is the use of string literals instead of semantically typed entities. In this paper we study the automated canonicalization of such literals, i.e., replacing the literal with an existing entity from the KB or with a new entity that is typed using classes from the KB. We propose a framework that combines both reasoning and machine learning in order to predict the relevant entities and types, and we evaluate this framework against state-of-the-art baselines for both semantic typing and entity matching. | {
"paragraphs": [
[
"Ontology-based knowledge bases (KBs) like DBpedia BIBREF0 are playing an increasingly important role in domains such knowledge management, data analysis and natural language understanding. Although they are very valuable resources, the usefulness and usability of such KBs is limited by various quality issues BIBREF1 , BIBREF2 , BIBREF3 . One such issue is the use of string literals (both explicitly typed and plain literals) instead of semantically typed entities; for example in the triple $\\langle $ River_Thames, passesArea, “Port Meadow, Oxford\" $\\rangle $ . This weakens the KB as it does not capture the semantics of such literals. If, in contrast, the object of the triple were an entity, then this entity could, e.g., be typed as Wetland and Park, and its location given as Oxford. This problem is pervasive and hence results in a significant loss of information: according to statistics from Gunaratna et al. BIBREF4 in 2016, the DBpedia property dbp:location has over 105,000 unique string literals that could be matched with entities. Besides DBpedia, such literals can also be found in some other KBs from encyclopedias (e.g., zhishi.me BIBREF5 ), in RDF graphs transformed from tabular data (e.g., LinkedGeoData BIBREF6 ), in aligned or evolving KBs, etc. ",
"One possible remedy for this problem is to apply automated semantic typing and entity matching (AKA canonicalization) to such literals. To the best of our knowledge, semantic typing of KB literals has rarely been studied. Gunaratna et al. BIBREF4 used semantic typing in their entity summarization method, first identifying the so called focus term of a phrase via grammatical structure analysis, and then matching the focus term with both KB types and entities. Their method is, however, rather simplistic: it neither utilizes the literal's context, such as the associated property and subject, nor captures the contextual meaning of the relevant words. What has been widely studied is the semantic annotation of KB entities BIBREF7 , BIBREF8 , BIBREF9 and of noun phrases outside the KB (e.g., from web tables) BIBREF10 , BIBREF11 , BIBREF12 ; in such cases, however, the context is very different, and entity typing can, for example, exploit structured information such as the entity's linked Wikipedia page BIBREF7 and the domain and range of properties that the entity is associated with BIBREF8 .",
"With the development of deep learning, semantic embedding and feature learning have been widely adopted for exploring different kinds of contextual semantics in prediction, with Recurrent Neural Network (RNN) being a state-of-the-art method for dealing with structured data and text. One well known example is word2vec — an RNN language model which can represent words in a vector space that retains their meaning BIBREF13 . Another example is a recent study by Kartsaklis et al. BIBREF14 , which maps text to KB entities with a Long-short Term Memory RNN for textual feature learning. These methods offer the potential for developing accurate prediction-based methods for KB literal typing and entity matching where the contextual semantics is fully exploited.",
"In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art."
],
[
"In this study we consider a knowledge base (KB) that includes both ontological axioms that induce (at least) a hierarchy of semantic types (i.e., classes), and assertions that describe concrete entities (individuals). Each such assertion is assumed to be in the form of an RDF triple $\\langle s,p,o \\rangle $ , where $s$ is an entity, $p$ is a property and $o$ can be either an entity or a literal (i.e., a typed or untyped data value such as a string or integer).",
"We focus on triples of the form $\\langle s,p,l \\rangle $ , where $l$ is a string literal; such literals can be identified by regular expressions, as in BIBREF4 , or by data type inference as in BIBREF15 . Our aim is to cononicalize $l$ by first identifying the type of $l$ , i.e., a set of classes $\\mathcal {C}_l$ that an entity corresponding to $l$ should be an instance of, and then determining if such an entity already exists in the KB. The first subtask is modeled as a machine learning classification problem where a real value score in $\\left[0,1\\right]$ is assigned to each class $c$ occurring in the KB, and $\\mathcal {C}_l$ is the set of classes determined by the assigned score with strategies e.g., adopting a class if its score exceeds some threshold. The second subtask is modeled as an entity lookup problem constrained by $\\mathcal {C}_l$ .",
"It is important to note that:",
"When we talk about a literal $l$ we mean the occurrence of $l$ in a triple $\\langle s,p,l \\rangle $ . Lexically equivalent literals might be treated very differently depending on their triple contexts.",
"If the KB is an OWL DL ontology, then the set of object properties (which connect two entities) and data properties (which connect an entity to a literal) should be disjoint. In practice, however, KBs such as DBpedia often don't respect this constraint. In any case, we avoid the issue by simply computing the relevant typing and canonicalization information, and leaving it up to applications as to how they want to exploit it.",
"We assume that no manual annotations or external labels are given — the classifier is automatically trained using the KB."
],
[
"The technical framework for the classification problem is shown in Fig. 1 . It involves three main steps: (i) candidate class extraction; (ii) model training and prediction; and (iii) literal typing and canonicalization.",
"Popular KBs like DBpedia often contain a large number of classes. For efficiency reasons, and to reduce noise in the learning process, we first identify a subset of candidate classes. This selection should be rather inclusive so as to maximize potential recall. In order to achieve this we pool the candidate classes for all literals occurring in triples with a given property; i.e., to compute the candidate classes for a literal $ł$ occurring in a triple $\\langle s,p,l \\rangle $ , we consider all triples that use property $p$ . Note that, as discussed above, in practice such triples may include both literals and entities as their objects. We thus use two techniques for identifying candidate classes from the given set of triples. In the case where the object of the triple is an entity, the candidates are just the set of classes that this entity is an instance of. In practice we identify the candidates for the set of all such entities, which we denote $E_P$ , via a SPARQL query to the KB, with the resulting set of classes being denoted $C_P$ . In the case where the object of the triple is a literal, we first match the literal to entities using a lexical index which is built based on the entity's name, labels and anchor text (description). To maximize recall, the literal, its tokens (words) and its sub-phrases are used to retrieve entities by lexical matching; this technique is particularly effective when the literal is a long phrase. As in the first case, we identify all relevant entities, which we denote $E_M$ , and then retrieve the relevant classes $C_M$ using a SPARQL query. The candidate class set is simply the union of $C_P$ and $C_M$ , denoted as $C_{PM}$ .",
"We adopt the strategy of training one binary classifier for each candidate class, instead of multi-class classification, so as to facilitate dealing with the class hierarchy BIBREF16 . The classifier architecture includes an input layer with word embedding, an encoding layer with bidirectional RNNs, an attention layer and a fully connected (FC) layer for modeling the contextual semantics of the literal. To train a classifier, both positive and negative entities (samples), including those from $E_M$ (particular samples) and those outside $E_M$ (general samples) are extracted from the KB, with external KBs and logical constraints being used to improve sample quality. The trained classifiers are used to compute a score for each candidate class.",
"The final stage is to semantically type and, where possible, canonicalise literals. For a given literal, two strategies, independent and hierarchical, are used to determine its types (classes), with a score for each type. We then use these types and scores to try to identify an entity in the KB that could reasonably be substituted for the literal."
],
[
"Given a phrase literal $l$ and its associated RDF triple $\\langle s, p, l \\rangle $ , our neural network model aims at utilizing the semantics of $s$ , $p$ and $l$ for the classification of $l$ . The architecture is shown in Fig. 2 . It first separately parses the subject label, the property label and the literal into three word (token) sequences whose lengths, denoted as $T_s$ , $T_p$ and $T_l$ , are fixed to the maximum subject, property and literal sequence lengths from the training data by padding shorter sequences with null words. We then concatenate the three sequences into a single word sequence ( $word_t, t \\in \\left[1,T\\right]$ ), where $\\langle s, p, l \\rangle $0 . Each word is then encoded into a vector via word embedding (null is encoded into a zero vector), and the word sequence is transformed into a vector sequence ( $\\langle s, p, l \\rangle $1 ). Note that this preserves information about the position of words in $\\langle s, p, l \\rangle $2 , $\\langle s, p, l \\rangle $3 and $\\langle s, p, l \\rangle $4 .",
"The semantics of forward and backward surrounding words is effective in predicting a word's semantics. For example, “Port” and “Meadow” are more likely to indicate a place as they appear after “Area” and before “Oxford”. To embed such contextual semantics into a feature vector, we stack a layer composed of bidirectional Recurrent Neural Networks (BiRNNs) with Gated Recurrent Unit (GRU) BIBREF17 . Within each RNN, a reset gate $r_t$ is used to control the contribution of the past word, and an update gate $z_t$ is used to balance the contributions of the past words and the new words. The hidden state (embedding) at position $t$ is computed as ",
"$${\\left\\lbrace \\begin{array}{ll}\nh_t = (1-z_t) \\odot h_{t-1} + z_t \\odot \\tilde{h}_t, \\\\\n\\tilde{h}_t = \\tau (W_h x_t + r_t \\odot (U_h h_{t-1}) + b_h), \\\\\nz_t = \\sigma (W_z x_t + U_z h_{t-1} + b_z), \\\\\nr_t = \\sigma (W_r x_t + U_r h_{t-1} + b_r),\n\\end{array}\\right.}$$ (Eq. 13) ",
"where $\\odot $ denotes the Hadamard product, $\\sigma $ and $\\tau $ denote the activation function of sigmod and tanh respectively, and $W_h$ , $U_h$ , $b_h$ , $W_z$ , $U_z$ , $b_z$ , $W_r$ , $\\sigma $0 and $\\sigma $1 are parameters to learn. With the two bidirectional RNNs, one forward hidden state and one backward hidden state are calculated for the sequence, denoted as ( $\\sigma $2 ) and ( $\\sigma $3 ) respectively. They are concatenated as the output of the RNN layer: $\\sigma $4 .",
"We assume different words are differently informative towards the type of the literal. For example, the word “port” is more important than the other words in distinguishing the type Wetland from other concrete types of Place. To this end, an attention layer is further stacked. Given the input from the RNN layer ( $h_t, t \\in \\left[1,T \\right]$ ), the attention layer outputs $h_a = \\left[\\alpha _t h_t \\right], t \\in \\left[1,T \\right]$ , where $\\alpha _t$ is the normalized weight of the word at position $t$ and is calculated as ",
"$${\\left\\lbrace \\begin{array}{ll}\n\\alpha _t = \\frac{exp(u^T_t u_w)}{\\sum _{t \\in \\left[1,T\\right]} exp (u^T_t u_w)} \\\\\nu_t = \\tau (W_w h_t + b_w),\n\\end{array}\\right.}$$ (Eq. 14) ",
"where $u_w$ , $W_w$ and $b_w$ are parameters to learn. Specifically, $u_w$ denotes the general informative degrees of all the words, while $\\alpha _t$ denotes the attention of the word at position $t$ w.r.t. other words in the sequence. Note that the attention weights can also be utilized to justify a prediction. In order to exploit information about the location of a word in the subject, property or literal, we do not calculate the weighted sum of the BiRNN output but concatenate the weighted vectors. The dimension of each RNN hidden state (i.e., $\\overleftarrow{h_t}$ and $\\overrightarrow{h_t}$ ), denoted as $d_r$ , and the dimension of each attention layer output (i.e., $\\alpha _t h_t$ ), denoted as $W_w$0 , are two hyper parameters of the network architecture.",
"A fully connected (FC) layer and a logistic regression layer are finally stacked for modeling the nonlinear relationship and calculating the output score respectively: ",
"$$ \nf(s, p, l) = \\sigma (W_f h_a + b_f),$$ (Eq. 15) ",
"where $W_f$ and $b_f$ are the parameters to learn, $\\sigma $ denotes the sigmod function, and $f$ denotes the function of the whole network."
],
[
"We first extract both particular samples and general samples from the KB using SPARQL queries and reasoning; we then improve sample quality by detecting and repairing wrong and missing entity classifications with the help of external KBs; and finally we train the classifiers.",
"Particular samples are based on the entities $E_M$ that are lexically matched by the literals. For each literal candidate class $c$ in $C_M$ , its particular samples are generated by:",
"Extracting its positive particular entities: $E_M^c = \\left\\lbrace e | e \\in E_M, e \\text{ is an instance of } c \\right\\rbrace $ ;",
"Generating its positive particular samples as ",
"$$\\mathcal {P}_c^{+} = \\cup _{e \\in E_M^c} \\left\\lbrace \\langle s,p,l \\rangle | s \\in S(p,e), l \\in L(e) \\right\\rbrace ,$$ (Eq. 20) ",
"where $S(p,e)$ denotes the set of entities occurring in the subject position in a triple of the form $\\langle s, p, e\\rangle $ , and $L(e)$ denotes all the labels (text phrases) of the entity $e$ ;",
"Extracting its negative particular entities $E_M^{\\widetilde{c}}$ as those entities in $E_M$ that are instances of some sibling class of $c$ and not instances of $c$ ;",
"Generating its negative particular samples $\\mathcal {P}_c^-$ with $E_M^{\\widetilde{c}}$ using the same approach as for positive samples.",
"Given that the literal matched candidate classes $C_M$ are only a part of all the candidate classes $C_{PM}$ , and that the size of particular samples may be too small to train the neural network, we additionally generate general samples based on common KB entities. For each candidate class $c$ in $C_{PM}$ , all its entities in the KB, denoted as $E^c$ , are extracted and then its positive general samples, denoted as $\\mathcal {G}_c^+$ , are generated from $E^c$ using the same approach as for particular samples. Similarly, entities of the sibling classes of $c$ , denoted as $E^{\\widetilde{c}}$ , are extracted, and general negative samples, denoted as $\\mathcal {G}_c^-$ , are generated from $C_{PM}$0 . As for negative particular entities, we check each entity in $C_{PM}$1 and remove those that are not instances of $C_{PM}$2 .",
"Unlike the particular samples, the positive and negative general samples are balanced. This means that we reduce the size of $\\mathcal {G}_c^+$ and $\\mathcal {G}_c^-$ to the minimum of $\\#(\\mathcal {G}_c^+)$ , $\\#(\\mathcal {G}_c^-)$ and $N_0$ , where $\\#()$ denotes set cardinality, and $N_0$ is a hyper parameter for sampling. Size reduction is implemented via random sampling.",
"Many KBs are quite noisy, with wrong or missing entity classifications. For example, when using the SPARQL endpoint of DBpedia, dbr:Scotland is classified as dbo:MusicalArtist instead of as dbo:Country, while dbr:Afghan appears without a type. We have corrected and complemented the sample generation by combining the outputs of more than one KB. For example, the DBpedia endpoint suggestions are compared against Wikidata and the DBpedia lookup service. Most DBpedia entities are mapped to Wikidata entities whose types are used to validate and complement the suggested types from the DBpedia endpoint. In addition, the lookup service, although incomplete, typically provides very precise types that can also confirm the validity of the DBpedia endpoint types. The validation is performed by identifying if the types suggested by one KB are compatible with those returned by other KBs, that is, if the relevant types belong to the same branch of the hierarchy (e.g., the DBpedia taxonomy). With the new entity classifications, the samples are revised accordingly.",
"We train a binary classifier $f^c$ for each class $c$ in $C_{PM}$ . It is first pre-trained with general samples $\\mathcal {G}_{c}^+ \\cup \\mathcal {G}_{c}^-$ , and then fine tuned with particular samples $\\mathcal {P}_{c}^+ \\cup \\mathcal {P}_{c}^-$ . Pre-training deals with the shortage of particular samples, while fine-tuning bridges the gap between common KB entities and the entities associated with the literals, which is also known as domain adaptation. Given that pre-training is the most time consuming step, but is task agnostic, classifiers for all the classes in a KB could be pre-trained in advance to accelerate a specific literal canonicalization task."
],
[
"In prediction, the binary classifier for class $c$ , denoted as $f^c$ , outputs a score $y_l^c$ indicating the probability that a literal $l$ belongs to class $c$ : $y_l^c = f^c(l)$ , $y_l^c \\in \\left[0,1\\right]$ . With the predicted scores, we adopt two strategies – independent and hierarchical to determine the types. In the independent strategy, the relationship between classes is not considered. A class $c$ is selected as a type of $l$ if its score $y_l^c \\ge \\theta $ , where $f^c$0 is a threshold hyper parameter in $f^c$1 .",
"The hierarchical strategy considers the class hierarchy and the disjointness between sibling classes. We first calculate a hierarchical score for each class with the predicted scores of itself and its descendents: ",
"$$s_l^c = max\\left\\lbrace y_l^{c^{\\prime }} | c^{\\prime } \\sqsubseteq c,\\text{ } c^{\\prime } \\in C_{PM} \\right\\rbrace ,$$ (Eq. 28) ",
"where $\\sqsubseteq $ denotes the subclass relationship between two classes, $C_{PM}$ is the set of candidate classes for $l$ , and $max$ denotes the maximum value of a set. For a candidate class $c^{\\prime }$ in $C_{PM}$ , we denote all disjoint candidate classes as $\\mathcal {D}(C_{PM}, c^{\\prime })$ . They can be defined as sibling classes of both $c^{\\prime }$ and its ancestors, or via logical constraints in the KB. A class $c$ is selected as a type of $l$ if (i) its hierarchical score $C_{PM}$0 , and (ii) it satisfies the following soft exclusion condition: ",
"$$s_l^c - max\\left\\lbrace s_l^{c^{\\prime }} | c^{\\prime } \\in \\mathcal {D}(C_{PM}, c) \\right\\rbrace \\ge \\kappa ,$$ (Eq. 29) ",
"where $\\kappa $ is a relaxation hyper parameter. The exclusion of disjoint classes is hard if $\\kappa $ is set to 0, and relaxed if $\\kappa $ is set to a negative float with a small absolute value e.g., $-0.1$ .",
"Finally, for a given literal $l$ , we return the set of all selected classes as its types $\\mathcal {C}_l$ ."
],
[
"Given a literal $l$ , we use $\\mathcal {C}_l$ to try to identify an associated entity. A set of candidate entities are first retrieved using the lexical index that is built on the entity's name, label, anchor text, etc. Unlike candidate class extraction, here we use the whole text phrase of the literal, and rank the candidate entities according to their lexical similarities. Those entities that are not instances of any classes in $\\mathcal {C}_l$ are then filtered out, and the most similar entity among the remainder is selected as the associated entity for $l$ . If no entities are retrieved, or all the retrieved entities are filtered out, then the literal could be associated with a new entity whose types are those most specific classes in $\\mathcal {C}_l$ . In either case we can improve the quality of our results by checking that the resulting entities would be consistent if added to the KB, and discarding any entity associations that would lead to inconsistency."
],
[
"In the experiments, we adopt a real literal set (R-Lite) and a synthetic literal set (S-Lite) , both of which are extracted from DBpedia. R-Lite is based on the property and literal pairs published by Gunaratna et al. in 2016 BIBREF4 . We refine the data by (i) removing literals that no longer exist in the current version of DBpedia; (ii) extracting new literals from DBpedia for properties whose existing literals were all removed in step (i); (iii) extending each property and literal pair with an associated subject; and (iv) manually adding ground truth types selected from classes defined in the DBpedia Ontology (DBO). To fully evaluate the study with more data, we additionally constructed S-Lite from DBpedia by repeatedly: (i) selecting a DBpedia triple of the form $\\langle s,p,e \\rangle $ , where $e$ is an entity; (ii) replacing $e$ with it's label $l$ to give a triple $\\langle s,p,l \\rangle $ ; (iii) eliminating the entity $e$ from DBpedia; and (iv) adding as ground truth types the DBpedia classes of which $e$ is (implicitly) an instance. More data details are shown in Table 1 .",
"In evaluating the typing performance, Precision, Recall and F1 Score are used. For a literal $l$ , the computed types $\\mathcal {C}_l$ are compared with the ground truths $\\mathcal {C}_l^{gt}$ , and the following micro metrics are calculated: $P_l = {\\# (\\mathcal {C}_l \\cap \\mathcal {C}_l^{gt}) }{\\# (\\mathcal {C}_l)}$ , $R_l = {\\# (\\mathcal {C}_l \\cap \\mathcal {C}_l^{gt} )}{\\# (\\mathcal {C}_l^{gt})}$ , and ${F_1}_l = {(2 \\times P_l \\times R_l)}{(P_l + R_l)}$ . They are then averaged over all the literals as the final Precision, Recall and F1 Score of a literal set. Although F1 Score measures the overall performance with both Precision and Recall considered, it depends on the threshold hyper parameter $\\theta $ as with Precision and Recall. Thus we let $\\theta $ range from 0 to 1 with a step of $0.01$ , and calculate the average of all the F1 Scores (AvgF1@all) and top 5 highest F1 Scores (AvgF1@top5). AvgF1@all measures the overall pattern recognition capability, while AvgF1@top5 is relevant in real applications where we often use a validation data set to find a $\\theta $ setting that is close to the optimum. We also use the highest (top) Precision in evaluating the sample refinement.",
"In evaluating entity matching performance, Precision is measured by manually checking whether the identified entity is correct or not. S-Lite is not used for entity matching evaluation as the corresponding entities for all its literals are assumed to be excluded from the KB. We are not able to measure recall for entity matching as we do not have the ground truths; instead, we have evaluated entity matching with different confidence thresholds and compared the number of correct results.",
"The evaluation includes three aspects. We first compare different settings of the typing framework, analyzing the impacts of sample refinement, fine tuning by particular samples, BiRNN and the attention mechanism. We also compare the independent and hierarchical typing strategies. We then compare the overall typing performance of our framework with (i) Gunaratna et al. BIBREF4 , which matches the literal to both classes and entities; (ii) an entity lookup based method; and (iii) a probabilistic property range estimation method. Finally, we analyze the performance of entity matching with and without the predicted types.",
"The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow."
],
[
"We first evaluate the impact of the neural network architecture, fine tuning and different typing strategies, with their typing results on S-Lite shown in Table 2 and Fig. 3 . Our findings are supported by comparable results on R-Lite. We further evaluate sample refinement, with some statistics of the refinement operations as well as performance improvements shown in Fig. 4 .",
"According to Table 2 , we find BiRNN significantly outperforms Multiple Layer Perceptron (MLP), a basic but widely used neural network model, while stacking an attention layer (AttBiRNN) further improves AvgF1@all and AvgF1@top5, for example by $3.7\\%$ and $3.1\\%$ respectively with hierarchical typing ( $\\kappa $ = $-0.1$ ). The result is consistent for both pre-trained models and fine tuned models, using both independent and hierarchical typing strategies. This indicates the effectiveness of our neural network architecture. Meanwhile, the performance of all the models is significantly improved after they are fine tuned by the particular samples, as expected. For example, when the independent typing strategy is used, AvgF1@all and AvgF1@top5 of AttBiRNN are improved by $54.1\\%$ and $35.2\\%$ respectively.",
"The impact of independent and hierarchical typing strategies is more complex. As shown in Table 2 , when the classifier is weak (e.g., pre-trained BiRNN), hierarchical typing with both hard exclusion ( $\\kappa $ = 0) and relaxed exclusion ( $\\kappa $ = $-0.1$ ) has higher AvgF1@all and AvgF1@top5 than independent typing. However, when a strong classifier (e.g., fine tuned AttBiRNN) is used, AvgF1@all and AvgF1@top5 of hierarchical typing with relaxed exclusion are close to independent typing, while hierarchical typing with hard exclusion has worse performance. We further analyze Precision, Recall and F1 Score of both typing strategies under varying threshold ( $\\theta $ ) values, as shown in Fig. 3 . In comparison with independent typing, hierarchical typing achieves (i) more stable Precision, Recall and F1 Score curves; and (ii) significantly higher Precision, especially when $\\theta $ is small. Meanwhile, as with the results in Table 2 , relaxed exclusion outperforms hard exclusion in hierarchical typing except for Precision when $\\theta $ is between 0 and $0.05$ .",
"Fig. 4 [Right] shows the ratio of positive and negative particular samples that are deleted and added during sample refinement. The AttBiRNN classifiers fine tuned by the refined particular samples are compared with those fine tuned by the original particular samples. The improvements on AvgF1@all, AvgF1@top5 and top Precision, which are based on the average of the three above typing settings, are shown in Fig. 4 [Left]. On the one hand, we find sample refinement benefits both S-Lite and R-Lite, as expected. On the other hand, we find the improvement on S-Lite is limited, while the improvement on R-Lite is quite significant: F1@all and top Precision, e.g., are improved by around $0.8\\%$ and $1.8\\%$ respectively on S-Lite, but $4.3\\%$ and $7.4\\%$ respectively on R-Lite. This may be due to two factors: (i) the ground truths of S-Lite are the entities' class and super classes inferred from the KB itself, while the ground truths of R-Lite are manually labeled; (ii) sample refinement deletes many more noisy positive and negative samples (which are caused by wrong entity classifications of the KB) on R-Lite than on S-Lite, as shown in Fig. 4 [Right]."
],
[
"Table 3 displays the overall semantic typing performance of our method and the baselines. Results for two optimum settings are reported for each method. The baseline Entity-Lookup retrieves one or several entities using the whole phrase of the literal, and uses their classes and super classes as the types. Gunaratna BIBREF4 matches the literal's focus term (head word) to an exact class, then an exact entity, and then a class with the highest similarity score. It stops as soon as some classes or entities are matched. We extend its original “exact entity match\" setting with “relaxed entity match\" which means multiple entities are retrieved. Property Range Estimation gets the classes and super classes from the entity objects of the property, and calculates the score of each class as the ratio of entity objects that belong to that class. (H/I, $\\kappa $ , $\\cdot $ )@top-P (F1) denotes the setting where the highest Precision (F1 Score) is achieved.",
"As we can see, AttBiRNN achieves much higher performance than all three baselines on both S-Lite and R-Lite. For example, the F1 Score of AttBiRNN is $67.6\\%$ , $160.2\\%$ and $13.8\\%$ higher than those of Gunaratna, Entity-Lookup and Property Range Estimation respectively on S-Lite, and $28.5\\%$ , $58.3\\%$ and $37.9\\%$ higher respectively on R-Lite. AttBiRNN also has significantly higher Precision and Recall, even when the setting is adjusted for the highest F1 Score. This is as expected, because our neural network, which learns the semantics (statistical correlation) from both word vector corpus and KB, models and utilizes the contextual meaning of the literal and its associated triple, while Gunaratna and Entity-Lookup are mostly based on lexical similarity. The performance of Property Range Estimation is limited because the object annotation in DBpedia usually does not follow the property range, especially for those properties in R-Lite. For example, objects of the property dbp:office have 35 DBO classes, ranging from dbo:City and dbo:Country to dbo:Company.",
"It is also notable that AttBiRNN and Property Range Estimation perform better on S-Lite than on R-Lite. The top F1 Score is $20.7\\%$ and $46.2\\%$ higher respectively, while the top Precision is $11.4\\%$ and $43.6\\%$ higher respectively. This is because R-Lite is more noisy, with longer literals, and has more ground truth types on average (cf. Table 1 ), while S-Lite has fewer properties, and each property has a large number of entity objects, which significantly benefits Property Range Estimation. In contrast, the two entity matching based methods, Gunaratna and Entity-Lookup, perform worse on S-Lite than on R-Lite; this is because the construction of S-Lite removes those KB entities from which literals were derived. Gunaratna outperforms Entity-Lookup as it extracts the head word and matches it to both entities and classes. Note that the head word is also included in our candidate class extraction with lookup."
],
[
"Table 4 displays the number of correct matched entities and the Precision of entity matching on R-Lite. The types are predicted by the fine-tuned AttBiRNN with independent typing and two threshold settings. We can see that Precision is improved when the retrieved entities that do not belong to any of the predicted types are filtered out. The improvement is $6.1\\%$ and $5.8\\%$ when $\\theta $ is set to $0.15$ and $0.01$ respectively. Meanwhile, although the total number of matches may decrease because of the filtering, the number of correct matches still increases from 396 to 404 ( $\\theta =0.01$ ). This means that Recall is also improved."
],
[
"Work on KB quality issues can can be divided into KB quality assessment BIBREF2 , BIBREF1 , and KB quality improvement/refinement BIBREF3 . The former includes error and anomaly detection methods, such as test-driven and query template based approaches BIBREF19 , BIBREF20 , with statistical methods BIBREF21 and consistency reasoning BIBREF22 also being applied to assess KB quality with different kinds of metric. The latter includes (i) KB completion, such as entity classification BIBREF7 , BIBREF8 , BIBREF9 , relation prediction BIBREF23 and data typing BIBREF15 ; and (ii) KB diagnosis and repair, such as abnormal value detection BIBREF20 , erroneous identity link detection BIBREF24 and data mapping (e.g., links to Wikipedia pages) correction BIBREF25 .",
"KB canonicalization refers to those refinement works that deal with redundant and ambiguous KB components as well as poorly expressed knowledge with limited reasoning potential. Some works in open information extraction (IE) BIBREF26 , BIBREF27 , BIBREF28 aim to identify synonymous noun phrases and relation phrases of open KBs which are composed of triple assertions extracted from text without any ontologies. For example, the recently proposed CESI method BIBREF27 utilizes both learned KB embeddings and side information like WordNet to find synonyms via clustering. Other works analyze synonyms for ontological KBs. Abedjan et al. BIBREF29 discovered synonymously used predicates for query expansion on DBpedia. Pujara et al. BIBREF30 identified coreferent entities of NELL with ontological constraints considered. These clustering, embedding, or entity linking based methods in open IE however can not be directly applied or do not work well for our KB literal canonicalization. The utilization of these techniques will be in our future work. ",
"String literals in ontological KBs such as DBpedia often represent poorly expressed knowledge, with semantic types and coreferent entities missed. As far as we known, canonicalization of such literals has been little studied. Gunaratna et al. BIBREF4 typed the literal by matching its head term to ontology classes and KB entities, but the literal context (e.g., the associated subject and property) and semantic meaning of the composition words were not utilized. Some ideas of entity classification can be borrowed for literal typing but will become ineffective as the context differs. For example, the baseline Property Range Estimation in our experiments uses the idea of SDType BIBREF8 — utilizing the statistical distribution of types in the subject position and object position of properties to estimate an entity's type probabilities. As a literal is associated with only one property, such probabilistic estimation becomes inaccurate (cf. results in Table 3 ).",
"Our literal classification model is in some degree inspired by those natural language understanding and web table annotation works that match external noun phrases to KB types and entities BIBREF14 , BIBREF10 , BIBREF12 using neural networks and semantic embeddings for modeling the contextual semantics. For example, Luo et al. BIBREF10 learned features from the surrounding cells of a target cell to predict its entity association. However the context in those works is very different, i.e., a simple regular structure of rows/columns with limited (table) metadata. In contrast, KBs have a complex irregular structure and rich meta data (the knowledge captured in the KB). Differently from these works, we developed different methods, e.g., candidate class extraction and high quality sampling, to learn the network from the KB with its assertions, terminologies and reasoning capability."
],
[
"In this paper we present our study on KB literal canonicalization — an important problem on KB quality that has been little studied. A new technical framework is proposed with neural network and knowledge-based learning. It (i) extracts candidate classes as well as their positive and negative samples from the KB by lookup and query answering, with their quality improved using an external KB; (ii) trains classifiers that can effectively learn a literal's contextual features with BiRNNs and an attention mechanism; (iii) identifies types and matches entity for canonicalization. We use a real data set and a synthetic data set, both extracted from DBpedia, for evaluation. It achieves much higher performance than the baselines that include the state-of-the-art. We discuss below some more subjective observations and possible directions for future work."
],
[
"The work is supported by the AIDA project (U.K. Government's Defence & Security Programme in support of the Alan Turing Institute), the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), the Royal Society, EPSRC projects DBOnto, $\\text{MaSI}^{\\text{3}}$ and $\\text{ED}^{\\text{3}}$ ."
]
],
"section_name": [
"Introduction",
"Problem Statement",
"Technical Framework",
"Prediction Model",
"Sampling and Training",
"Independent and Hierarchical Typing",
"Canonicalization",
"Experiment Setting",
"Results on Framework Settings",
"Results on Semantic Typing",
"Results on Entity Matching",
"Related Work",
"Discussion and Outlook",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8fd4d9b44dfc63927d5300006a970cb8bb7a66ff"
],
"answer": [
{
"evidence": [
"The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow."
],
"extractive_spans": [
"SPARQL"
],
"free_form_answer": "",
"highlighted_evidence": [
"The DBpedia SPARQL endpoint is used for query answering and reasoning."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"9f62e8e4f9ec022102a21400fc690c823b6aa28f"
],
"answer": [
{
"evidence": [
"In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art."
],
"extractive_spans": [
"DBpedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"beefd1690fc4f586b9bd460368439ac0dc5a67f1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.",
"FLOAT SELECTED: Table 4. Overall performance of entity matching on R-Lite with and without type constraint."
],
"extractive_spans": [],
"free_form_answer": "0.8320 on semantic typing, 0.7194 on entity matching",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.",
"FLOAT SELECTED: Table 4. Overall performance of entity matching on R-Lite with and without type constraint."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the reasoning method that is used?",
"What KB is used in this work?",
"What's the precision of the system?"
],
"question_id": [
"b6ffa18d49e188c454188669987b0a4807ca3018",
"2b61893b22ac190c94c2cb129e86086888347079",
"a996b6aee9be88a3db3f4127f9f77a18ed10caba"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. The technical framework for KB literal canonicalization.",
"Fig. 2. The architecture of the neural network.",
"Table 1. Statistics of S-Lite and R-Lite.",
"Table 2. Typing performance of our framework on S-Lite under different settings.",
"Fig. 3. (P)recision, (R)ecall and (F1) Score of independent (I) and hierarchical (H) typing for S-Lite, with the scores predicted by the fine tuned AttBiRNN.",
"Fig. 4. [Left] Performance improvement (%) by sample refinement; [Right] Ratio (%) of added (deleted) positive (negative) particular sample per classifier during sample refinement.",
"Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.",
"Table 4. Overall performance of entity matching on R-Lite with and without type constraint."
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"9-Table1-1.png",
"10-Table2-1.png",
"11-Figure3-1.png",
"12-Figure4-1.png",
"12-Table3-1.png",
"13-Table4-1.png"
]
} | [
"What's the precision of the system?"
] | [
[
"1906.11180-13-Table4-1.png",
"1906.11180-12-Table3-1.png"
]
] | [
"0.8320 on semantic typing, 0.7194 on entity matching"
] | 735 |
1911.03681 | BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA | The BERT language model (LM) (Devlin et al., 2019) is surprisingly good at answering cloze-style questions about relational facts. Petroni et al. (2019) take this as evidence that BERT memorizes factual knowledge during pre-training. We take issue with this interpretation and argue that the performance of BERT is partly due to reasoning about (the surface form of) entity names, e.g., guessing that a person with an Italian-sounding name speaks Italian. More specifically, we show that BERT's precision drops dramatically when we filter certain easy-to-guess facts. As a remedy, we propose E-BERT, an extension of BERT that replaces entity mentions with symbolic entity embeddings. E-BERT outperforms both BERT and ERNIE (Zhang et al., 2019) on hard-to-guess queries. We take this as evidence that E-BERT is richer in factual knowledge, and we show two ways of ensembling BERT and E-BERT. | {
"paragraphs": [
[
"Imagine that you have a friend who claims to know a lot of trivia. During a quiz, you ask them about the native language of actor Jean Marais. They correctly answer French. For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (another French actor, but with an Italian-sounding name). This time your friend says “Italian, I guess.” If this were a Question Answering (QA) benchmark, your friend would have achieved a respectable accuracy of 50%. Yet, their performance does not indicate factual knowledge about the native languages of actors. Rather, it shows that they are able to reason about the likely origins of peoples' names (see Table TABREF1 for more examples).",
"BIBREF1 argue that the unsupervised BERT LM BIBREF0 memorizes factual knowledge about entities and relations. They base this statement on the unsupervised QA benchmark LAMA (§SECREF2), where BERT rivals a knowledge base (KB) built by relation extraction. They suggest that BERT and similar LMs could become a “viable alternative to traditional knowledge bases extracted from text”. We argue that the impressive performance of BERT is partly due to reasoning about (the surface form of) entity names. In §SECREF4, we construct LAMA-UHN (UnHelpful Names), a more “factual” subset of LAMA-Google-RE and LAMA-T-REx, by filtering out queries that are easy to answer from entity names alone. We show that the performance of BERT decreases dramatically on LAMA-UHN.",
"In §SECREF3, we propose E-BERT, a simple mapping-based extension of BERT that replaces entity mentions with wikipedia2vec entity embeddings BIBREF3. In §SECREF4, we show that E-BERT rivals BERT and the recently proposed entity-enhanced ERNIE model BIBREF2 on LAMA. E-BERT has a substantial lead over both baselines on LAMA-UHN; furthermore, ensembles of E-BERT and BERT outperform all baselines on original LAMA."
],
[
"The LAMA (LAnguage Model Analysis) benchmark BIBREF1 is supposed to probe for “factual and commonsense knowledge” inherent in LMs. In this paper, we focus on LAMA-Google-RE and LAMA-T-REx BIBREF5, which are aimed at factual knowledge. Contrary to most previous works on QA, LAMA tests LMs as-is, without supervised finetuning.",
"The LAMA probing task follows this schema: Given a KB triple of the form (S, R, O), the object is elicited with a relation-specific cloze-style question, e.g., (Jean_Marais, native-language, French) becomes: “The native language of Jean Marais is [MASK].” The LM predicts a distribution over a limited vocabulary to replace [MASK], which is evaluated against the known gold answer."
],
[
"It is often possible to guess properties of an entity from its name, with zero factual knowledge of the entity itself. This is because entities are often named according to implicit or explicit rules (e.g., the cultural norms involved in naming a child, copyright laws for industrial products, or simply a practical need for descriptive names). LAMA makes guessing even easier by its limited vocabulary, which may only contain a few candidates for a particular entity type.",
"We argue that a QA benchmark that does not control for entity names does not assess whether an LM is good at reasoning about names, good at memorizing facts, or both. In this Section, we describe the creation of LAMA-UHN (UnHelpfulNames), a subset of LAMA-Google-RE and LAMA-T-REx.",
"Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).",
"Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge.",
"We query BERT for answers to “[X] is a common name in the following language: [MASK].” for both [X] = Jean and [X] = Marais. If the correct answer is among the top-3 for either query, we delete the triple. We apply this filter to Google-RE:place-of-birth, Google-RE:place-of-death, T-REx:P19 (place of birth), T-REx:P20 (place of death), T-REx:P27 (nationality), T-REx:P103 (native language) and T-REx:P1412 (language used). See Appendix for statistics. Depending on the relation, we replace “language” with “city” or “country” in the template.",
"Figure FIGREF5 (blue bars) shows that BERT is strongly affected by filtering, with a drop of 5%–10% mean P@1 from original LAMA to LAMA-UHN. This suggests that BERT does well on LAMA partly because it reasons about (the surface form of) entity names. Of course, name-based reasoning is a useful ability in its own right; however, conflating it with factual knowledge may be misleading."
],
[
"BERT BIBREF0 is a deep bidirectional transformer encoder BIBREF6 pretrained on unlabeled text. It segments text into subword tokens from a vocabulary $\\mathbb {L}_b$. During training, some tokens are masked by a special [MASK] token. Tokens are embedded into real-valued vectors by an embedding function $\\mathcal {E}_\\mathcal {B} : \\mathbb {L}_b \\rightarrow \\mathbb {R}^{d_\\mathcal {B}}$. The embedded tokens are contextualized by the BERT encoder $\\mathcal {B}$ and the output of $\\mathcal {B}$ is fed into a function $\\mathcal {M}_\\mathcal {B}: \\mathbb {R}^{d_\\mathcal {B}} \\rightarrow \\mathbb {L}_b$ that predicts the identity of masked tokens. BERT can thus be used as an LM."
],
[
"Wikipedia2vec BIBREF3 embeds words and wikipedia pages ($\\approx $ entities) in a common space. It learns an embedding function for a vocabulary of words $\\mathbb {L}_w$ and a set of entities $\\mathbb {L}_e$. We denote this function as $\\mathcal {F}: \\mathbb {L}_w \\cup \\mathbb {L}_e \\rightarrow \\mathbb {R}^{d_\\mathcal {F}}$. The wikipedia2vec loss has three components: (a) skipgram word2vec BIBREF7 operating on $\\mathbb {L}_w$ (b) a graph loss on the wikipedia link graph on $\\mathbb {L}_e$ (c) a version of word2vec where words are predicted from entity mentions. Loss (c) ensures that word and entity embeddings share a space. Figure FIGREF5 (black horizontal bars) shows that loss (b) is vital for our use case."
],
[
"We want to transform the output space of $\\mathcal {F}$ in such a way that $\\mathcal {B}$ is fooled into accepting entity embeddings in lieu of its native subword embeddings. We approximate this goal by minimizing the squared distance of transformed wikipedia2vec word vectors and BERT subword vectors:",
"where $\\mathcal {W}$ is a linear projection obtained by least squares. Since $\\mathcal {F}$ embeds $\\mathbb {L}_w$ and $\\mathbb {L}_e$ into the same space, $\\mathcal {W}$ is applicable to members of $\\mathbb {L}_e$, even though it was learned on members of $\\mathbb {L}_w$.",
"Recall that BERT segments text into subwords, e.g., our previous example is tokenized as: The native language of Jean Mara ##is is [MASK] .",
"E-BERT replaces the subwords that correspond to the entity mention with the symbolic entity: The native language of Jean_Marais is [MASK] .",
"The entity (truetype) is embedded by $\\mathcal {W} \\circ \\mathcal {F}$, while other tokens (italics) continue to be embedded by $\\mathcal {E}_\\mathcal {B}$. The altered embedding sequence is fed into $\\mathcal {B}$, where it is treated like any other embedding sequence. Neither $\\mathcal {B}$ nor $\\mathcal {M}_\\mathcal {B}$ are changed.",
"We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is."
],
[
"We train cased wikipedia2vec on a recent wikipedia dump (2019-09-02), setting $d_\\mathcal {F} = d_\\mathcal {B}$. To learn $\\mathcal {W}$, we intersect the wikipedia2vec word vocabulary with the cased BERT vocabulary.",
"Our primary baselines are BERT$_\\mathrm {base}$ and BERT$_\\mathrm {large}$ as evaluated in BIBREF1. We also test ERNIE BIBREF2, a BERT$_\\mathrm {base}$ type model that uses wikidata TransE entity embeddings BIBREF8 as additional input. ERNIE has two transformers, one for tokens and one for entities, which are fused by a trainable feed-forward module. To accommodate the new parameters, ERNIE is pre-trained with (a) standard BERT loss and (b) predicting Wikipedia entities.",
"Note that wikipedia2vec and TransE have low coverage on LAMA-Google-RE (wikipedia2vec: 54%, TransE: 71%). When an entity embedding is missing, we fall back onto original BERT. Coverage of LAMA-T-REx is $>98$% for both systems."
],
[
"In keeping with BIBREF1, we report P@k macro-averaged over relations. Macro-averaging ensures that every relation has the same impact on the metric before and after filtering.",
"Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples.",
"Figure FIGREF15 plots deltas in mean P@1 on unfiltered LAMA-T-REx relations relative to BERT, along with the frequency of tuples whose object entity name is a substring of the subject entity name – i.e., the ratio of queries that would be deleted by the string match filter. We see that E-BERT losses relative to BERT (negative red bars) are mostly on relations with a high percentage of trivial substring answers. By contrast, E-BERT typically outperforms BERT on relations where such trivial answers are rare. The ensembles are able to mitigate the losses of E-BERT on almost all relations, while keeping most of its gains (purple and orange bars). This suggests that they successfully combine BERT's ability to reason about entity names with E-BERT's enhanced factual knowledge.",
"Figure FIGREF17 shows that the lead of E-BERT and the ensembles over BERT and ERNIE in terms of mean P@k is especially salient for bigger k."
],
[
"We also evaluate on the FewRel relation classification dataset BIBREF9, using the setup and data split from zhang2019ernie (see Appendix for details). Table TABREF19 shows that E-BERT beats BERT, and the ensembles perform comparable to ERNIE despite not having a dedicated entity encoder."
],
[
"Factual QA is typically tackled as a supervised problem (e.g., BIBREF10, BIBREF11). In contrast, LAMA BIBREF1 tests for knowledge learned by LMs without supervision; similar experiments were performed by BIBREF12. Their experiments do not differentiate between factual knowledge of LMs and their ability to reason about entity names.",
"The E-BERT embedding mapping strategy is inspired by cross-lingual embedding mapping on identical strings BIBREF13. A similar method was recently applied by BIBREF14 to map cross-lingual FastText subword vectors BIBREF15 into the multilingual BERT subword embedding space. BIBREF16 mimick BERT subword embeddings for rare English words from their contexts and form.",
"Other contextualized models that incorporate entity embeddings are ERNIE BIBREF2 (see §SECREF4) and KnowBert BIBREF17. KnowBert is contemporaneous to our work, and at the time of writing, the model was not available for comparison.",
"Both ERNIE and KnowBert add new parameters to the BERT architecture, which must be integrated by additional pretraining. By contrast, E-BERT works with the unchanged BERT model, and $\\mathcal {W}$ has an efficient closed-form solution. This means that we can update E-BERT to the newest wikipedia dump at little computational cost – the most expensive operation would be training wikipedia2vec, which takes a few hours on CPUs."
],
[
"We have presented evidence that the surprising performance of BERT on the recently published LAMA QA benchmark is partly due to reasoning about entity names rather than factual knowledge. We have constructed more “factual” subsets of LAMA-Google-RE and LAMA-T-REx by filtering out easy-to-guess queries. The resulting benchmark, LAMA-UHN, is more difficult for BERT.",
"As a remedy, we proposed E-BERT, a simple extension of BERT that injects wikipedia2vec entity embeddings into BERT. E-BERT outperforms BERT and ERNIE on LAMA-UHN, which we take as evidence that E-BERT is richer in factual knowledge. Additionally, ensembling yields improvements over both BERT and E-BERT on unfiltered LAMA and on the FewRel relation classification dataset."
],
[
"We use the sentence classification setup from BIBREF2. We mark subjects and objects with the symbols # and $, i.e., the inputs to BERT, E-BERT and the CONCAT ensemble look as follows:",
"[CLS] $ Tang ##ier $ ' s # Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]",
"[CLS] $ Tangier $ ' s # Tangier_Ibn_Battouta_Airport # is the busiest airport in the region . [SEP]",
"[CLS] $ Tangier / Tang ##ier $ ' s # Tangier_Ibn_Battouta_Airport / Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]",
"where entities (in truetype) are embedded by $\\mathcal {W} \\circ \\mathcal {F}$ and all other tokens (in italics) are embedded by $\\mathcal {E}_\\mathcal {B}$. Note that entity IDs are provided by FewRel. If we lack an entity embedding, we fall back onto the standard BERT segmentation.",
"To predict the relation, we feed the contextualized embedding of the [CLS] token into a linear classifier. During training we finetune all network parameters except for the embeddings. For hyperparameter tuning, we use the ranges from BIBREF2 except for the number of epochs, which we fix at 10. The AVG ensemble averages over BERT's and E-BERT's output distributions. Experiments were run on two GeForce GTX 1080 Ti GPUs with data-parallel training."
],
[
"The cased BERT vocabulary is a superset of the LAMA vocabulary. This ensures that BERT can in principle answer all LAMA queries correctly. The uncased ERNIE vocabulary does not have this property. For ERNIE, we therefore lowercase all queries and restrict the model output to the intersection of its vocabulary with the lowercased LAMA vocabulary. As a result, ERNIE selects an answer from $\\sim $18K candidates (instead of the standard $\\sim $21K), which should work in its favor. We verify that all lowercased object names from LAMA-T-REx and LAMA-Google-RE appear in ERNIE's vocabulary, i.e., ERNIE is in principle able to answer all lowercased queries correctly."
]
],
"section_name": [
"Introduction",
"LAMA",
"LAMA ::: LAMA-UHN",
"E-BERT ::: BERT.",
"E-BERT ::: Wikipedia2vec.",
"E-BERT ::: E-BERT.",
"Experiments ::: Systems.",
"Experiments ::: LAMA.",
"Experiments ::: FewRel.",
"Related work",
"Conclusion",
"FewRel training",
"A note on casing"
]
} | {
"answers": [
{
"annotation_id": [
"a60c505fdef5a59ae9cf1991c04f27ef5f9c87ef"
],
"answer": [
{
"evidence": [
"Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 2) CONCAT ensemble",
"highlighted_evidence": [
"Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"90311cc2ae2acf4aebd8ba2f7856780efd8c429c"
],
"answer": [
{
"evidence": [
"We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is."
],
"extractive_spans": [
"mean-pooling their outputs (AVG)",
"concatenating the entity and its name with a slash symbol (CONCAT)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d5b47b3dc8166aba64f48bebe2f0711cc441792a"
],
"answer": [
{
"evidence": [
"Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).",
"Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge."
],
"extractive_spans": [
" filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch)",
"person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them"
],
"free_form_answer": "",
"highlighted_evidence": [
"Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch).",
"Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which of the two ensembles yields the best performance?",
"What are the two ways of ensembling BERT and E-BERT?",
"How is it determined that a fact is easy-to-guess?"
],
"question_id": [
"83f14af3ccca4ab9deb4c6d208f624d1e79dc7eb",
"0154d8be772193bfd70194110f125813057413a4",
"e737cfe0f6cfc6d3ac6bec32231d9c893bfc3fc9"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"italian",
"italian",
"italian"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [],
"file": []
} | [
"Which of the two ensembles yields the best performance?"
] | [
[
"1911.03681-Experiments ::: LAMA.-1"
]
] | [
"Answer with content missing: (Table 2) CONCAT ensemble"
] | 737 |
1706.09147 | Named Entity Disambiguation for Noisy Text | We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset. | {
"paragraphs": [
[
"Named Entity Disambiguation (NED) is the task of linking mentions of entities in text to a given knowledge base, such as Freebase or Wikipedia. NED is a key component in Entity Linking (EL) systems, focusing on the disambiguation task itself, independently from the tasks of Named Entity Recognition (detecting mention bounds) and Candidate Generation (retrieving the set of potential candidate entities). NED has been recognized as an important component in NLP tasks such as semantic parsing BIBREF0 .",
"Current research on NED is mostly driven by a number of standard datasets, such as CoNLL-YAGO BIBREF1 , TAC KBP BIBREF2 and ACE BIBREF3 . These datasets are based on news corpora and Wikipedia, which are naturally coherent, well-structured, and rich in context. Global disambiguation models BIBREF4 , BIBREF5 , BIBREF6 leverage this coherency by jointly disambiguating all the mentions in a single document. However, domains such as web-page fragments, social media, or search queries, are often short, noisy, and less coherent; such domains lack the necessary contextual information for global methods to pay off, and present a more challenging setting in general.",
"In this work, we investigate the task of NED in a setting where only local and noisy context is available. In particular, we create a dataset of 3.2M short text fragments extracted from web pages, each containing a mention of a named entity. Our dataset is far larger than previously collected datasets, and contains 18K unique mentions linking to over 100K unique entities. We have empirically found it to be noisier and more challenging than existing datasets. For example:",
"“I had no choice but to experiment with other indoor games. I was born in Atlantic City so the obvious next choice was Monopoly. I played until I became a successful Captain of Industry.”",
"This short fragment is considerably less structured and with a more personal tone than a typical news article. It references the entity Monopoly_(Game), however expressions such as “experiment” and “Industry” can distract a naive disambiguation model because they are also related the much more common entity Monopoly (economics term). Some sense of local semantics must be considered in order to separate the useful signals (e.g. “indoor games”, “played”) from the noisy ones.",
"We therefore propose a new model that leverages local contextual information to disambiguate entities. Our neural approach (based on RNNs with attention) leverages the vast amount of training data in WikilinksNED to learn representations for entity and context, allowing it to extract signals from noisy and unexpected context patterns.",
"While convolutional neural networks BIBREF7 , BIBREF8 and probabilistic attention BIBREF9 have been applied to the task, this is the first model to use RNNs and a neural attention model for NED. RNNs account for the sequential nature of textual context while the attention model is applied to reduce the impact of noise in the text.",
"Our experiments show that our model significantly outperforms existing state-of-the-art NED algorithms on WikilinksNED, suggesting that RNNs with attention are able to model short and noisy context better than current approaches. In addition, we evaluate our algorithm on CoNLL-YAGO BIBREF1 , a dataset of annotated news articles. We use a simple domain adaptation technique since CoNLL-YAGO lacks a large enough training set for our model, and achieve comparable results to other state-of-the-art methods. These experiments highlight the difference between the two datasets, indicating that our NED benchmark is substantially more challenging.",
"Code and data used for our experiments can be found at https://github.com/yotam-happy/NEDforNoisyText"
],
[
"We introduce WikilinksNED, a large-scale NED dataset based on text fragments from the web. Our dataset is derived from the Wikilinks corpus BIBREF14 , which was constructed by crawling the web and collecting hyperlinks (mentions) linking to Wikipedia concepts (entities) and their surrounding text (context). Wikilinks contains 40 million mentions covering 3 million entities, collected from over 10 million web pages.",
"Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts. The contextual noise presents an interesting test-case that supplements existing datasets that are sourced from mostly coherent and well-formed text.",
"To get a sense of textual noise we have set up a small experiment where we measure the similarity between entities mentioned in WikilinksNED and their surrounding context, and compare the results to CoNLL-YAGO. We use state-of-the-art word and entity embeddings obtained from yamada2016joint and compute cosine similarity between embeddings of the correct entity assignment and the mean of context words. We compare results from all mentions in CoNLL-YAGO to a sample of 50000 web fragments taken from WikilinksNED, using a window of words of size 40 around entity mentions. We find that similarity between context and correct entity is indeed lower for web mentions ( $0.163$ ) than for CoNLL-YAGO mentions ( $0.188$ ), and find this result to be statistically significant with very high probability ( $p<10^{-5}$ ) . This result indicates that web fragments in WikilinksNED are indeed noisier compared to CoNLL-YAGO documents.",
"We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions).",
"We collect all pairs of mention $m$ and entity $e$ appearing in the dataset, and compute the number of times $m$ refers to $e$ ( $\\#(m,e)$ ), as well as the conditional probability of $e$ given $m$ : $P(e|m)=\\#(m,e)/\\sum _{e^{\\prime }}\\#(m,e^{\\prime })$ . Examining these distributions reveals many mentions belong to two extremes – either they have very little ambiguity, or they appear in the dataset only a handful of times and refer to different entities only a couple of times each. We deem the former to be less interesting for the purpose of NED, and suspect the latter to be noise with high probability. To filter these cases, we keep only mentions for which at least two different entities have 10 mentions each ( $\\#(m,e) \\ge 10$ ) and consist of at least 10% of occurrences ( $P(e|m) \\ge 0.1$ ). This procedure aggressively filters our dataset and we are left with $e$0 mentions.",
"Finally, we randomly split the data into train (80%), validation (10%), and test (10%), according to website domains in order to minimize lexical memorization BIBREF18 ."
],
[
"Our DNN model is a discriminative model which takes a pair of local context and candidate entity, and outputs a probability-like score for the candidate entity being correct. Both words and entities are represented using embedding dictionaries and we interpret local context as a window-of-words to the left and right of a mention. The left and right contexts are fed into a duo of Attention-RNN (ARNN) components which process each side and produce a fixed length vector representation. The resulting vectors are concatenated and along with the entity embedding are and then fed into a classifier network with two output units that are trained to emit a probability-like score of the candidate being a correct or corrupt assignment."
],
[
"Figure 1 illustrates the main components of our architecture: an embedding layer, a duo of ARNNs, each processing one side of the context (left and right), and a classifier.",
"The embedding layer first embeds both the entity and the context words as vectors (300 dimensions each).",
"The ARNN unit is composed from an RNN and an attention mechanism. Equation 10 represents the general semantics of an RNN unit. An RNN reads a sequence of vectors $\\lbrace v_t\\rbrace $ and maintains a hidden state vector $\\lbrace h_t\\rbrace $ . At each step a new hidden state is computed based on the previous hidden state and the next input vector using some function $f$ , and an output is computed using $g$ . This allows the RNN to “remember” important signals while scanning the context and to recognize signals spanning multiple words. ",
"$$\\begin{aligned}\n& h_t=f_{\\Theta _1}(h_{t-1}, v_t) \\\\\n& o_t=g_{\\Theta _2}(h_t)\n\\end{aligned}$$ (Eq. 10) ",
"Our implementation uses a standard GRU unit BIBREF19 as an RNN. We fit the RNN unit with an additional attention mechanism, commonly used with state-of-the-art encoder-decoder models BIBREF20 , BIBREF21 . Since our model lacks a decoder, we use the entity embedding as a control signal for the attention mechanism.",
"Equation 11 details the equations governing the attention model. ",
"$$\\begin{aligned}\n& a_t \\in \\mathbb {R}; a_t=r_{\\Theta _3}(o_t, v_{candidate}) \\\\\n& a^{\\prime }_t = \\frac{1}{\\sum _{i=1}^{t} \\exp \\lbrace a_i\\rbrace } \\exp \\lbrace a_t\\rbrace \\\\\n& o_{attn}=\\sum _{t} a^{\\prime }_t o_t\n\\end{aligned}$$ (Eq. 11) ",
"The function $r$ computes an attention value at each step, using the RNN output $o_t$ and the candidate entity $v_{candidate}$ . The final output vector $o_{attn}$ is a fixed-size vector, which is the sum of all the output vectors of the RNN weighted according to the attention values. This allows the attention mechanism to decide on the importance of different context parts when examining a specific candidate. We follow bahdanau2014neural and parametrize the attention function $r$ as a single layer NN as shown in equation 12 . ",
"$$r_{\\Theta _3}(o_t, v_{candidate}) = Ao_t + Bv_{candidate} + b \\\\$$ (Eq. 12) ",
"The classifier network consists of a hidden layer and an output layer with two output units in a softmax. The output units are trained by optimizing a cross-entropy loss function."
],
[
"We assume our model is only given training examples for correct entity assignments and therefore use corrupt-sampling, where we automatically generate examples of wrong assignments. For each context-entity pair $(c,e)$ , where $e$ is the correct assignment for $c$ , we produce $k$ corrupt examples with the same context $c$ but with a different, corrupt entity $e^{\\prime }$ . We considered two alternatives for corrupt sampling and provide an empirical comparison of the two approaches (Section \"Evaluation\" ):",
"Near-Misses: Sampling out of the candidate set of each mention. We have found this to be more effective where the training data reliably reflects the test-set distribution.",
"All-Entity: Sampling from the entire dictionary of entities. Better suited to cases where the training data or candidate generation does not reflect the test-set well. Has an added benefit of allowing us to utilize unambiguous training examples where only a single candidate is found.",
"We sample corrupt examples uniformly in both alternatives since with uniform sampling the ratio between the number of positive and negative examples of an entity is higher for popular entities, thus biasing the network towards popular entities. In the All-Entity case, this ratio is approximately proportional to the prior probability of the entity.",
"We note that preliminary experiments revealed that corrupt-sampling according to the distribution of entities in the dataset (as is done by Mikolov at el. mikolov2013distributed), rather than uniform sampling, did not perform well in our settings due to the lack of biasing toward popular entities.",
"Model optimization was carried out using standard backpropagation and an AdaGrad optimizer BIBREF22 . We allowed the error to propagate through all parts of the network and fine tune all trainable parameters, including the word and entity embeddings themselves. We found the performance of our model substantially improves for the first few epochs and then continues to slowly converge with marginal gains, and therefore trained all models for 8 epochs with $k=5$ for corrupt-sampling."
],
[
"Training our model implicitly embeds the vocabulary of words and collection of entities in a common space. However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section \"Effects of initialized embeddings and corrupt-sampling schemes\" ). To this end, we implemented an approach based on the Skip-Gram with Negative-Sampling (SGNS) algorithm by mikolov2013distributed that simultaneously trains both word and entity vectors.",
"We used word2vecf BIBREF23 , which allows one to train word and context embeddings using arbitrary definitions of \"word\" and \"context\" by providing a dataset of word-context pairs $(w,c)$ , rather than a textual corpus. In our usage, we define a context as an entity $e$ . To compile a dataset of $(w,e)$ pairs, we consider every word $w$ that appeared in the Wikipedia article describing entity $e$ . We limit our vocabularies to words that appeared at least 20 times in the corpus and entities that contain at least 20 words in their articles. We ran the process for 10 epochs and produced vectors of 300 dimensions; other hyperparameters were set to their defaults.",
"levy2014neural showed that SGNS implicitly factorizes the word-context PMI matrix. Our approach is doing the same for the word-entity PMI matrix, which is highly related to the word-entity TFIDF matrix used in Explicit Semantic Analysis BIBREF24 ."
],
[
"In this section, we describe our experimental setup and compare our model to the state of the art on two datasets: our new WikilinksNED dataset, as well as the commonly-used CoNLL-YAGO dataset BIBREF1 . We also examine the effect of different corrupt-sampling schemes, and of initializing our model with pre-trained word and entity embeddings.",
"In all experiments, our model was trained with fixed-size left and right contexts (20 words in each side). We used a special padding symbol when the actual context was shorter than the window. Further, we filtered stopwords using NLTK's stop-word list prior to selecting the window in order to focus on more informative words. Our model was implemented using the Keras BIBREF25 and Tensorflow BIBREF26 libraries."
],
[
"we use Near-Misses corrupt-sampling which was found to perform well due to a large training set that represents the test set well.",
"To isolate the effect of candidate generation algorithms, we used the following simple method for all systems: given a mention $m$ , consider all candidate entities $e$ that appeared as the ground-truth entity for $m$ at least once in the training corpus. This simple method yields $97\\%$ ground-truth recall on the test set.",
"Since we are the first to evaluate NED algorithms on WikilinksNED, we ran a selection of existing local NED systems and compared their performance to our algorithm's.",
"Yamada et al. yamada2016joint created a state-of-the-art NED system that models entity-context similarity with word and entity embeddings trained using the skip-gram model. We obtained the original embeddings from the authors, and trained the statistical features and ranking model on the WikilinksNED training set. Our configuration of Yamada et al.'s model used only their local features.",
"Cheng et al. Cheng2013 have made their global NED system publicly available. This algorithm uses GLOW BIBREF10 for local disambiguation. We compare our results to the ranking step of the algorithm, without the global component. Due to the long running time of this system, we only evaluated their method on the smaller test set, which contains 10,000 randomly sampled instances from the full 320,000-example test set.",
"Finally, we include the Most Probable Sense (MPS) baseline, which selects the entity that was seen most with the given mention during training.",
"We used standard micro P@1 accuracy for evaluation. Experimental results comparing our model with the baselines are reported in Table 1 . Our RNN model significantly outperforms Yamada at el. on this data by over 5 points, indicating that the more expressive RNNs are indeed beneficial for this task. We find that the attention mechanism further improves our results by a small, yet statistically significant, margin."
],
[
"CoNLL-YAGO has a training set with 18505 non-NIL mentions, which our experiments showed is not sufficient to train our model on. To fit our model to this dataset we first used a simple domain adaptation technique and then incorporated a number of basic statistical and string based features.",
"We used a simple domain adaptation technique where we first trained our model on an available large corpus of label data derived from Wikipedia, and then trained the resulting model on the smaller training set of CoNLL BIBREF27 . The Wikipedia corpus was built by extracting all cross-reference links along with their context, resulting in over 80 million training examples. We trained our model with All-Entity corrupt sampling for 1 epoch on this data. The resulting model was then adapted to CoNLL-YAGO by training 1 epoch on CoNLL-YAGO's training set, where corrupt examples were produced by considering all possible candidates for each mention as corrupt-samples (Near-Misses corrupt sampling).",
"We proceeded to use the model in a similar setting to yamada2016joint where a Gradient Boosting Regression Tree (GBRT) BIBREF28 model was trained with our model's prediction as a feature along with a number of statistical and string based features defined by Yamada. The statistical features include entity prior probability, conditional probability, number of candidates for the given mention and maximum conditional probability of the entity in the document. The string based features include edit distance between mention and entity title and two boolean features indicating whether the entity title starts or ends with the mention and vice versa. The GBRT model parameters where set to the values reported as optimal by Yamada.",
"For comparability with existing methods we used two publicly available candidates datasets: (1) PPRforNED - Pershina at el. pershina2015personalized; (2) YAGO - Hoffart at el. hoffart2011robust.",
"As a baseline we took the standard Most Probable Sense (MPS) prediction, which selects the entity that was seen most with the given mention during training. We also compare to the following papers - Francis-Landau et al. francis2016capturing, Yamada at el. yamada2016joint, and Chisholm et al. chisholm2015entity, as they are all strong local approaches and a good source for comparison.",
"Table 2 displays the micro and macro P@1 scores on CoNLL-YAGO test-b for the different training steps. We find that when using only the training set of CoNLL-YAGO our model is under-trained and that the domain adaptation significant boosts performance. We find that incorporating extra statistical and string features yields a small extra improvement in performance.",
"The final micro and macro P@1 scores on CoNLL-YAGO test-b are displayed in table 3 . On this dataset our model achieves comparable results, however it does not outperform the state-of-the-art, probably because of the relatively small training set and our reliance on domain adaptation."
],
[
"We performed a study of the effects of using pre-initialized embeddings for our model, and of using either All-Entity or Near-Misses corrupt-sampling. The evaluation was done on a $10\\%$ sample of the evaluation set of the WikilinksNED corpus and can be seen in Table 4 .",
"We have found that using pre-initialized embeddings results in significant performance gains, due to the better starting point. We have also found that using Near-Misses, our model achieves significantly improved performance. We attribute this difference to the more efficient nature of training with near misses. Both these results were found to be statistically significant."
],
[
"We randomly sampled and manually analyzed 200 cases of prediction errors made by our model. This set was obtained from WikilinksNED's validation set that was not used for training.",
"Working with crowd-sourced data, we expected some errors to result from noise in the ground truths themselves. Indeed, we found that $19.5$ % (39/200) of the errors were not false, out of which $5\\%$ (2) where wrong labels, $33\\%$ (13) were predictions with an equivalent meaning as the correct entity, and in $61.5\\%$ (24) our model suggested a more convincing solution than the original author by using specific hints from the context. In this manner, the mention 'Supreme leader' , which was contextually associated to the Iranian leader Ali Khamenei, was linked by our model with 'supreme leader of Iran' while the \"correct\" tag was the general 'supreme leader' entity.",
"In addition, $15.5\\%$ (31/200) were cases where a Wikipedia disambiguation-page was either the correct or predicted entity ( $2.5\\%$ and $14\\%$ , respectively). We considered the rest of the 130 errors as true semantic errors, and analyzed them in-depth.",
"First, we noticed that in $31.5$ % of the true errors (41/130) our model selected an entity that can be understood as a specific ( $6.5$ %) or general (25%) realization of the correct solution. For example, instead of predicting 'Aroma of wine' for a text on the scent and flavor of Turkish wine, the model assigned the mention 'Aroma' with the general 'Odor' entity. We observed that in 26% (34/130) of the error cases, the predicted entity had a very strong semantic relationship to the correct entity. A closer look discovered two prominent types of 'almost correct' errors occurred repeatedly in the data. The first was a film/book/theater type of error ( $8.4$ %), where the actual and the predicted entities were a different display of the same narrative. Even though having different jargon and producers, those fields share extremely similar content, which may explain why they tend to be frequently confused by the algorithm. A third (4/14) of those cases were tagged as truly ambiguous even for human reader. The second prominent type of 'almost correct' errors where differentiating between adjectives that are used to describe properties of a nation. Particularity, mentions such as 'Germanic', 'Chinese' and 'Dutch' were falsely assigned to entities that describe language instead of people, and vice versa. We observed this type of mistake in $8.4$ % of the errors (11/130).",
"Another interesting type of errors where in cases where the correct entity had insufficient training. We defined insufficient training errors as errors where the correct entity appeared less than 10 times in the training data. We saw that the model followed the MPS in 75% of these cases, showing that our model tends to follow the baseline in such cases. Further, the amount of generalization error in insufficient-training conditions was also significant ( $35.7\\%$ ), as our model tended to select more general entities."
],
[
"Our results indicate that the expressibility of attention-RNNs indeed allows us to extract useful features from noisy context, when sufficient amounts of training examples are available. This allows our model to significantly out-perform existing state-of-the-art models. We find that both using pre-initialized embedding vocabularies, and the corrupt-sampling method employed are very important for properly training our model.",
"However, the gap between results of all systems tested on both CoNLL-YAGO and WikilinksNED indicates that mentions with noisy context are indeed a challenging test. We believe this to be an important real-world scenario, that represents a distinct test-case that fills a gap between existing news-based datasets and the much noisier Twitter data BIBREF29 that has received increasing attention. We find recurrent neural models are a promising direction for this task.",
"Finally, our error analysis shows a number of possible improvements that should be addressed. Since we use the training set for candidate generation, non-nonsensical candidates (i.e. disambiguation pages) cause our model to err and should be removed from the candidate set. In addition, we observe that lack of sufficient training for long-tail entities is still a problem, even when a large training set is available. We believe this, and some subtle semantic cases (book/movie) can be at least partially addressed by considering semantic properties of entities, such as types and categories. We intend to address these issues in future work."
]
],
"section_name": [
"Introduction",
"The WikilinksNED Dataset: Entity Mentions in the Web",
"Algorithm",
"Model Architecture",
"Training",
"Embedding Initialization",
"Evaluation",
"WikilinksNED",
"CoNLL-YAGO",
"Effects of initialized embeddings and corrupt-sampling schemes",
"Error Analysis",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"912e2017e0a3e3dcde3bf55c7b0389ca6bd360db"
],
"answer": [
{
"evidence": [
"Training our model implicitly embeds the vocabulary of words and collection of entities in a common space. However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section \"Effects of initialized embeddings and corrupt-sampling schemes\" ). To this end, we implemented an approach based on the Skip-Gram with Negative-Sampling (SGNS) algorithm by mikolov2013distributed that simultaneously trains both word and entity vectors."
],
"extractive_spans": [],
"free_form_answer": "They initialize their word and entity embeddings with vectors pre-trained over a large corpus of unlabeled data.",
"highlighted_evidence": [
"However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section \"Effects of initialized embeddings and corrupt-sampling schemes\" )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"ef3ec101b07be72610a1191008d32ddeaa094741"
],
"answer": [
{
"evidence": [
"Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts. The contextual noise presents an interesting test-case that supplements existing datasets that are sourced from mostly coherent and well-formed text.",
"We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions)."
],
"extractive_spans": [],
"free_form_answer": "The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia.",
"highlighted_evidence": [
"Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts.",
"We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the new initialization method proposed in this paper?",
"How was a quality control performed so that the text is noisy but the annotations are accurate?"
],
"question_id": [
"22b8836cb00472c9780226483b29771ae3ebdc87",
"540e9db5595009629b2af005e3c06610e1901b12"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The architecture of our Neural Network model. A close-up of the Attention-RNN component appears in the dashed box.",
"Table 1: Evaluation on noisy web data (WikilinksNED)",
"Table 2: Evaluation of training steps on CoNLLYAGO.",
"Table 3: Evaluation on CoNLL-YAGO.",
"Table 4: Corrupt-sampling and Initialization",
"Table 5: Error distribution in 200 samples. Categories of true errors are not fully distinct."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png"
]
} | [
"What is the new initialization method proposed in this paper?",
"How was a quality control performed so that the text is noisy but the annotations are accurate?"
] | [
[
"1706.09147-Embedding Initialization-0"
],
[
"1706.09147-The WikilinksNED Dataset: Entity Mentions in the Web-1",
"1706.09147-The WikilinksNED Dataset: Entity Mentions in the Web-3"
]
] | [
"They initialize their word and entity embeddings with vectors pre-trained over a large corpus of unlabeled data.",
"The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia."
] | 740 |
1711.06351 | Question Asking as Program Generation | A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing human-like questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions. | {
"paragraphs": [
[
"In active machine learning, a learner is able to query an oracle in order to obtain information that is expected to improve performance. Theoretical and empirical results show that active learning can speed acquisition for a variety of learning tasks BIBREF0 . Although impressive, most work on active machine learning has focused on relatively simple types of information requests (most often a request for a supervised label). In contrast, humans often learn by asking far richer questions which more directly target the critical parameters in a learning task. A human child might ask “Do all dogs have long tails?\" or “What is the difference between cats and dogs?\" BIBREF1 . A long term goal of artificial intelligence (AI) is to develop algorithms with a similar capacity to learn by asking rich questions. Our premise is that we can make progress toward this goal by better understanding human question asking abilities in computational terms BIBREF2 .",
"To that end, in this paper, we propose a new computational framework that explains how people construct rich and interesting queries within in a particular domain. A key insight is to model questions as programs that, when executed on the state of a possible world, output an answer. For example, a program corresponding to “Does John prefer coffee to tea?” would return True for all possible world states where this is the correct answer and False for all others. Other questions may return different types of answers. For example “How many sugars does John take in his coffee?” would return a number 0, 1, 2, etc. depending on the world state. Thinking of questions as syntactically well-formed programs recasts the problem of question asking as one of program synthesis. We show that this powerful formalism offers a new approach to modeling question asking in humans and may eventually enable more human-like question asking in machines.",
"We evaluate our model using a data set containing natural language questions asked by human participants in an information-search game BIBREF3 . Given an ambiguous situation or context, our model can predict what questions human learners will ask by capturing constraints in how humans construct semantically meaningful questions. The method successfully predicts the frequencies of human questions given a game context, and can also synthesize novel human-like questions that were not present in the training set."
],
[
"Contemporary active learning algorithms can query for labels or causal interventions BIBREF0 , but they lack the representational capacity to consider a richer range of queries, including those expressed in natural language. AI dialog systems are designed to ask questions, yet these systems are still far from achieving human-like question asking. Goal-directed dialog systems BIBREF4 , BIBREF5 , applied to tasks such as booking a table at a restaurant, typically choose between a relatively small set of canned questions (e.g., “How can I help you?”, “What type of food are you looking for?”), with little genuine flexibility or creativity. Deep learning systems have also been developed for visual “20 questions” style tasks BIBREF6 ; although these models can produce new questions, the questions typically take a stereotyped form (“Is it a person?”, “Is it a glove?” etc.). More open-ended question asking can be achieved by non-goal-driven systems trained on large amounts of natural language dialog, such as the recent progress demonstrated in BIBREF7 . However, these approaches cannot capture intentional, goal-directed forms of human question asking.",
"Recent work has probed other aspects of question asking. The Visual Question Generation (VQG) data set BIBREF8 contains images paired with interesting, human-generated questions. For instance, an image of a car wreck might be paired with the question, “What caused the accident?” Deep neural networks, similar to those used for image captioning, are capable of producing these types of questions after extensive training BIBREF8 , BIBREF9 , BIBREF10 . However, they require large datasets of images paired with questions, whereas people can ask intelligent questions in a novel scenario with no (or very limited) practice, as shown in our task below. Moreover, human question asking is robust to changes in task and goals, while state-of-the-art neural networks do not generalize flexibly in these ways."
],
[
"Our goal was to develop a model of context-sensitive, goal-directed question asking in humans, which falls outside the capabilities of the systems described above. We focused our analysis on a data set we collected in BIBREF3 , which consists of 605 natural language questions asked by 40 human players to resolve an ambiguous game situation (similar to “Battleship”). Players were individually presented with a game board consisting of a 6 $\\times $ 6 grid of tiles. The tiles were initially turned over but each could be flipped to reveal an underlying color. The player's goal was to identify as quickly as possible the size, orientation, and position of “ships\" (i.e., objects composed of multiple adjacent tiles of the same color) BIBREF11 . Every board had exactly three ships which were placed in non-overlapping but otherwise random locations. The ships were identified by their color S = {Blue, Red, Purple}. All ships had a width of 1, a length of N = {2, 3, 4} and orientation O = {Horizontal, Vertical}. Any tile that did not overlap with a ship displayed a null “water” color (light gray) when flipped.",
"After extensive instructions about the rules and purpose of the game and a number of practice rounds BIBREF3 , on each of 18 target contexts players were presented with a partly revealed game board (similar to Figure 1 B and 1 C) that provided ambiguous information about the actual shape and location of the ships. They were then given the chance to ask a natural-language question about the configuration. The player's goal was to use this question asking opportunity to gain as much information as possible about the hidden game board configuration. The only rules given to players about questions was that they must be answerable using one word (e.g., true/false, a number, a color, a coordinate like A1 or a row or column number) and no combination of questions was allowed. The questions were recorded via an HTML text box in which people typed what they wanted to ask. A good question for the context in Figure 1 B is “Do the purple and the red ship touch?”, while “What is the color of tile A1?” is not helpful because it can be inferred from the revealed game board and the rules of the game (ship sizes, etc.) that the answer is “Water” (see Figure 3 for additional example questions).",
"Each player completed 18 contexts where each presented a different underlying game board and partially revealed pattern. Since the usefulness of asking a question depends on the context, the data set consists of 605 question-context pairs $\\langle q, c \\rangle $ , with 26 to 39 questions per context. The basic challenge for our active learning method is to predict which question $q$ a human will ask from the given context $c$ and the overall rules of the game. This is a particularly challenging data set to model because of the the subtle differences between contexts that determine if a question is potentially useful along with the open-ended nature of human question asking."
],
[
"Here we describe the components of our probabilistic model of question generation. Section \"Compositionality and computability\" describes two key elements of our approach, compositionality and computability, as reflected in the choice to model questions as programs. Section \"A grammar for producing questions\" describes a grammar that defines the space of allowable questions/programs. Section \"Probabilistic generative model\" specifies a probabilistic generative model for sampling context-sensitive, relevant programs from this space. The remaining sections cover optimization, the program features, and alternative models (Sections \"Optimization\" - \"Alternative models\" )."
],
[
"The analysis of the data set BIBREF3 revealed that many of the questions in the data set share similar concepts organized in different ways. For example, the concept of ship size appeared in various ways across questions:",
"[noitemsep,nolistsep]",
"“How long is the blue ship?”",
"“Does the blue ship have 3 tiles?”",
"“Are there any ships with 4 tiles?”",
"“Is the blue ship less then 4 blocks?”",
"“Are all 3 ships the same size?”",
"“Does the red ship have more blocks than the blue ship?”",
"As a result, the first key element of modeling question generation was to recognize the compositionality of these questions. In other words, there are conceptual building blocks (predicates like size(x) and plus(x,y)) that can be put together to create the meaning of other questions (plus(size(Red), size(Purple))). Combining meaningful parts to give meaning to larger expressions is a prominent approach in linguistics BIBREF12 , and compositionality more generally has been an influential idea in cognitive science BIBREF13 , BIBREF14 , BIBREF15 .",
"The second key element is the computability of questions. We propose that human questions are like programs that when executed on the state of a world output an answer. For example, a program that when executed looks up the number of blue tiles on a hypothesized or imagined Battleship game board and returns said number corresponds to the question “How long is the blue ship?”. In this way, programs can be used to evaluate the potential for useful information from a question by executing the program over a set of possible or likely worlds and preferring questions that are informative for identifying the true world state. This approach to modeling questions is closely related to formalizing question meaning as a partition over possible worlds BIBREF16 , a notion used in previous studies in linguistics BIBREF17 and psychology BIBREF18 . Machine systems for question answering have also fruitfully modeled questions as programs BIBREF19 , BIBREF20 , and computational work in cognitive science has modeled various kinds of concepts as programs BIBREF21 , BIBREF22 , BIBREF23 . An important contribution of our work here is that it tackles question asking and provides a method for generating meaningful questions/programs from scratch."
],
[
"To capture both compositionality and computability, we represent questions in a simple programming language, based on lambda calculus and LISP. Every unit of computation in that language is surrounded by parentheses, with the first element being a function and all following elements being arguments to that function (i.e., using prefix notation). For instance, the question “How long is the blue ship?” would be represented by the small program (size Blue). More examples will be discussed below. With this step we abstracted the question representation from the exact choice of words while maintaining its meaning. As such the questions can be thought of as being represented in a “language of thought” BIBREF24 .",
"Programs in this language can be combined as in the example (> (size Red) (size Blue)), asking whether the red ship is larger than the blue ship. To compute an answer, first the inner parentheses are evaluated, each returning a number corresponding to the number of red or blue tiles on the game board, respectively. Then these numbers are used as arguments to the > function, which returns either True or False.",
"A final property of interest is the generativity of questions, that is, the ability to construct novel expressions that are useful in a given context. To have a system that can generate expressions in this language we designed a grammar that is context-free with a few exceptions, inspired by BIBREF21 . The grammar consists of a set of rewrite rules, which are recursively applied to grow expressions. An expression that cannot be further grown (because no rewrite rules are applicable) is guaranteed to be an interpretable program in our language.",
"To create a question, our grammar begins with an expression that contains the start symbol A and then rewrites the symbols in the expression by applying appropriate grammatical rules until no symbol can be rewritten. For example, by applying the rules A $\\rightarrow $ N, N $\\rightarrow $ (size S), and S $\\rightarrow $ Red, we arrive at the expression (size Red). Table SI-1 (supplementary materials) shows the core rewrite rules of the grammar. This set of rules is sufficient to represent all 605 questions in the human data set.",
"To enrich the expressiveness and conciseness of our language we added lambda expressions, mapping, and set operators (Table SI-2, supplementary material). Their use can be seen in the question “Are all ships the same size?”, which can be conveniently represented by (= (map ( $\\lambda $ x (size x)) (set Blue Red Purple))). During evaluation, map sequentially assigns each element from the set to x in the $\\lambda $ -part and ultimately returns a vector of the three ship sizes. The three ship sizes are then compared by the = function. Of course, the same question could also be represented as (= (= (size Blue) (size Red)) (size Purple))."
],
[
"An artificial agent using our grammar is able to express a wide range of questions. To decide which question to ask, the agent needs a measure of question usefulness. This is because not all syntactically well-formed programs are informative or useful. For instance, the program (> (size Blue) (size Blue)) representing the question “Is the blue ship larger than itself?” is syntactically coherent. However, it is not a useful question to ask (and is unlikely to be asked by a human) because the answer will always be False (“no”), no matter the true size of the blue ship.",
"We propose a probabilistic generative model that aims to predict which questions people will ask and which not. Parameters of the model can be fit to predict the frequency that humans ask particular questions in particular context in the data set by BIBREF3 . Formally, fitting the generative model is a problem of density estimation in the space of question-like programs, where the space is defined by the grammar. We define the probability of question $x$ (i.e., the probability that question $x$ is asked) with a log-linear model. First, the energy of question $x$ is the weighted sum of question features ",
"$$ \n\\mathcal {E}(x) = \\theta _1 f_1(x) + \\theta _2 f_2(x) + ... + \\theta _K f_K(x),$$ (Eq. 13) ",
"where $\\theta _k$ is the weight of feature $f_k$ of question $x$ . We will describe all features below. Model variants will differ in the features they use. Second, the energy is related to the probability by ",
"$$ \np(x;\\mathbf {\\theta }) = \\frac{\n\\exp (-\\mathcal {E}(x))\n}{\n\\sum _{x \\in X} \\exp (-\\mathcal {E}(x))\n}\n= \\frac{\n\\exp (-\\mathcal {E}(x))\n}{\nZ\n},$$ (Eq. 14) ",
"where $\\mathbf {\\theta }$ is the vector of feature weights, highlighting the fact that the probability is dependent on a parameterization of these weights, $Z$ is the normalizing constant, and $X$ is the set of all possible questions that can be generated by the grammar in Tables SI-1 and SI-2 (up to a limit on question length). The normalizing constant needs to be approximated since $X$ is too large to enumerate."
],
[
"The objective is to find feature weights that maximize the likelihood of asking the human-produced questions. Thus, we want to optimize ",
"$$\\operatornamewithlimits{arg\\,max}_{\\mathbf {\\theta }} \\,\n\\sum _{i = 1}^{N} \\text{log}\\,p(d^{(i)}; \\mathbf {\\theta }),$$ (Eq. 17) ",
"where $D = \\lbrace d^{(1)},...,d^{(N)}\\rbrace $ are the questions (translated into programs) in the human data set. To optimize via gradient ascent, we need the gradient of the log-likelihood with respect to each $\\theta _k$ , which is given by ",
"$$\\frac{\\partial \\text{log}\\,p(D;\\mathbf {\\theta })}{\\partial \\theta _k}= N \\, \\mathbb {E}_{x \\sim D}[f_k(x)] - N \\, \\mathbb {E}_{x \\sim P_\\theta }[f_k(x)].$$ (Eq. 18) ",
"The term $\\mathbb {E}_{x \\sim D}[f_k(x)] = \\frac{1}{N}\\sum _{i=1}^{N}f_k(d^{(i)})$ is the expected (average) feature values given the empirical set of human questions. The term $\\mathbb {E}_{x \\sim P_\\theta }[f_k(x)] = \\sum _{x \\in X} f_k(x) p(x;\\mathbf {\\theta })$ is the expected feature values given the model. Thus, when the gradient is zero, the model has perfectly matched the data in terms of the average values of the features.",
"Computing the exact expected feature values from the model is intractable, since there is a very large number of possible questions (as with the normalizing constant in Equation 14 ). We use importance sampling to approximate this expectation. To create a proposal distribution, denoted as $q(x)$ , we use the question grammar as a probabilistic context free grammar with uniform distributions for choosing the re-write rules.",
"The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1. Last, for the purpose of evaluating the model (computing log-likelihood), the importance sampler is also used to approximate the normalizing constant in Eq. 14 via the estimator $Z \\approx \\mathbb {E}_{x\\sim q}[\\frac{p(x;\\mathbf {\\theta })}{q(x)}]$ ."
],
[
"We now turn to describe the question features we considered (cf. Equation 13 ), namely two features for informativeness, one for length, and four for the answer type.",
"Informativeness. Perhaps the most important feature is a question's informativeness, which we model through a combination of Bayesian belief updating and Expected Information Gain (EIG). To compute informativeness, our agent needs to represent several components: A belief about the current world state, a way to update its belief once it receives an answer, and a sense of all possible answers to the question. In the Battleship game, an agent must identify a single hypothesis $h$ (i.e., a hidden game board configuration) in the space of possible configurations $H$ (i.e., possible board games). The agent can ask a question $x$ and receive the answer $d$ , updating its hypothesis space by applying Bayes' rule, $p(h|d;x) \\propto p(d|h;x)p(h)$ . The prior $p(h)$ is specified first by a uniform choice over the ship sizes, and second by a uniform choice over all possible configurations given those sizes. The likelihood $p(d|h;x) \\propto 1$ if $d$ is a valid output of the question program $x$ when executed on $h$ , and zero otherwise.",
"The Expected Information Gain (EIG) value of a question $x$ is the expected reduction in uncertainty about the true hypothesis $h$ , averaged across all possible answers $A_x$ of the question ",
"$$\\mathit {EIG}(x) = \\sum _{d \\in A_x} p(d;x) \\Big [ I[p(h)] - I[p(h|d;x)] \\Big ],$$ (Eq. 22) ",
"where $I[\\cdot ]$ is the Shannon entropy. Complete details about the Bayesian ideal observer follow the approach we used in BIBREF3 . Figure 3 shows the EIG scores for the top two human questions for selected contexts.",
"In addition to feature $f_\\text{EIG}(x) = \\text{EIG}(x)$ , we added a second feature $f_\\text{EIG=0}(x)$ , which is 1 if EIG is zero and 0 otherwise, to provide an offset to the linear EIG feature. Note that the EIG value of a question always depends on the game context. The remaining features described below are independent of the context.",
"Complexity. Purely maximizing EIG often favors long and complicated programs (e.g., polynomial questions such as size(Red)+10*size(Blue)+100*size(Purple)+...). Although a machine would not have a problem with answering such questions, it poses a problem for a human answerer. Generally speaking, people prefer concise questions and the rather short questions in the data set reflect this. The probabilistic context free grammar provides a measure of complexity that favors shorter programs, and we use the log probability under the grammar $f_\\text{comp}(x) = -\\log q(x)$ as the complexity feature.",
"Answer type. We added four features for the answer types Boolean, Number, Color, and Location. Each question program belongs to exactly one of these answer types (see Table SI-1). The type Orientation was subsumed in Boolean, with Horizontal as True and Vertical as False. This allows the model to capture differences in the base rates of question types (e.g., if people prefer true/false questions over other types).",
"Relevance. Finally, we added one auxiliary feature to deal with the fact that the grammar can produce syntactically coherent programs that have no reference to the game board at all (thus are not really questions about the game; e.g., (+ 1 1)). The “filter” feature $f_\\emptyset (x)$ marks questions that refer to the Battleship game board with a value of 1 (see the $^b$ marker in Table SI-1) and 0 otherwise."
],
[
"To evaluate which features are important for human-like question generation, we tested the full model that uses all features, as well as variants in which we respectively lesioned one key property. The information-agnostic model did not use $f_\\text{EIG}(x)$ and $f_\\text{EIG=0}(x)$ and thus ignored the informativeness of questions. The complexity-agnostic model ignored the complexity feature. The type-agnostic model ignored the answer type features."
],
[
"The probabilistic model of question generation was evaluated in two main ways. First, it was tasked with predicting the distribution of questions people asked in novel scenarios, which we evaluate quantitatively. Second, it was tasked with generating genuinely novel questions that were not present in the data set, which we evaluate qualitatively. To make predictions, the different candidate models were fit to 15 contexts and asked to predict the remaining one (i.e., leave one out cross-validation). This results in 64 different model fits (i.e., 4 models $\\times $ 16 fits).",
"First, we verify that compositionality is an essential ingredient in an account of human question asking. For any given context, about 15% of the human questions did not appear in any of the other contexts. Any model that attempts to simply reuse/reweight past questions will be unable to account for this productivity (effectively achieving a log-likelihood of $-\\infty $ ), at least not without a much larger training set of questions. The grammar over programs provides one account of the productivity of the human behavior.",
"Second, we compared different models on their ability to quantitatively predict the distribution of human questions. Table 1 summarizes the model predictions based on the log-likelihood of the questions asked in the held-out contexts. The full model – with learned features for informativeness, complexity, answer type, and relevance – provides the best account of the data. In each case, lesioning its key components resulted in lower quality predictions. The complexity-agnostic model performed far worse than the others, highlighting the important role of complexity (as opposed to pure informativeness) in understanding which questions people choose to ask. The full model also outperformed the information-agnostic and type-agnostic models, suggesting that people also optimize for information gain and prefer certain question types (e.g., true/false questions are very common). Because the log-likelihood values are approximate, we bootstrapped the estimate of the normalizing constant $Z$ and compared the full model and each alternative. The full model's log-likelihood advantage over the complexity-agnostic model held in 100% of the bootstrap samples, over the information-agnostic model in 81% of samples, and over type-agnostic model in 88%.",
"Third, we considered the overall match between the best-fit model and the human question frequencies. Figure 2 shows the correlations between the energy values according to the held-out predictions of the full model (Eq. 13 ) and the frequencies of human questions (e.g., how often participants asked “What is the size of the red ship?\" in a particular context). The results show very strong agreement for some contexts along with more modest alignment for others, with an average Spearman's rank correlation coefficient of 0.64. In comparison, the information-agnostic model achieved 0.65, the complexity-agnostic model achieved -0.36, and the type-agnostic model achieved 0.55. One limitation is that the human data is sparse (many questions were only asked once), and thus correlations are limited as a measure of fit. However, there is, surprisingly, no correlation at all between question generation frequency and EIG alone BIBREF3 , again suggesting a key role of question complexity and the other features.",
"Last, the model was tasked with generating novel, “human-like” questions that were not part of the human data set. Figure 3 shows five novel questions that were sampled from the model, across four different game contexts. Questions were produced by taking five weighted samples from the set of programs produced in Section \"Optimization\" for approximate inference, with weights determined by their energy (Eq. 14 ). To ensure novelty, samples were rejected if they were equivalent to any human question in the training data set or to an already sampled question. Equivalence between any two questions was determined by the mutual information of their answer distributions (i.e., their partitions over possible hypotheses), and or if the programs differed only through their arguments (e.g. (size Blue) is equivalent to (size Red)). The generated questions in Figure 3 demonstrate that the model is capable of asking novel (and clever) human-like questions that are useful in their respective contexts. Interesting new questions that were not observed in the human data include “Are all the ships horizontal?\" (Context 7), “What is the top left of all the ship tiles?\" (Context 9), “Are blue and purple ships touching and red and purple not touching (or vice versa)?\" (Context 9), and “What is the column of the top left of the tiles that have the color of the bottom right corner of the board?\" (Context 15). The four contexts were selected to illustrate the creative range of the model, and the complete set of contexts is shown in the supplementary materials."
],
[
"People use question asking as a cognitive tool to gain information about the world. Although people ask rich and interesting questions, most active learning algorithms make only focused requests for supervised labels. Here were formalize computational aspects of the rich and productive way that people inquire about the world. Our central hypothesis is that active machine learning concepts can be generalized to operate over a complex, compositional space of programs that are evaluated over possible worlds. To that end, this project represents a step toward more capable active learning machines.",
"There are also a number of limitations of our current approach. First, our system operates on semantic representations rather than on natural language text directly, although it is possible that such a system can interface with recent tools in computational linguistics to bridge this gap BIBREF19 . Second, some aspects of our grammar are specific to the Battleship domain. It is often said that some knowledge is needed to ask a good question, but critics of our approach will point out that the model begins with substantial domain knowledge and special purpose structures. On the other hand, many aspects of our grammar are domain general rather than domain specific, including very general functions and programming constructs such as logical connectives, set operations, arithmetic, and mapping. To extend this approach to new domains, it is unclear exactly how much new knowledge engineering will be needed, and how much can be preserved from the current architecture. Future work will bring additional clarity as we extend our approach to different domains.",
"From the perspective of computational cognitive science, our results show how people balance informativeness and complexity when producing semantically coherent questions. By formulating question asking as program generation, we provide the first predictive model to date of open-ended human question asking."
],
[
"We thank Chris Barker, Sam Bowman, Noah Goodman, and Doug Markant for feedback and advice. This research was supported by NSF grant BCS-1255538, the John Templeton Foundation “Varieties of Understanding” project, a John S. McDonnell Foundation Scholar Award to TMG, and the Moore-Sloan Data Science Environment at NYU."
],
[
" The supplementary material contains the following: the game boards that served as contexts in the human question data set (Figure SI-1 ), the full set of grammatical rules used in the simulations (Table SI-1 & SI-2 ), and five novel questions for each context produced by the computational model (Table SI-3 & SI-4 )."
]
],
"section_name": [
"Introduction",
"Related work",
"The question data set",
"A probabilistic model of question generation",
"Compositionality and computability",
"A grammar for producing questions",
"Probabilistic generative model",
"Optimization",
"Question features",
"Alternative models",
"Results and Discussion",
"Conclusions",
"Acknowledgments",
"Supplementary material"
]
} | {
"answers": [
{
"annotation_id": [
"913e47a7cbed83ecc22cc8b9388c9777d6d53679"
],
"answer": [
{
"evidence": [
"Here we describe the components of our probabilistic model of question generation. Section \"Compositionality and computability\" describes two key elements of our approach, compositionality and computability, as reflected in the choice to model questions as programs. Section \"A grammar for producing questions\" describes a grammar that defines the space of allowable questions/programs. Section \"Probabilistic generative model\" specifies a probabilistic generative model for sampling context-sensitive, relevant programs from this space. The remaining sections cover optimization, the program features, and alternative models (Sections \"Optimization\" - \"Alternative models\" ).",
"The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1. Last, for the purpose of evaluating the model (computing log-likelihood), the importance sampler is also used to approximate the normalizing constant in Eq. 14 via the estimator $Z \\approx \\mathbb {E}_{x\\sim q}[\\frac{p(x;\\mathbf {\\theta })}{q(x)}]$ ."
],
"extractive_spans": [],
"free_form_answer": "No, it is a probabilistic model trained by finding feature weights through gradient ascent",
"highlighted_evidence": [
"Here we describe the components of our probabilistic model of question generation. ",
"The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Is it a neural model? How is it trained?"
],
"question_id": [
"bd1a3c651ca2b27f283d3f36df507ed4eb24c2b0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The Battleship game used to obtain the question data set by Rothe et al. [19]. (A) The hidden positions of three ships S = {Blue, Red, Purple} on a game board that players sought to identify. (B) After observing the partly revealed board, players were allowed to ask a natural language question. (C) The partly revealed board in context 4.",
"Figure 2: Out-of-sample model predictions regarding the frequency of asking a particular question. The y-axis shows the empirical question frequency, and x-axis shows the model’s energy for the question (Eq. 1, based on the full model). The rank correlation ρ is shown for each context.",
"Table 1: Log likelihoods of model variants averaged across held out contexts.",
"Figure 3: Novel questions generated by the probabilistic model. Across four contexts, five model questions are displayed, next to the two most informative human questions for comparison. Model questions were sampled such that they are not equivalent to any in the training set. The natural language translations of the question programs are provided for interpretation. Questions with lower energy are more likely according to the model."
],
"file": [
"3-Figure1-1.png",
"6-Figure2-1.png",
"7-Table1-1.png",
"9-Figure3-1.png"
]
} | [
"Is it a neural model? How is it trained?"
] | [
[
"1711.06351-A probabilistic model of question generation-0",
"1711.06351-Optimization-6"
]
] | [
"No, it is a probabilistic model trained by finding feature weights through gradient ascent"
] | 741 |
1911.03350 | Ask to Learn: A Study on Curiosity-driven Question Generation | We propose a novel text generation task, namely Curiosity-driven Question Generation. We start from the observation that the Question Generation task has traditionally been considered as the dual problem of Question Answering, hence tackling the problem of generating a question given the text that contains its answer. Such questions can be used to evaluate machine reading comprehension. However, in real life, and especially in conversational settings, humans tend to ask questions with the goal of enriching their knowledge and/or clarifying aspects of previously gathered information. We refer to these inquisitive questions as Curiosity-driven: these questions are generated with the goal of obtaining new information (the answer) which is not present in the input text. In this work, we experiment on this new task using a conversational Question Answering (QA) dataset; further, since the majority of QA dataset are not built in a conversational manner, we describe a methodology to derive data for this novel task from non-conversational QA data. We investigate several automated metrics to measure the different properties of Curious Questions, and experiment different approaches on the Curiosity-driven Question Generation task, including model pre-training and reinforcement learning. Finally, we report a qualitative evaluation of the generated outputs. | {
"paragraphs": [
[
"The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines BIBREF0, BIBREF1 and humans BIBREF2.",
"However, the human ability of asking questions goes well beyond evaluation: asking questions is essential in education BIBREF3 and has been proven to be fundamental for children cognitive development BIBREF4. Curiosity is baked into the human experience. It allows to extend one's comprehension and knowledge by asking questions that, while being relevant to context, are not directly answerable by it, thus being inquisitive and curious. The significance of such kind of questions is two-fold: first, they allow for gathering novel relevant information, e.g. a student asking for clarification; second, they are also tightly linked to one's understanding of the context, e.g. a teacher testing a student's knowledge by asking questions whose answers require a deeper understanding of the context and more complex reasoning.",
"From an applicative point of view, we deem the ability to generate curious, inquisitive, questions as highly beneficial for a broad range of scenarios: i) in the context of human-machine interaction (e.g. robots, chat-bots, educational tools), where the communication with the users could be more natural; ii) during the learning process itself, which could be partially driven in a self-supervised manner, reminiscent of how humans learn by exploring and interacting with their environment.",
"To our knowledge, this is the first paper attempting to tackle Curiosity-driven neural question generation. The contributions of this paper can be summarized as follow:",
"we propose a new natural language generation task: curiosity-driven question generation;",
"we propose a method to derive data for the task from popular non-conversational QA datasets;",
"we experiment using language model pre-training and reinforcement learning, on two different datasets;",
"we report a human evaluation analysis to assess both the pertinence of the automatic metrics used and the efficacy of the proposed dataset-creation method above."
],
[
"Deep learning models have been widely applied to text generation tasks such as machine translation BIBREF5, abstractive summarization BIBREF6 or dialog BIBREF7, providing significant gains in performance. The state of the art approaches are based on sequence to sequence models BIBREF8, BIBREF9. In recent years, significant research efforts have been directed to the tasks of Machine Reading Comprehension (MRC) and Question Answering (QA) BIBREF0, BIBREF10. The data used for tackling these tasks are usually composed of $\\lbrace context, question, answer\\rbrace $ triplets: given a context and the question, a model is trained to predict the answer.",
"Conversely, the Question Generation (QG) task introduced by BIBREF11, BIBREF12 can be considered as the dual task for QA BIBREF13: thus, given a context and (optionally) an answer, the model is trained to generate the question. Following QA, research on QG BIBREF14 has also seen increasing interest from the community. One of the main motivations is that an effective QG model can be used to generate synthetic data in order to augment existing QA datasets BIBREF15, BIBREF16. For instance, BIBREF15 proposed a reinforcement learning setup trained using a QA-based metric reward: given a paragraph and an answer, the model first generates questions; then, the paragraph and the corresponding generated questions are given to a pre-trained QA model which predicts an answer; finally, the reward is computed as the number of overlapping words between the ground truth answer and the predicted answer. For an extensive evalution of models trained with different rewards we refer the reader to BIBREF17. Most of these works followed BIBREF18, who applied reinforcement to neural machine translation. First, a sequence to sequence model is trained under teacher forcing BIBREF19 to optimize cross-entropy, hence helping to reduce the action space (i.e. the vocabulary size). Then, the model is finetuned with a mix of teacher forcing and REINFORCE BIBREF20.",
"For automatic evaluation, all previous works on QG resort to BLEU metrics BIBREF21, originally developed and widely used in Machine Translation. However, how to evaluate text generation models remains an open research question: BIBREF22 pointed out that, on QG tasks, the correlation between BLEU and human evaluation was poor.",
"A thorough investigation of the behavior of open-domain conversational agents has been recently presented by BIBREF23. Using controllable neural text generation methods, the authors control important attributes for chit-chat dialogues, including question-asking behavior. Among the take-away messages of this work, is that question-asking represents an essential component in an engaging chit-chat pipeline: the authors find, via a large-scale human validation study, that agents with higher rates of question-asking obtain qualitative improvements in terms of inquisitiveness, interestingness and engagingness.",
"Indeed, in a conversational setting, it can be expected that the nature of follow-up questions significantly differs from those used as target in a traditional QG training setup: as mentioned earlier, QG has so far been tackled as the dual task to QA, hence training models to generate questions whose answer is present in the input context. On the contrary, we argue that in natural conversations the questions follow the input context but are rather a mean to augment one's knowledge (thus, their answer is not present in the input context). In this work, we thus define the task as Curiosity-driven Question Generation."
],
[
"Question Answering datasets are usually composed of a set of questions associated with the corresponding answers and the reading passages (the context) containing the answer. The QA task is defined as finding the answer to a question given the context. As opposed, the Question Generation (QG) task is to generate the question given the input and (optionally) the answer. Most previous efforts on the QG task have resorted to the widely used Stanford Question Answering Dataset (SQuAD) BIBREF10. It contains roughly 100,000 questions posed by crowd-workers on selected sample of Wikipedia articles. Several other QA datasets have also been recently published accounting for characteristic such as requiring multi-passage or discrete reasoning BIBREF24, BIBREF25; further, conversational QA datasets have been made available: CoQA BIBREF26 and QuAC BIBREF27 have the desirable property to be in a dialogue-like setting.",
"In our scenario, Curiosity-driven QG, the reading passage associated with a question should not contain the answer, but rather pave the way for asking a new question – whose answer would eventually enrich the knowledge on the matter at hand. Therefore, a natural choice to build QG data would be to rely on existing datasets for conversational QA. A detailed comparison of the above-mentioned CoQA and QuAC datasets is provided by BIBREF28, who reports the proportion of Topic Error (questions unlikely to be asked in the context) and Entity Salad (i.e. questions unanswerable for any context): CoQA includes a significantly higher proportion Topic Error and Entity Salad compared to QuAC. For this reason, we resort to QuAC in order to derive data Curiosity-driven QG.",
"Furthermore, recognizing the fact that the great majority of QA datasets available does not account for conversational characteristics, we propose a methodology to derive data for Curiosity-driven Question Generation from standard QA datasets, applying it to the popular SQuAD BIBREF10.",
"For both our data sources, and consistently with standard QA and QG tasks, we encode each sample as a triplet $\\lbrace P, q, a\\rbrace $ where the paragraph $P$ comprises $n$ sentences $[s_0,..., s_n]$, and $a$ represents the answer to the question $q$. A canonical QG approach would thus use $s_a$, i.e. the sentence of $P$ that contains the answer, as source, and $q$ as generation target. On the contrary, for Curiosity-driven QG, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer – i.e. under the necessary constraint of $x \\ne a$. In the following subsections, we elaborate on additional constraints depending on the nature of the source data.",
"In general, we define samples as triplets",
"where $s_x$ and $P^{\\prime }$ are, respectively, the input sentence and the paragraph $P$ modified according to the appropriate dataset-depending constraint, and $y$ is the reference (target) question."
],
[
"As mentioned above, we first derive our data from the QuAC dataset, which is built from Wikipedia articles by iterating over the following procedure: given a sentence, a student annotator asks a relevant question for which he does not have the answer; then, the teacher – annotator – retrieves a sentence that contains the answer. Thus, a QuAC question is curious by design, given the text that precedes it. More formally, for the question $q$ (i.e. our target), the source $s_x$ is composed by the concatenation of the sentences of $P$ which appear before the sentence $s_a$ that contains the answer. Therefore, our QuAC-derived dataset is built by applying the stricter constraint $x < a$.",
"Numerically, the QuAC dataset compounds to 83,568 questions (on 11,567 articles) for the train set, 7,354 for the validation set and 7,353 for the test set (1,000 articles each). Since the test set is not public, we use the original QuAC validation set to build our test set. From the training set, we randomly drop 1,000 articles (hence, 7,224 samples) which we use to derive our validation set, thus resulting in 76,345 questions for training."
],
[
"Most of the available QA datasets are not conversational. Thus, we propose a simple method to obtain data for Curiosity-driven QG from standard QA datasets. For this, we use the widely popular SQuADBIBREF10, and specifically the original splits released by BIBREF11 which is commonly used for Question Generation.",
"As opposed to QuAC, the questions in SQuAD do not follow logical ordering. Therefore, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer $a$ (constraint: $x \\ne a$). Nonetheless, as is reasonable for factoid QA datasets, several questions are so specific to their associated sentence $s_a$ that they would be extremely unlikely to be asked without knowing the contents of $s_a$ itself.",
"To exemplify this issue, take the following paragraph from SQuAD:",
"Tesla was the fourth of five children. He had an older brother named Dane and three sisters, Milka, Angelina and Marica. Dane was killed in a horse-riding accident when Nikola was five. In 1861, Tesla attended the “Lower\" or “Primary\" School in Smiljan where he studied German, arithmetic, and religion. In 1862, the Tesla family moved to Gospić, Austrian Empire, where Tesla's father worked as a pastor. Nikola completed “Lower\" or “Primary\" School, followed by the “Lower Real Gymnasium\" or “Normal School.",
"Given “Dane was killed in a horse-riding accident when Nikola was five.\" as $s_a$, and operating under the sole constraint of $x \\ne a$, the sentence “Tesla was the fourth of five children\" would be eligible as a source $s_x$ for the target question “What happened to Dane?\". This question can only be asked if either contextual information or background knowledge is available, since it requires to know that Dane was among Tesla's four siblings.",
"To overcome this problem, we added an additional constraint based on Named Entity Recognition (NER): $s_x$ is an acceptable input only if all the entities present in the question $q$ are also present in the input sentence $s_x$. In the previous example, this would thus filter out the target “What happened to Dane?\" while allowing for “What was Tesla's brother's name?\".",
"For our experiments we used spaCy.",
"In Table TABREF10 we report the number of samples we obtained from SQuAD before and after applying NER filtering. After applying the above methodology to construct a dataset for Curiosity-driven QG, our training dataset contains 25,356 samples for training, 2,076 for development, and 2,087 for testing."
],
[
"Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task BIBREF22. For QG, $n$-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings BIBREF29, BIBREF30, they allow to evaluate specific properties of the developed models. In this work, the metrics detailed below are proposed and we evaluate their quality through a human evaluation in subsection SECREF32."
],
[
"One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s)."
],
[
"Within the field of Computational Creativity, Diversity is considered a desirable property BIBREF31. Indeed, generating always the same question such as “What is the meaning of the universe?\" would be an undesirable behavior, reminiscent of the “collapse mode\" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as hypothesis while the other generated sentences are used as reference. When averaged over all the references, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper."
],
[
"Given a text, a question can be considered curious if the answer is not contained in the input text. In our task, this implies that a question $q$ should not be answerable given its corresponding input sentence $s_x$. Thanks to the recent improvements obtained on Question Answering tasks – for instance, human-level performance has been achieved on SQuAD-v1 – the answerability of a question can be automatically measured.",
"Therefore, given a question-context pair as input to a QA model, two type of metrics can be computed:",
"n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.",
"probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.",
"Since several diverse questions can be generated for a given input, we consider the latter metric (probability score) to better fit the Curiosity-driven QG task.",
"Hence, given the evaluated question $q$ and the input text $s_x$, we define a metric QA_prob as the confidence of the QA model that its predicted answer is correct. This metric measures answerability of $q$ given $s_x$: therefore, the lower this score, the less likely the answer is contained in the input text.",
"While being non-answerable represents a necessary condition for $q$ being a curious question with respect to its context $s_x$, we also want $q$ to be as relevant and useful as possible. To this end, we compute the above QA_prob for question $q$ on $P^{\\prime }$, which represents the source paragraph stripped from the sentence containing the answer (see Eq. DISPLAY_FORM6). The higher this score, the more likely the question is relevant and useful to augment the knowledge provided by $s_x$.",
"Thus, the two proposed metrics are defined as",
"and",
"Under our definition, Curiosity-driven questions are those that minimize $QA_{source}$ while maximizing $QA_{context}$. To compute these QA-based metrics, we use the HuggingFace implementation of BERT BIBREF34."
],
[
"As baseline architecture we adopt the popular Transformer BIBREF35, which proved to perform well on a wide range of text generation tasks. Among these, neural machine translation BIBREF36, automatic summarization BIBREF37, and question generation BIBREF38, BIBREF39. It can be briefly described as a sequence-to-sequence model with a symmetric encoder and decoder based on a self-attention mechanism, which allows to overcome the inherent obstacles to parallelism present in recurrent models such as Long Short Time Memory (LSTM) networks BIBREF40.",
"The copy mechanism BIBREF41 proved beneficial for QG BIBREF42, BIBREF39: indeed, the QG task is very sensitive to rare and out of vocabulary words such as named entities and such a mechanism help deal with it efficiently: more than 50% of the answers in the SQuAD dataset, for instance, correspond to named entities (see Table 2 in BIBREF10. Hence, following BIBREF37, BIBREF39, we include a copy mechanism in our Transformer architecture.",
"For our experiments, we used the following hyper-parameters for the transformer: N = 2 (number of blocks); d_model = 256 (hidden state dimension); d_ff = 512 (position-wise feed-forward networks dimension); and, h = 2 (number of attention heads).",
"Experiments run with the original hyper-parameters as proposed by BIBREF35 obtained consistent and numerically similar results. During training, we used mini batches of size 64 and the Adam optimizer BIBREF43. At generation time, the decoding steps are computed trough the beam search algorithm with $k=5$ beams by default."
],
[
"Reinforcement Learning (RL) is an efficient technique to maximize discrete metrics for text generation. Previously, BIBREF18 used the REINFORCE algorithm BIBREF20 to train RNNs for several generation tasks, showing improvements over previous supervised approaches. Moreover, BIBREF29 combined supervised and reinforcement learning, demonstrating improvements over competing approaches both in terms of ROUGE and on human evaluation.",
"However, the metrics used as reward are often overfit, leading to numerical improvements which do not translate to increased – and, rather, contribute to degrading – output quality, thus leading to reduced effectiveness of the trained models for practical applications. On this matter, and with a particular focus on QG, BIBREF17 performed a human evaluation on RL models trained with several metrics as reward, finding them to be indeed poorly aligned with human judgments: the models appear to learn to exploit the weaknesses of the reward source.",
"To overcome this issue, we propose to use a balanced reward:",
"thus maximizing the probability of finding an answer to the generated question within the input paragraph but not inside the source sentence.",
"In our experiments, we follow the approach proposed by BIBREF18, BIBREF29, considering a mixed loss $L_{ml+rl}$ which combines supervised and reinforcement learning schemes:",
"where the maximum likelihood $L_{ml}$ is defined as",
"where $X=[x_1,...,x_n]$ represents the source text of length $n$ and $Y=[y_1,...,y_m]$ the corresponding reference question of length $m$.",
"Conversely, we define the reinforcement loss $L_{rl}$ to be minimized according to the standard RL actor-critic scheme, where $r(q, P, P^{\\prime })$ is the reward function defined in DISPLAY_FORM23:",
"Greedy decoding according to the conditional distribution $p(y|X)$ is used to obtain a sequence $\\widehat{Y}$. The model is sampled using its Markov property, that is, one token at a time, giving rise to the sequence $Y^s$."
],
[
"As shown in Table TABREF10, the constrained dataset amounts to roughly three times less samples than both QuAC and the original SQuAD dataset it derives from. We thus investigate, for this dataset, the effect of pretraining the model under the traditional (i.e. not Curiosity-driven) QG training setup, using the training set as provided by BIBREF11). Then we resume training on the final dataset obtained after applying the NER-based constraint for Curiosity-driven QG on the same training samples.",
"For the QuAC Curiosity-driven dataset, the amount of data is comparable to the original dataset, given the conversational nature of QuAC. Therefore, we do not use pretraining for the experiments on QuAC."
],
[
"In Table TABREF29 we report the results of our experiments on QuAC for the baseline model (base) and the RL model. We use a beam $k$, and compute the results for $k=[1,3,5]$. In addition the generated questions with a beam $k=5$, we also computed the results for $k=1$ and $k=3$. While one would expect to see for all the metrics a slight improvement, with increasing beam size, we observe a strong divergence among the results: increasing values for $k$ correspond to a significant improvements in terms of BLEU-4 and notable drops for BLEU-1. A similar phenomena was observed by BIBREF44 in the context of machine translation: in this work, the presence of 1 or 2% of noisy data is found to be enough to significantly degrade the beam search results. In our case, one of most frequent generated question is Are there any other interesting aspects about this article ?. Indeed, the frequency of this question in our training set amounts to 4.18% of the questions. On the test set we see that roughly 80% of the generated questions start with the token “are\" . Generating this sequence is not very likely with a greedy search ($k=1$): at any time step during the generation, if any other token has a higher probability, this question will be dismissed. On the other hand, with a higher beam, it is likely to be kept and eventually result as the most probable sequence, among the different remaining beams at the end of the inference.",
"Moving to our SQuAD-based experiments, we observe that the models trained on SQuAD do not seem to suffer from this issue since all the metrics improved when increasing the beam size from $k=1$ to $k=5$. This is consistent with the results reported by BIBREF42 where improving the beam improve slightly all the metrics. Thus, we only report the results with $k=5$ in Table TABREF30. A possible explanation is that SQuAD, as opposed to QuAC, only contains factoid questions.",
"We observe that the models trained with RL obtain, as could be expected, higher scores for QAcontext with respect to those trained without RL. A higher QAcontext implies that the QA model is more likely to find an answer in the near context of the source. QAsource is lower, as expected, for SQuAD based models, though comparatively higher than the models trained with RL on QuAC. We identify two possible reasons for this: first, the QA model is trained on answerable questions; second, the nature of the QUaC questions is less factoid than the SQuAD ones, and non-factoid questions can arguably be harder for the QA model to evaluate. This could explain why, in the RL setting, QAcontext (the evaluation on answerable questions) is higher for both SQuAD and QUaC models, but only SQuAD models achieve a lower QA_source (the evaluation on non answerable questions).",
"Furthermore, we see that pretraining allows to achieve higher BLEU scores, at the cost of lower Self-BLEU, thus showing an increased accuracy but less diversity in the generated questions. Indeed, we find that pretrained models tend to generate a higher number of questions starting with “What” compared to both other models and the references; the distribution for the first words of the human questions appears closer to that non pretrained models.",
"In Figure FIGREF31 we report the distribution of the first word frequency for the different models trained: the models without pretraining appear closer to the human-quality samples and also show more diversity."
],
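The beam-size effect discussed above can be made concrete with a toy example. The sketch below is not the paper's decoder: it runs a generic beam search over an invented next-token table in which a lower-prior but nearly deterministic generic sequence (of the "are there any ..." kind) overtakes the locally best branch once the beam is wide enough; all probabilities and tokens are hypothetical.

```python
import math

# Toy conditional distributions p(next_token | prefix). Purely illustrative numbers.
TOY_LM = {
    ():              {"what": 0.55, "are": 0.45},
    ("what",):       {"was": 0.5, "is": 0.5},
    ("what", "was"): {"the": 0.5, "his": 0.5},
    ("what", "is"):  {"the": 0.5, "his": 0.5},
    ("are",):        {"there": 0.95, "you": 0.05},
    ("are", "there"): {"any": 0.95, "other": 0.05},
}

def next_probs(prefix):
    # Back off to an end-of-sequence token once the toy table runs out.
    return TOY_LM.get(prefix, {"<eos>": 1.0})

def beam_search(k, max_len=4):
    beams = [((), 0.0)]  # (prefix, log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, logp in beams:
            for tok, p in next_probs(prefix).items():
                candidates.append((prefix + (tok,), logp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0]

for k in (1, 3):
    seq, logp = beam_search(k)
    print(f"k={k}: {' '.join(seq)}  (log p = {logp:.2f})")
# k=1 follows the locally best "what ..." branch; k=3 keeps the lower-prior but
# nearly deterministic "are there any ..." branch, which ends up more probable.
```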
[
"In addition to the automatic metrics, we proceeded to a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also to measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT) or (RL), and all combinations of those methods.",
"Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: answerable by looking at their context; grammatically correct; how much external knowledge is required to answer; relevant to their context; and, semantically sound. The results of the human evaluation are reported in Table TABREF33."
],
[
"We observe that for pretrained models (i.e. PT and PT+RL) the Correctness is significantly higher than the models without pretraining (i.e. base and RL). It corroborates the higher BLEU observed for these models in Table TABREF30. An other observation is that the External Knowledge is lower for the pretrained models while the Relevance is slightly higher. It could be due to the nature of the pretraing for which the models learn to generate non curious questions that focus on their inputs. It correlates with the significantly higher QA_source reported in Table TABREF30 for those pretrained models."
],
[
"From the human assessment we conducted – see Table TABREF33, we observe for the models trained with RL obtain higher scores for Relevance and lower Soundness as compared to their non-reinforced counterparts. Further, the results reported in Table TABREF30 show reinforced model obtaining lower BLEU and $QA_{source}$ source; conversely they score higher when it comes to $QA_{context}$. To summarize those results, we conclude that reinforcement brings improvements in terms of diversity of the generated questions, at the price of slightly degraded formulations in the outputs."
],
[
"Looking at the bottom row of Table TABREF33, which shows the results obtained by the reference (i.e. human-generated) questions, we observe the highest relative score for all assessed dimensions, with the exception of Answerability. This indicates that the data we derived seem to fit well the task of Curiosity-driven question generation. As a sidenote, we remark that the models built obtain even lower scores in terms of Answerability than humans, a fact we hypothesize due to the lower quality of the generated questions: the less sound and correct, the less answerable a question would be, regardless of its context."
],
[
"We report the pairwise Spearman correlation and p-value among all the different metrics and human measures in Figure FIGREF37. Correlation analysis on the human assessment data shows that BLEU correlates positively with Relevance, Answerability, Soundness and Unexpectedness. Self-BLEU metrics correlate significantly with Soundness and Correctness and QAcontext with Relevance. The only human measure that does not correlate significantly with any automatic metric is External knowledge. It is indeed one of the most challenging aspect to evaluate, even for humans. However, as expected, it correlates negatively with Answerability."
],
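The pairwise correlation analysis reported above can be reproduced with standard tooling. The sketch below is illustrative only: the column names and scores are made-up placeholders, not the paper's data; it simply shows how Spearman's rho and the p-value would be computed for every metric/measure pair.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-sample scores: automatic metrics and human measures.
df = pd.DataFrame({
    "BLEU":       [0.21, 0.35, 0.10, 0.42, 0.28],
    "Self-BLEU":  [0.80, 0.65, 0.90, 0.55, 0.70],
    "QA_context": [0.30, 0.50, 0.20, 0.60, 0.40],
    "Relevance":  [3, 4, 2, 5, 4],
    "Soundness":  [4, 4, 3, 5, 4],
})

# Pairwise Spearman rho and p-value for every pair of columns.
for a, b in combinations(df.columns, 2):
    rho, p = spearmanr(df[a], df[b])
    print(f"{a:>11} vs {b:<11} rho={rho:+.2f}  p={p:.3f}")
```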
[
"The human skill of asking inquisitive questions allows them to learn from the other and increase their knowledge. Curiosity-driven question generation could be a key component for several human-machine interaction scenarios. We thus proposed a new task: Curiosity-driven Question Generation. In absence of data directly usable for this task, we propose an automatic method to derive it from conversational QA datasets. Recognizing that the great majority of QA datasets are not dialogue-based, we also extend the method to standard QA data. Our experiments, including strategies as pretraining and reinforcement, show promising results under both automatic and human evaluation.",
"In future works, we plan to extend the approach to conditional generation of Curiosity-driven questions."
],
[
"All our experiments were run on a single nVidia 2080ti gpu. For SQuAD experiments, training time amounted to circa 45 minutes and 12 hours for the model built without and with reinforcement, respectively. The additional pretraining step took roughly 2 hours. For QuAC experiments, training time amounted to circa 2 hours and 15 hours for the models built without and with reinforcement, respectively."
],
[
"Context ($P^{\\prime }$):Discovery in the United KingdomThe Seekers were offered a twelve-month position as on-board entertainment on the Sitmar Line passenger cruise ship Fairsky in March 1964. In May, they travelled to the U.K. and had intended to return to Australia after staying ten weeks, but upon arrival they were offered work by a London booking agency, the Grade Organisation.Model $\\Rightarrow $ Outputs:base_beam1 $\\Rightarrow $ what was the name of the band ?base_beam3 $\\Rightarrow $ are there any other interesting aspects about this article ?base_beam5 $\\Rightarrow $ are there any other interesting aspects about this article ?RL_beam1 $\\Rightarrow $ what was the name of the album ?RL_beam3 $\\Rightarrow $ did they have any other albums ?RL_beam5 $\\Rightarrow $ are there any other interesting aspects about this article ?Human reference:human $\\Rightarrow $ what else can you tell me about thier discovery ?",
"Context ($P^{\\prime }$):1977-1980: Death of a Ladies' Man and End of the CenturyPhillip Harvey Spector (born Harvey Phillip Spector, December 26, 1939) is an American record producer, musician, and songwriter who developed the Wall of Sound, a music production formula he described as a \"Wagnerian\" approach to rock and roll. Spector is considered the first auteur among musical artists for the unprecedented freedom and control he had over every phase of the recording process. Additionally, he helped engender the idea of the studio as its own distinct instrument. For these contributions, he is acknowledged as one of the most influential figures in pop music history. Model $\\Rightarrow $ Outputs:base_beam1 $\\Rightarrow $ what was his first album ?base_beam3 $\\Rightarrow $ what happened in 1985 ?base_beam5 $\\Rightarrow $ are there any other interesting aspects about this article ?RL_beam1 $\\Rightarrow $ what was the name of the album ?RL_beam3 $\\Rightarrow $ what was the name of the album ?RL_beam5 $\\Rightarrow $ did he have any other albums ?Human reference:human $\\Rightarrow $ was death of a ladies man an album ?"
],
[
"Context ($P^{\\prime }$):The Broncos defeated the Pittsburgh Steelers in the divisional round, 23–16, by scoring 11 points in the final three minutes of the game.Model $\\Rightarrow $ Outputs:base $\\Rightarrow $ who was the head of the steelers ?PT $\\Rightarrow $ what was the name of the game ?RT $\\Rightarrow $ when was the broncos game ?PT+RT $\\Rightarrow $ what was the name of the steelers ?Human reference:human $\\Rightarrow $ how many seconds were left in the game when the broncos intercepted the pass that won the game ?",
"Context ($P^{\\prime }$):More than 1 million people are expected to attend the festivities in San Francisco during Super Bowl Week.Model $\\Rightarrow $ Outputs:base $\\Rightarrow $ how many people live in san diego ?PT $\\Rightarrow $ how many people live in san diego ?RT $\\Rightarrow $ what is the average rainfall in san diego ?PT+RT $\\Rightarrow $ how many people live in san diego ?Human reference:human $\\Rightarrow $ who is the mayor of san francisco ?"
]
],
"section_name": [
"Introduction",
"Related Works",
"Dataset",
"Dataset ::: Conversational QA Data",
"Dataset ::: Standard QA Data",
"Metrics",
"Metrics ::: BLEU",
"Metrics ::: Self-BLEU",
"Metrics ::: QA-based metrics",
"Experiments ::: Baseline model",
"Experiments ::: Reinforcement",
"Experiments ::: Pretraining (PT)",
"Results ::: Automatic metrics",
"Results ::: Human Evaluation",
"Discussion ::: What is the impact of the pretraining?",
"Discussion ::: Does Reinforcement help?",
"Discussion ::: How effective is our dataset creation methodology?",
"Discussion ::: How well do the metrics fit human judgement?",
"Conclusions",
"Computational Costs",
"Sample Outputs ::: From QuAC (test set):",
"Sample Outputs ::: From SQuAD (test set):"
]
} | {
"answers": [
{
"annotation_id": [
"b7813d13da6f65bd51a9e1e5fefa8bdf0f037314"
],
"answer": [
{
"evidence": [
"Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task BIBREF22. For QG, $n$-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings BIBREF29, BIBREF30, they allow to evaluate specific properties of the developed models. In this work, the metrics detailed below are proposed and we evaluate their quality through a human evaluation in subsection SECREF32.",
"In addition to the automatic metrics, we proceeded to a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also to measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT) or (RL), and all combinations of those methods.",
"Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: answerable by looking at their context; grammatically correct; how much external knowledge is required to answer; relevant to their context; and, semantically sound. The results of the human evaluation are reported in Table TABREF33."
],
"extractive_spans": [],
"free_form_answer": "Through human evaluation where they are asked to evaluate the generated output on a likert scale.",
"highlighted_evidence": [
"In this work, the metrics detailed below are proposed and we evaluate their quality through a human evaluation in subsection SECREF32.",
"In addition to the automatic metrics, we proceeded to a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also to measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT) or (RL), and all combinations of those methods.\n\nBefore submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: answerable by looking at their context; grammatically correct; how much external knowledge is required to answer; relevant to their context; and, semantically sound. The results of the human evaluation are reported in Table TABREF33."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"92acb186191b106458155a3e7da88ecb31b9f5cc"
],
"answer": [
{
"evidence": [
"One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).",
"Within the field of Computational Creativity, Diversity is considered a desirable property BIBREF31. Indeed, generating always the same question such as “What is the meaning of the universe?\" would be an undesirable behavior, reminiscent of the “collapse mode\" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as hypothesis while the other generated sentences are used as reference. When averaged over all the references, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.",
"Therefore, given a question-context pair as input to a QA model, two type of metrics can be computed:",
"n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.",
"probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer."
],
"extractive_spans": [
"BLEU",
"Self-BLEU",
"n-gram based score",
"probability score"
],
"free_form_answer": "",
"highlighted_evidence": [
"One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).",
"Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. ",
"Therefore, given a question-context pair as input to a QA model, two type of metrics can be computed:\n\nn-gram based score: measuring the average overlap between the retrieved answer and the ground truth.\n\nprobability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How they evaluate quality of generated output?",
"What automated metrics authors investigate?"
],
"question_id": [
"5d6cc65b73f428ea2a499bcf91995ef5441f63d4",
"0a8bc204a76041a25cee7e9f8e2af332a17da67a"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Data distributions over the train-validationtest splits. Learning to ask refers to the original split released by (Du et al., 2017), from which our data is derived. The bottom rows refer to the data we obtain using our methodology, with and without NER constraining.",
"Table 2: Results obtained on QuAC-derived data.",
"Table 3: Results obtained on SQuAD-derived data.",
"Figure 1: Distribution of the first word frequency per models for SQuAD (top) and QuAC (bottom). “Other” does not refer literally to the other token, but represents any other token.",
"Table 4: Qualitative results obtained via human evaluation.",
"Figure 2: Correlation matrix obtained from the human assessment data (∗‘ : p < .05, ∗∗ : p < .005)."
],
"file": [
"4-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure1-1.png",
"9-Table4-1.png",
"9-Figure2-1.png"
]
} | [
"How they evaluate quality of generated output?"
] | [
[
"1911.03350-Metrics-0",
"1911.03350-Results ::: Human Evaluation-1",
"1911.03350-Results ::: Human Evaluation-0"
]
] | [
"Through human evaluation where they are asked to evaluate the generated output on a likert scale."
] | 744 |
1708.09609 | Identifying Products in Online Cybercrime Marketplaces: A Dataset for Fine-grained Domain Adaptation | One weakness of machine-learned NLP models is that they typically perform poorly on out-of-domain data. In this work, we study the task of identifying products being bought and sold in online cybercrime forums, which exhibits particularly challenging cross-domain effects. We formulate a task that represents a hybrid of slot-filling information extraction and named entity recognition and annotate data from four different forums. Each of these forums constitutes its own"fine-grained domain"in that the forums cover different market sectors with different properties, even though all forums are in the broad domain of cybercrime. We characterize these domain differences in the context of a learning-based system: supervised models see decreased accuracy when applied to new forums, and standard techniques for semi-supervised learning and domain adaptation have limited effectiveness on this data, which suggests the need to improve these techniques. We release a dataset of 1,938 annotated posts from across the four forums. | {
"paragraphs": [
[
"NLP can be extremely useful for enabling scientific inquiry, helping us to quickly and efficiently understand large corpora, gather evidence, and test hypotheses BIBREF0 , BIBREF1 . One domain for which automated analysis is particularly useful is Internet security: researchers obtain large amounts of text data pertinent to active threats or ongoing cybercriminal activity, for which the ability to rapidly characterize that text and draw conclusions can reap major benefits BIBREF2 , BIBREF3 . However, conducting automatic analysis is difficult because this data is out-of-domain for conventional NLP models, which harms the performance of both discrete models BIBREF4 and deep models BIBREF5 . Not only that, we show that data from one cybercrime forum is even out of domain with respect to another cybercrime forum, making this data especially challenging.",
"In this work, we present the task of identifying products being bought and sold in the marketplace sections of these online cybercrime forums. We define a token-level annotation task where, for each post, we annotate references to the product or products being bought or sold in that post. Having the ability to automatically tag posts in this way lets us characterize the composition of a forum in terms of what products it deals with, identify trends over time, associate users with particular activity profiles, and connect to price information to better understand the marketplace. Some of these analyses only require post-level information (what is the product being bought or sold in this post?) whereas other analyses might require token-level references; we annotate at the token level to make our annotation as general as possible. Our dataset has already proven enabling for case studies on these particular forums BIBREF6 , including a study of marketplace activity on bulk hacked accounts versus users selling their own accounts.",
"Our task has similarities to both slot-filling information extraction (with provenance information) as well as standard named-entity recognition (NER). Compared to NER, our task features a higher dependence on context: we only care about the specific product being bought or sold in a post, not other products that might be mentioned. Moreover, because we are operating over forums, the data is substantially messier than classical NER corpora like CoNLL BIBREF7 . While prior work has dealt with these messy characteristics for syntax BIBREF8 and for discourse BIBREF9 , BIBREF10 , BIBREF11 , our work is the first to tackle forum data (and marketplace forums specifically) from an information extraction perspective.",
"Having annotated a dataset, we examine supervised and semi-supervised learning approaches to the product extraction problem. Binary or CRF classification of tokens as products is effective, but performance drops off precipitously when a system trained on one forum is applied to a different forum: in this sense, even two different cybercrime forums seem to represent different “fine-grained domains.” Since we want to avoid having to annotate data for every new forum that might need to be analyzed, we explore several methods for adaptation, mixing type-level annotation BIBREF12 , BIBREF13 , token-level annotation BIBREF14 , and semi-supervised approaches BIBREF15 , BIBREF16 . We find little improvement from these methods and discuss why they fail to have a larger impact.",
"Overall, our results characterize the challenges of our fine-grained domain adaptation problem in online marketplace data. We believe that this new dataset provides a useful testbed for additional inquiry and investigation into modeling of fine-grained domain differences."
],
[
"We consider several forums that vary in the nature of products being traded:",
"Table TABREF3 gives some statistics of these forums. These are the same forums used to study product activity in PortnoffEtAl2017. We collected all available posts and annotated a subset of them. In total, we annotated 130,336 tokens; accounting for multiple annotators, our annotators considered 478,176 tokens in the process of labeling the data.",
"Figure FIGREF2 shows two examples of posts from Darkode. In addition to aspects of the annotation, which we describe below, we see that the text exhibits common features of web text: abbreviations, ungrammaticality, spelling errors, and visual formatting, particularly in thread titles. Also, note how some words that are not products here might be in other contexts (e.g., Exploits)."
],
[
"We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .",
"Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation.",
"We preprocessed the data using the tokenizer and sentence-splitter from the Stanford CoreNLP toolkit BIBREF17 . Note that many sentences in the data are already delimited by line breaks, making the sentence-splitting task much easier. We performed annotation on the tokenized data so that annotations would be consistent with surrounding punctuation and hyphenated words.",
"Our full annotation guide is available with our data release. Our basic annotation principle is to annotate tokens when they are either the product that will be delivered or are an integral part of the method leading to the delivery of that product. Figure FIGREF2 shows examples of this for a deliverable product (bot) as well as a service (cleaning). Both a product and service may be annotated in a single example: for a post asking to hack an account, hack is the method and the deliverable is the account, so both are annotated. In general, methods expressed as verbs may be annotated in addition to nominal references.",
"",
"",
"When the product is a multiword expression (e.g., Backconnect bot), it is almost exclusively a noun phrase, in which case we annotate the head word of the noun phrase (bot). Annotating single tokens instead of spans meant that we avoided having to agree on an exact parse of each post, since even the boundaries of base noun phrases can be quite difficult to agree on in ungrammatical text.",
"If multiple different products are being bought or sold, we annotate them all. We do not annotate:",
"Features of products",
"Generic product references, e.g., this, them",
"Product mentions inside “vouches” (reviews from other users)",
"Product mentions outside of the first and last 10 lines of each post",
"Table TABREF3 shows inter-annotator agreement according to our annotation scheme. We use the Fleiss' Kappa measurement BIBREF18 , treating our task as a token-level annotation where every token is annotated as either a product or not. We chose this measure as we are interested in agreement between more than two annotators (ruling out Cohen's kappa), have a binary assignment (ruling out correlation coefficients) and have datasets large enough that the biases Krippendorff's Alpha addresses are not a concern. The values indicate reasonable agreement."
],
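As a rough sketch of the agreement computation described in the annotation section above (each token treated as a binary product/non-product decision by three annotators), Fleiss' Kappa can be computed with statsmodels; the toy label matrix below is invented for illustration and is not the paper's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = tokens, columns = annotators; 1 = product, 0 = not a product (toy data).
labels = np.array([
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])

# aggregate_raters converts per-rater labels into per-category counts per item.
table, _categories = aggregate_raters(labels)
print("Fleiss' kappa:", fleiss_kappa(table))
```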
[
"Because we annotate entities in a context-sensitive way (i.e., only annotating those in product context), our task resembles a post-level information extraction task. The product information in a post can be thought of as a list-valued slot to be filled in the style of TAC KBP BIBREF19 , BIBREF20 , with the token-level annotations constituting provenance information. However, we chose to anchor the task fully at the token level to simplify the annotation task: at the post level, we would have to decide whether two distinct product mentions were actually distinct products or not, which requires heavier domain knowledge. Our approach also resembles the fully token-level annotations of entity and event information in the ACE dataset BIBREF21 ."
],
[
"In light of the various views on this task and its different requirements for different potential applications, we describe and motivate a few distinct evaluation metrics below. The choice of metric will impact system design, as we discuss in the following sections."
],
[
"Another axis of variation in metrics comes from whether we consider token-level or phrase-level outputs. As noted in the previous section, we did not annotate noun phrases, but we may actually be interested in identifying them. In Figure FIGREF2 , for example, extracting Backconnect bot is more useful than extracting bot in isolation, since bot is a less specific characterization of the product.",
"We can convert our token-level annotations to phrase-level annotations by projecting our annotations to the noun phrase level based on the output of an automatic parser. We used the parser of ChenManning2014 to parse all sentences of each post. For each annotated token that was given a nominal tag (N*), we projected that token to the largest NP containing it of length less than or equal to 7; most product NPs are shorter than this, and when the parser predicts a longer NP, our analysis found that it typically reflects a mistake. In Figure FIGREF2 , the entire noun phrase Backconnect bot would be labeled as a product. For products realized as verbs (e.g., hack), we leave the annotation as the single token.",
"Throughout the rest of this work, we will evaluate sometimes at the token-level and sometimes at the NP-level (including for the product type evaluation and post-level accuracy); we will specify which evaluation is used where."
],
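The noun-phrase projection step described above can be sketched as follows. This is not the paper's implementation: it assumes constituency parses are available as nltk Tree objects (the paper uses the Chen and Manning (2014) parser), and the helper name and toy parse are illustrative.

```python
from nltk.tree import Tree

MAX_NP_LEN = 7

def project_to_np(parse: Tree, token_idx: int):
    """Return the token span of the largest NP (length <= 7) covering token_idx."""
    best = (token_idx, token_idx + 1)  # fall back to the single annotated token

    def walk(t, offset):
        nonlocal best
        if isinstance(t, str):          # leaf: one token
            return offset + 1
        end = offset
        for child in t:                 # recurse over children, tracking spans
            end = walk(child, end)
        span_len = end - offset
        if (t.label() == "NP" and offset <= token_idx < end
                and span_len <= MAX_NP_LEN and span_len > best[1] - best[0]):
            best = (offset, end)
        return end

    walk(parse, 0)
    return best

# Toy parse for "selling a Backconnect bot now"; token 3 is the annotated head 'bot'.
tree = Tree.fromstring(
    "(S (VP (VBG selling) (NP (DT a) (NNP Backconnect) (NN bot)) (ADVP (RB now))))")
print(project_to_np(tree, token_idx=3))  # -> (1, 4), i.e. "a Backconnect bot"
```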
[
"We consider several baselines for product extraction, two supervised learning-based methods (here), and semi-supervised methods (Section SECREF5 )."
],
[
"Table TABREF30 shows development set results on Darkode for each of the four systems for each metric described in Section SECREF3 . Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only post-level accuracy but also product type F INLINEFORM0 . This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts. Comparing the automatic systems with human annotator performance we see a substantial gap. Note that our best annotator's token F INLINEFORM1 was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline.",
"The noun phrase metric appears to be generally more forgiving, since token distinctions within noun phrases are erased. The post-level NP system achieves an F-score of 78 on product type identification, and post-level accuracy is around 88%. While there is room for improvement, this system is accurate enough to enable analysis of Darkode with automatic annotation.",
"Throughout the rest of this work, we focus on NP-level evaluation and post-level NP accuracy."
],
[
"Table TABREF30 only showed results for training and evaluating within the same forum (Darkode). However, we wish to apply our system to extract product occurrences from a wide variety of forums, so we are interested in how well the system will generalize to a new forum. Tables TABREF33 and TABREF38 show full results of several systems in within-forum and cross-forum evaluation settings. Performance is severely degraded in the cross-forum setting compared to the within-forum setting, e.g., on NP-level F INLINEFORM0 , a Hack Forums-trained model is 14.6 F INLINEFORM1 worse at the Darkode task than a Darkode-trained model (61.2 vs. 75.8). Differences in how the systems adapt between different forums will be explored more thoroughly in Section SECREF43 .",
"In the next few sections, we explore several possible methods for improving results in the cross-forum settings and attempting to build a more domain-general system. These techniques generally reflect two possible hypotheses about the source of the cross-domain challenges:"
],
[
"To test Hypothesis 1, we investigate whether additional lexical information helps identify product-like words in new domains. A classic semi-supervised technique for exploiting unlabeled target data is to fire features over word clusters or word vectors BIBREF15 . These features should generalize well across domains that the clusters are formed on: if product nouns occur in similar contexts across domains and therefore wind up in the same cluster, then a model trained on domain-limited data should be able to learn that that cluster identity is indicative of products.",
"We form Brown clusters on our unlabeled data from both Darkode and Hack Forums (see Table TABREF3 for sizes). We use Liang2005's implementation to learn 50 clusters. Upon inspection, these clusters do indeed capture some of the semantics relevant to the problem: for example, the cluster 110 has as its most frequent members service, account, price, time, crypter, and server, many of which are product-associated nouns. We incorporate these as features into our model by characterizing each token with prefixes of the Brown cluster ID; we used prefixes of length 2, 4, and 6.",
"Tables TABREF33 and TABREF38 show the results of incorporating Brown cluster features into our trained models. These features do not lead to statistically-significant gains in either NP-level F INLINEFORM0 or post-level accuracy, despite small improvements in some cases. This indicates that Brown clusters might be a useful feature sometimes, but do not solve the domain adaptation problem in this context."
],
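A small sketch of how Brown-cluster prefix features of lengths 2, 4, and 6 could be fired per token, assuming a word-to-cluster-bit-string mapping has already been learned; the mapping below is made up for illustration and is not the clusters learned in the paper.

```python
# Hypothetical word -> Brown cluster bit-string mapping.
BROWN_PATHS = {
    "bot":     "110100",
    "crypter": "110110",
    "selling": "011010",
}

def brown_features(token, prefix_lengths=(2, 4, 6)):
    """Fire one feature per cluster-path prefix (lengths 2, 4, 6, as in the text)."""
    path = BROWN_PATHS.get(token.lower())
    if path is None:
        return []
    return [f"brown{n}={path[:n]}" for n in prefix_lengths]

print(brown_features("bot"))      # ['brown2=11', 'brown4=1101', 'brown6=110100']
print(brown_features("unknown"))  # []
```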
[
"Another approach following Hypothesis 1 is to use small amounts of supervised data, One cheap approach for annotating data in a new domain is to exploit type-level annotation BIBREF12 , BIBREF13 . Our token-level annotation standard is relatively complex to learn, but a researcher could quite easily provide a few exemplar products for a new forum based on just a few minutes of reading posts and analyzing the forum.",
"Given the data that we've already annotated, we can simulate this process by iterating through our labeled data and collecting annotated product names that are sufficiently common. Specifically, we take all (lowercased, stemmed) product tokens and keep those occurring at least 4 times in the training dataset (recall that these datasets are INLINEFORM0 700 posts). This gives us a list of 121 products in Darkode and 105 products in Hack Forums.",
"To incorporate this information into our system, we add a new feature on each token indicating whether or not it occurs in the gazetteer. At training time, we use the gazetteer scraped from the training set. At test time, we use the gazetteer from the target domain as a form of partial type-level supervision. Tables TABREF33 and TABREF38 shows the results of incorporating the gazetteer into the system. Gazetteers seem to provide somewhat consistent gains in cross-domain settings, though many of these individual improvements are not statistically significant, and the gazetteers can sometimes hurt performance when testing on the same domain the system was trained on."
],
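A minimal sketch of the gazetteer construction and feature described above: collect lowercased, stemmed product tokens from the training annotations, keep those occurring at least 4 times, and fire a membership feature at test time. The stemmer choice and data structures are assumptions, not the paper's exact code.

```python
from collections import Counter

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
MIN_COUNT = 4

def build_gazetteer(annotated_posts):
    """annotated_posts: iterable of per-post lists of annotated product tokens."""
    counts = Counter(
        stemmer.stem(tok.lower())
        for post_products in annotated_posts
        for tok in post_products
    )
    return {w for w, c in counts.items() if c >= MIN_COUNT}

def gazetteer_feature(token, gazetteer):
    return ["in_gazetteer"] if stemmer.stem(token.lower()) in gazetteer else []

# Toy usage: 'bot' (including the plural) occurs 4 times, so it enters the gazetteer.
posts = [["bot"], ["bots"], ["bot", "crypter"], ["bot"], ["account"]]
gaz = build_gazetteer(posts)
print(gaz)                             # {'bot'}
print(gazetteer_feature("Bots", gaz))  # ['in_gazetteer']
```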
[
"We now turn our attention to methods that might address Hypothesis 2. If we assume the domain transfer problem is more complex, we really want to leverage labeled data in the target domain rather than attempting to transfer features based only on type-level information. Specifically, we are interested in cases where a relatively small number of labeled posts (less than 100) might provide substantial benefit to the adaptation; a researcher could plausibly do this annotation in a few hours.",
"We consider two ways of exploiting labeled target-domain data. The first is to simply take these posts as additional training data. The second is to also employ the “frustratingly easy” domain adaptation method of Daume2007. In this framework, each feature fired in our model is actually fired twice: one copy is domain-general and one is conjoined with the domain label (here, the name of the forum). In doing so, the model should gain some ability to separate domain-general from domain-specific feature values, with regularization encouraging the domain-general feature to explain as much of the phenomenon as possible. For both training methods, we upweight the contribution of the target-domain posts in the objective by a factor of 5.",
"Figure FIGREF41 shows learning curves for both of these methods in two adaptation settings as we vary the amount of labeled target-domain data. The system trained on Hack Forums is able to make good use of labeled data from Darkode: having access to 20 labeled posts leads to gains of roughly 7 F INLINEFORM0 . Interestingly, the system trained on Darkode is not able to make good use of labeled data from Hack Forums, and the domain-specific features actually cause a drop in performance until we include a substantial amount of data from Hack Forums (at least 80 posts). We are likely overfitting the small Hack Forums training set with the domain-specific features."
],
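The "frustratingly easy" feature augmentation used above reduces to duplicating every feature into a domain-general copy and a domain-conjoined copy; the feature-string format below is illustrative rather than the paper's internal representation.

```python
def augment_features(features, domain):
    """Daume (2007)-style augmentation: one shared copy + one domain-conjoined copy."""
    out = []
    for f in features:
        out.append(f"general:{f}")   # domain-general version of the feature
        out.append(f"{domain}:{f}")  # version conjoined with the forum label
    return out

print(augment_features(["word=bot", "brown2=11"], domain="darkode"))
# ['general:word=bot', 'darkode:word=bot', 'general:brown2=11', 'darkode:brown2=11']
```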
[
"In order to understand the variable performance and shortcomings of the domain adaptation approaches we explored, it is useful to examine our two initial hypotheses and characterize the datasets a bit further. To do so, we break down system performance on products seen in the training set versus novel products. Because our systems depend on lexical and character INLINEFORM0 -gram features, we expect that they will do better at predicting products we have seen before.",
"Table TABREF39 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in-vocabulary products. OOV rates of a Darkode-trained system are generally lower on new forums, indicating that that forum has better all-around product coverage. A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums.",
"This would seem to support Hypothesis 1. Moreover, Table TABREF33 shows that the Hack Forums-trained system achieves a 21% error reduction on Hack Forums compared to a Darkode-trained system, while a Darkode-trained system obtains a 38% error reduction on Darkode relative to a Hack Forums-trained system; this greater error reduction means that Darkode has better coverage of Hack Forums than vice versa. Darkode's better product coverage also helps explain why Section SECREF40 showed better performance of adapting Hack Forums to Darkode than the other way around: augmenting Hack Forums data with a few posts from Darkode can give critical knowledge about new products, but this is less true if the forums are reversed. Duplicating features and adding parameters to the learner also has less of a clear benefit when adapting from Darkode, when the types of knowledge that need to be added are less concrete.",
"Note, however, that these results do not tell the full story. Table TABREF39 reports recall values, but not all systems have the same precision/recall tradeoff: although they were tuned to balance precision and recall on their respective development sets, the Hack Forums-trained system is slightly more precision-oriented on Nulled than the Darkode-trained system. In fact, Table TABREF33 shows that the Hack Forums-trained system actually performs better on Nulled, largely due to better performance on previously-seen products. This indicates that there is some truth to Hypothesis 2: product coverage is not the only important factor determining performance."
],
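The product OOV rates discussed above boil down to a simple set computation; the sketch below uses invented token lists and ignores stemming for brevity.

```python
def product_oov_rate(train_products, eval_products):
    """Fraction of evaluation product tokens whose (lowercased) form was never
    seen as a product token in the training forum."""
    seen = {p.lower() for p in train_products}
    oov = [p for p in eval_products if p.lower() not in seen]
    return len(oov) / max(len(eval_products), 1)

train = ["bot", "crypter", "account", "hack"]
evaluation = ["bot", "exploit", "account", "installs", "crypter"]
print(f"OOV rate: {product_oov_rate(train, evaluation):.0%}")  # 40%
```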
[
"We present a new dataset of posts from cybercrime marketplaces annotated with product references, a task which blends IE and NER. Learning-based methods degrade in performance when applied to new forums, and while we explore methods for fine-grained domain adaption in this data, effective methods for this task are still an open question.",
"Our datasets used in this work are available at https://evidencebasedsecurity.org/forums/ Code for the product extractor can be found at https://github.com/ccied/ugforum-analysis/tree/master/extract-product"
],
[
"This work was supported in part by the National Science Foundation under grants CNS-1237265 and CNS-1619620, by the Office of Naval Research under MURI grant N000140911081, by the Center for Long-Term Cybersecurity and by gifts from Google. We thank all the people that provided us with forum data for our analysis; in particular Scraping Hub and SRI for their assistance in collecting data for this study. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors."
]
],
"section_name": [
"Introduction",
"Dataset and Annotation",
"Annotation Process",
"Discussion",
"Evaluation Metrics",
"Phrase-level Evaluation",
"Models",
"Basic Results",
"Domain Adaptation",
"Brown Clusters",
"Type-level Annotation",
"Token-level Annotation",
"Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"ac654b14d4e0fc5b8d6e9cc686d0228865914348"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"934ad45af60e031d7b2a1adb8c63a6ae15712d11"
],
"answer": [
{
"evidence": [
"We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .",
"Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation."
],
"extractive_spans": [
"annotators who were not security experts",
"researchers in either NLP or computer security"
],
"free_form_answer": "",
"highlighted_evidence": [
" As well as refining the annotation guidelines, the development process trained annotators who were not security experts.",
"The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d9135203a92ded14d260a7d551b7a447c8b7c910"
]
},
{
"annotation_id": [
"ce58f234425de8ed5d27984ec02ab29591c5c3fe"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Test set results at the NP level in within-forum and cross-forum settings for a variety of different systems. Using either Brown clusters or gazetteers gives mixed results on cross-forum performance: only one of the improvements (†) is statistically significant with p < 0.05 according to a bootstrap resampling test. Gazetteers are unavailable for Blackhat and Nulled since we have no training data for those forums."
],
"extractive_spans": [],
"free_form_answer": "Darkode, Hack Forums, Blackhat and Nulled.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Test set results at the NP level in within-forum and cross-forum settings for a variety of different systems. Using either Brown clusters or gazetteers gives mixed results on cross-forum performance: only one of the improvements (†) is statistically significant with p < 0.05 according to a bootstrap resampling test. Gazetteers are unavailable for Blackhat and Nulled since we have no training data for those forums."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d9135203a92ded14d260a7d551b7a447c8b7c910"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What supervised models are experimented with?",
"Who annotated the data?",
"What are the four forums the data comes from?"
],
"question_id": [
"81686454f215e28987c7ad00ddce5ffe84b37195",
"fc06502fa62803b62f6fd84265bfcfb207c1113b",
"ce807a42370bfca10fa322d6fa772e4a58a8dca1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Forum statistics. The left columns (posts and words per post) are calculated over all data, while the right columns are based on annotated data only. Note that products per post indicate product mentions per post, not product types. Slashes indicate the train/development/test split for Darkode and train/test split for Hack Forums. Agreement is measured using Fleiss’ Kappa; the two columns cover data where three annotators labeled each post and a subset labeled by all annotators.",
"Table 2: Development set results on Darkode. Bolded F1 values represent statistically-significant improvements over all other system values in the column with p < 0.05 according to a bootstrap resampling test. Our post-level system outperforms our binary classifier at whole-post accuracy and on type-level product extraction, even though it is less good on the token-level metric. All systems consistently identify product NPs better than they identify product tokens. However, there is a substantial gap between our systems and human performance.",
"Table 3: Test set results at the NP level in within-forum and cross-forum settings for a variety of different systems. Using either Brown clusters or gazetteers gives mixed results on cross-forum performance: only one of the improvements (†) is statistically significant with p < 0.05 according to a bootstrap resampling test. Gazetteers are unavailable for Blackhat and Nulled since we have no training data for those forums.",
"Table 4: Test set results at the whole-post level in within-forum and cross-forum settings for a variety of different systems. Brown clusters and gazetteers give similarly mixed results as in the token-level evaluation; † indicates statistically significant gains over the post-level system with p < 0.05 according to a bootstrap resampling test.",
"Table 5: Product token out-of-vocabulary rates on development sets (test set for Blackhat and Nulled) of various forums with respect to training on Darkode and Hack Forums. We also show the recall of an NPlevel system on seen (Rseen) and OOV (Roov) tokens. Darkode seems to be more “general” than Hack Forums: the Darkode system generally has lower OOV rates and provides more consistent performance on OOV tokens than the Hack Forums system.",
"Figure 2: Token-supervised domain adaptation results for two settings. As our system is trained on an increasing amount of target-domain data (xaxis), its performance generally improves. However, adaptation from Hack Forums to Darkode is much more effective than the other way around, and using domain features as in Daume III (2007) gives little benefit over naı̈ve use of the new data."
],
"file": [
"2-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Figure2-1.png"
]
} | [
"What are the four forums the data comes from?"
] | [
[
"1708.09609-7-Table3-1.png"
]
] | [
"Darkode, Hack Forums, Blackhat and Nulled."
] | 745 |
1906.11604 | Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion | We present a novel conversational-context aware end-to-end speech recognizer based on a gated neural network that incorporates conversational-context/word/speech embeddings. Unlike conventional speech recognition models, our model learns longer conversational-context information that spans across sentences and is consequently better at recognizing long conversations. Specifically, we propose to use the text-based external word and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end framework, yielding a significant improvement in word error rate with better conversational-context representation. We evaluated the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models. | {
"paragraphs": [
[
"In a long conversation, there exists a tendency of semantically related words, or phrases reoccur across sentences, or there exists topical coherence. Existing speech recognition systems are built at individual, isolated utterance level in order to make building systems computationally feasible. However, this may lose important conversational context information. There have been many studies that have attempted to inject a longer context information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , all of these models are developed on text data for language modeling task.",
"There has been recent work attempted to use the conversational-context information within a end-to-end speech recognition framework BIBREF6 , BIBREF7 , BIBREF8 . The new end-to-end speech recognition approach BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 integrates all available information within a single neural network model, allows to make fusing conversational-context information possible. However, these are limited to encode only one preceding utterance and learn from a few hundred hours of annotated speech corpus, leading to minimal improvements.",
"Meanwhile, neural language models, such as fastText BIBREF17 , BIBREF18 , BIBREF19 , ELMo BIBREF20 , OpenAI GPT BIBREF21 , and Bidirectional Encoder Representations from Transformers (BERT) BIBREF22 , that encode words and sentences in fixed-length dense vectors, embeddings, have achieved impressive results on various natural language processing tasks. Such general word/sentence embeddings learned on large text corpora (i.e., Wikipedia) has been used extensively and plugged in a variety of downstream tasks, such as question-answering and natural language inference, BIBREF22 , BIBREF20 , BIBREF23 , to drastically improve their performance in the form of transfer learning.",
"In this paper, we create a conversational-context aware end-to-end speech recognizer capable of incorporating a conversational-context to better process long conversations. Specifically, we propose to exploit external word and/or sentence embeddings which trained on massive amount of text resources, (i.e. fastText, BERT) so that the model can learn better conversational-context representations. So far, the use of such pre-trained embeddings have found limited success in the speech recognition task. We also add a gating mechanism to the decoder network that can integrate all the available embeddings (word, speech, conversational-context) efficiently with increase representational power using multiplicative interactions. Additionally, we explore a way to train our speech recognition model even with text-only data in the form of pre-training and joint-training approaches. We evaluate our model on the Switchboard conversational speech corpus BIBREF24 , BIBREF25 , and show that our model outperforms the sentence-level end-to-end speech recognition model. The main contributions of our work are as follows:"
],
[
"Several recent studies have considered to incorporate a context information within a end-to-end speech recognizer BIBREF26 , BIBREF27 . In contrast with our method which uses a conversational-context information in a long conversation, their methods use a list of phrases (i.e. play a song) in reference transcription in specific tasks, contact names, songs names, voice search, dictation.",
"Several recent studies have considered to exploit a longer context information that spans multiple sentences BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In contrast with our method which uses a single framework for speech recognition tasks, their methods have been developed on text data for language models, and therefore, it must be integrated with a conventional acoustic model which is built separately without a longer context information.",
"Several recent studies have considered to embed a longer context information within a end-to-end framework BIBREF6 , BIBREF7 , BIBREF8 . In contrast with our method which can learn a better conversational-context representation with a gated network that incorporate external word/sentence embeddings from multiple preceding sentence history, their methods are limited to learn conversational-context representation from one preceding sentence in annotated speech training set.",
"Gating-based approaches have been used for fusing word embeddings with visual representations in genre classification task or image search task BIBREF28 , BIBREF29 and for learning different languages in speech recognition task BIBREF30 ."
],
[
"We perform end-to-end speech recognition using a joint CTC/Attention-based approach with graphemes as the output symbols BIBREF16 , BIBREF31 . The key advantage of the joint CTC/Attention framework is that it can address the weaknesses of the two main end-to-end models, Connectionist Temporal Classification (CTC) BIBREF9 and attention-based encoder-decoder (Attention) BIBREF32 , by combining the strengths of the two. With CTC, the neural network is trained according to a maximum-likelihood training criterion computed over all possible segmentations of the utterance's sequence of feature vectors to its sequence of labels while preserving left-right order between input and output. With attention-based encoder-decoder models, the decoder network can learn the language model jointly without relying on the conditional independent assumption.",
"Given a sequence of acoustic feature vectors, $\\mathbf {x}$ , and the corresponding graphemic label sequence, $\\mathbf {y}$ , the joint CTC/Attention objective is represented as follows by combining two objectives with a tunable parameter $\\lambda : 0 \\le \\lambda \\le 1$ : ",
"$$\\mathcal {L} &= \\lambda \\mathcal {L}_\\text{CTC} + (1-\\lambda ) \\mathcal {L}_\\text{att}.$$ (Eq. 6) ",
" Each loss to be minimized is defined as the negative log likelihood of the ground truth character sequence $\\mathbf {y^*}$ , is computed from: ",
"$$\\begin{split}\n\\mathcal {L}_\\text{CTC} \\triangleq & -\\ln \\sum _{\\mathbf {\\pi } \\in \\Phi (\\mathbf {y})} p(\\mathbf {\\pi }|\\mathbf {x})\n\\end{split}$$ (Eq. 7) ",
"$$\\begin{split}\n\\mathcal {L}_\\text{att} \\triangleq & -\\sum _u \\ln p(y_u^*|\\mathbf {x},y^*_{1:u-1})\n\\end{split}$$ (Eq. 8) ",
" where $\\mathbf {\\pi }$ is the label sequence allowing the presence of the blank symbol, $\\Phi $ is the set of all possible $\\mathbf {\\pi }$ given $u$ -length $\\mathbf {y}$ , and $y^*_{1:u-1}$ is all the previous labels.",
"Both CTC and the attention-based encoder-decoder networks are also used in the inference step. The final hypothesis is a sequence that maximizes a weighted conditional probability of CTC and attention-based encoder-decoder network BIBREF33 : ",
"$$\\begin{split}\n\\mathbf {y}* = \\text{argmax} \\lbrace & \\gamma \\log p_{CTC}(\\mathbf {y}|\\mathbf {x}) \\\\\n&+ (1-\\gamma ) \\log p_{att}(\\mathbf {y}|\\mathbf {x}) \\rbrace \n\\end{split}$$ (Eq. 9) "
],
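A minimal PyTorch sketch of the interpolated objective above (lambda * CTC loss + (1 - lambda) * attention loss); the tensor shapes, the lambda value, and the padding/blank conventions are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F

def joint_ctc_attention_loss(ctc_log_probs, ctc_targets, input_lens, target_lens,
                             att_logits, att_targets, lam=0.2, pad_id=0):
    """L = lam * L_CTC + (1 - lam) * L_att (both negative log-likelihoods)."""
    # CTC branch: log_probs must be (T, batch, vocab) for torch's CTC loss.
    l_ctc = F.ctc_loss(ctc_log_probs, ctc_targets, input_lens, target_lens,
                       blank=0, zero_infinity=True)
    # Attention branch: cross-entropy over decoder outputs, ignoring padding.
    l_att = F.cross_entropy(att_logits.reshape(-1, att_logits.size(-1)),
                            att_targets.reshape(-1), ignore_index=pad_id)
    return lam * l_ctc + (1.0 - lam) * l_att

# Toy shapes: T=50 encoder frames, batch=2, vocab=30, U=10 decoder steps.
T, B, V, U = 50, 2, 30, 10
ctc_log_probs = torch.randn(T, B, V).log_softmax(dim=-1)
ctc_targets = torch.randint(1, V, (B, U))        # labels exclude the blank id 0
input_lens = torch.full((B,), T, dtype=torch.long)
target_lens = torch.full((B,), U, dtype=torch.long)
att_logits = torch.randn(B, U, V)
att_targets = torch.randint(1, V, (B, U))
print(joint_ctc_attention_loss(ctc_log_probs, ctc_targets, input_lens,
                               target_lens, att_logits, att_targets))
```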
[
"In this work, we use word units as our model outputs instead of sub-word units. Direct acoustics-to-word (A2W) models train a single neural network to directly recognize words from speech without any sub-word units, pronunciation model, decision tree, decoder, which significantly simplifies the training and decoding process BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 . In addition, building A2W can learn more semantically meaningful conversational-context representations and it allows to exploit external resources like word/sentence embeddings where the unit of representation is generally words. However, A2W models require more training data compared to conventional sub-word models because it needs sufficient acoustic training examples per word to train well and need to handle out-of-vocabulary(OOV) words. As a way to manage this OOV issue, we first restrict the vocabulary to 10k frequently occurring words. We then additionally use a single character unit and start-of-OOV (sunk), end-of-OOV (eunk) tokens to make our model generate a character by decomposing the OOV word into a character sequence. For example, the OOV word, rainstorm, is decomposed into (sunk) r a i n s t o r m (eunk) and the model tries to learn such a character sequence rather than generate the OOV token. From this method, we obtained 1.2% - 3.7% word error rate (WER) relative improvements in evaluation set where exists 2.9% of OOVs."
],
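A sketch of the OOV decomposition described above: in-vocabulary words are kept as single output units, while OOV words are spelled out as characters wrapped in start-of-OOV / end-of-OOV tokens. The angle-bracket token spellings and the tiny vocabulary are illustrative stand-ins for the paper's 10k-word setup.

```python
VOCAB = {"the", "sell", "bot", "<sunk>", "<eunk>"}  # toy stand-in for the 10k vocabulary

def to_output_units(word):
    """Keep in-vocabulary words; spell out OOV words character by character."""
    if word in VOCAB:
        return [word]
    return ["<sunk>"] + list(word) + ["<eunk>"]

print(to_output_units("bot"))
# ['bot']
print(to_output_units("rainstorm"))
# ['<sunk>', 'r', 'a', 'i', 'n', 's', 't', 'o', 'r', 'm', '<eunk>']
```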
[
"In this section, we describe the A2W model with conversational-context fusion. In order to fuse conversational context information within the A2W, end-to-end speech recognition framework, we extend the decoder sub-network to predict the output additionally conditioning on conversational context, by learning a conversational-context embedding. We encode single or multiple preceding utterance histories into a fixed-length, single vector, then inject it to the decoder network as an additional input at every output step.",
"Let say we have $K$ number of utterances in a conversation. For $k$ -th sentence, we have acoustic features $(x_1, \\cdots , x_T)^k$ and output word sequence, $(w_1, \\cdots , w_U)$ . At output timestamp $u$ , our decoder generates the probability distribution over words ( $w_u^k$ ), conditioned on 1) speech embeddings, attended high-level representation ( $\\mathbf {e_{speech}^{k}}$ ) generated from encoder, and 2) word embeddings from all the words seen previously ( $e^{u-1}_{word}$ ), and 3) conversational-context embeddings ( $e^{k}_{context}$ ), which represents the conversational-context information for current ( $k$ ) utterance prediction: ",
"$$\\mathbf {e^{k}_{speech}} = & \\text{Encoder}(\\mathbf {x^k}) \\\\\nw^k_u \\sim & \\text{Decoder}(\\mathbf {e^{k}_{context}}, e^k_{word}, \\mathbf {e^{k}_{speech}})$$ (Eq. 11) ",
"We can simply represent such contextual embedding, $e^{k}_{context}$ , by mean of one-hot word vectors or word distributions, $\\texttt {mean}(e^{k-1}_{word_{1}} + \\cdots + e^{k-1}_{word_{U}})$ from the preceding utterances.",
"In order to learn and use the conversational-context during training and decoding, we serialize the utterances based on their onset times and their conversations rather than random shuffling of data. We shuffle data at the conversation level and create mini-batches that contain only one sentence of each conversation. We fill the \"dummy\" input/output example at positions where the conversation ended earlier than others within the mini-batch to not influence other conversations while passing context to the next batch."
],
[
"Learning better representation of conversational-context is the key to achieve better processing of long conversations. To do so, we propose to encode the general word/sentence embeddings pre-trained on large textual corpora within our end-to-end speech recognition framework. Another advantage of using pre-trained embedding models is that we do not need to back-propagate the gradients across contexts, making it easier and faster to update the parameters for learning a conversational-context representation.",
"There exist many word/sentence embeddings which are publicly available. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec BIBREF1 , GloVe BIBREF39 , fastText BIBREF17 , maps each word independently on the context of the sentence where the word occur in. Although it is easy to use, it assumes that each word represents a single meaning which is not true in real-word. Contextualized word embeddings, sentence embeddings, such as deep contextualized word representations BIBREF20 , BERT BIBREF22 , encode the complex characteristics and meanings of words in various context by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach enabling them to also learn good “sentence” representation in order to predict the masked word.",
"In this work, we explore both types of embeddings to learn conversational-context embeddings as illustrated in Figure 1 . The first method is to use word embeddings, fastText, to generate 300-dimensional embeddings from 10k-dimensional one-hot vector or distribution over words of each previous word and then merge into a single context vector, $e^k_{context}$ . Since we also consider multiple word/utterance history, we consider two simple ways to merge multiple embeddings (1) mean, and (2) concatenation. The second method is to use sentence embeddings, BERT. It is used to a generate single 786-dimensional sentence embedding from 10k-dimensional one-hot vector or distribution over previous words and then merge into a single context vector with two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as our output units and which is different from the external embedding models, we need to handle out-of-vocabulary words. For fastText, words that are missing in the pretrained embeddings we map them to a random multivariate normal distribution with the mean as the sample mean and variance as the sample variance of the known words. For BERT, we use its provided tokenizer to generates byte pair encodings to handle OOV words.",
"Using this approach, we can obtain a more dense, informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ to be used in next $k$ -th utterance prediction."
],
[
"We use contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of multiple utterances that occur in a conversation. Using these contextual gates can be beneficial to decide how to weigh the different embeddings, conversational-context, word and speech embeddings. Rather than merely concatenating conversational-context embeddings BIBREF6 , contextual gating can achieve more improvement because its increased representational power using multiplicative interactions.",
"Figure 2 illustrates our proposed contextual gating mechanism. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , and let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of current $k$ -th utterance $x^k_{1:T}$ and $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for $n$ -number of preceding utterances ${s_{k-1-n:k-1}}$ . Then using a gating mechanism: ",
"$$g = \\sigma (e_c, e_w, e_s)$$ (Eq. 15) ",
" where $\\sigma $ is a 1 hidden layer DNN with $\\texttt {sigmoid}$ activation, the gated embedding $e$ is calcuated as ",
"$$e = g \\odot (e_c, e_w, e_s) \\\\\nh = \\text{LSTM}(e)$$ (Eq. 16) ",
" and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with conversational-context embedding $e_c$ again with a gating mechanism, ",
"$$g = \\sigma (e_C, h) \\\\\n\\hat{h} = g \\odot (e_c, h)$$ (Eq. 17) ",
" Then the next hidden layer takes these gated activations, $\\hat{h}$ , and so on."
],
[
"To evaluate our proposed conversational end-to-end speech recognition model, we use the Switchboard (SWBD) LDC corpus (97S62) task. We split 300 hours of the SWBD training set into two: 285 hours of data for the model training, and 5 hours of data for the hyper-parameter tuning. We evaluate the model performance on the HUB5 Eval2000 which consists of the Callhome English (CH) and Switchboard (SWBD) (LDC2002S09, LDC2002T43). In Table 1 , we show the number of conversations and the average number of utterances per a single conversation.",
"The audio data is sampled at 16kHz, and then each frame is converted to a 83-dimensional feature vector consisting of 80-dimensional log-mel filterbank coefficients and 3-dimensional pitch features as suggested in BIBREF40 . The number of our word-level output tokens is 10,038, which includes 47 single character units as described in Section \"Acoustic-to-Words Models\" . Note that no pronunciation lexicon was used in any of the experiments."
],
[
"For the architecture of the end-to-end speech recognition, we used joint CTC/Attention end-to-end speech recognition BIBREF16 , BIBREF31 . As suggested in BIBREF45 , BIBREF33 , the input feature images are reduced to ( $1/4 \\times 1/4$ ) images along with the time-frequency axis within the two max-pooling layers in CNN. Then, the 6-layer BLSTM with 320 cells is followed by the CNN layer. For the attention mechanism, we used a location-based method BIBREF14 . For the decoder network, we used a 2-layer LSTM with 300 cells. In addition to the standard decoder network, our proposed models additionally require extra parameters for gating layers in order to fuse conversational-context embedding to the decoder network compared to baseline. We denote the total number of trainable parameters in Table 2 .",
"For the optimization method, we use AdaDelta BIBREF46 with gradient clipping BIBREF47 . We used $\\lambda = 0.2$ for joint CTC/Attention training (in Eq. 6 ) and $\\gamma = 0.3$ for joint CTC/Attention decoding (in Eq. 9 ). We bootstrap the training of our proposed conversational end-to-end models from the baseline end-to-end models. To decide the best models for testing, we monitor the development accuracy where we always use the model prediction in order to simulate the testing scenario. At inference, we used a left-right beam search method BIBREF48 with the beam size 10 for reducing the computational cost. We adjusted the final score, $s(\\mathbf {y}|\\mathbf {x})$ , with the length penalty $0.5$ . The models are implemented using the PyTorch deep learning library BIBREF49 , and ESPnet toolkit BIBREF16 , BIBREF31 , BIBREF50 ."
],
[
"Our results are summarized in the Table 2 where we first present the baseline results and then show the improvements by adding each of the individual components that we discussed in previous sections, namely, gated decoding, pretraining decoder network, external word embedding, external conversational embedding and increasing receptive field of the conversational context. Our best model gets around 15% relative improvement on the SWBD subset and 5% relative improvement on the CallHome subset of the eval2000 dataset.",
"We start by evaluating our proposed model which leveraged conversational-context embeddings learned from training corpus and compare it with a standard end-to-end speech recognition models without conversational-context embedding. As seen in Table 2 , we obtained a performance gain over the baseline by using conversational-context embeddings which is learned from training set."
],
[
"Then, we observe that pre-training of decoder network can improve accuracy further as shown in Table 2 . Using pre-training the decoder network, we achieved 5% relative improvement in WER on SWBD set. Since we add external parameters in decoder network to learn conversational-context embeddings, our model requires more efforts to learn these additional parameters. To relieve this issue, we used pre-training techniques to train decoder network with text-only data first. We simply used a mask on top of the Encoder/Attention layer so that we can control the gradients of batches contains text-only data and do not update the Encoder/Attention sub-network parameters."
],
[
"Next, we evaluated the use of pretrained external embeddings (fastText and BERT). We initially observed that we can obtain 2.4% relative improvement over (the model with decoder pretraining) in WER by using fastText for additional word embeddings to the gated decoder network.",
"We also extensively evaluated various ways to use fastText/BERT for conversational-context embeddings. Both methods with fastText and with BERT shows significant improvement from the baseline as well as vanilla conversational-context aware model."
],
[
"We also investigate the effect of the number of utterance history being encoded. We tried different $N = [1, 5, 9]$ number of utterance histories to learn the conversational-context embeddings. Figure 3 shows the relative improvements in the accuracy on the Dev set ( \"Training and decoding\" ) over the baseline “non-conversational” model. We show the improvements on the two different methods of merging the contextual embeddings, namely mean and concatenation. Typically increasing the receptive field of the conversational-context helps improve the model. However, as the number of utterence history increased, the number of trainable parameters of the concatenate model increased making it harder for the model to train. This led to a reduction in the accuracy.",
"We also found that using 5-utterance history with concatenation performed best (15%) on the SWBD set, and using 9-number of utterance history with mean method performed best (5%) on CH set. We also observed that the improvement diminished when we used 9-utterance history for SWBD set, unlike CH set. One possible explanation is that the conversational-context may not be relevant to the current utterance prediction or the model is overfitting."
],
[
"We also experiment with an utterance level sampling strategy with various sampling ratio, $[0.0, 0.2, 0.5, 1.0]$ . Sampling techniques have been extensively used in sequence prediction tasks to reduce overfitting BIBREF51 by training the model conditioning on generated tokens from the model itself, which is how the model actually do at inference, rather than the ground-truth tokens. Similar to choosing previous word tokens from the ground truth or from the model output, we apply it to choose previous utterance from the ground truth or from the model output for learning conversational-context embeddings. Figure 4 shows the relative improvement in the development accuracy ( \"Training and decoding\" ) over the $1.0$ sampling rate which is always choosing model's output. We found that a sampling rate of 20% performed best."
],
[
"We develop a scoring function, $s(i,j)$ to check if our model conserves the conversational consistency for validating the accuracy improvement of our approach. The scoring function measures the average of the conversational distances over every consecutive hypotheses generated from a particular model. The conversational distance is calculated by the Euclidean distance, $\\text{dist}(e_i, e_j)$ of the fixed-length vectors $e_i, e_j$ which represent the model's $i, j$ -th hypothesis, respectively. To obtain a fixed-length vector, utterance embedding, given the model hypothesis, we use BERT sentence embedding as an oracle. Mathematically it can be written as, $\ns(i,j) = \\frac{1}{N}\\sum _{i,j \\in \\texttt {eval}}(\\text{dist}(e_i,e_j))\n$ ",
" where, $i, j$ is a pair of consecutive hypotheses in evaluation data $\\texttt {eval}$ , $N$ is the total number of $i,j$ pairs, $e_i, e_j$ are BERT embeddings. In our experiment, we select the pairs of consecutive utterances from the reference that show lower distance score at least baseline hypotheses.",
"From this process, we obtained three conversational distance scores from 1) the reference transcripts, 2) the hypotheses of our vanilla conversational model which is not using BERT, and 3) the hypotheses of our baseline model. Figure 5 shows the score comparison.",
"We found that our proposed model was 7.4% relatively closer to the reference than the baseline. This indicates that our conversational-context embedding leads to improved similarity across adjacent utterances, resulting in better processing a long conversation."
],
[
"We have introduced a novel method for conversational-context aware end-to-end speech recognition based on a gated network that incorporates word/sentence/speech embeddings. Unlike prior work, our model is trained on conversational datasets to predict a word, conditioning on multiple preceding conversational-context representations, and consequently improves recognition accuracy of a long conversation. Moreover, our gated network can incorporate effectively with text-based external resources, word or sentence embeddings (i.e., fasttext, BERT) within an end-to-end framework and so that the whole system can be optimized towards our final objectives, speech recognition accuracy. By incorporating external embeddings with gating mechanism, our model can achieve further improvement with better conversational-context representation. We evaluated the models on the Switchboard conversational speech corpus and show that our proposed model using gated conversational-context embedding show 15%, 5% relative improvement in WER compared to a baseline model for Switchboard and CallHome subsets respectively. Our model was shown to outperform standard end-to-end speech recognition models trained on isolated sentences. This work is easy to scale and can potentially be applied to any speech related task that can benefit from longer context information, such as spoken dialog system, sentimental analysis."
],
[
"We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work also used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC)."
]
],
"section_name": [
"Introduction",
"Related work",
"Joint CTC/Attention-based encoder-decoder network",
"Acoustic-to-Words Models",
"Conversational-context Aware Models",
"External word/sentence embeddings",
"Contextual gating",
"Datasets",
"Training and decoding",
"Results",
"Pre-training decoder network",
"Use of words/sentence embeddings",
"Conversational-context Receptive Field",
"Sampling technique",
"Analysis of context embeddings",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"95f7e191ce57a5695fd1a2155a895f1f37dbc631"
],
"answer": [
{
"evidence": [
"There exist many word/sentence embeddings which are publicly available. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec BIBREF1 , GloVe BIBREF39 , fastText BIBREF17 , maps each word independently on the context of the sentence where the word occur in. Although it is easy to use, it assumes that each word represents a single meaning which is not true in real-word. Contextualized word embeddings, sentence embeddings, such as deep contextualized word representations BIBREF20 , BERT BIBREF22 , encode the complex characteristics and meanings of words in various context by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach enabling them to also learn good “sentence” representation in order to predict the masked word.",
"In this work, we explore both types of embeddings to learn conversational-context embeddings as illustrated in Figure 1 . The first method is to use word embeddings, fastText, to generate 300-dimensional embeddings from 10k-dimensional one-hot vector or distribution over words of each previous word and then merge into a single context vector, $e^k_{context}$ . Since we also consider multiple word/utterance history, we consider two simple ways to merge multiple embeddings (1) mean, and (2) concatenation. The second method is to use sentence embeddings, BERT. It is used to a generate single 786-dimensional sentence embedding from 10k-dimensional one-hot vector or distribution over previous words and then merge into a single context vector with two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as our output units and which is different from the external embedding models, we need to handle out-of-vocabulary words. For fastText, words that are missing in the pretrained embeddings we map them to a random multivariate normal distribution with the mean as the sample mean and variance as the sample variance of the known words. For BERT, we use its provided tokenizer to generates byte pair encodings to handle OOV words.",
"Using this approach, we can obtain a more dense, informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ to be used in next $k$ -th utterance prediction.",
"We use contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of multiple utterances that occur in a conversation. Using these contextual gates can be beneficial to decide how to weigh the different embeddings, conversational-context, word and speech embeddings. Rather than merely concatenating conversational-context embeddings BIBREF6 , contextual gating can achieve more improvement because its increased representational power using multiplicative interactions.",
"Figure 2 illustrates our proposed contextual gating mechanism. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , and let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of current $k$ -th utterance $x^k_{1:T}$ and $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for $n$ -number of preceding utterances ${s_{k-1-n:k-1}}$ . Then using a gating mechanism:",
"$$g = \\sigma (e_c, e_w, e_s)$$ (Eq. 15)",
"where $\\sigma $ is a 1 hidden layer DNN with $\\texttt {sigmoid}$ activation, the gated embedding $e$ is calcuated as",
"$$e = g \\odot (e_c, e_w, e_s) \\\\ h = \\text{LSTM}(e)$$ (Eq. 16)",
"and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with conversational-context embedding $e_c$ again with a gating mechanism,",
"$$g = \\sigma (e_C, h) \\\\ \\hat{h} = g \\odot (e_c, h)$$ (Eq. 17)",
"Then the next hidden layer takes these gated activations, $\\hat{h}$ , and so on."
],
"extractive_spans": [],
"free_form_answer": "BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer.",
"highlighted_evidence": [
"Contextualized word embeddings, sentence embeddings, such as deep contextualized word representations BIBREF20 , BERT BIBREF22 , encode the complex characteristics and meanings of words in various context by jointly training a bidirectional language model. ",
"The second method is to use sentence embeddings, BERT. It is used to a generate single 786-dimensional sentence embedding from 10k-dimensional one-hot vector or distribution over previous words and then merge into a single context vector with two different merging methods.",
"Using this approach, we can obtain a more dense, informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ to be used in next $k$ -th utterance prediction.",
"We use contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of multiple utterances that occur in a conversation. ",
"Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , and let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of current $k$ -th utterance $x^k_{1:T}$ and $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for $n$ -number of preceding utterances ${s_{k-1-n:k-1}}$ .",
"Then using a gating mechanism:\n\n$$g = \\sigma (e_c, e_w, e_s)$$ (Eq. 15)\n\nwhere $\\sigma $ is a 1 hidden layer DNN with $\\texttt {sigmoid}$ activation, the gated embedding $e$ is calcuated as\n\n$$e = g \\odot (e_c, e_w, e_s) \\\\ h = \\text{LSTM}(e)$$ (Eq. 16)\n\nand fed into the LSTM decoder hidden layer. ",
"The output of the decoder $h$ is then combined with conversational-context embedding $e_c$ again with a gating mechanism,\n\n$$g = \\sigma (e_C, h) \\\\ \\hat{h} = g \\odot (e_c, h)$$ (Eq. 17)\n\nThen the next hidden layer takes these gated activations, $\\hat{h}$ , and so on."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"How are sentence embeddings incorporated into the speech recognition system?"
],
"question_id": [
"0bd864f83626a0c60f5e96b73fb269607afc7c09"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Conversational-context embedding representations from external word or sentence embeddings.",
"Table 1: Experimental dataset description. We used 300 hours of Switchboard conversational corpus. Note that any pronunciation lexicon or Fisher transcription was not used.",
"Figure 2: Our contextual gating mechanism in decoder network to integrate three different embeddings from: 1) conversational-context, 2) previous word, 3) current speech.",
"Table 2: Comparison of word error rates (WER) on Switchboard 300h with standard end-to-end speech recognition models and our proposed end-to-end speech recogntion models with conversational context. (The * mark denotes our estimate for the number of parameters used in the previous work).",
"Figure 3: The relative improvement in Development accuracy over sets over baseline obtained by using conversational-context embeddings with different number of utterance history and different merging techniques.",
"Figure 4: The relative improvement in Development accuracy over 100% sampling rate which was used in (Kim and Metze, 2018) obtained by using conversational-context embeddings with different sampling rate.",
"Figure 5: Comparison of the conversational distance score on the consecutive utterances of 1) reference, 2) our proposed conversational end-to-end model, and 3) our end-to-end baseline model."
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"How are sentence embeddings incorporated into the speech recognition system?"
] | [
[
"1906.11604-External word/sentence embeddings-2",
"1906.11604-External word/sentence embeddings-1",
"1906.11604-Contextual gating-0",
"1906.11604-External word/sentence embeddings-3"
]
] | [
"BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer."
] | 750 |
1711.05345 | Supervised and Unsupervised Transfer Learning for Question Answering | Although transfer learning has been shown to be successful for tasks like object and speech recognition, its applicability to question answering (QA) has yet to be well-studied. In this paper, we conduct extensive experiments to investigate the transferability of knowledge learned from a source QA dataset to a target dataset using two QA models. The performance of both models on a TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson et al., 2013) is significantly improved via a simple transfer learning technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%. Finally, we show that transfer learning is helpful even in unsupervised scenarios when correct answers for target QA dataset examples are not available. | {
"paragraphs": [
[
"One of the most important characteristics of an intelligent system is to understand stories like humans do. A story is a sequence of sentences, and can be in the form of plain text BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 or spoken content BIBREF0 , where the latter usually requires the spoken content to be first transcribed into text by automatic speech recognition (ASR), and the model will subsequently process the ASR output. To evaluate the extent of the model's understanding of the story, it is asked to answer questions about the story. Such a task is referred to as question answering (QA), and has been a long-standing yet challenging problem in natural language processing (NLP).",
"Several QA scenarios and datasets have been introduced over the past few years. These scenarios differ from each other in various ways, including the length of the story, the format of the answer, and the size of the training set. In this work, we focus on context-aware multi-choice QA, where the answer to each question can be obtained by referring to its accompanying story, and each question comes with a set of answer choices with only one correct answer. The answer choices are in the form of open, natural language sentences. To correctly answer the question, the model is required to understand and reason about the relationship between the sentences in the story."
],
[
"Transfer learning BIBREF7 is a vital machine learning technique that aims to use the knowledge learned from one task and apply it to a different, but related, task in order to either reduce the necessary fine-tuning data size or improve performance. Transfer learning, also known as domain adaptation, has achieved success in numerous domains such as computer vision BIBREF8 , ASR BIBREF9 , BIBREF10 , and NLP BIBREF11 , BIBREF12 . In computer vision, deep neural networks trained on a large-scale image classification dataset such as ImageNet BIBREF13 have proven to be excellent feature extractors for a broad range of visual tasks such as image captioning BIBREF14 , BIBREF15 , BIBREF16 and visual question answering BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , among others. In NLP, transfer learning has also been successfully applied to tasks like sequence tagging BIBREF21 , syntactic parsing BIBREF22 and named entity recognition BIBREF23 , among others.",
"The procedure of transfer learning in this work is straightforward and includes two steps. The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data. The effectiveness of transfer learning is evaluated by the model's performance on the target task.",
"In supervised transfer learning, both the source and target datasets provide the correct answer to each question during pre-training and fine-tuning, and the QA model is guided by the correct answer to optimize its objective function in a supervised manner in both stages.",
"We also consider unsupervised transfer learning where the correct answer to each question in the target dataset is not available. In other words, the entire process is supervised during pre-training, but unsupervised during fine-tuning. A self-labeling technique inspired by BIBREF26 , BIBREF24 , BIBREF25 is used during fine-tuning on the target dataset. We present the proposed algorithm for unsupervised transfer learning in Algorithm \"Conclusion and Future Work\" . [!htbp] Unsupervised QA Transfer Learning [1] Source dataset with correct answer to each question; Target dataset without any answer; Number of training epochs. Optimal QA model $M^{*}$ Pre-train QA model $M$ on the source dataset. For each question in the target dataset, use $M$ to predict its answer. For each question, assign the predicted answer to the question as the correct one. Fine-tune $M$ on the target dataset as usual. Reach the number of training epochs."
],
[
"Although transfer learning has been successfully applied to various applications, its applicability to QA has yet to be well-studied. In this paper, we tackle the TOEFL listening comprehension test BIBREF0 and MCTest BIBREF1 with transfer learning from MovieQA BIBREF2 using two existing QA models. Both models are pre-trained on MovieQA and then fine-tuned on each target dataset, so that their performance on the two target datasets are significantly improved. In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%.",
"Transfer learning without any labeled data from the target domain is referred to as unsupervised transfer learning. Motivated by the success of unsupervised transfer learning for speaker adaptation BIBREF24 , BIBREF25 and spoken document summarization BIBREF26 , we further investigate whether unsupervised transfer learning is feasible for QA.",
"Although not well studied in general, transfer Learning for QA has been explored recently. To the best of our knowledge, BIBREF27 is the first work that attempted to apply transfer learning for machine comprehension. The authors showed only limited transfer between two QA tasks, but the transferred system was still significantly better than a random baseline. BIBREF28 tackled a more specific task of biomedical QA with transfer learning from a large-scale dataset. The work most similar to ours is by BIBREF29 , where the authors used a simple transfer learning technique and achieved significantly better performance. However, none of these works study unsupervised transfer learning, which is especially crucial when the target dataset is small. BIBREF30 proposed a two-stage synthesis network that can generate synthetic questions and answers to augment insufficient training data without annotations. In this work, we aim to handle the case that the questions from the target domain are available."
],
[
"Among several existing QA settings, in this work we focus on multi-choice QA (MCQA). We are particularly interested in understanding whether a QA model can perform better on one MCQA dataset with knowledge transferred from another MCQA dataset. In Section \"Question Answering Experiments\" , we first formalize the task of MCQA. We then describe the procedures for transfer learning from one dataset to another in Section \"Conclusion and Future Work\" . We consider two kinds of settings for transfer learning in this paper, one is supervised and the other is unsupervised."
],
[
"In MCQA, the inputs to the model are a story, a question, and several answer choices. The story, denoted by $\\mathbf {S}$ , is a list of sentences, where each of the sentences is a sequence of words from a vocabulary set $V$ . The question and each of the answer choices, denoted by $\\mathbf {Q}$ and $\\mathbf {C}$ , are both single sentences also composed of words from $V$ . The QA model aims to choose one correct answer from multiple answer choices based on the information provided in $\\mathbf {S}$ and $\\mathbf {Q}$ ."
],
[
"We used MovieQA BIBREF2 as the source MCQA dataset, and TOEFL listening comprehension test BIBREF0 and MCTest BIBREF1 as two separate target datasets. Examples of the three datasets are shown in Table 1 ."
],
[
"Among numerous models proposed for multiple-choice QA BIBREF32 , BIBREF33 , BIBREF0 , we adopt the End-to-End Memory Network (MemN2N) BIBREF34 and Query-Based Attention CNN (QACNN) BIBREF35 , both open-sourced, to conduct the experiments. Below we briefly introduce the two models in Section \"End-to-End Memory Networks\" and Section \"Query-Based Attention CNN\" , respectively. For the details of the models, please refer to the original papers."
],
[
"An End-to-End Memory Network (MemN2N) first transforms $\\mathbf {Q}$ into a vector representation with an embedding layer $B$ . At the same time, all sentences in $\\mathbf {S}$ are also transformed into two different sentence representations with two additional embedding layers $A$ and $C$ . The first sentence representation is used in conjunction with the question representation to produce an attention-like mechanism that outputs the similarity between each sentence in $\\mathbf {S}$ and $\\mathbf {Q}$ . The similarity is then used to weight the second sentence representation. We then obtain the sum of the question representation and the weighted sentence representations over $\\mathbf {S}$ as $\\mathbf {Q}^\\prime $ . In the original MemN2N, $\\mathbf {Q}^\\prime $ is decoded to provide the estimation of the probability of being an answer for each word within a fixed set. The word with the highest probability is then selected as the answer. However, in multiple-choice QA, $B$0 is in the form of open, natural language sentences instead of a single word. Hence we modify MemN2N by adding an embedding layer $B$1 to encode $B$2 as a vector representation $B$3 by averaging the embeddings of words in $B$4 . We then compute the similarity between each choice representation $B$5 and $B$6 . The choice $B$7 with the highest probability is then selected as the answer."
],
[
"A Query-Based Attention CNN (QACNN) first uses an embedding layer $E$ to transform $\\mathbf {S}, \\mathbf {Q}$ , and $\\mathbf {C}$ into a word embedding. Then a compare layer generates a story-question similarity map $\\mathbf {SQ}$ and a story-choice similarity map $\\mathbf {SC}$ . The two similarity maps are then passed into a two-stage CNN architecture, where a question-based attention mechanism on the basis of $\\mathbf {SQ}$ is applied to each of the two stages. The first stage CNN generates a word-level attention map for each sentence in $\\mathbf {S}$ , which is then fed into the second stage CNN to generate a sentence-level attention map, and yield choice-answer features for each of the choices. Finally, a classifier that consists of two fully-connected layers collects the information from every choice answer feature and outputs the most likely answer. The trainable parameters are the embedding layer $E$ that transforms $\\mathbf {S}, \\mathbf {Q},$ and $\\mathbf {C}$ into word embeddings, the two-stage CNN $\\mathbf {S}, \\mathbf {Q}$0 and $\\mathbf {S}, \\mathbf {Q}$1 that integrate information from the word to the sentence level, and from the sentence to the story level, and the two fully-connected layers $\\mathbf {S}, \\mathbf {Q}$2 and $\\mathbf {S}, \\mathbf {Q}$3 that make the final prediction. We mention the trainable parameters here because in Section \"Question Answering Experiments\" we will conduct experiments to analyze the transferability of the QACNN by fine-tuning some parameters while keeping others fixed. Since QACNN is a newly proposed QA model has a relatively complex structure, we illustrate its architecture in Figure 1 , which is enough for understanding the rest of the paper. Please refer to the original paper BIBREF35 for more details."
],
[
"For pre-training MemN2N and QACNN on MovieQA, we followed the exact same procedure as in BIBREF2 and BIBREF35 , respectively. Each model was trained on the training set of the MovieQA task and tuned on the dev set, and the best performing models on the dev set were later fine-tuned on the target dataset. During fine-tuning, the model was also trained on the training set of target datasets and tuned on the dev set, and the performance on the testing set of the target datasets was reported as the final result. We use accuracy as the performance measurement."
],
[
"Table 2 reports the results of our transfer learning on TOEFL-manual, TOEFL-ASR, MC160, and MC500, as well as the performance of the previous best models and several ablations that did not use pre-training or fine-tuning. From Table 2 , we have the following observations.",
"Rows (a) and (g) show the respective results when the QACNN and MemN2N are trained directly on the target datasets without pre-training on MovieQA. Rows (b) and (h) show results when the models are trained only on the MovieQA data. Rows (c) and (i) show results when the models are trained on both MovieQA and each of the four target datasets, and tested on the respective target dataset. We observe that the results achieved in (a), (b), (c), (g), (h), and (i) are worse than their fine-tuned counterparts (d), (e), (f), and (j). Through transfer learning, both QACNN and MemN2N perform better on all the target datasets. For example, QACNN only achieves 57.5% accuracy on MC160 without pre-training on MovieQA, but the accuracy increases by 18.9% with pre-training (rows (d) vs. (a)). In addition, with transfer learning, QACNN outperforms the previous best models on TOEFL-manual by 7%, TOEFL-ASR BIBREF33 by 6.5%, MC160 BIBREF36 by 1.1%, and MC500 BIBREF32 by 1.3%, and becomes the state-of-the-art on all target datasets.",
"For the QACNN, the training parameters are $E, W_{CNN}^{(1)}, W_{CNN}^{(2)}, W_{FC}^{(1)}$ , and $W_{FC}^{(2)}$ (Section \"Query-Based Attention CNN\" ). To better understand how transfer learning affects the performance of QACNN, we also report the results of keeping some parameters fixed and only fine-tuning other parameters. We choose to fine-tune either only the last fully-connected layer $W_{FC}^{(2)}$ while keeping other parameters fixed (row (d) in Table 2 ), the last two fully-connected layers $W_{FC}^{(1)}$ and $W_{FC}^{(2)}$ (row (e)), and the entire QACNN (row (f)). For TOEFL-manual, TOEFL-ASR, and MC500, QACNN performs the best when only the last two fully-connected layers were fine-tuned; for MC160, it performs the best when only the last fully-connected layer was fine-tuned. Note that for training the QACNN, we followed the same procedure as in BIBREF35 , whereby pre-trained GloVe word vectors BIBREF37 were used to initialize the embedding layer, which were not updated during training. Thus, the embedding layer does not depend on the training set, and the effective vocabularies are the same.",
"It is interesting to see that fine-tuning the entire QACNN doesn't necessarily produce the best result. For MC500, the accuracy of QACNN drops by 4.6% compared to just fine-tuning the last two fully-connected layers (rows (f) vs. (e)). We conjecture that this is due to the amount of training data of the target datasets - when the training set of the target dataset is too small, fine-tuning all the parameters of a complex model like QACNN may result in overfitting. This discovery aligns with other domains where transfer learning is well-studied such as object recognition BIBREF38 .",
"We expected to see that a MemN2N, when trained directly on the target dataset without pre-training on MovieQA, would outperform a MemN2N pre-trained on MovieQA without fine-tuning on the target dataset (rows (g) vs. (h)), since the model is evaluated on the target dataset. However, for the QACNN this is surprisingly not the case - QACNN pre-trained on MovieQA without fine-tuning on the target dataset outperforms QACNN trained directly on the target dataset without pre-training on MovieQA (rows (b) vs. (a)). We attribute this to the limited size of the target dataset and the complex structure of the QACNN.",
"We conducted experiments to study the relationship between the amount of training data from the target dataset for fine-tuning the model and the performance. We first pre-train the models on MovieQA, then vary the training data size of the target dataset used to fine-tune them. Note that for QACNN, we only fine-tune the last two fully-connected layers instead of the entire model, since doing so usually produces the best performance according to Table 2 . The results are shown in Table 3 . As expected, the more training data is used for fine-tuning, the better the model's performance is. We also observe that the extent of improvement from using 0% to 25% of target training data is consistently larger than using from 25% to 50%, 50% to 75%, and 75% to 100%. Using the QACNN fine-tuned on TOEFL-manual as an example, the accuracy of the QACNN improves by 2.7% when varying the training size from 0% to 25%, but only improves by 0.9%, 0.5%, and 0.7% when varying the training size from 25% to 50%, 50% to 75%, and 75% to 100%, respectively.",
"We also vary the size of MovieQA for pre-training to study how large the source dataset should be to make transfer learning feasible. The results are shown in Table 4 . We find that even a small amount of source data can help. For example, by using only 25% of MovieQA for pre-training, the accuracy increases 6.3% on MC160. This is because 25% of MovieQA training set (2,462 examples) is still much larger than the MC160 training set (280 examples). As the size of the source dataset increases, the performance of QACNN continues to improve.",
"We are interested in understanding what types of questions benefit the most from transfer learning. According to the official guide to the TOEFL test, the questions in TOEFL can be divided into 3 types. Type 1 questions are for basic comprehension of the story. Type 2 questions go beyond basic comprehension, but test the understanding of the functions of utterances or the attitude the speaker expresses. Type 3 questions further require the ability of making connections between different parts of the story, making inferences, drawing conclusions, or forming generalizations. We used the split provided by BIBREF33 , which contains 70/18/34 Type 1/2/3 questions. We compare the performance of the QACNN and MemN2N on different types of questions in TOEFL-manual with and without pre-training on MovieQA, and show the results in Figure 2 . From Figure 2 we can observe that for both the QACNN and MemN2N, their performance on all three types of questions improves after pre-training, showing that the effectiveness of transfer learning is not limited to specific types of questions."
],
[
"So far, we have studied the property of supervised transfer learning for QA, which means that during pre-training and fine-tuning, both the source and target datasets provide the correct answer for each question. We now conduct unsupervised transfer learning experiments described in Section \"Conclusion and Future Work\" (Algorithm \"Conclusion and Future Work\" ), where the answers to the questions in the target dataset are not available. We used QACNN as the QA model and all the parameters $(E, W_{CNN}^{(1)}, W_{CNN}^{(2)}, W_{FC}^{(1)},$ and $W_{FC}^{(2)})$ were updated during fine-tuning in this experiment. Since the range of the testing accuracy of the TOEFL-series (TOEFL-manual and TOEFL-ASR) is different from that of MCTest (MC160 and MC500), their results are displayed separately in Figure UID29 and Figure UID30 , respectively.",
"From Figure UID29 and Figure UID30 we can observe that without ground truth in the target dataset for supervised fine-tuning, transfer learning from a source dataset can still improve the performance through a simple iterative self-labeling mechanism. For TOEFL-manual and TOEFL-ASR, QACNN achieves the highest testing accuracy at Epoch 7 and 8, outperforming its counterpart without fine-tuning by approximately 4% and 5%, respectively. For MC160 and MC500, the QACNN achieves the peak at Epoch 3 and 6, outperforming its counterpart without fine-tuning by about 2% and 6%, respectively. The results also show that the performance of unsupervised transfer learning is still worse than supervised transfer learning, which is not surprising, but the effectiveness of unsupervised transfer learning when no ground truth labels are provided is validated.",
"To better understand the unsupervised transfer learning process of QACNN, we visualize the changes of the word-level attention map during training Epoch 1, 4, 7, and 10 in Figure 4 . We use the same question from TOEFL-manual as shown in Table 1 as an example. From Figure 4 we can observe that as the training epochs increase, the QACNN focuses more on the context in the story that is related to the question and the correct answer choice. For example, the correct answer is related to “class project”. In Epoch 1 and 4, the model does not focus on the phrase “class representation”, but the model attends on the phrase in Epoch 7 and 10. This demonstrates that even without ground truth, the iterative process in Algorithm \"Conclusion and Future Work\" is still able to lead the QA model to gradually focus more on the important part of the story for answering the question."
],
[
"In this paper we demonstrate that a simple transfer learning technique can be very useful for the task of multi-choice question answering. We use a QACNN and a MemN2N as QA models, with MovieQA as the source task and a TOEFL listening comprehension test and MCTest as the target tasks. By pre-training on MovieQA, the performance of both models on the target datasets improves significantly. The models also require much less training data from the target dataset to achieve similar performance to those without pre-training. We also conduct experiments to study the influence of transfer learning on different types of questions, and show that the effectiveness of transfer learning is not limited to specific types of questions. Finally, we show that by a simple iterative self-labeling technique, transfer learning is still useful, even when the correct answers for target QA dataset examples are not available, through quantitative results and visual analysis.",
"One area of future research will be generalizing the transfer learning results presented in this paper to other QA models and datasets. In addition, since the original data format of the TOEFL listening comprehension test is audio instead of text, it is worth trying to initialize the embedding layer of the QACNN with semantic or acoustic word embeddings learned directly from speech BIBREF39 , BIBREF40 , BIBREF41 instead of those learned from text BIBREF42 , BIBREF37 ."
]
],
"section_name": [
"Question Answering",
"Transfer Learning",
"Transfer Learning for QA",
"Task Descriptions and Approaches",
"Multi-Choices QA",
"Datasets",
"QA Neural Network Models",
"End-to-End Memory Networks",
"Query-Based Attention CNN",
"Training Details",
"Supervised Transfer Learning",
"Unsupervised Transfer Learning",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"976d60de0c1b9130a435a3fb3acd01163a296a95"
],
"answer": [
{
"evidence": [
"The procedure of transfer learning in this work is straightforward and includes two steps. The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data. The effectiveness of transfer learning is evaluated by the model's performance on the target task."
],
"extractive_spans": [],
"free_form_answer": "the training dataset is large while the target dataset is usually much smaller",
"highlighted_evidence": [
"The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"How different is the dataset size of source and target?"
],
"question_id": [
"c77d6061d260f627f2a29a63718243bab5a6ed5a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Example of the story-question-choices triplet from MovieQA, TOEFL listening comprehension test, and MCTest datasets. S,Q, and Ci denote the story, question, and one of the answer choices, respectively. For MovieQA, each question comes with five answer choices; and for TOEFL and MCTest, each question comes with only four answer choices. The correct answer is marked in bold.",
"Figure 1: QACNN architecture overview. QACNN consists of a similarity mapping layer, a query-based attention layer, and a prediction layer. The two-stage attention mechanism takes place in the query-based attention layer, yielding word-level and sentence-level attention map, respectively. The trainable parameters, including E,W (1)CNN ,W (2)",
"Table 2: Results of transfer learning on the target datasets. The number in the parenthesis indicates the accuracy increased via transfer learning (compared to rows (a) and (g)). The best performance for each target dataset is marked in bold. We also include the results of the previous best performing models on the target datasets in the last three rows.",
"Table 3: Results of varying sizes of the target datasets used for fine-tuning QACNN. The number in the parenthesis indicates the accuracy increases from using the previous percentage for fine-tuning to the current percentage.",
"Table 4: Results of varying sizes of the MovieQA used for pre-training QACNN. The number in the parenthesis indicates the accuracy increases from using the previous percentage for pre-training to the current percentage.",
"Figure 2: The performance of QACNN and MemN2N on different types of questions in TOEFL-manual with and without pre-training on MovieQA. ‘No’ in the parenthesis indicates the models are not pre-trained, while ‘Yes’ indicates the models are pre-trained on MovieQA.",
"Figure 3: The figures show the results of unsupervised transfer learning. The x-axis is the number of training epochs, and the y-axis is the corresponding testing accuracy on the target dataset. When training epoch = 0, the performance of QACNN is equivalent to row (b) in Table 2. The horizontal lines, where each line has the same color to its unsupervised counterpart, are the performances of QACNN with supervised transfer learning (row (e) in Table 2), and are the upperbounds for unsupervised transfer learning.",
"Figure 4: Visualization of the changes of the word-level attention map in the first stage CNN of QACNN in different training epochs. The more red, the more the QACNN views the word as a key feature. The input story-question-choices triplet is same as the one in Table 1."
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png"
]
} | [
"How different is the dataset size of source and target?"
] | [
[
"1711.05345-Transfer Learning-1"
]
] | [
"the training dataset is large while the target dataset is usually much smaller"
] | 751 |
2002.01861 | Rapid Adaptation of BERT for Information Extraction on Domain-Specific Business Documents | Techniques for automatically extracting important content elements from business documents such as contracts, statements, and filings have the potential to make business operations more efficient. This problem can be formulated as a sequence labeling task, and we demonstrate the adaption of BERT to two types of business documents: regulatory filings and property lease agreements. There are aspects of this problem that make it easier than "standard" information extraction tasks and other aspects that make it more difficult, but on balance we find that modest amounts of annotated data (less than 100 documents) are sufficient to achieve reasonable accuracy. We integrate our models into an end-to-end cloud platform that provides both an easy-to-use annotation interface as well as an inference interface that allows users to upload documents and inspect model outputs. | {
"paragraphs": [
[
"Business documents broadly characterize a large class of documents that are central to the operation of business. These include legal contracts, purchase orders, financial statements, regulatory filings, and more. Such documents have a number of characteristics that set them apart from the types of texts that most NLP techniques today are designed to process (Wikipedia articles, news stories, web pages, etc.): They are heterogeneous and frequently contain a mix of both free text as well as semi-structured elements (tables, headings, etc.). They are, by definition, domain specific, often with vocabulary, phrases, and linguistic structures (e.g., legal boilerplate and terms of art) that are rarely seen in general natural language corpora.",
"Despite these challenges, there is great potential in the application of NLP technologies to business documents. Take, for example, contracts that codify legal agreements between two or more parties. Organizations (particularly large enterprises) need to monitor contracts for a range of tasks, a process that can be partially automated if certain content elements can be extracted from the contracts themselves by systems BIBREF0. In general, if we are able to extract structured entities from business documents, these outputs can be better queried and manipulated, potentially facilitating more efficient business operations.",
"In this paper, we present BERT-based models for extracting content elements from two very different types of business documents: regulatory filings and property lease agreements. Given the success of deep transformer-based models such as BERT BIBREF1 and their ability to handle sequence labeling tasks, adopting such an approach seemed like an obvious starting point. In this context, we are primarily interested in two questions: First, how data efficient is BERT for fine-tuning to new specialized domains? Specifically, how much annotated data do we need to achieve some (reasonable) level of accuracy? This is an important question due to the heterogeneity of business documents; it would be onerous if organizations were required to engage in large annotation efforts for every type of document. Second, how would a BERT model pre-trained on general natural language corpora perform in specific, and potentially highly-specialized, domains?",
"There are aspects of this task that make it both easier and more difficult than “traditional” IE. Even though they are expressed in natural language, business documents frequently take constrained forms, sometimes even “template-like” to a certain degree. As such, it may be easy to learn cue phrases and other fixed expressions that indicate the presence of some element (i.e., pattern matching). On the other hand, the structure and vocabulary of the texts may be very different from the types of corpora modern deep models are trained on; for example, researchers have shown that models for processing the scientific literature benefit immensely from pre-training on scientific articles BIBREF2, BIBREF3. Unfortunately, we are not aware of any large, open corpora of business documents for running comparable experiments.",
"The contribution of our work is twofold: From the scientific perspective, we begin to provide some answers to the above questions. With two case studies, we find that a modest amount of domain-specific annotated data (less than 100 documents) is sufficient to fine-tune BERT to achieve reasonable accuracy in extracting a set of content elements. From a practical perspective, we showcase our efforts in an end-to-end cloud platform that provides an easy-to-use annotation interface as well as an inference interface that allows users to upload documents and inspect the results of our models."
],
[
"Within the broad space of business documents, we have decided to focus on two specific types: regulatory filings and property lease agreements. While our approach is not language specific, all our work is conducted on Chinese documents. In this section, we first describe these documents and our corpora, our sequence labeling model, and finally our evaluation approach."
],
[
"Regulatory Filings. We focused on a specific type of filing: disclosures of pledges by shareholders when their shares are offered up for collateral. These are publicly accessible and were gathered from the database of a stock exchange in China. We observe that most of these announcements are fairly formulaic, likely generated by templates. However, we treated them all as natural language text and did not exploit this observation; for example, we made no explicit attempt to induce template structure or apply clustering—although such techniques would likely improve extraction accuracy. In total, we collected and manually annotated 150 filings, which were divided into training, validation, and test sets with a 6:2:2 split. Our test corpus comprises 30 regulatory filings. Table TABREF6 enumerates the seven content elements that we extract.",
"Property Lease Agreements. These contracts mostly follow a fixed “schema” with a certain number of prescribed elements (leaseholder, tenant, rent, deposit, etc.); Table TABREF7 enumerates the eight elements that our model extracts. Since most property lease agreements are confidential, no public corpus for research exists, and thus we had to build our own. To this end, we searched the web for publicly-available templates of property lease agreements and found 115 templates in total. For each template, we manually generated one, two, or three instances, using a fake data generator tool to fill in the missing content elements such as addresses. In total, we created (and annotated) 223 contracts by hand. This corpus was further split into training, validation, and test data with a 6:2:2 split. Our test set contains 44 lease agreements, 11 of which use templates that are not seen in the training set. We report evaluation over both the full test set and on only these unseen templates; the latter condition specifically probes our model's ability to generalize."
],
[
"An obvious approach to content element extraction is to formulate the problem as a sequence labeling task. Prior to the advent of neural networks, Conditional Random Fields (CRFs) BIBREF4, BIBREF5 represented the most popular approach to this task. Starting from a few years ago, neural networks have become the dominant approach, starting with RNNs BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Most recently, deep transformer-based models such as BERT represent the state of the art in this task BIBREF1, BIBREF12, BIBREF13 . We adopt the sequence labeling approach of BIBREF1, based on annotations of our corpus using a standard BIO tagging scheme with respect to the content elements we are interested in.",
"We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:",
"where ${W}$ represents the parameter of the fully-connected layer and ${b}$ is the bias. The learning objective is to maximize",
"For simplicity, we assume that all tokens can be predicted independently. For model training, we set the max sequence length to 256, the learning rate to ${10^{-4}}$, and run the model for 8 epochs. We use all other default settings in the TensorFlow implementation of BERT.",
"UTF8gbsn",
"UTF8gbsn",
"UTF8gbsn"
],
[
"At inference time, documents from the test set are segmented into paragraphs and fed into the fine-tuned BERT model one at a time. Typically, sequence labeling tasks are evaluated in terms of precision, recall, and F$_1$ at the entity level, per sentence. However, such an evaluation is inappropriate for our task because the content elements represent properties of the entire document as a whole, not individual sentences.",
"Instead, we adopted the following evaluation procedure: For each content element type (e.g., “tenant”), we extract all tagged spans from the document, and after deduplication, treat the entities as a set that we then measure against the ground truth in terms of precision, recall, and F$_1$. We do this because there may be multiple ground truth entities and BERT may mark multiple spans in a document with a particular entity type. Note that the metrics are based on exact matches—this means that, for example, if the extracted entity has an extraneous token compared to a ground truth entity, the system receives no credit."
],
[
"Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by types. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models.",
"Overall, we would characterize our models as achieving reasonable accuracy, comparable to extraction tasks in more “traditional” domains, with modest amounts of training data. It does appear that with fine tuning, BERT is able to adapt to the linguistic characteristics of these specialized types of documents. For example, the regulatory filings have quite specialized vocabulary and the property lease agreements have numeric heading structures—BERT does not seem to be confused by these elements, which for the most part do not appear in the texts that the model was pre-trained on. Naturally, accuracy varies across different content elements: For the rental agreements, entities such as leaseholder, tenant, start date, and end date perform much better than others. For the regulatory filing, the model performs well on all content elements except for one; there were very few examples of “% of pledged shares in the shareholder's total share holdings” in our training data, and thus accuracy is very low despite the fact that percentages are straightforward to identify. It seems that “easy” entities often have more fixed forms and are quite close to entities that the model may have encountered during pre-training (e.g., names and dates). In contrast, “difficult” elements are often domain-specific and widely vary in their forms.",
"How data efficient is BERT when fine tuning on annotated data? We can answer this question by varying the amount of training data used to fine tune the BERT models, holding everything else constant. These results are shown in Figure FIGREF10 for the regulatory filings (30, 60, 90 randomly-selected documents) and in Figure FIGREF11 for the property lease agreements (30, 60, 90, 120 randomly-selected documents); in all cases, the development set is fixed. For brevity, we only show F$_1$ scores, but we observe similar trends for the other metrics. For both document types, it seems like 60–90 documents are sufficient to achieve F$_1$ on par with using all available training data. Beyond this point, we hit rapidly diminishing returns. For a number of “easy” content elements (e.g., dates in the property lease agreements), it seems like 30 documents are sufficient to achieve good accuracy, and more does not appear to yield substantial improvements. Note that in a few cases, training on more data actually decreases F$_1$ slightly, but this can be attributed to noise in the sampling process.",
"Finally, in Table TABREF8 we show an excerpt from each type of document along with the content elements that are extracted by our BERT models. We provide both the original source Chinese texts as well as English translations to provide the reader with a general sense of the source documents and how well our models behave."
],
[
"All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators.",
"The second feature of the platform is the ability for users to upload new documents and apply inference on them using a fine-tuned BERT model; a screenshot of this feature is shown in Figure FIGREF13. The relevant content elements are highlighted in the document.",
"On the cloud platform, the inference module also applies a few simple rule-based modifications to post-process BERT extraction results. For any of the extracted dates, we further applied a date parser based on rules and regular expressions to normalize and canonicalize the extracted outputs. In the regulatory filings, we tried to normalize numbers that were written in a mixture of Arabic numerals and Chinese units (e.g., “UTF8gbsn亿”, the unit for $10^8$) and discarded partial results if simple rule-based rewrites were not successful. In the property lease agreements, the contract length, if not directly extracted by BERT, is computed from the extracted start and end dates. Note that these post processing steps were not applied in the evaluation presented in the previous section, and so the figures reported in Tables TABREF6 and TABREF7 actually under-report the accuracy of our models in a real-world setting."
],
[
"This work tackles the challenge of content extraction from two types of business documents, regulatory filings and property lease agreements. The problem is straightforwardly formulated as a sequence labeling task, and we fine-tune BERT for this application. We show that our simple models can achieve reasonable accuracy with only modest amounts of training data, illustrating the power and flexibility of modern NLP models. Our cloud platform pulls these models together in an easy-to-use interface for addressing real-world business needs."
]
],
"section_name": [
"Introduction",
"Approach",
"Approach ::: Datasets",
"Approach ::: Model",
"Approach ::: Inference and Evaluation",
"Results",
"Cloud Platform",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"fd5cfcc9e9bb805b956def1a3b787c68f027ddcd"
],
"answer": [
{
"evidence": [
"We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:"
],
"extractive_spans": [
"documents are segmented into paragraphs and processed at the paragraph level"
],
"free_form_answer": "",
"highlighted_evidence": [
"All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cee3b80ee73b31afc81f4c2d68dc4b3892207aa3"
],
"answer": [
{
"evidence": [
"Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by types. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models."
],
"extractive_spans": [
"F$_1$, precision, and recall"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"98b0cbdf51774b2d72482e3a47ee31b1aa9e472d"
],
"answer": [
{
"evidence": [
"We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9ec748f8187e9c25d27ba2dc596b8556323f5e32"
],
"answer": [
{
"evidence": [
"All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators."
],
"extractive_spans": [],
"free_form_answer": "Variety of formats supported (PDF, Word...), user can define content elements of document",
"highlighted_evidence": [
"First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents",
"Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"At what text unit/level were documents processed?",
"What evaluation metric were used for presenting results? ",
"Was the structure of regulatory filings exploited when training the model? ",
"What type of documents are supported by the annotation platform?"
],
"question_id": [
"9623884915b125d26e13e8eeebe9a0f79d56954b",
"77db56fee07b01015a74413ca31f19bea7203f0b",
"c309e87c9e08cf847f31e554577d6366faec1ea0",
"81cee2fc6edd9b7bc65bbf6b4aa35782339e6cff"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Evaluation results on the test set of our regulatory filings corpus.",
"Table 2: Evaluation results on the test set of our property lease agreements corpus.",
"Figure 1: Effects of training data size on F1 for regulatory filings.",
"Figure 2: Effects of training data size on F1 for property lease agreements.",
"Table 3: Excerpts from a regulatory filing (top) and a property lease agreement (bottom) illustrating a few of the content elements that our models extract.",
"Figure 3: Screenshot of our annotation interface.",
"Figure 4: Screenshot of our inference interface."
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png"
]
} | [
"What type of documents are supported by the annotation platform?"
] | [
[
"2002.01861-Cloud Platform-0"
]
] | [
"Variety of formats supported (PDF, Word...), user can define content elements of document"
] | 754 |
1812.06876 | Multi-task learning to improve natural language understanding | Recently advancements in sequence-to-sequence neural network architectures have led to an improved natural language understanding. When building a neural network-based Natural Language Understanding component, one main challenge is to collect enough training data. The generation of a synthetic dataset is an inexpensive and quick way to collect data. Since this data often has less variety than real natural language, neural networks often have problems to generalize to unseen utterances during testing. In this work, we address this challenge by using multi-task learning. We train out-of-domain real data alongside in-domain synthetic data to improve natural language understanding. We evaluate this approach in the domain of airline travel information with two synthetic datasets. As out-of-domain real data, we test two datasets based on the subtitles of movies and series. By using an attention-based encoder-decoder model, we were able to improve the F1-score over strong baselines from 80.76 % to 84.98 % in the smaller synthetic dataset. | {
"paragraphs": [
[
"One of the main challenges in building a Natural Language Understanding (NLU) component for a specific task is the necessary human effort to encode the task's specific knowledge. In traditional NLU components, this was done by creating hand-written rules. In today's state-of-the-art NLU components, significant amounts of human effort have to be used for collecting the training data. For example, when building an NLU component for airplane travel information, there are a lot of possibilities to express the situation that someone wants to book a flight from New York to Pittsburgh. In order to build a system, we need to have seen many of them in the training data. Although more and more data has been collected and datasets with this data have been published BIBREF0 , the datasets often consist of data from another domain, which is needed for a certain NLU component.",
"An inexpensive and quick way to collect data for a domain is to generate a synthetic dataset where templates are filled with various values. A problem with such synthetic datasets is to encode enough variety of natural language to be able to generalize to unseen utterances during training. To do this, an enormous amount of effort will be needed. In this work, we address this challenge by combining task-specific synthetic data and real data from another domain. The multi-task framework enables us to combine these two knowledge sources and therefore improve natural language understanding.",
"In this work, the NLU component is based on an attention-based encoder-decoder model BIBREF1 . We evaluate the approach on the commonly used travel information task and used as an out-of-domain task the subtitles of movies and series."
],
[
"There are many of appropriate architectures for end-to-end trainable goal-oriented dialog systems BIBREF1 , BIBREF2 , BIBREF3 with different approaches for the NLU part; however, what they have in common is that they need a huge amount of training data.",
"Multi-task learning has been performed in many of machine learning applications, e. g., in facial landmark detection an application in the area of vision BIBREF4 .",
"Multi-task learning for sequence-to-sequence models in Natural Language Processing is described in BIBREF5 , BIBREF6 , BIBREF7 . In BIBREF5 , machine translation was trained together with either syntax parsing or image captioning on a not attention-based encoder-decoder model. The encoder was shared between the tasks. They improved the translation between English and Germany by up to 1.5 BLEU points. In BIBREF6 , the authors used an attention-based encoder-decoder model and were also able to improve on this model machine translation by up to 1.5 BLEU points by combining machine translation with part-of-speech tagging and named entity recognition. In addition, they presented different architectures for multi-task learning, such as sharing in addition to the encoder, the attention layer, or decoder. In BIBREF7 , the authors used multi-task learning to learn to translate 20 individual languages with one system."
],
[
"In the multi-task learning approach of this work, in-domain synthetic data and out-of-domain real data are jointly trained. In synthetic datasets, there are often missing expressions for situations. However, in larger out-of-domain datasets, there are expressions for similar situations. Through the joint training of the encoding for both tasks, we expect a better natural language understanding in the in-domain task because it can be learned to encode situations independent to their expression in natural language."
],
[
"We use an attention-based encoder-decoder model for multi-task learning. We share between the tasks the embedding layer and the encoder. The remaining components of the attention-based encoder-decoder model - the attention layer and the decoder with its final softmax layer - are not shared. The intuition behind this is, that in our synthetic datasets, there are missing expressions for situations that are in the out-of-domain datasets. With the training of the out-of-domain datasets, we want to learn to encode situations independent to their expression in natural language. For improving encoding, we expect the best results by only sharing the encoder because knowledge from the out-of-domain dataset is transfered to the in-domain dataset.",
"In BIBREF7 , an attention-based encoder-decoder model that is able to share the weights of layers between tasks is described and its implementation was published. We added to this implementation an option to train instances of the smallest dataset $m$ -times and an option to accumulate gradients and published the additions under the MIT license. The architecture is depicted in Figure 1 ."
],
[
"In BIBREF6 , only one task in each mini-batch is considered because this is more GPU-efficient given that not all weights are shared between the tasks. Let $n$ be the number of instances that are trained simultaneously on the GPU. The instances of one task are grouped into groups of size $n$ . These groups are randomly shuffled before every epoch during training. However, in our experiments, updating the weights after the training of a group of one task led to perplexity jumps. To avoid these jumps, we accumulate the gradients and update our weights only after $t$ groups. This means that our mini-batch size is $t \\cdot n$ . We use the Adam optimization algorithm BIBREF8 for updating the weights.",
"After the multi-task learning, we fine-tune the model by retraining the model only with the synthetic dataset. For this fine-tuning, we reset all the parameters of the Adam optimization algorithm.",
"The out-of-domain datasets have a huge size in comparison to the synthetic datasets. To avoid instances of the synthetic datasets are not considered in the training of the model, instances of the synthetic dataset are trained $m$ -times during one epoch."
],
[
"For the out-of-domain task, we use two subsets of the English OpenSubtitle corpus BIBREF9 in this work. The OpenSubtitle corpus consists of the subtitles of movies and series. The first subset was published by BIBREF10 and consists of all the sentence pairs from the OpenSubtitle corpus that have the following properties: the first sentence ends with a question mark; the second sentence follows directly the first sentence and has no question mark; and the time difference between the sentences is less than 20 seconds. In total, the subset has more than 14 million sentence pairs for training and 10 000 sentence pairs for validation. In the following sections, this dataset is called OpenSubtitles QA. We created the second subset in a similar manner as the subtle dataset BIBREF11 was created. It consists of sentence pairs with the following properties: the second sentence follows directly the first sentence; both sentences end with a point, exclamation point, or question mark; and between the two sentences, there is at maximum a pause of 1 second. In the following sections, this dataset is called OpenSubtitles dialog. To be able to train the attention-based encoder-decoder model in a reasonable time, we only used the first 14 million sentence pairs for training. The next 10 000 sentence pairs were used for validation. For both datasets we used the default English word tokenizer of the Natural Language Toolkit (NLTK) BIBREF12 for tokenization. As there is another tokenization approach in the OpenSubtitle corpus in comparison to the tokenizer in the NLTK, we had to merge the tokens 's, 're, 't, 'll, and 've to their previous token in the OpenSubtitles dialog dataset to improve the compatibility with the tokenization of the NLTK.",
"We generated two synthetic datasets. These two datasets are based on a subset of the ATIS (Airline Travel Information Systems) dataset BIBREF13 that was published by BIBREF14 and called ATIS in the following sections. In the ATIS corpus, every user utterance has one or multiple intents and every word of a user utterance is tagged in the IOB format. The format is depicted in Figure 2 . However, the out-of-domain dataset is no intent and slot filling task. It is a sequence-to-sequence task. To train both tasks together, we converted the intent and slot filling task to a sequence-to-sequence task. The conversion is also depicted in Figure 2 .",
"In the ATIS dataset, there are 4479 tagged user utterances for training, 500 for validation and 893 for testing.",
"The smaller synthetic dataset consists of 212 templates that form 17 679 source target sequence pairs after filling the template placeholders and is called ATIS small in the following sections and the larger dataset consists of 832 templates that form 70 040 source target sequence pairs and is called ATIS medium in the following sections. The ATIS small dataset was generated by extracting all the sequences that have a new parameter in the target sequence that was not included in any target sequence extracted before. Extracting all the sequences that have a parameter combination that was not included in any target sequence extracted before, forms the ATIS medium dataset. In the extracted sequences, the parameter values were replaced by placeholders to become templates. For the placeholders, all the possible values were inserted. When one template produced more than 1000 source target sequence pairs, then, instead of the Cartesian product, the random permutation algorithm BIBREF15 was used, which produces as many source target sequence pairs as the values of the placeholder with the greatest number of values. For both datasets, we alphabetically sorted the parameters to ease the learning process."
],
[
"We evaluate the quality of the predicted intent and parameter values with the metric F1-score. For averaging the F1-score over the target sequences, we use micro-averaging. This means that we count the true positives, false positives, and false negatives for all the sequences and calculate the recall and precision for the F1-score with these. In addition, we provide the metric intent accuracy. For the intent accuracy, the number of completely correct predicted intents (the intents of the reference and hypothesis must be the same) is divided by the number of target sequences."
],
[
"We optimized our single-task baseline to get a strong baseline in order to exclude better results in multi-task learning in comparison to single-task learning only because of these two following points: network parameters suit the multi-task learning approach better and a better randomness while training in the multi-task learning. To exclude the first point, we tested different hyperparameters for the single-task baseline. We tested all the combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes for the hidden states of the LSTMs, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid bad subword generation for the synthetic datasets, in addition to the training dataset, we considered the validation and test dataset for the generating of the BPE merge operations list. We trained the configurations for 14 epochs and trained every configuration three times. We chose the training with the best quality with regard to the validation F1-score to exclude disadvantages of a bad randomness. We got the best quality with regard to the F1-score with 256 as the size of the hidden states of the LSTMs, 1024 as word embedding size, and a dropout of 30 %. For the batch size, we used 64.",
"We optimized our single-task model trained on real data in the same manner as the single-task baseline, except that we used 64 epochs.",
"In the multi-task learning approach, we trained both tasks for 10 epochs. We use for $m$ (the instance multiplicator of the synthetic dataset) such a value that the synthetic dataset has nearly the size of one-tenth of the out-of-domain dataset. Because of long training times, we were not able to optimize the hyperparameters. We chose 256 as the size of the hidden states of the LSTMs, 1024 as word embedding size, and 50 % for the dropout and were not able to run multiple runs. For $n$ (the number of instances that are trained simultaneously on the GPU), we chose 128 and for $t$ (number of groups after that the model weights are updated) we chose 11. Other hyperparameters in the single-task and multi-task experiments were not changed from the default values of the published implementation.",
"We used this best epoch with regard to the validation F1-score to fine-tune our model. To exclude only better results because of good random initialization, we made three runs, used the epoch with the best validation F1-score from every run, and chose the run with the worst validation F1-score for evaluation. We used 64 as the batch size, 50 % as dropout, and 14 as the number of epochs.",
"We used subword units generated by BPE for all approaches and used 40 000 as the limit for the number of BPE merging operations as well as the vocabulary size."
],
[
"In Figure 3 , the test F1-score of the training run of the configuration with the best validation F1-score is depicted with respect to the epoch for the ATIS small dataset and in Figure 4 for the ATIS medium dataset. The best result is achieved after epoch 11 or 7, respectively. There is no trend for a further improvement after epoch 14. The test F1-score of the best epoch according to the validation F1-score is depicted in the Tables 1 and 2 , respectively.",
"In Table 1 , the validation and test F1-scores and intent accuracies with regard to the best validation F1-score of the multi-task learning approach with the ATIS small dataset is depicted. The test F1-score could be improved 2.32 percentage points with multi-task learning with the OpenSubtitles QA dataset and 4.22 percentage points to 84.98 % with the OpenSubtitles dialog dataset. The test intent accuracies could be improved with multi-task learning 5.60 and 6.16 percentage points, respectively. For both out-of-domain datasets, fine-tuning did not improve the F1-score.",
"In Table 2 , the validation and test F1-scores and intent accuracies with regard to the best validation F1-score of the multi-task learning approach with the ATIS medium dataset is depicted. The test F1-score could be improved 0.52 percentage points with multi-task learning with the OpenSubtitles QA dataset and 0.30 percentage points with the OpenSubtitles dialog dataset. The test intent accuracies could be improved with multi-task learning by 0.34 and 1.79 percentage points, respectively. These improvements are not big, but the F1-score of the multi-task learning with the OpenSubtitles QA dataset is only 0.13 percentage points below the results of the model trained on the complete real training data of the ATIS dataset."
],
[
"In this work, we evaluated whether the training of a synthetic dataset alongside with an out-of-domain dataset can improve the quality in comparison to train only with the synthetic dataset. Although we optimized the model of the single-task learning baseline and not the model of the multi-task learning approach, we were able to increase the F1-score 4.22 percentage points to 84.98 % for the smaller synthetic dataset (ATIS small). For the bigger dataset (ATIS medium), we could not significantly improve the results, but the results are already in the near of the results of the model trained on the real data. To improve the quality of dialog systems for these exist only strong under-resourced synthetic datasets is especially helpful because the better a system is, the more it encourages users to use it. This is often an inexpensive way to collect data to log real user usage. However, by collecting real user data, it is necessary to account privacy laws.",
"The problem with the OpenSubtitles QA dataset is, that the form question as source sequence and answer as target sequence differs from the form of the ATIS datasets. The problem with the OpenSubtitles dialog dataset is that it is very noisy. Responses do not often refer to the previous utterance. In future work, it would be interesting to test other datasets or a combination of datasets whose form is better fitting or are less noisy, respectively.",
"We expect a further improvement of the multi-task learning approach by optimizing the parameters of our model in the multi-task learning approach. However, this is very computation time intensive because the out-of-domain datasets have 14 million instances, and therefore, we leave it open for future work.",
"We evaluated the multi-task learning approach with the attention-based encoder-decoder model, but we also expect an improvement by the multi-task learning approach for other architectures, such as the transformer model BIBREF17 , which could be researched in future work."
],
[
"This work has been conducted in the SecondHands project which has received funding from the European Union’s Horizon 2020 Research and Innovation programme (call:H2020- ICT-2014-1, RIA) under grant agreement No 643950. "
]
],
"section_name": [
"Introduction",
"Related Work",
"Multi-task Learning",
"Architecture",
"Training Schedule",
"Data",
"Evaluation",
"System Setup",
"Results",
"Conclusions and Further Work",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"99ef197ffe864bfa88889285a6cf75c4b93ca848"
],
"answer": [
{
"evidence": [
"We optimized our single-task baseline to get a strong baseline in order to exclude better results in multi-task learning in comparison to single-task learning only because of these two following points: network parameters suit the multi-task learning approach better and a better randomness while training in the multi-task learning. To exclude the first point, we tested different hyperparameters for the single-task baseline. We tested all the combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes for the hidden states of the LSTMs, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid bad subword generation for the synthetic datasets, in addition to the training dataset, we considered the validation and test dataset for the generating of the BPE merge operations list. We trained the configurations for 14 epochs and trained every configuration three times. We chose the training with the best quality with regard to the validation F1-score to exclude disadvantages of a bad randomness. We got the best quality with regard to the F1-score with 256 as the size of the hidden states of the LSTMs, 1024 as word embedding size, and a dropout of 30 %. For the batch size, we used 64."
],
"extractive_spans": [],
"free_form_answer": "optimize single task with no synthetic data",
"highlighted_evidence": [
"We optimized our single-task baseline to get a strong baseline in order to exclude better results in multi-task learning in comparison to single-task learning only because of these two following points: network parameters suit the multi-task learning approach better and a better randomness while training in the multi-task learning. To exclude the first point, we tested different hyperparameters for the single-task baseline. We tested all the combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes for the hidden states of the LSTMs, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid bad subword generation for the synthetic datasets, in addition to the training dataset, we considered the validation and test dataset for the generating of the BPE merge operations list. We trained the configurations for 14 epochs and trained every configuration three times. We chose the training with the best quality with regard to the validation F1-score to exclude disadvantages of a bad randomness. We got the best quality with regard to the F1-score with 256 as the size of the hidden states of the LSTMs, 1024 as word embedding size, and a dropout of 30 %. For the batch size, we used 64."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What are the strong baselines you have?"
],
"question_id": [
"d028dcef22cdf0e86f62455d083581d025db1955"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1 attention-based encoder-decoder",
"Figure 2 format of the ATIS corpus and the conversion to a sequence-to-sequence problem",
"Table 1 results on the ATIS real dataset of the systems trained with the ATIS small dataset",
"Table 2 results on the ATIS real dataset of the systems trained with the ATIS medium dataset",
"Figure 3 validation and test F1-score of the single-task baseline trained with the ATIS small dataset",
"Figure 4 validation and test F1-score of the single-task baseline trained with the ATIS medium dataset"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"What are the strong baselines you have?"
] | [
[
"1812.06876-System Setup-0"
]
] | [
"optimize single task with no synthetic data"
] | 757 |
1812.06038 | Inferring the size of the causal universe: features and fusion of causal attribution networks | Cause-and-effect reasoning, the attribution of effects to causes, is one of the most powerful and unique skills humans possess. Multiple surveys are mapping out causal attributions as networks, but it is unclear how well these efforts can be combined. Further, the total size of the collective causal attribution network held by humans is currently unknown, making it challenging to assess the progress of these surveys. Here we study three causal attribution networks to determine how well they can be combined into a single network. Combining these networks requires dealing with ambiguous nodes, as nodes represent written descriptions of causes and effects and different descriptions may exist for the same concept. We introduce NetFUSES, a method for combining networks with ambiguous nodes. Crucially, treating the different causal attributions networks as independent samples allows us to use their overlap to estimate the total size of the collective causal attribution network. We find that existing surveys capture 5.77% $\pm$ 0.781% of the $\approx$293 000 causes and effects estimated to exist, and 0.198% $\pm$ 0.174% of the $\approx$10 200 000 attributed cause-effect relationships. | {
"paragraphs": [
[
"In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\\rightarrow $ “sickness”) of the causal attribution network.",
"We collected causal attribution networks from three sources of data: English Wikidata BIBREF11 , English ConceptNet BIBREF10 , and IPRnet BIBREF12 . Wikidata and ConceptNet, are large knowledge graphs that contain semantic links denoting many types of interactions, one of which is causal attribution, while IPRnet comes from an Amazon Mechanical Turk study in which crowd workers were prompted to provide causal relationships. Wikidata relations were gathered by running four search queries on the Wikidata API (query.wikidata.org). These queries searched for relations with the properties: \"has immediate cause\", \"has effect\", \"has cause\", or \"immediate cause of\". The first and third searches reverse the order of the cause and effect which we reversed back. We discarded any Wikidata relations where the cause or effect were blank, as well as one ambiguous relation where the cause was \"NaN\". ConceptNet attributions were gathered by searching the English ConceptNet version 5.6.0 assertions for “/r/Causes/” relations. Lastly, IPRnet was developed in BIBREF12 which we use directly.",
"The three networks together contain $23\\,239$ causal links and $19\\,096$ unique terms, of which there are $4\\,265$ and $14\\,831$ unique causes and effects, respectively."
],
[
"Each node in our causal attribution networks consists of an English sentence, a short written description of an associated cause and/or effect. Text analysis of these sentences was performed using CoreNLP v3.9.2 and NLTK v3.2.2 BIBREF16 , BIBREF17 . We computed Part-of-Speech (POS) tags and identified (but did not remove) stop words for these sentences. We used the standard Brown corpus as a text baseline for comparison. Text processing procedures such as lemmatization or removal of casing were not performed in order to retain information for subsequent operations. A small number of ConceptNet sentences contained `/n' and `/v' codes within the text denoting parts-of-speech tags; we removed these before applying our own POS tagger. POS tagging of the causal sentences and the baseline dataset was performed using CoreNLP by tokenizing each input using the Penn Treebank tokenizer then applying the Stanford POS tagger. This tagger uses Penn Treebank tags. We aggregated these 36 tags into NLTK's universal tagset which consists of a simpler set of 12 tags including NOUN, VERB, ADJ, and more. To simplify presentation, we chose to further collect all non-verb, non-noun, and non-adjective tags into an “Other” tag. Stop words were identified using NLTK's English stop words corpus.",
"Word vectors, or embeddings, are modern computational linguistics tools that project words into a learned vector space where context-based semantics of text are preserved, enabling computational understanding of text via mathematical operations on the corresponding vectors BIBREF18 . Many different procedures exist for learning these vector spaces from text corpora BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Document embeddings, or “sentence vectors,” extend word vectors, representing more complex multi-word expressions in a vector space of their own BIBREF22 . Given two nodes $i$ and $j$ with corresponding sentences $s_i$ and $s_j$ and sentence vector representations $\\mathbf {v}_i$ and $\\mathbf {v}_j$ , respectively, the vector cosine similarity $\\frac{ \\mathbf {v}_i \\cdot \\mathbf {v}_j }{ \\Vert \\mathbf {v}_i \\Vert \\Vert \\mathbf {v}_j \\Vert }$ is a useful metric for estimating the semantic association between the nodes. High vector similarity implies that textual pairs are approximately semantically equivalent and sentence vectors can better compare nodes at a semantic level than more basic approaches such as measuring shared words or n-grams.",
"We computed sentence vectors using TensorFlow BIBREF23 v1.8.0 using the Universal Sentence Encoder v2, a recently developed embedding model that maps English text into a 512-dimensional vector space and achieves competitive performance at a number of natural language tasks BIBREF24 . This model was pretrained on a variety of text corpora BIBREF24 . The Universal Sentence Encoder was tested on several baseline NLP tasks including sentiment classification and semantic textual similarity, for each of which it performs with the highest accuracy. Given the higher performance of the Universal Sentence Encoder with respect to textual similarity tasks, we elected to utilize it instead of other sentence encoding models including the character level CNN architecture used in Google's billion word baseline BIBREF25 , and weighted averaging of word vector representations BIBREF26 ."
],
[
"Graph fusion takes two graphs $G_1=(V_1, E_1)$ and $G_2=(V_2,E_2)$ and computes a fused graph $G = (V,E)$ by identifying and combining semantically equivalent nodes (according to some measure of similarity) within and between $V_1$ and $V_2$ . Graph fusion is closely related to graph alignment and (inexact) graph matching BIBREF27 , although fusion assumes the need to identify node equivalents both within and between the networks being fused, unlike alignment and matching which generally focus on uncovering relations between $V_1$ and $V_2$ . Graph fusion is particularly important when a canonical representation for nodes, such as an ID number, is lacking, and thus equivalent nodes may appear and need to be combined. This is exactly the case in this work, where each node is a written description of a concept, and the same concept can be equivalently described in many different ways.",
"Here we describe Network FUsion with SEmantic Similarity (NetFUSES). This algorithm computes the fused graph $G$ given a node similarity function $f: V \\times V \\rightarrow \\lbrace 0,1\\rbrace $ . This $f$ should encode the semantic closeness between nodes $u$ and $v$ , with $f(u,v) = 1$ for semantically equivalent $u$ and $v$ and $f(u,v) = 0$ for semantically non-equivalent $u$ and $f: V \\times V \\rightarrow \\lbrace 0,1\\rbrace $0 . We assume $f: V \\times V \\rightarrow \\lbrace 0,1\\rbrace $1 and $f: V \\times V \\rightarrow \\lbrace 0,1\\rbrace $2 .",
"To fuse $G_1$ and $G_2$ into $G$ , first compute $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $ . One can interpret $F$ as (the edges of) a fusion indicator graph defined over the combined node sets of $G_1$ and $G_2$ . Each connected component in $F$ then corresponds to a subset of $V_1 \\cup V_2$ that should be combined into a single node in $V$ . (One can also take a stricter view and combine nodes corresponding to completely dense connected components of $G_2$0 instead of any connected components, but this strictness can also be incorporated by making $G_2$1 more strict.) Let $G_2$2 indicate the connected component of $G_2$3 containing node $G_2$4 . Abusing notation, one can also consider $G_2$5 as representing the node in $G_2$6 that the unfused node $G_2$7 maps onto. Lastly, we define the edges $G_2$8 of the fused graph based on the neighborhoods of nodes in $G_2$9 . The neighborhood $G$0 of each node $G$1 in the fused graph is the union of the neighborhoods of the nodes connected to $G$2 in $G$3 : for any node $G$4 , let $G$5 and $G$6 Then the neighborhood $G$7 defines the edges incident on $G$8 in the fused graph and $G$9 may now be computed. Notice by this procedure that if an edge already exists in $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $0 and/or $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $1 between two nodes $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $2 and $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $3 that share a connected component in $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $4 , then a self-loop is created in $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $5 when $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $6 and $F = \\lbrace f(u,v) \\mid u,v \\in V_1 \\cup V_2 \\rbrace $7 are combined. For our purposes these self-loops are meaningful, but otherwise they can be discarded.",
"Semantic similarity In this work, each node $i$ is represented only by a short written sentence $s_i$ , and two sentences $s_i \\ne s_j$ may in fact be different descriptions of the same underlying concept. Hence the need for NetFUSES. To relate two sentences $s_i$ and $s_j$ semantically, we rely upon recent advances in natural language processing that can embed words and multiword expressions into a semantically-meaningful vector space (see Sec. \"Discussion\" ). Let $\\mathbf {v}_i$ be the “sentence vector” corresponding to $s_i$ . Then define $f(i,j) = 1$ if $\\frac{ \\mathbf {v}_i \\cdot \\mathbf {v}_j }{ \\Vert \\mathbf {v}_i \\Vert \\Vert \\mathbf {v}_j \\Vert } > t$ and zero otherwise, for some parameter $t$ . In other words, we consider nodes $s_i$0 and $s_i$1 to be semantically equivalent when the cosine similarity between their vectors exceeds a given threshold $s_i$2 . Our procedure in the main text determined $s_i$3 as an approach threshold."
],
[
"Capture-recapture (also known as mark-and-recapture and recapture sampling) methods are statistical techniques for estimating the size of an unobserved population by examining the intersection of two or more independent samples of that population BIBREF28 , BIBREF29 . For example, biologists wishing to understand how many individuals of a species exist in an environment may capture $n_1$ individuals, tag and release them, then later gather another sample by capturing $n_2$ individuals. The more individuals in the second sample that carry tags, the more likely it is that the overall population $N$ is small; conversely, if the overlap in the samples is small, then it is likely that $N$ is large. Capture-recapture is commonly used by biologists and ecologists for exactly this purpose, but it has been applied to many other problems as well, including estimating the number of software faults in a large codebase BIBREF28 and estimating the number of relevant academic articles covering a specific topic of interest BIBREF30 .",
"The simplest estimator for the unknown population size $N$ is the Lincoln-Petersen estimator. Assuming the samples generated are unbiased, meaning that each member of the population is equally likely to be captured, then the proportion of captured individuals in the second sample who were tagged should be approximately equal to the overall capture probability for the first sample, $n_1 / N \\approx n_{12} / n_2$ . Solving for $N$ gives the intuitive Lincoln-Petersen estimator $\\hat{N} = {n_1 n_2}/{ n_{12}}$ , for $n_{12} > 0$ . While a good starting point, this estimator is known to be biased for small samples BIBREF29 , and much work has been performed to determine improved estimators, such as the well-known Chapman estimator BIBREF31 .",
"In this work we use the recently developed Webster-Kemp estimator BIBREF30 : ",
"$$\\hat{N} = \\frac{\\left(n_1-n_{12}+1\\right)\\left(n_2-n_{12}+1\\right)}{n_{12}} + n_1 + n_2 - n_{12},$$ (Eq. 6) ",
"which assumes (i) that one tried to capture as many items as possible (as opposed to predetermining $n_1$ and $n_2$ and capturing until reaching those numbers) and (ii) the total number of items found $n_1 + n_2 - n_{12} \\gg 1$ . Webster and Kemp also derive the variance of this estimator: ",
"$$\\sigma ^{2}_{\\hat{N}} = \\frac{(n_1-n_{12}+1)(n_2-n_{12}+1)(n_1+1)(n_2+1)}{n_{12}^{2}(n_{12}-1)},$$ (Eq. 7) ",
"with $n_{12} > 1$ , allowing us to assess our estimate uncertainty. Equations ( 6 ) and ( 7 ) are approximations when assuming a flat prior on $N$ but are exact when assuming an almost-flat prior on $N$ that slightly favors larger populations $N$ over smaller BIBREF30 ."
],
[
"Here we use network and text analysis tools to compare causal attribution networks (Sec. \"Comparing causal networks\" ). Crucially, nodes in these networks are defined only by their written descriptions, and multiple written descriptions can represent the same conceptual entity. Thus, to understand how causal attribution networks can be combined, we introduce and analyze a method for fusing networks (Sec. \"Fusing causal networks\" ) that builds off both the network structure and associated text information and explicitly incorporates conceptual equivalencies. Lastly, in Sec. \"Inferring the size of the causal attribution network\" we use the degree of overlap in these networks as a means to infer the total size of the one underlying causal attribution network being explored by these data collection efforts, allowing us to better understand the size of collective space of cause-effect relationships held by humans."
],
[
"We perform a descriptive analysis of the three datasets, comparing and contrasting their features and properties. We focus on two aspects, the network structure and the text information (the written descriptions associated with each node in the network). Understanding these data at these levels can inform efforts to combine different causal attribution networks (Sec. \"Fusing causal networks\" ).",
"Table 1 and Fig. 2 summarize network characteristics for the three causal attribution networks. We focus on standard measures of network structure, measuring the sizes, densities, motif structure, and connectedness of the three networks. Both Wikidata and ConceptNet, the two larger networks, are highly disconnected, amounting to collections of small components with low density. In contrast, IPRnet is smaller but comparatively more dense and connected, with higher average degree, fewer disconnected components, and more clustering (Table 1 ). All three networks are degree dissortative, meaning that high-degree nodes are more likely to connect to low-degree nodes. For connectedness and path lengths, we consider both directed and undirected versions of the network allowing us to measure strong and weak connectivity, respectively. All three networks are well connected when ignoring link directionality, but few directed paths exist between disparate nodes in Wikidata and ConceptNet, as shown by the large number of strong connected components and small size of the strong giant components for those networks.",
"To examine motifs, we focus on feedback loops and feedforward loops, both of which play important roles in causal relationships BIBREF32 , BIBREF33 . The sparse Wikidata network has neither loops, while ConceptNet has 87 feedforward loops and 1 feedback loop (Table 1 ). In contrast, IPRnet has far more loops, 986 feedback and 3541 feedforward loops.",
"Complementing the statistics shown in Table 1 , Fig. 2 shows the degree distributions ( 2 A), distributions of component sizes ( 2 B), and distributions of two centrality measures ( 2 C). All three networks display a skewed or heavy-tailed degree distribution. We again see that Wikidata and ConceptNet appear similar to one another while IPRnet is quite different, especially in terms of centrality. One difference between ConceptNet and Wikidata visible in 2 A is a mode of nodes with degree $\\sim 30$ within ConceptNet that is not present in Wikidata.",
"Understanding the network structure of each dataset only accounts for part of the information. Each node $i$ in these networks is associated with a sentence $s_i$ , a written word or phrase that describes the cause or effect that $i$ represents. Investigating the textual characteristics of these sentences can then reveal important similarities and differences between the networks.",
"To study these sentences, we apply standard tools from natural language processing and computational linguistics (see Sec. \"Data and Methods\" ). In Table 2 and Fig. 3 we present summary statistics including the total text size, average length of sentences, and so forth, across the three networks. We identify several interesting features. One, IPRnet, the smallest and densest network, has the shortest sentences on average, while ConceptNet has the longest sentences (Table 2 and Fig. 3 A). Two, ConceptNet sentences often contain stop words (`the,' `that,' `which,', etc.; see Sec. \"Data and Methods\" ) which are less likely to carry semantic information (Fig. 3 B). Three, Wikidata contains a large number of capitalized sentences and sentences containing numerical digits. This is likely due to an abundance of proper nouns, names of chemicals, events, and so forth. These textual differences may make it challenging to combine these data into a single causal attribution network.",
"We next applied a Part-of-Speech (POS) tagger to the sentences (Sec. \"Data and Methods\" ). POS tags allow us to better understand and compare the grammatical features of causal sentences across the three networks, for example, if one network's text is more heavily focused on nouns while another network's text contains more verbs. Additionally, POS tagging provides insight into the general language of causal attribution and its characteristics. As a baseline for comparison, we also present in Fig. 3 C the POS frequencies for a standard text corpus (Sec. \"Data and Methods\" ). As causal sentences tend to be short, often incomplete statements, it is plausible for grammatical differences to exist compared with formally written statements as in the baseline corpus. For conciseness, we focus on nouns, verbs, and adjectives (Sec. \"Data and Methods\" ). Nouns are the most common Part-of-Speech in these data, especially for Wikidata and IPRnet that have a higher proportion of nouns than the baseline corpus (Fig. 3 C). Wikidata and IPRnet have correspondingly lower proportions of verbs than the baseline. These proportions imply that causal attributions contain a higher frequency of objects committing actions than general speech. However, ConceptNet differs, with proportions of nouns and verbs closer to the baseline. The baseline also contains more adjectives than ConceptNet and IPRnet. Overall, shorter, noun-heavy sentences may either help or harm the ability to combine causal attribution networks, depending on their ambiguity relative to longer, typical written statements."
],
[
"These causal attributions networks are separate efforts to map out the underlying or latent causal attribution network held collectively by humans. It is natural to then ask if these different efforts can be combined in an effective way. Fusing these networks together can provide a single causal attribution network for researchers to study.",
"At the most basic level, one can fuse these networks together simply by taking their union, defining a single network containing all the unique nodes and edges of the original networks. Unfortunately, nodes in these networks are identified by their sentences, and this graph union assumes that two nodes $i$ and $j$ are equivalent iff $s_i = s_j$ . This is overly restrictive as these sentences serve as descriptions of associated concepts, and we ideally want to combine nodes that represent the same concept even when their written descriptions differ. Indeed, even within a single network it can be necessary to identify and combine nodes in this way. We identify this problem as graph fusion. Graph fusion is a type of record linkage problem and is closely related to graph alignment and (inexact) graph matching BIBREF27 , but unlike those problems, graph fusion assumes the need to identify node equivalencies both within and between the networks being fused.",
"We introduce a fusion algorithm, NetFUSES (Network FUsion with SEmantic Similarity) that allows us to combine networks using a measure of similarity between nodes (Sec. \"Data and Methods\" ). Crucially, NetFUSES can handle networks where nodes may need to be combined even within a single network. Here we compare nodes by focusing on the corresponding sentences $s_i$ and $s_j$ of the nodes $i$ and $j$ , respectively, in two networks. We use recent advances in computational linguistics to define a semantic similarity $S(s_i,s_j)$ between $s_i$ and $s_j$ and consider $i$ and $j$ as equivalent when $S(s_i,s_j) \\ge t$ for some semantic threshold $s_j$0 . See Sec. \"Data and Methods\" for details.",
"To apply NetFUSES with our semantic similarity function (Sec. \"Data and Methods\" ) requires determining a single parameter, the similarity threshold $t$ . One can identify a value of $t$ using an independent analysis of text, but we argue for a simple indicator of its value given the networks: growth in the number of self-loops as $t$ is varied. If two nodes $i$ and $j$ that are connected before fusion are combined into a single node $u$ by NetFUSES, then the edge $i\\rightarrow j$ becomes the self-loop $u \\rightarrow u$ . Yet the presence of the original edge $i \\rightarrow j$ generally implies that those nodes are not equivalent, and so it is more plausible that combining them is a case of over-fusion than it would have been if $i$ and $t$0 were not connected. Of course, in networks such as the causal attribution networks we study, a self-loop is potentially meaningful, representing a positive feedback where a cause is its own effect. But these self-loops are quite rare (Table 1 ) and we argue that creating additional self-loops via NetFUSES is more likely to be over-fusion than the identification of such feedback. Thus we can study the growth in the number of self-loops as we vary the threshold $t$1 to determine as an approximate value for $t$2 the point at which new self-loops start to form.",
"Figure 4 identifies a clear value of the similarity threshold $t\\approx 0.95$ . We track as a function of threshold the number of nodes, edges, and self-loops of the fusion of Wikidata and ConceptNet, the two largest and most similar networks we study. The number of self-loops remains nearly unchanged until the level of $t = 0.95$ , indicating that as the likely onset point of over-fusion. Further lowering the similarity threshold leads to growth in the number of self-loops, until eventually the number of self-loops begins to decrease as nodes that each have self-loops are themselves combined. Thus, with a clear onset of self-loop creation, we identify $t = 0.95$ to fuse these two networks together."
],
[
"These three networks represent separate attempts to map out and record the collective causal attribution network held by humans. Of the three, IPRnet is most distinct from the other two, being smaller in size, denser, and generated by a unique experimental protocol. In contrast, Wikidata and ConceptNet networks are more similar in terms of how they were constructed and their overall sizes and densities.",
"Treating Wikidata and ConceptNet as two independent “draws” from a single underlying network allows us to estimate the total size of this latent network based on their overlap. (We exclude IPRnet as this network is generated using a very different mechanism than the others.) High overlap between these samples implies a smaller total size than low overlap. This estimation technique of comparing overlapping samples is commonly used in wildlife ecology and is known as capture-recapture or mark-and-recapture (see Sec. \"Capture-recapture\" ). Here we use the Webster-Kemp estimator (Eqs. ( 6 ) and ( 7 )), but given the size of the samples this estimator will be in close agreement with the simpler Lincoln-Petersen estimator.",
"We first begin with the strictest measure of overlap, exact matching of sentences: node $i$ in one network overlaps with node $j$ in the other network only when $s_i = s_j$ . We then relax this strict assumption by applying NetFUSES as presented in Sec. \"Fusing causal networks\" .",
"Wikidata and ConceptNet contain 12 741 and 5 316 nodes, respectively, and the overlap in these sets (when strictly equating sentences) is 208. Substituting these quantities into the Webster-Kemp estimator gives a total number of nodes of the underlying causal attribution network of $\\hat{N} = 325\\,715.4 \\pm 43\\,139.2$ ( $\\pm $ 95% CI). Comparing $\\hat{N}$ to the size of the union of Wikidata and ConceptNet indicates that these two experiments have explored approximately 5.48% $\\pm $ 0.726% of causes and effects.",
"However, this estimate is overly strict in that it assumes any difference in the written descriptions of two nodes means the nodes are different. Yet, written descriptions can easily represent the same conceptual entity in a variety of ways, leading to equivalent nodes that do not have equal written descriptions. Therefore we repeated the above estimation procedure using Wikidata and ConceptNet networks after applying NetFUSES (Sec. \"Fusing causal networks\" ). NetFUSES incorporates natural language information directly into the semantic similarity, allowing us to incorporate, to some extent, natural language information into our node comparison.",
"Applying the fusion analysis of Sec. \"Fusing causal networks\" and combining equivalent nodes within the fused Wikidata and ConceptNet, networks, then determining whether fused nodes contain nodes from both original networks to compute the overlap in the two networks, we obtain a new estimate of the underlying causal attribution network size of $\\hat{N} = 293\\,819.0 \\pm 39\\,727.3$ . This estimate is smaller than our previous, stricter estimate, as expected due to the fusion procedure, but within the previous estimate's margin of error. Again, comparing this estimate to the size of the union of the fused Wikidata and ConceptNet networks implies that the experiments have explored approximately 5.77% $\\pm $ 0.781% of the underlying or latent causal attribution network.",
"Finally, capture-recapture can also be used to measure the number of links in the underlying causal attribution network by determining if link $i\\rightarrow j$ appears in two networks. Performing the same analysis as above, after incorporating NetFUSES, provides an estimate of $\\hat{M} = 10\\,235\\,150 \\pm 8\\,962\\,595.9$ links. This estimate possesses a relatively large confidence interval due to low observed overlap in the sets of edges. According to this estimate, $0.198\\% \\pm 0.174\\%$ of links have been explored."
],
[
"The construction of causal attribution networks generates important knowledge networks that may inform causal inference research and even help future AI systems to perform causal reasoning, but these networks are time-consuming and costly to generate, and to date no efforts have been made to combine different networks. Our work not only studies the potential for fusing different networks together, but also infers the overall size of the total causal attribution network being explored.",
"We used capture-recapture estimators to infer the number of nodes and links in the underlying causal attribution network, given the Wikidata and ConceptNet networks and using NetFUSES and a semantic similarity function to help account for semantically equivalent nodes within and between Wikidata and ConceptNet. The validity of these estimates depends on Wikidata and ConceptNet being independent samples of the underlying network. As with many practical applications of capture-recapture in wildlife ecology and other areas, here we must question how well this independence assumption holds. The best way to sharpen these estimates is to introduce a new causal attribution survey specifically designed to capture either nodes or links independently (it is unlikely that a single survey protocol can sample independently both nodes and links), and then perform this same survey multiple times.",
"NetFUSES is a simple approach to graph fusion, in this case building off advances made in semantic representations of natural language, although any similarity function can be used to identify semantically equivalent nodes as appropriate. We anticipate that more accurate and more computationally efficient methods for graph fusion can be developed, but even the current method may be useful in a number of other problem domains."
],
[
"This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634."
]
],
"section_name": [
"Causal attribution datasets",
"Text processing and analysis",
"Graph fusion",
"Capture-recapture",
"Results",
"Comparing causal networks",
"Fusing causal networks",
"Inferring the size of the causal attribution network",
"Discussion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9a696b810674778566b40332d9c70845193d44d3"
],
"answer": [
{
"evidence": [
"In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\\rightarrow $ “sickness”) of the causal attribution network."
],
"extractive_spans": [],
"free_form_answer": "networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans",
"highlighted_evidence": [
"A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\\rightarrow $ “sickness”) of the causal attribution network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What are causal attribution networks?"
],
"question_id": [
"593e307d9a9d7361eba49484099c7a8147d3dade"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 1: Causes and effects around ‘anxiety,’ a term common to all three networks studied here. This example illustrates similarities and differences of these networks, in particular the sparse, treelike structure of Wikidata and ConceptNet compared with the denser interlinking present in IPRnet.",
"Table 1: Network statistics across each dataset. Abbreviations: d-directed, u-undirected, w-weak, s-strong.",
"Figure 2: Degree (A) and weakly connected component size (B) distributions for each data set as well as cumulative centrality distributions (C) for edge betweenness (top) and closeness (bottom) centrality measures. Note the interesting modality of high degree nodes in ConceptNet.",
"Table 2: Text statistics across each dataset.",
"Figure 3: Properties of causal attribution sentences across the networks. (A) Distributions of total sentence lengths L (number of words) for all unique sentences. (B) Numbers of stop words per sentence. (C) Part of speech tags across words for each dataset, compared against a baseline POS tag distribution provided by the Brown corpus (grey). Note that the horizontal axis in panel A has been truncated for clarity: 0.5%, 0.02%, and 0% of Wikidata, ConceptNet, and IPRnet sentences, respectively, have L > 7.",
"Figure 4: Statistics of fused Wikidata–ConceptNet networks across semantic similarity threshold values. Monitoring the number of self-loops, we observe a relatively clear onset of over-fusion at a threshold of t ≈ 0.95. At this threshold, we observe a 4.95% reduction in the number of nodes and a 1.43% reduction in the number of edges compared with t ≥ 1."
],
"file": [
"2-Figure1-1.png",
"7-Table1-1.png",
"9-Figure2-1.png",
"9-Table2-1.png",
"10-Figure3-1.png",
"12-Figure4-1.png"
]
} | [
"What are causal attribution networks?"
] | [
[
"1812.06038-Causal attribution datasets-0"
]
] | [
"networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans"
] | 758 |
1608.08738 | A Dictionary-based Approach to Racism Detection in Dutch Social Media | We present a dictionary-based approach to racism detection in Dutch social media comments, which were retrieved from two public Belgian social media sites likely to attract racist reactions. These comments were labeled as racist or non-racist by multiple annotators. For our approach, three discourse dictionaries were created: first, we created a dictionary by retrieving possibly racist and more neutral terms from the training data, and then augmenting these with more general words to remove some bias. A second dictionary was created through automatic expansion using a \texttt{word2vec} model trained on a large corpus of general Dutch text. Finally, a third dictionary was created by manually filtering out incorrect expansions. We trained multiple Support Vector Machines, using the distribution of words over the different categories in the dictionaries as features. The best-performing model used the manually cleaned dictionary and obtained an F-score of 0.46 for the racist class on a test set consisting of unseen Dutch comments, retrieved from the same sites used for the training set. The automated expansion of the dictionary only slightly boosted the model's performance, and this increase in performance was not statistically significant. The fact that the coverage of the expanded dictionaries did increase indicates that the words that were automatically added did occur in the corpus, but were not able to meaningfully impact performance. The dictionaries, code, and the procedure for requesting the corpus are available at: https://github.com/clips/hades | {
"paragraphs": [
[
"1.1em",
"Stéphan Tulkens, Lisa Hilte, Elise Lodewyckx, Ben Verhoeven, Walter Daelemans",
"CLiPS Research Center, University of Antwerp",
"Prinsstraat 13, 2000, Antwerpen, Belgium",
"{stephan.tulkens, lisa.hilte, ben.verhoeven, walter.daelemans}@uantwerpen.be,",
"[email protected]",
"We present a dictionary-based approach to racism detection in Dutch social media comments, which were retrieved from two public Belgian social media sites likely to attract racist reactions. These comments were labeled as racist or non-racist by multiple annotators. For our approach, three discourse dictionaries were created: first, we created a dictionary by retrieving possibly racist and more neutral terms from the training data, and then augmenting these with more general words to remove some bias. A second dictionary was created through automatic expansion using a word2vec model trained on a large corpus of general Dutch text. Finally, a third dictionary was created by manually filtering out incorrect expansions. We trained multiple Support Vector Machines, using the distribution of words over the different categories in the dictionaries as features. The best-performing model used the manually cleaned dictionary and obtained an F-score of 0.46 for the racist class on a test set consisting of unseen Dutch comments, retrieved from the same sites used for the training set. The automated expansion of the dictionary only slightly boosted the model's performance, and this increase in performance was not statistically significant. The fact that the coverage of the expanded dictionaries did increase indicates that the words that were automatically added did occur in the corpus, but were not able to meaningfully impact performance. The dictionaries, code, and the procedure for requesting the corpus are available at: https://github.com/clips/hades.",
"Racism, word2vec, Dictionary-based Approaches, Computational Stylometry"
],
[
"Racism is an important issue which is not easily defined, as racist ideas can be expressed in a variety of ways. Furthermore, there is no clear definition of what exactly constitutes a racist utterance; what is racist to one person is highly likely to not be considered racist universally. Additionally, although there exist mechanisms for reporting acts of racism, victims often neglect to do so as they feel that reporting the situation will not solve anything, according to Unia, the Belgian Interfederal Centre for Equal Opportunities. The scope of this issue, however, is currently unknown. Hence, the goal of our system is two-fold: it can be used to shed light on how many racist remarks are not being reported online, and furthermore, the automated detection of racism could provide interesting insights in the linguistic mechanisms used in racist discourse.",
"In this study, we try to automatically detect racist language in Dutch social media comments, using a dictionary-based approach. We retrieved and annotated comments from two public social media sites which were likely to attract racist reactions according to Unia. We use a Support Vector Machine to automatically classify comments, using handcrafted dictionaries, which were later expanded using automated techniques, as features.",
"We first discuss previous research on our subject and methodology, and discuss the problem of defining racist language (section \"Annotation Style\" ). Next, we describe our data (section \"Datasets and Annotations\" ). Finally, after discussing the experimental setup (section \"Experimental Setup\" ), we present our results (section \"Results and Discussion\" )."
],
[
"The classification of racist insults presents us with the problem of giving an adequate definition of racism. More so than in other domains, judging whether an utterance is an act of racism is highly personal and does not easily fit a simple definition. The Belgian anti-racist law forbids discrimination, violence and crime based on physical qualities (like skin color), nationality or ethnicity, but does not mention textual insults based on these qualities. Hence, this definition is not adequate for our purposes, since it does not include the racist utterances one would find on social media; few utterances that people might perceive as racist are actually punishable by law, as only utterances which explicitly encourage the use of violence are illegal. For this reason, we use a common sense definition of racist language, including all negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture. In this, we follow paolo2015racist, bonilla2002linguistics and razavi2010offensive, who show that racism is no longer strictly limited to physical or ethnic qualities, but can also include social and cultural aspects.",
"Additionally, several authors report linguistic markers of racist discourse; vandijk reports that the number of available topics is greatly restricted when talking about foreigners. paolo2015racist, who performed a qualitative study of posts from Italian social media sites, shows that these chosen topics are typically related to migration, crime and economy. Furthermore, the use of stereotypes and prejudiced statements BIBREF0 , BIBREF1 , as well as a heightened occurrence of truth claims BIBREF2 , BIBREF3 , are reported as typical characteristics of racist discourse . Finally, racist utterances are said to contain specific words and phrases, i.e. n-grams, significantly more often than neutral texts, like “our own kind” and “white civilization” BIBREF2 , BIBREF3 .",
"Stylistically, racist discourse is characterized by a higher rate of certain word classes, like imperatives and adjectives and a higher noun-adjective ratio BIBREF4 , BIBREF2 , BIBREF3 . Greevy and Smeaton also report a more frequent use of modals and adverbs, which they link to the higher frequency of truth claims in racist utterances BIBREF2 , BIBREF3 . In several studies, pronoun use is reported as an important feature in the detection of racist language. While paolo2015racist reports a high frequency of (especially first person plural) pronouns in racist data, vandijk reports a more general finding: the importance of us and them constructions in racist discourse. He explains that they involve a `semantic move with a positive part about Us and a negative part about Them' BIBREF5 . Using such constructions, one linguistically emphasizes - either deliberately or subconsciously - a divide between groups of people. A strict interpretation implies that even positive utterances about `them' can be perceived as racist, as they can also imply a divide between us and them. In this sense, Van Dijk's definition of racism is subtler, but also broader, than the definition used in our own research: we only count negative utterances and generalizations about groups of people as racist.",
"Our dictionary-based approach is inspired by methods used in previous research, like LIWC (Linguistic Inquiry and Word Count) BIBREF6 . LIWC is a dictionary-based computational tool that counts word frequencies for both grammatical categories (e.g. pronouns) and content-related categories (e.g. negative emotion words). As LIWC uses counts per category instead of individual words' frequencies, it allows for broader generalizations on functionally or semantically related words.",
"The construction of dictionary categories related to racist discourse (cf. section \"Dictionaries\" ) is largely based on linguistic properties of racist language reported in earlier work (see above). Additionally, the categories were adjusted to fit the corpus used in the research, which differs from corpora used in other studies. As our corpus is retrieved from social media sites with an anti-Islamic orientation, we added categories to reflect anti-religious sentiment. The relevant features in this study therefore differ from those reported in other studies, as different words are used to insult different groups of people BIBREF3 .",
"Finally, some other successful quantitative approaches to racism detection that have been used in earlier studies are a bag of words (BoW) approach as well as the analysis of part-of-speech (PoS) tags BIBREF2 , BIBREF3 . We leave the addition of these features to future work."
],
[
"In this section, we describe our data collection, our annotation guidelines ( \"Annotation Style\" ) and the results of our annotations ( \"Conclusions and Future Work\" and \"Test data\" ).",
"For our current research we collected a corpus of social media comments, consisting of comments retrieved from Facebook sites which were likely to attract racist reactions in their comments. We specifically targeted two sites: the site of a prominent Belgian anti-Islamic organization, and the site of a Belgian right-wing organization. In both cases the Facebook sites were officially condoned by the organizations, and in the first case served as a communication platform to organize political gatherings. While both sites, the former more than the latter, explicitly profess to be non-racist, the comments they attracted were still highly critical of foreigners and, predictably, Muslims. This is also the reason we mined comments from these sites, and not the posts themselves. While the narrow focus of the sites introduces bias into our data, as the opinions of the people visiting these sites will not reflect the opinions of the general population, they do contain a good proportion of racist to non-racist data."
],
[
"We annotated the retrieved comments with three different labels: `racist', `non-racist' and `invalid'.",
"The `racist' label describes comments that contain negative utterances or insults about someone's ethnicity, nationality, religion or culture. This definition also includes utterances which equate, for example, an ethnic group to an extremist group, as well as extreme generalizations. The following examples are comments that were classified as racist:",
"Het zijn precies de vreemden die de haat of het racisme opwekken bij de autochtonen.",
"It is the foreigners that elicit hate and racism from natives.",
"Kan je niets aan doen dat je behoort tot het ras dat nog minder verstand en gevoelens heeft in uw hersenen dan het stinkend gat van een VARKEN ! :-p",
"You cannot help the fact that you belong to the race that has less intellect and sense in their brains than the smelly behind of a PIG! :-P",
"Wil weer eens lukken dat wij met het vuilste krapuul zitten, ik verschiet er zelfs niet van!",
"Once again we have to put up with the filthiest scum, it doesn't even surprise me anymore!",
"The label `invalid' was used for comments that were written in languages other than Dutch, or that did not contain any textual information, i.e. comments that solely consist of pictures or links. Before classification, we excluded these from both our training and test set.",
"The final label, `non-racist', was the default label. If a comment was valid, but could not be considered racist according to our definition, this was the label we used."
],
[
"To collect the training data, we used Pattern BIBREF7 to scrape the 100 most recent posts from both sites, and then extracted all comments which reacted to these comments. This resulted in 5759 extracted comments: 4880 from the first site and 879 from the second site. The second site attracted a lot less comments on each post, possibly because the site posted more frequently. In addition to this, the organization behind the first site had been figuring prominently in the news at the time of extraction, which might explain the divide in frequency of comments between the two sites. The corpus was annotated by two annotators, who were both students of comparable age and background. When A and B did not agree on a label, a third annotator, C, was used as a tiebreaker in order to obtain gold-standard labels. Table 1 shows the gold standard for the training set.",
"We calculated inter-annotator agreement using the Kappa score ( $\\kappa $ ) BIBREF8 . On the training corpus, the agreement score was $\\kappa $ = 0.60. Annotator A used the racist tag much less often than annotator B. Interestingly, the agreement remains relatively high; 79% of the comments that A annotated as racist were also annotated as racist by B. Even though B was much more inclined to call utterances racist, A and B still shared a common ground regarding their definition of racism. Examining the comments in detail, we found that the difference can largely be explained by sensitivity to insults and generalizations, as example 4 shows.",
"Oprotten die luizegaards [sic] !!!",
"Throw those lice carriers out!",
"While annotator B considers this utterance to be racist, annotator A does not, as it does not contain a specific reference to an ethnicity, nationality or religion. That is, when not seen in the context of this specific annotation task this sentence would not necessarily be called racist, just insulting."
],
[
"The test corpus was mined in the same way as the training set, at a different point in time. We mined the first 500 and first 116 comments from the first and second site, respectively, which makes the proportion between sites more or less identical to the the proportions in the train corpus. The annotation scheme was identical to the one for the train set, with the difference that C, who previously performed the tiebreak, now became a regular annotator. The first 25% of each batch of comments, i.e. 125 comments for the first site and 30 comments for the second site, were annotated by all three annotators to compute inter-annotator agreement. The remaining comments were equally divided among annotators. The annotator agreement was $\\kappa $ = 0.54 (pairwise average), which is lower than the agreement on the training data. The reason for the lower agreement was that annotator C often did not agree with A and B. Because the pattern of mismatches between the annotators is quite regular, we will now discuss some of the annotations in detail:",
"we kunnen niet iedereen hier binnen laten want dat betekend [sic] het einde van de europese beschaving We cannot let everyone in because that will mean the end of European civilization",
"Eigen volk gaat voor, want die vuile manieren van de EU moeten wij vanaf. Geen EU en geen VN. Waardeloos en tegen onze mensen. (eigen volk.)",
"Put our own people first, because we need to get rid of the foul manners of the EU. No EU nor UN. Useless and against our people. (own folk.)",
"Burgemeester Termont is voor de zwartzakken die kiezen voor hem",
"Mayor Termont supports the black sacks, as they vote for him",
"Annotator C used the `racist' tag more often, which is probably due to the fact that he consistently annotated overt ideological statements related to immigration as `racist', while the other annotators did not. The three examples mentioned above are utterances that C classified as `racist', but A and B classified as `not racist'.",
"The cause of these consistent differences in annotations might be cultural, as C is from the southern part of the Netherlands, whereas A and B are native to the northern part of Belgium. Some terms are simply misannotated by C because they are Flemish vernacular expressions. For example, zwartzak [black sack], from sentence 7, superficially looks like a derogatory term for a person of color, but actually does not carry this meaning, as it is a slang word for someone who collaborated with the German occupying forces in the Second World War. While this could still be classified as being racist, the point is that C only registered this as a slang word based on skin color, and not a cultural or political term. Finally, it is improbable that the cause of these mismatches is annotator training, as A and B did not discuss their annotations during the task. In addition to this, C functioned as a tiebreaker in the first dataset, and thus already had experience with the nature of the training material."
],
[
"In this section, we describe our experimental setup. We will first discuss our dictionary-based approach, describing both the LIWC dictionary we used as well as the construction of dictionaries related to racist discourse (section \"Dictionaries\" ). Next, we will describe the preprocessing of the data (section \"Preprocessing and Featurization\" )."
],
[
"In our classification task, we will use the LIWC dictionaries for Dutch BIBREF9 . We hypothesize that some of LIWC's word categories can be useful in detecting (implicit) racist discourse, as some of these categories are associated with markers of racist discourse reported in previous research (cf. section \"Annotation Style\" ), including pronouns, negative emotion words, references to others, certainty, religion and curse words.",
"In addition to the Dutch LIWC data, we created a dictionary containing words that specifically relate to racist discourse. We expect a dictionary-based approach in which words are grouped into categories to work well in this case because many of the racist terms used in our corpus were neologisms and hapaxes, like halalhoer (halal prostitute). Alternatively, existing terms are often reused in a ridiculing fashion, e.g. using the word mossel (mussel) to refer to Muslims. The dictionary was created as follows: after annotation, terms pertaining to racist discourse were manually extracted from the training data. These were then grouped into different categories, where most categories have both a neutral and a negative subcategory. The negative subcategory contains explicit insults, while the neutral subcategory contains words that are normally used in a neutral fashion, e.g. zwart (black), Marokkaan (Moroccan), but which might also be used in a more implicit racist discourse; e.g. people that often talk about nationalities or skin color might be participating in a racist us and them discourse. An overview of the categories can be found in Table 2 .",
"After creating the dictionary, we expanded these word lists both manually and automatically. First, we manually added an extensive list of countries, nationalities and languages, to remove some of the bias present in our training corpus. To combat sparsity, and to catch productive compounds which are likely to be used in a racist manner, we added wildcards to the beginning or end of certain words. We used two different wildcards. * is an inclusive wildcard; it matches the word with or without any affixes, e.g. moslim* matches both moslim (Muslim) and moslims (Muslims). + is an exclusive wildcard; it only matches words when an affix is attached, e.g. +moslim will match rotmoslim (Rotten Muslim) but not moslim by itself. In our corpus (which is skewed towards racism), the + will almost always represent a derogatory prefix, which is why it figures more prominently in the negative part of our dictionary.",
"A downside of using dictionaries for the detection of racism, is that they do not include a measure of context. Therefore, a sentence such as “My brother hated the North African brown rice and lentils we made for dinner” will be classified as racist, regardless of the fact that the words above do not occur in a racist context. Approaches based on word unigrams or bigrams face similar problems. This problem is currently partially absolved by the fact that we are working with a corpus skewed towards racism: words like `brown' and `African' are more likely to be racist words in our corpus than in general text.",
"To broaden the coverage of the categories in our dictionary, we performed dictionary expansion on both the neutral and the negative categories using word2vec BIBREF10 . word2vec is a collection of models capable of capturing semantic similarity between words based on the sentential contexts in which these words occur. It does so by projecting words into an n-dimensional space, and giving words with similar contexts similar places in this space. Hence, words which are closer to each other as measured by cosine distance, are more similar. Because we observed considerable semantic variation in the insults in our corpus, we expect that dictionary expansion using word2vec will lead to the extraction of previously unknown insults, as we assume that similar insults are used in similar contexts. In parallel, we know that a lot of words belonging to certain semantic categories, such as diseases and animals, can almost invariably be used as insults.",
"The expansion proceeded as follows: for each word in the dictionary, we retrieved the five closest words, i.e. the five most similar words, in the n-dimensional space, and added these to the dictionary. Wildcards were not taken into account for this task, e.g. *jood was replaced by jood for the purposes of expansion. As such, the expanded words do not have any wildcards attached to them. For expansion we used the best-performing model from tulkens2016, which is based on a corpus of 3.9 billion words of general Dutch text. Because this word2vec model was trained on general text, the semantic relations contained therein are not based on racist or insulting text, which will improve the coverage of our expanded categories.",
"After expansion, we manually searched the expanded dictionaries and removed obviously incorrect items. Because the word2vec model also includes some non-Dutch text, e.g. Spanish, some categories were expanded incorrectly. As a result, we have 3 different dictionaries with which we perform our experiments: the original dictionary which was based on the training data, a version which was expanded using word2vec, and a cleaned version of this expanded version. The word frequencies of the dictionaries are given in Table 3 . An example of expansion is given in Table 4 ."
],
[
"For preprocessing, the text was first tokenized using the Dutch tokenizer from Pattern BIBREF7 , and then lowercased and split on whitespace, which resulted in lists of words which are appropriate for lexical processing.",
"Our dictionary-based approach, like LIWC, creates an n-dimensional vector of normalized and scaled numbers, where n is the number of dictionary categories. These numbers are obtained by dividing the frequency of words in every specific category by the total number of words in the comment. Because all features are already normalized and scaled, there was no need for further scaling. Furthermore, because the number of features is so small, we did not perform explicit feature selection."
],
[
"We estimated the optimal values for the SVM parameters by an exhaustive search through the parameter space, which led to the selection of an RBF kernel with a C value of 1 and a gamma of 0. For the SVM and other experiments, we used the implementation from Scikit-Learn BIBREF11 . Using cross-validation on the training data, all dictionary-based approaches with lexical categories related to racist discourse significantly outperformed models using only LIWC's general word categories. Since the current research concerns the binary classification of racist utterances, we only report scores for the positive class, i.e. the racist class. When only LIWC-categories were used as features, an F-score of 0.34 (std. dev. 0.07) was obtained for the racist class. When using the original discourse dictionary, we reached an F-score of 0.50 (std. dev. 0.05). Automatic expansion of the categories did not influence performance either (F-score 0.50, std. dev. 0.05). Similar results (0.49 F-score, std. dev. 0.05) were obtained when the expanded racism dictionaries were manually filtered. This result is not surprising, as the original dictionaries were created from the training data, and might form an exhaustive catalog of racist terms in the original corpus.",
"Combining the features generated by LIWC with the specific dictionary-based features led to worse results compared to the dictionary-based features by themselves (F-score 0.40, std. dev. 0.07 for the best-performing model).",
"Finally, all models based on the dictionary features as well as the combined model outperformed a unigram baseline of 0.36, but the LIWC model did not. We also report a weighted random baseline (WRB), which was outperformed by all models."
],
[
"As seen above, the performance of the different models on the train set was comparable, regardless of their expansion. This is due to the creation procedure for the dictionary: because the words in the original dictionary were directly retrieved from the training data, the expanded and cleaned versions might not be able to demonstrate their generalization performance, as most of the racist words from the training data will be included in the original dictionaries as well as the expanded dictionaries. This artifact might disappear in the test set, which was retrieved from the same two sites, but will most likely contain unseen words. These unseen words will not be present in the original dictionary, but could be present in the expanded version.",
"As Table 6 shows, the models obtain largely comparable performance on the test set, and outperform the unigram baseline by a wide margin.",
"In comparison to previous research, our approach leads to worse results than those of greevy2004text, who report a precision score of 0.93 and a recall score of 0.87, using an SVM with BOW features together with frequency-based term weights. It is, however, difficult to compare these scores to our performance, given that the data, method, and language differ.",
"Our best-performing model was based on the expanded and cleaned version of the dictionary, but this model only slightly outperformed the other models. Additionally, we also computed Area Under the Receiving Operator Characteristic Curve (ROC-AUC) scores for all models, also shown in Table 6 . ROC-AUC shows the probability of ranking a randomly chosen positive instance above a randomly chosen negative instance, thereby giving an indication of the overall performance of the models. This shows that all dictionaries have comparable AUC scores, and that each dictionary outperforms the unigram baseline. To obtain additional evidence, we computed the statistical significance of performance differences between the models based on the dictionaries and unigram baseline model using approximate randomization testing (ART) BIBREF12 . An ART test between dictionary models reveals that none of the models had performance differences that were statistically significant. Similarly, all dictionary models outperformed the unigram baseline with statistical significance, with $p$ $<$ 0.01 for the models based on the cleaned and expanded dictionaries, and $p$ $<$ 0.05 for the models based on the original dictionary.",
"To get more insight into why the expanded models were not more successful, we calculated dictionary coverage for every dictionary separately on the test set. If the expanded dictionaries do not have increased coverage, the reason for their similar performance is clear: not enough words have been added to affect the performance in any reasonable way. As Table 7 indicates, the coverage of the expanded dictionaries did increase, which indicates that the automated expansion, or manual deletion for that matter, contrary to expectations, did not add words that were useful for the classification of racist content. To obtain additional evidence for this claim, we looked at the number of comments that contained words from the original, cleaned and expanded dictionaries. The coverage in terms of total comments also increased, as well as the absolute number of racist comments that contained the added terms. Because the coverage in number of comments did not increase the performance of the dictionaries, we hypothesize that the terms that were included in the expanded dictionaries were not distributed clearly enough (over racist and neutral texts) to make a difference in the performance on the classification task."
],
[
"We developed a dictionary-based computational tool for automatic racism detection in Dutch social media comments. These comments were retrieved from public social media sites with an anti-Islamic orientation. The definition of racism we used to annotate the comments therefore includes religious and cultural racism as well, a phenomenon reported on in different studies BIBREF4 , BIBREF13 , BIBREF14 .",
"We use a Support Vector Machine to classify comments as racist or not based on the distribution of the comments' words over different word categories related to racist discourse. To evaluate the performance, we used our own annotations as gold standard. The best-performing model obtained an F-score of 0.46 for the racist class on the test set, which is an acceptable decrease in performance compared to cross-validation experiments on the training data (F-score 0.49, std. dev. 0.05). The dictionary used by the model was manually created by retrieving possibly racist and more neutral terms from the training data during annotation. The dictionary was then manually expanded, automatically expanded with a word2vec model and finally manually cleaned, i.e. irrelevant terms that were added automatically were removed. It did not prove useful to use general stylistic or content-based word categories along with the word lists specifically related to racist discourse.",
"Surprisingly, the expansion of the manually crafted dictionary did not boost the model's performance significantly. In (cross-validated) experiments on the training data, this makes sense, as the words in the different categories are retrieved from the training data itself, artificially making the dictionary very appropriate for the task. In the test runs, however, a better result could be expected from the generalized word lists. The expanded versions of the dictionary had higher overall coverage for the words in the corpus, as well as higher coverage in number of comments and in number of racist comments. This shows that the words that were automatically added, did indeed occur in our corpus. As the model's performance more or less stagnated when using the expanded categories compared to the original ones, we hypothesize that the terms that were automatically added by the word2vec model were irrelevant to the task of discriminating between racist and neutral texts.",
"In terms of future work, we will expand our research efforts to include more general social media text. Because we currently only use material which was gathered from sites skewed towards racism, the performance of our dictionary might have been artificially heightened, as the words in the dictionary only occur in racist contexts in our corpus. Therefore, including more general social media texts will serve as a good test of the generality of our dictionaries with regards to detecting insulting material."
],
[
"We are very grateful towards Leona Erens and François Deleu from Unia for wanting to collaborate with us and for pointing us towards the necessary data. We thank the three anonymous reviewers for their helpful comments and advice."
],
[
"The supplementary materials are available at https://github.com/clips/hades"
]
],
"section_name": [
null,
"Introduction",
"Related Research",
"Datasets and Annotations",
"Annotation Style",
"Training Data",
"Test data",
"Experimental Setup",
"Dictionaries",
"Preprocessing and Featurization",
"Performance on the Training Set",
"Testing the Effect of Expansion",
"Conclusions and Future Work",
"Acknowledgments",
"Supplementary Materials"
]
} | {
"answers": [
{
"annotation_id": [
"9c124b536e97537b200de4832f18b4ff03b6e686"
],
"answer": [
{
"evidence": [
"The classification of racist insults presents us with the problem of giving an adequate definition of racism. More so than in other domains, judging whether an utterance is an act of racism is highly personal and does not easily fit a simple definition. The Belgian anti-racist law forbids discrimination, violence and crime based on physical qualities (like skin color), nationality or ethnicity, but does not mention textual insults based on these qualities. Hence, this definition is not adequate for our purposes, since it does not include the racist utterances one would find on social media; few utterances that people might perceive as racist are actually punishable by law, as only utterances which explicitly encourage the use of violence are illegal. For this reason, we use a common sense definition of racist language, including all negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture. In this, we follow paolo2015racist, bonilla2002linguistics and razavi2010offensive, who show that racism is no longer strictly limited to physical or ethnic qualities, but can also include social and cultural aspects."
],
"extractive_spans": [],
"free_form_answer": "if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture.",
"highlighted_evidence": [
"we use a common sense definition of racist language, including all negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"somewhat"
],
"question": [
"how did they ask if a tweet was racist?"
],
"question_id": [
"a71ebd8dc907d470f6bd3829fa949b15b29a0631"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Gold standard corpus sizes.",
"Table 3: Dictionary word frequencies.",
"Table 2: Overview of the categories in the discourse dictionary",
"Table 4: An example of expansion. The original dictionary only contains a single word. In the expanded version, the bold words have been added. In the third version the words that were struck through have been removed.",
"Table 5: Results on the train set. WRB is a weighted random baseline.",
"Table 6: P, R, F and ROC-AUC scores on the test set.",
"Table 7: Coverage of the various dictionaries in vocabulary percentage, number of comments, and number of racist comments."
],
"file": [
"3-Table1-1.png",
"4-Table3-1.png",
"4-Table2-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png"
]
} | [
"how did they ask if a tweet was racist?"
] | [
[
"1608.08738-Related Research-0"
]
] | [
"if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture."
] | 761 |
2002.01030 | Detecting Fake News with Capsule Neural Networks | Fake news is dramatically increased in social media in recent years. This has prompted the need for effective fake news detection algorithms. Capsule neural networks have been successful in computer vision and are receiving attention for use in Natural Language Processing (NLP). This paper aims to use capsule neural networks in the fake news detection task. We use different embedding models for news items of different lengths. Static word embedding is used for short news items, whereas non-static word embeddings that allow incremental up-training and updating in the training phase are used for medium length or large news statements. Moreover, we apply different levels of n-grams for feature extraction. Our proposed architectures are evaluated on two recent well-known datasets in the field, namely ISOT and LIAR. The results show encouraging performance, outperforming the state-of-the-art methods by 7.8% on ISOT and 3.1% on the validation set, and 1% on the test set of the LIAR dataset. | {
"paragraphs": [
[
"Flexibility and ease of access to social media have resulted in the use of online channels for news access by a great number of people. For example, nearly two-thirds of American adults have access to news by online channels BIBREF0, BIBREF1. BIBREF2 also reported that social media and news consumption is significantly increased in Great Britain.",
"In comparison to traditional media, social networks have proved to be more beneficial, especially during a crisis, because of the ability to spread breaking news much faster BIBREF3. All of the news, however, is not real and there is a possibility of changing and manipulating real information by people due to political, economic, or social motivations. This manipulated data leads to the creation of news that may not be completely true or may not be completely false BIBREF4. Therefore, there is misleading information on social media that has the potential to cause many problems in society. Such misinformation, called fake news, has a wide variety of types and formats. Fake advertisements, false political statements, satires, and rumors are examples of fake news BIBREF0. This widespread of fake news that is even more than mainstream media BIBREF5 motivated many researchers and practitioners to focus on presenting effective automatic frameworks for detecting fake news BIBREF6. Google has announced an online service called “Google News Initiative” to fight fake news BIBREF7. This project will try to help readers for realizing fake news and reports BIBREF8.",
"Detecting fake news is a challenging task. A fake news detection model tries to predict intentionally misleading news based on analyzing the real and fake news that previously reviewed. Therefore, the availability of high-quality and large-size training data is an important issue.",
"The task of fake news detection can be a simple binary classification or, in a challenging setting, can be a fine-grained classification BIBREF9. After 2017, when fake news datasets were introduced, researchers tried to increase the performance of their models using this data. Kaggle dataset, ISOT dataset, and LIAR dataset are some of the most well-known publicly available datasets BIBREF10.",
"In this paper, we propose a new model based on capsule neural networks for detecting fake news. We propose architectures for detecting fake news in different lengths of news statements by using different varieties of word embedding and applying different levels of n-gram as feature extractors. We show these proposed models achieve better results in comparison to the state-of-the-art methods.",
"The rest of the paper is organized as follows: Section SECREF2 reviews related work about fake news detection. Section SECREF3 presents the model proposed in this paper. The datasets used for fake news detection and evaluation metrics are introduced in Section SECREF4. Section SECREF5 reports the experimental results, comparison with the baseline classification and discussion. Section SECREF6 summarizes the paper and concludes this work."
],
[
"Fake news detection has been studied in several investigations. BIBREF11 presented an overview of deception assessment approaches, including the major classes and the final goals of these approaches. They also investigated the problem using two approaches: (1) linguistic methods, in which the related language patterns were extracted and precisely analyzed from the news content for making decision about it, and (2) network approaches, in which the network parameters such as network queries and message metadata were deployed for decision making about new incoming news.",
"BIBREF12 proposed an automated fake news detector, called CSI that consists of three modules: Capture, Score, and Integrate, which predicts by taking advantage of three features related to the incoming news: text, response, and source of it. The model includes three modules; the first one extracts the temporal representation of news articles, the second one represents and scores the behavior of the users, and the last module uses the outputs of the first two modules (i.e., the extracted representations of both users and articles) and use them for the classification. Their experiments demonstrated that CSI provides an improvement in terms of accuracy.",
"BIBREF13 introduced a new approach which tries to decide if a news is fake or not based on the users that interacted with and/or liked it. They proposed two classification methods. The first method deploys a logistic regression model and takes the user interaction into account as the features. The second one is a novel adaptation of the Boolean label crowdsourcing techniques. The experiments showed that both approaches achieved high accuracy and proved that considering the users who interact with the news is an important feature for making a decision about that news.",
"BIBREF14 introduced two new datasets that are related to seven different domains, and instead of short statements containing fake news information, their datasets contain actual news excerpts. They deployed a linear support vector machine classifier and showed that linguistic features such as lexical, syntactic, and semantic level features are beneficial to distinguish between fake and genuine news. The results showed that the performance of the developed system is comparable to that of humans in this area.",
"BIBREF15 provided a novel dataset, called LIAR, consisting of 12,836 labeled short statements. The instances in this dataset are chosen from more natural contexts such as Facebook posts, tweets, political debates, etc. They proposed neural network architecture for taking advantage of text and meta-data together. The model consists of a Convolutional Neural Network (CNN) for feature extraction from the text and a Bi-directional Long Short Term Memory (BiLSTM) network for feature extraction from the meta-data and feeds the concatenation of these two features into a fully connected softmax layer for making the final decision about the related news. They showed that the combination of metadata with text leads to significant improvements in terms of accuracy.",
"BIBREF16 proved that incorporating speaker profiles into an attention-based LSTM model can improve the performance of a fake news detector. They claim speaker profiles can contribute to the model in two different ways. First, including them in the attention model. Second, considering them as additional input data. They used party affiliation, speaker location, title, and credit history as speaker profiles, and they show this metadata can increase the accuracy of the classifier on the LIAR dataset.",
"BIBREF17 presented a new dataset for fake news detection, called ISOT. This dataset was entirely collected from real-world sources. They used n-gram models and six machine learning techniques for fake news detection on the ISOT dataset. They achieved the best performance by using TF-IDF as the feature extractor and linear support vector machine as the classifier.",
"BIBREF18 proposed an end-to-end framework called event adversarial neural network, which is able to extract event-invariant multi-modal features. This model has three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The first component uses CNN as its core module. For the second component, a fully connected layer with softmax activation is deployed to predict if the news is fake or not. As the last component, two fully connected layers are used, which aims at classifying the news into one of K events based on the first component representations.",
"BIBREF19 developed a tractable Bayesian algorithm called Detective, which provides a balance between selecting news that directly maximizes the objective value and selecting news that aids toward learning user's flagging accuracy. They claim the primary goal of their works is to minimize the spread of false information and to reduce the number of users who have seen the fake news before it becomes blocked. Their experiments show that Detective is very competitive against the fictitious algorithm OPT, an algorithm that knows the true users’ parameters, and is robust in applying flags even in a setting where the majority of users are adversarial."
],
[
"In this section, we first introduce different variations of word embedding models. Then, we proposed two capsule neural network models according to the length of the news statements that incorporate different word embedding models for fake news detection."
],
[
"Dense word representation can capture syntactic or semantic information from words. When word representations are demonstrated in low dimensional space, they are called word embedding. In these representations, words with similar meanings are in close position in the vector space.",
"In 2013, BIBREF20 proposed word2vec, which is a group of highly efficient computational models for learning word embeddings from raw text. These models are created by training neural networks with two-layers trained by a large volume of text. These models can produce vector representations for every word with several hundred dimensions in a vector space. In this space, words with similar meanings are mapped to close coordinates.",
"There are some pre-trained word2vec vectors like 'Google News' that was trained on 100 billion words from Google news. One of the popular methods to improve text processing performance is using these pre-trained vectors for initializing word vectors, especially in the absence of a large supervised training set. These distributed vectors can be fed into deep neural networks and used for any text classification task BIBREF21. These pre-trained embeddings, however, can further be enhanced.",
"BIBREF21 applied different learning settings for vector representation of words via word2vec for the first time and showed their superiority compared to the regular pre-trained embeddings when they are used within a CNN model. These settings are as follow:",
"Static word2vec model: in this model, pre-trained vectors are used as input to the neural network architecture, these vectors are kept static during training, and only the other parameters are learned.",
"Non-static word2vec model: this model uses the pre-trained vectors at the initialization of learning, but during the training phase, these vectors are fine-tuned for each task using the training data of the target task.",
"Multichannel word2vec model: the model uses two sets of static and non-static word2vec vectors, and a part of vectors fine-tune during training."
],
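A minimal, hypothetical Python/Keras sketch (not the authors' code) of the static and non-static settings described above: the same pre-trained matrix initializes the embedding layer, and only the `trainable` flag differs. The vocabulary size, sequence length, and `pretrained_matrix` below are illustrative placeholders rather than the actual 'Google News' or GloVe vectors.

```python
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, seq_len = 20000, 300, 50
# Stand-in for real pre-trained word2vec/GloVe vectors.
pretrained_matrix = np.random.rand(vocab_size, embed_dim)

def make_embedding(trainable):
    # Static setting: trainable=False keeps the pre-trained vectors fixed.
    # Non-static setting: trainable=True fine-tunes them on the target task.
    return tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained_matrix),
        trainable=trainable,
    )

static_embedding = make_embedding(trainable=False)
non_static_embedding = make_embedding(trainable=True)

# A multichannel setup applies both layers to the same input and combines
# their outputs as two channels.
inputs = tf.keras.Input(shape=(seq_len,))
multichannel = tf.keras.layers.Concatenate()(
    [static_embedding(inputs), non_static_embedding(inputs)])
```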
[
"Although different models based on deep neural networks have been proposed for fake news detection, there is still a great need for further improvements in this task. In the current research, we aim at using capsule neural networks to enhance the accuracy of fake news identification systems.",
"The capsule neural network was introduced by BIBREF22 for the first time in the paper called “Dynamic Routing Between Capsules”. In this paper, they showed that capsule network performance for MNIST dataset on highly overlapping digits could work better than CNNs. In computer vision, a capsule network is a neural network that tries to work inverse graphics. In a sense, the approach tries to reverse-engineer the physical process that produces an image of the world BIBREF23.",
"The capsule network is composed of many capsules that act like a function, and try to predict the instantiation parameters and presence of a particular object at a given location.",
"One key feature of capsule networks is equivariance, which aims at keeping detailed information about the location of the object and its pose throughout the network. For example, if someone rotates the image slightly, the activation vectors also change slightly BIBREF24. One of the limitations of a regular CNN is losing the precise location and pose of the objects in an image. Although this is not a challenging issue when classifying the whole image, it can be a bottleneck for image segmentation or object detection that needs precise location and pose. A capsule, however, can overcome this shortcoming in such applications BIBREF24.",
"Capsule networks have recently received significant attention. This model aims at improving CNNs and RNNs by adding the following capabilities to each source, and target node: (1) the source node has the capability of deciding about the number of messages to transfer to target nodes, and (2) the target node has the capability of deciding about the number of messages that may be received from different source nodes BIBREF25.",
"After the success of capsule networks in computer vision tasks BIBREF26, BIBREF27, BIBREF28, capsule networks have been used in different NLP tasks, including text classification BIBREF29, BIBREF30, multi-label text classification BIBREF31, sentiment analysis BIBREF18, BIBREF32, identifying aggression and toxicity in comments BIBREF33, and zero-shot user intent detection BIBREF34.",
"In capsule networks, the features that are extracted from the text are encapsulated into capsules (groups of neurons). The first work that applied capsule networks for text classification was done by BIBREF35. In their research, the performance of the capsule network as a text classification network was evaluated for the first time. Their capsule network architecture includes a standard convolutional layer called n-gram convolutional layer that works as a feature extractor. The second layer is a layer that maps scalar-valued features into a capsule representation and is called the primary capsule layer. The outputs of these capsules are fed to a convolutional capsule layer. In this layer, each capsule is only connected to a local region in the layer below. In the last step, the output of the previous layer is flattened and fed through a feed-forward capsule layer. For this layer, every capsule of the output is considered as a particular class. In this architecture, a max-margin loss is used for training the model. Figure FIGREF6 shows the architecture proposed by BIBREF35.",
"Some characteristics of capsules make them suitable for presenting a sentence or document as a vector for text classification. These characteristics include representing attributes of partial entities and expressing semantic meaning in a wide space BIBREF29.",
"For fake news identification with different length of statements, our model benefits from several parallel capsule networks and uses average pooling in the last stage. With this architecture, the models can learn more meaningful and extensive text representations on different n-gram levels according to the length of texts.",
"Depending on the length of the news statements, we use two different architectures. Figure FIGREF7 depicts the structure of the proposed model for medium or long news statements. In the model, a non-static word embedding is used as an embedding layer. In this layer, we use 'glove.6B.300d' as a pre-trained word embedding, and use four parallel networks by considering four different filter sizes 2,3,4,5 as n-gram convolutional layers for feature extraction. In the next layers, for each parallel network, there is a primary capsule layer and a convolutional capsule layer, respectively, as presented in Figure FIGREF6. A fully connected capsule layer is used in the last layer for each parallel network. At the end, the average polling is added for producing the final result.",
"For short news statements, due to the limitation of word sequences, a different structure has been proposed. The layers are like the first model, but only two parallel networks are considered with 3 and 5 filter sizes. In this model, a static word embedding is used. Figure FIGREF8 shows the structure of the proposed model for short news statements."
],
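Below is a structural sketch, in Keras-style Python, of the parallel-branch design described above; it is not the authors' implementation. The `capsule_stack` function is a placeholder built from ordinary layers and stands in for the primary capsule, convolutional capsule, and fully connected capsule layers with dynamic routing, so only the four-branch wiring and final averaging are illustrated. All layer sizes are assumptions.

```python
import tensorflow as tf

seq_len, vocab_size, embed_dim, n_classes = 50, 20000, 300, 2

def capsule_stack(x, n_classes):
    # Placeholder for: primary capsules -> convolutional capsules -> FC capsules.
    x = tf.keras.layers.Conv1D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    return tf.keras.layers.Dense(n_classes, activation="softmax")(x)

inputs = tf.keras.Input(shape=(seq_len,))
embedded = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)

branch_outputs = []
for kernel_size in (2, 3, 4, 5):  # n-gram convolutional feature extractors
    feats = tf.keras.layers.Conv1D(256, kernel_size, activation="relu")(embedded)
    branch_outputs.append(capsule_stack(feats, n_classes))

# Averaging the parallel branches produces the final prediction; the short-text
# variant would use only two branches (kernel sizes 3 and 5) on a static embedding.
outputs = tf.keras.layers.Average()(branch_outputs)
model = tf.keras.Model(inputs, outputs)
```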
[
"Several datasets have been introduced for fake news detection. One of the main requirements for using neural architectures is having a large dataset to train the model. In this paper, we use two datasets, namely ISOT fake news BIBREF17 and LIAR BIBREF15, which have a large number of documents for training deep models. The length of news statements for ISOT is medium or long, and LIAR is short."
],
[
"In 2017, BIBREF17 introduced a new dataset that was collected from real-world sources. This dataset consists of news articles from Reuters.com and Kaggle.com for real news and fake news, respectively. Every instance in the dataset is longer than 200 characters. For each article, the following metadata is available: article type, article text, article title, article date, and article label (fake or real). Table TABREF12 shows the type and size of the articles for the real and fake categories."
],
[
"As mentioned in Section SECREF2, one of the recent well-known datasets, is provided by BIBREF15. BIBREF15 introduced a new large dataset called LIAR, which includes 12.8K human-labeled short statements from POLITIFACT.COM API. Each statement is evaluated by POLITIFACT.COM editor for its validity. Six fine-grained labels are considered for the degree of truthfulness, including pants-fire, false, barely-true, half-true, mostly-true, and true. The distribution of labels in this dataset are as follows: 1,050 pants-fire labels and a range of 2,063 to 2,638 for other labels.",
"In addition to news statements, this dataset consists of several metadata as speaker profiles for each news item. These metadata include valuable information about the subject, speaker, job, state, party, and total credit history count of the speaker of the news. The total credit history count, including the barely-true counts, false counts, half-true counts, mostly-true counts, and pants-fire counts. The statistics of LIAR dataset are shown in Table TABREF14. Some excerpt samples from the LIAR dataset are presented in Table TABREF15."
],
[
"The experiments of this paper were conducted on a PC with Intel Core i7 6700k, 3.40GHz CPU; 16GB RAM; Nvidia GeForce GTX 1080Ti GPU in a Linux workstation. For implementing the proposed model, the Keras library BIBREF36 was used, which is a high-level neural network API."
],
[
"The evaluation metric in our experiments is the classification accuracy. Accuracy is the ratio of correct predictions to the total number of samples and is computed as:",
"Where TP is represents the number of True Positive results, FP represents the number of False Positive results, TN represents the number of True Negative results, and FN represents the number of False Negative results."
],
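A short Python illustration of the metric described above; the counts in the example are made up.

```python
def accuracy(tp, fp, tn, fn):
    """Ratio of correct predictions (TP + TN) to the total number of samples."""
    return (tp + tn) / (tp + fp + tn + fn)

# Illustrative counts only.
print(accuracy(tp=900, fp=50, tn=920, fn=130))  # 0.91
```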
[
"For evaluating the effectiveness of the proposed model, a series of experiments on two datasets were performed. These experiments are explained in this section and the results are compared to other baseline methods. We also discuss the results for every dataset separately."
],
[
"As mentioned in Section SECREF4, BIBREF17 presented the ISOT dataset. According to the baseline paper, we consider 1000 articles for every set of real and fake articles, a total of 2000 articles for the test set, and the model is trained with the rest of the data.",
"First, the proposed model is evaluated with different word embeddings that described in Section SECREF1. Table TABREF20 shows the result of applying different word embeddings for the proposed model on ISOT, which consists of medium and long length news statements. The best result is achieved by applying the non-static embedding.",
"BIBREF17 evaluated different machine learning methods for fake news detection on the ISOT dataset, including the Support Vector Machine (SVM), the Linear Support Vector Machine (LSVM), the K-Nearest Neighbor (KNN), the Decision Tree (DT), the Stochastic Gradient Descent (SGD), and the Logistic regression (LR) methods.",
"Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM."
],
[
"The proposed model can predict true labels with high accuracy reaching in a very small number of wrong predictions. Table TABREF23 shows the titles of two wrongly predicted samples for detecting fake news. To have an analysis on our results, we investigate the effects of sample words that are represented in training statements that tagged as real and fake separately.",
"For this work, all of the words and their frequencies are extracted from the two wrong samples and both real and fake labels of the training data. Table TABREF24 shows the information of this data. Then for every wrongly predicted sample, stop-words are omitted, and words with a frequency of more than two are listed. After that, all of these words and their frequency in real and fake training datasets are extracted. In this part, the frequencies of these words are normalized. Table TABREF25 and Table TABREF28 show the normalized frequencies of words for each sample respectably. In these tables, for ease of comparison, the normalized frequencies of real and fake labels of training data and the normalized frequency for each word in every wrong sample are multiplied by 10.",
"The label of Sample 1 is predicted as fake, but it is real. In Table TABREF25, six most frequent words of Sample 1 are listed, the word \"tax\" is presented 2 times more than each of the other words in Sample 1, and this word in the training data with real labels is obviously more frequent. In addition to this word, for other words like \"state\", the same observation exists.",
"The text of Sample 2 is predicted as real news, but it is fake. Table TABREF28 lists six frequent words of Sample 2. The two most frequent words of this text are \"trump\" and \"sanders\". These words are more frequent in training data with fake labels than the training data with real labels. \"All\" and \"even\" are two other frequent words, We use \"even\" to refer to something surprising, unexpected, unusual or extreme and \"all\" means every one, the complete number or amount or the whole. therefore, a text that includes these words has more potential to classify as a fake news. These experiments show the strong effect of the sample words frequency on the prediction of the labels."
],
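A minimal Python sketch of the comparison procedure described above (stop-word removal, a frequency threshold of more than two, normalization, and the scale factor of 10). The stop-word list and token inputs are placeholders, not the authors' preprocessing.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is"}  # illustrative list

def normalized_frequencies(tokens, scale=10):
    tokens = [t.lower() for t in tokens if t.lower() not in STOP_WORDS]
    counts = Counter(tokens)
    total = sum(counts.values()) or 1
    return {w: scale * c / total for w, c in counts.items()}

def frequent_words(sample_tokens, min_count=3):
    counts = Counter(t.lower() for t in sample_tokens if t.lower() not in STOP_WORDS)
    return [w for w, c in counts.items() if c >= min_count]  # frequency of more than two

def compare(sample_tokens, real_tokens, fake_tokens):
    real_freq = normalized_frequencies(real_tokens)
    fake_freq = normalized_frequencies(fake_tokens)
    sample_freq = normalized_frequencies(sample_tokens)
    for w in frequent_words(sample_tokens):
        print(w, sample_freq.get(w, 0.0), real_freq.get(w, 0.0), fake_freq.get(w, 0.0))
```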
[
"As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which is considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection by adding every metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set."
],
[
"Figure FIGREF32 shows the confusion matrix of the best classification using the proposed model for the test set. The model classifies false, half-true, and mostly-true news with more accuracy. Nevertheless, it is difficult to distinguish between true and mostly-true and also between barely-true and false. The worst accuracy is for classifying pants-fire. For these labels, detecting the correct label is more challenging, and many pants-fire texts are predicted as false."
],
[
"In this paper, we apply capsule networks for fake news detection. We propose two architectures for different lengths of news statements. We apply two strategies to improve the performance of the capsule networks for the task. First, for detecting the medium or long length of news text, we use four parallel capsule networks that each one extracts different n-gram features (2,3,4,5) from the input texts. Second, we use non-static embedding such that the word embedding model is incrementally up-trained and updated in the training phase.",
"Moreover, as a fake news detector for short news statements, we use only two parallel networks with 3 and 5 filter sizes as a feature extractor and static model for word embedding. For evaluation, two datasets are used. The ISOT dataset as a medium length or long news text and LIAR as a short statement text. The experimental results on these two well-known datasets showed improvement in terms of accuracy by 7.8% on the ISOT dataset and 3.1% on the validation set and 1% on the test set of the LIAR dataset."
]
],
"section_name": [
"Introduction",
"Related work",
"Capsule networks for fake news detection",
"Capsule networks for fake news detection ::: Different variations of word embedding models",
"Capsule networks for fake news detection ::: Proposed model",
"Evaluation ::: Dataset",
"Evaluation ::: Dataset ::: The ISOT fake news dataset",
"Evaluation ::: Dataset ::: The LIAR dataset",
"Evaluation ::: Experimental setup",
"Evaluation ::: Evaluation metrics",
"Results",
"Results ::: Classification for ISOT dataset",
"Results ::: Discussion",
"Results ::: Classification for the LIAR dataset",
"Results ::: Classification for the LIAR dataset ::: Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"9f334cc4f5a70f6a7fbc5ab633c8951844c757f3"
],
"answer": [
{
"evidence": [
"Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM.",
"As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which is considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection by adding every metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set."
],
"extractive_spans": [],
"free_form_answer": "ISOT dataset: LLVM\nLiar dataset: Hybrid CNN and LSTM with attention",
"highlighted_evidence": [
"The accuracy of our model is 7.8% higher than the best result achieved by LSVM.",
"The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero"
],
"paper_read": [
"no"
],
"question": [
"What are state of the art methods authors compare their work with? "
],
"question_id": [
"144714fe0d5a2bb7e21a7bf50df39d790ff12916"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The architecture of capsule network proposed by Yang et al. [35] for text classification",
"Figure 2: The architecture of the proposed non-static capsule network for detecting fake news in medium or long news statements.",
"Figure 3: The architecture of the proposed static capsule network for detecting fake news in short news statements.",
"Table 1: Type and size of every articles per category for ISOT Dataset provided by Ahmed et al. [2]",
"Table 2: The LIAR dataset statistics provided by Wang [32]",
"Table 3: Three random excerpts from the LIAR dataset.",
"Table 4: Result of proposed model with different word embedding models",
"Table 5: Comparison of non-static capsule network result with Result of Ahmed et al. [2]",
"Table 6: Two samples with wrong prediction",
"Table 7: The number of word tokens and word types of training data and samples",
"Table 8: Normalized frequency for words in sample 1 and training data with fake and real label",
"Table 9: Normalized frequency for words in sample 2 and training data with fake and real label",
"Table 10: Comparison of capsule network result with other baseline",
"Figure 4: Confusion matrix of classification using proposed model for LIAR dataset"
],
"file": [
"8-Figure1-1.png",
"10-Figure2-1.png",
"11-Figure3-1.png",
"12-Table1-1.png",
"13-Table2-1.png",
"14-Table3-1.png",
"15-Table4-1.png",
"16-Table5-1.png",
"16-Table6-1.png",
"17-Table7-1.png",
"18-Table8-1.png",
"19-Table9-1.png",
"20-Table10-1.png",
"20-Figure4-1.png"
]
} | [
"What are state of the art methods authors compare their work with? "
] | [
[
"2002.01030-Results ::: Classification for ISOT dataset-3",
"2002.01030-Results ::: Classification for the LIAR dataset-0"
]
] | [
"ISOT dataset: LLVM\nLiar dataset: Hybrid CNN and LSTM with attention"
] | 767 |
2004.03788 | Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets | Satirical news detection is an important yet challenging task to prevent spread of misinformation. Many feature based and end-to-end neural nets based satirical news detection systems have been proposed and delivered promising results. Existing approaches explore comprehensive word features from satirical news articles, but lack semantic metrics using word vectors for tweet form satirical news. Moreover, the vagueness of satire and news parody determines that a news tweet can hardly be classified with a binary decision, that is, satirical or legitimate. To address these issues, we collect satirical and legitimate news tweets, and propose a semantic feature based approach. Features are extracted by exploring inconsistencies in phrases, entities, and between main and relative clauses. We apply game-theoretic rough set model to detect satirical news, in which probabilistic thresholds are derived by game equilibrium and repetition learning mechanism. Experimental results on the collected dataset show the robustness and improvement of the proposed approach compared with Pawlak rough set model and SVM. | {
"paragraphs": [
[
"Satirical news, which uses parody characterized in a conventional news style, has now become an entertainment on social media. While news satire is claimed to be pure comedic and of amusement, it makes statements on real events often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsifications. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or lexical database, and other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.",
"However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400\" by The Onion is largely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to such form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and corresponding prepositional phrase. For a satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by cosine similarity between word components where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8.",
"Due to the obscurity of satire genre and lacks of information given tweet form satirical news, there exists ambiguity in satirical news, which causes great difficulty to make a traditional binary decision. That is, it is difficult to classify one news as satirical or legitimate with available information. Three-way decisions, proposed by YY Yao, added an option - deferral decision in the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, one news may be classified as satirical, legitimate, and deferral. We apply rough sets model, particularly the game-theoretic rough sets to classify news into three groups, i.e., satirical, legitimate, and deferral. Game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent promising model for decision making in the rough set context BIBREF11. GTRS determine three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between involved criteria. The balanced thresholds of three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16.",
"We apply GTRS model on our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding Nash equilibrium of games. We perform extensive experiments on the collected dataset, fine-tuning the model by different discretization methods and variation of equivalent classes. The experimental result shows that the performance of the proposed model is superior compared with Pawlak rough sets model and SVM."
],
[
"Satirical news detection is an important yet challenging NLP task. Many feature based models have been proposed. Burfoot et al. extracted features of headline, profanity, and slang using word relations given by statistical metrics and lexical database BIBREF1. Rubin et al. proposed a SVM based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic feature based on dictionary and writing stylistic feature from part-of-speech tags distribution frequency BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also uses social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and amazon comments BIBREF19. The detection of a certain type of sarcasm which contracts positive sentiment with a negative situation by analyzing the sentence pattern with a bootstrapped learning was also discussed BIBREF20. Although word level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition model, we observe that semantic features are more important than word level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.",
"Recently, with the success of deep learning in NLP, many researchers attempted to detect fake news with end-to-end neural nets based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to both capture sentence level and document level satire BIBREF4. Some research analyzed sarcasm from non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweets authors for future analysis. While all these works detect fake news given full text or image content, or target on non-news tweets, we attempt bridge the gap and detect satirical news by analyzing news tweets which concisely summarize the content of news."
],
[
"In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train game-theoretic rough set decision model."
],
[
"We collected approximately 9,000 news tweets from satirical news sources such as The Onion and Borowitz Report and about 11,000 news tweets from legitimate new sources such as Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. The duplicated and extreme short tweets are removed.A news tweet is labeled as satirical if it is written by satirical news sources and legitimate if it is from legitimate news sources. Table TABREF2 gives an example of tweet instances that comprise our dataset."
],
[
"Satirical news is not based on or does not aim to state the fact. Rather, it uses parody or humor to make statement, criticisms, or just amusements. In order to achieve such effect, contradictions are greatly utilized. Therefore, inconsistencies significantly exist in different parts of a satirical news tweet. In addition, there is a lack of entity or inconsistency between entities in news satire. We extracted these features at semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition by Flair. The inconsistencies in different structures are measured by cosine similarity of word phrases where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurements. A word level feature using tf-idf BIBREF22 is added for robustness."
],
[
"One way for a news satire to obtain surprise or humor effect is to combine irrelevant or less jointly used attributes and the head noun which they modified. For example, noun phrase such as “rampant accountability\", “posthumous apology\", “Vatican basement\", “self-imposed mental construct\" and other rare combinations are widely used in satirical news, while individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then for each noun phrase, each adjacent word pair is selected and represented by 100-dim Glove word vector denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as:",
"where $T$ is a total number of word pairs. We use $S_{N\\!P}$ as a feature to capture the overall inconsistency in noun phrase uses. $S_{N\\!P}$ ranges from -1 to 1, where a smaller value indicates more significant inconsistency."
],
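A small NumPy sketch of the $S_{NP}$ feature: the average cosine similarity over adjacent word pairs inside leaf noun phrases. It assumes `glove` is a dictionary mapping words to their 100-dimensional vectors; skipping out-of-vocabulary words and returning 0.0 for empty input are assumptions made for illustration.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def s_np(noun_phrases, glove):
    """noun_phrases: list of token lists, e.g. [["posthumous", "apology"], ...]."""
    sims = []
    for phrase in noun_phrases:
        vectors = [glove[w] for w in phrase if w in glove]
        for v, w in zip(vectors, vectors[1:]):  # adjacent word pairs (v_t, w_t)
            sims.append(cosine(v, w))
    return sum(sims) / len(sims) if sims else 0.0  # smaller => more inconsistent
```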
[
"Another commonly used rhetoric approach for news satire is to make contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.\", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by main/relative relation or preposition, and $(w_{1},w_{1},... w_{q})$ and $(v_{1},v_{1},... v_{p})$ be the vectorized words in $q$ and $p$. Then we define inconsistency between $q$ and $p$ as:",
"Similarly, the feature $S_{Q\\!P}$ is measured by cosine similarity of linear summations of word vectors, where smaller value indicates more significant inconsistency."
],
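A corresponding sketch for $S_{QP}$: each clause is represented by the linear sum of its word vectors, and the feature is the cosine similarity of the two sums. The `glove` lookup and the fallback value for empty clauses are assumptions.

```python
import numpy as np

def s_qp(clause_q, clause_p, glove):
    q_vecs = [glove[w] for w in clause_q if w in glove]
    p_vecs = [glove[w] for w in clause_p if w in glove]
    if not q_vecs or not p_vecs:
        return 0.0  # assumed fallback when a clause has no known words
    q_vec, p_vec = np.sum(q_vecs, axis=0), np.sum(p_vecs, axis=0)
    denom = np.linalg.norm(q_vec) * np.linalg.norm(p_vec) + 1e-8
    return float(np.dot(q_vec, p_vec) / denom)  # smaller => more inconsistent
```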
[
"Even though many satirical news tweets are made based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use the words such as “man\",“woman\",“local man\", “area woman\",“local family\" as subject. However, the inconsistency between named entities and noun phrases often exists in a news satire if a named entity is included. For example, the named entity “Andrew Yang\" and the noun phrases “time vortex\" show great inconsistency than “President Trump\", \"Senate Republicans\", and “White House\" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill.\" We define such inconsistency as a categorical feature that:",
"$S_{N\\! E\\! R\\! N}$ is the cosine similarity of named entities and noun phrases of a certain sentence and $\\bar{S}_{N\\! E\\! R\\! N}$ is the mean value of $S_{N\\! E\\! R\\! N}$ in corpus."
],
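A sketch of the categorical named-entity/noun-phrase feature. The text defines it by comparing $S_{NERN}$ against its corpus mean, but the defining equation is not reproduced here, so the concrete encoding below (0 for a missing entity, 1 for above the mean, 2 for below it) and the averaging of word vectors are assumptions for illustration only.

```python
import numpy as np

def s_nern(entity_tokens, np_tokens, glove):
    e = [glove[w] for w in entity_tokens if w in glove]
    n = [glove[w] for w in np_tokens if w in glove]
    if not e or not n:
        return None  # no named entity (or no noun phrase) present
    e_vec, n_vec = np.mean(e, axis=0), np.mean(n, axis=0)
    return float(np.dot(e_vec, n_vec) /
                 (np.linalg.norm(e_vec) * np.linalg.norm(n_vec) + 1e-8))

def nern_category(s_value, corpus_mean):
    if s_value is None:
        return 0                               # assumed "no entity" category
    return 1 if s_value >= corpus_mean else 2  # consistent vs. inconsistent
```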
[
"We calculated the difference of tf-idf scores between legitimate news corpus and satirical news corpus for each single word. Then, the set $S_{voc}$ that includes most representative legitimate news words is created by selecting top 100 words given the tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as:"
],
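A sketch of the $S_{voc}$ construction and the word-level feature. Because the defining equation for $B_{voc}$ is not shown in the text, the sketch assumes $B_{voc}$ indicates whether any word of the tweet belongs to $S_{voc}$; the tf-idf dictionaries are placeholders.

```python
def build_s_voc(tfidf_legit, tfidf_satire, top_k=100):
    """tfidf_legit / tfidf_satire: dicts mapping word -> corpus-level tf-idf score."""
    words = set(tfidf_legit) | set(tfidf_satire)
    diff = {w: tfidf_legit.get(w, 0.0) - tfidf_satire.get(w, 0.0) for w in words}
    return set(sorted(diff, key=diff.get, reverse=True)[:top_k])

def b_voc(tweet_tokens, s_voc):
    # Assumed reading of the binary feature: 1 if the tweet contains any S_voc word.
    return 1 if any(w in s_voc for w in tweet_tokens) else 0
```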
[
"We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\\subseteq U \\times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\\lbrace y\\in U|xEy\\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \\frac{|satire\\cap [x]|}{|[x]|}$ as the evaluation function, and $(\\alpha ,\\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is,",
"Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\\alpha $, i.e., $Pr(satire|[x])\\ge \\alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\\beta $, i.e., $Pr(satire|[x])\\le \\beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\\alpha $ and $\\beta $, i.e., $\\beta <Pr(satire|[x])<\\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\\alpha ,\\beta )=(1,0)$.",
"Given a pair of probabilistic thresholds $(\\alpha , \\beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$,",
"Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12,",
"The criterion coverage indicates the proportions of news that can be confidently classified. Next, we will obtain $(\\alpha , \\beta )$ by game formulation and repetition learning."
],
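A direct Python rendering of the three-way decision rule above; the probability and thresholds in the example are illustrative.

```python
def three_way_decision(pr_satire_given_x, alpha, beta):
    """Assign the equivalence class of a news item to POS, NEG, or BND."""
    assert 0 <= beta < alpha <= 1
    if pr_satire_given_x >= alpha:
        return "POS"   # accept as satirical
    if pr_satire_given_x <= beta:
        return "NEG"   # reject as satirical, i.e., accept as legitimate
    return "BND"       # defer the decision

# Pawlak rough sets correspond to (alpha, beta) = (1, 0); the GTRS-derived
# thresholds reported later in the paper are (0.52, 0.48).
print(three_way_decision(0.75, alpha=0.52, beta=0.48))  # POS
```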
[
"We construct a game $G=\\lbrace O,S,u\\rbrace $ given the set of game players $O$, the set of strategy profile $S$, and the payoff functions $u$, where the accuracy and coverage are two players, respectively, i.e., $O=\\lbrace acc, cov\\rbrace $.",
"The set of strategy profiles $S=S_{acc}\\times S_{cov}$, where $S_{acc}$ and $S_{cov} $ are sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are the changes made on the initial thresholds,",
"$c_{acc}$ and $c_{cov}$ denote the change steps used by two players, and their values are determined by the concrete experiment date set.",
"Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\\alpha ,\\beta )$ and $u_{cov}(\\alpha ,\\beta )$ to show this relationship. The payoff functions $u_{acc}(\\alpha ,\\beta )$ and $u_{cov}(\\alpha ,\\beta )$ are defined as,",
"where $Acc_{(\\alpha , \\beta )}(Satire)$ and $Cov_{(\\alpha , \\beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16).",
"Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows a payoff table example in which both players have 3 strategies defined in Equation refeq:stategies.",
"The arrow $\\downarrow $ denotes decreasing a value and $\\uparrow $ denotes increasing a value. On each cell, the threshold values are determined by two players."
],
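A sketch of how a payoff table could be computed from the equivalence-class statistics, following the standard accuracy and coverage definitions referenced above. The `classes` input format, and the assignment of threshold changes to the two players (one lowering alpha, the other raising beta), are assumptions for illustration.

```python
def region_stats(classes, alpha, beta):
    """classes: list of (Pr(X_i), Pr(satire | X_i)) pairs; returns (accuracy, coverage)."""
    correct_mass = classified_mass = 0.0
    for pr_x, pr_satire in classes:
        if pr_satire >= alpha:                 # POS region: accept as satirical
            correct_mass += pr_x * pr_satire
            classified_mass += pr_x
        elif pr_satire <= beta:                # NEG region: accept as legitimate
            correct_mass += pr_x * (1 - pr_satire)
            classified_mass += pr_x
    accuracy = correct_mass / classified_mass if classified_mass else 0.0
    coverage = classified_mass                 # classified share of U, since sum(Pr(X_i)) = 1
    return accuracy, coverage

def payoff_table(classes, alpha=1.0, beta=0.0, step=0.03, n_strategies=3):
    table = {}
    for i in range(n_strategies):              # strategy i: lower alpha by i * step
        for j in range(n_strategies):          # strategy j: raise beta by j * step
            a, b = alpha - i * step, beta + j * step
            if b < a:                          # keep a valid threshold ordering
                table[(i, j)] = region_stats(classes, a, b)
    return table
```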
[
"We repeat the game with the new thresholds until a balanced solution is reached. We first analyzes the pure strategy equilibrium of the game and then check if the stopping criteria are satisfied.",
"Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium, if",
"This means that none of players would like to change his strategy or they would loss benefit if deriving from this strategy profile, provided this player has the knowledge of other player's strategy.",
"Repetition of games. Assuming that we formulate a game, in which the initial thresholds are $(\\alpha , \\beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\\alpha ^{*}, \\beta ^{*})$. If the thresholds $(\\alpha ^{*}, \\beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\\alpha ^{*}, \\beta ^{*})$. If the thresholds $(\\alpha ^{*}, \\beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games.",
"Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, we set the stopping criterion as within the range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff."
],
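A sketch of the equilibrium check and the repetition loop. `payoffs(alpha, beta)` is assumed to return the pair (u_acc, u_cov) as in the game formulation, with the same strategy indexing as the payoff-table sketch above; the stopping test is a literal reading of the criterion stated in the text.

```python
def pure_nash(table):
    """table: {(i, j): (u_acc, u_cov)}; player acc picks i, player cov picks j."""
    equilibria = []
    for (i, j), (ua, uc) in table.items():
        row_best = all(ua >= table[(k, jj)][0] for (k, jj) in table if jj == j)
        col_best = all(uc >= table[(ii, l)][1] for (ii, l) in table if ii == i)
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

def repeated_games(payoffs, steps=(0.0, 0.03, 0.06), alpha=1.0, beta=0.0, max_rounds=10):
    for _ in range(max_rounds):
        table = {(i, j): payoffs(alpha - da, beta + db)
                 for i, da in enumerate(steps)
                 for j, db in enumerate(steps)
                 if beta + db < alpha - da}     # valid threshold orderings only
        eq = pure_nash(table)
        if not eq or eq[0] == (0, 0):
            break                               # no equilibrium, or "no change" is best
        i, j = eq[0]
        base_acc, base_cov = payoffs(alpha, beta)
        new_acc, new_cov = table[(i, j)]
        increase = max(new_acc - base_acc, new_cov - base_cov, 0.0)
        decrease = max(base_acc - new_acc, base_cov - new_cov, 0.0)
        if increase < decrease:                 # stopping criterion from the text
            break
        new_alpha, new_beta = alpha - steps[i], beta + steps[j]
        if not (0.0 < new_beta < new_alpha < 1.0):
            break                               # further changes would leave the valid range
        alpha, beta = new_alpha, new_beta
    return alpha, beta
```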
[
"There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\\!P}$ and $S_{Q\\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\\!P}$ and $D_{Q\\!P}$ denote the discretized variables $S_{N\\!P}$ and $S_{Q\\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23.",
"The news whose condition attributes have the same values are classified in an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and condition probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire."
],
[
"We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds with the initial thresholds $(\\alpha , \\beta )=(1,0)$ and learning rate 0.03. As shown in the payoff table Table TABREF26,",
"the cell at the right bottom corner is the game equilibrium whose strategy profile is ($\\beta $ increases 0.06, $\\alpha $ decreases 0.06). The payoffs of the players are (0.9784,0.3343). We set the stopping criterion as the increase of one player's payoff is less than the decrease of the other player's payoff when the thresholds are within the range. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy is decreased from 1 to 0.9784 but the coverage is increased from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds.",
"The competitive games are repeated seven times. The result is shown in Table TABREF27.",
"After the eighth iteration, the repetition of game is stopped because the further changes on thresholds may cause the thresholds lay outside of the range $0 < \\beta < \\alpha <1$, and the final result is the equilibrium of the seventh game $(\\alpha , \\beta )=(0.52, 0.48)$."
],
[
"We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data.",
"The SVM classifier achieved an accuracy of $78\\%$ with a $100\\%$ coverage. The Pawlak rough set model using $(\\alpha , \\beta )=(1,0)$ achieves a $100\\%$ accuracy and a coverage ratio of $7.95\\%$, which means it can only classify $7.95\\%$ of the data. The classifier constructed by GTRS with $(\\alpha , \\beta )=(0.52, 0.48)$ reached an accuracy $82.71\\%$ and a coverage $97.49\\%$. which indicates that $97.49\\%$ of data are able to be classified with accuracy of $82.71\\%$. The remaining $2.51\\%$ of data can not be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and present the modified accuracy. The modified accuracy for our approach is then $0.8271\\times 0.9749 + 0.5 \\times 0.0251 =81.89\\%$. Our methods shows significant improvement as compared to Pawlak model and SVM."
],
[
"In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our mode, the semantic features extraction captures the inconsistency in the different structural parts of the sentences and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweets dataset show that our model significantly outperforms Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. Other interesting extensions of our paper may be to use rough set models to extract the linguistic features at document level."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Methodology ::: Dataset",
"Methodology ::: Semantic Feature Extraction",
"Methodology ::: Semantic Feature Extraction ::: Inconsistency in Noun Phrase Structures",
"Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Clauses",
"Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Named Entities and Noun Phrases",
"Methodology ::: Semantic Feature Extraction ::: Word Level Feature Using TF-IDF",
"Methodology ::: GTRS Decision Model",
"Methodology ::: GTRS Decision Model ::: Game Formulation",
"Methodology ::: GTRS Decision Model ::: Repetition Learning Mechanism",
"Experiments",
"Experiments ::: Finding Thresholds with GTRS",
"Experiments ::: Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c47ce28c3329f9819fd0d23952c787c83f9c491c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 7. Experimental results"
],
"extractive_spans": [],
"free_form_answer": "Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 7. Experimental results"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
},
{
"annotation_id": [
"d705a4f4a443a4f5b83f23ee6b98907804f19744"
],
"answer": [
{
"evidence": [
"There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\\!P}$ and $S_{Q\\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\\!P}$ and $D_{Q\\!P}$ denote the discretized variables $S_{N\\!P}$ and $S_{Q\\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23."
],
"extractive_spans": [
"8757 news records"
],
"free_form_answer": "",
"highlighted_evidence": [
"There are 8757 news records in our preprocessed data set. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
},
{
"annotation_id": [
"a024cea7f0607d8f602fd99f334cd53fb3b99e4c"
],
"answer": [
{
"evidence": [
"Satirical news is not based on or does not aim to state the fact. Rather, it uses parody or humor to make statement, criticisms, or just amusements. In order to achieve such effect, contradictions are greatly utilized. Therefore, inconsistencies significantly exist in different parts of a satirical news tweet. In addition, there is a lack of entity or inconsistency between entities in news satire. We extracted these features at semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition by Flair. The inconsistencies in different structures are measured by cosine similarity of word phrases where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurements. A word level feature using tf-idf BIBREF22 is added for robustness."
],
"extractive_spans": [
"Inconsistency in Noun Phrase Structures",
" Inconsistency Between Clauses",
"Inconsistency Between Named Entities and Noun Phrases",
"Word Level Feature Using TF-IDF"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explored three different aspects of inconsistency and designed metrics for their measurements. ",
"A word level feature using tf-idf BIBREF22 is added for robustness."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much improvement do they get?",
"How large is the dataset?",
"What features do they extract?"
],
"question_id": [
"1cbca15405632a2e9d0a7061855642d661e3b3a7",
"018ef092ffc356a2c0e970ae64ad3c2cf8443288",
"de4e180f49ff187abc519d01eff14ebcd8149cad"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Examples of instances comprising the news tweet dataset",
"Table 2. An example of a payoff table",
"Table 3. The Information Table",
"Table 4. Summary of the partial experimental data",
"Table 5. The payoff table",
"Table 6. The repetition of game",
"Table 7. Experimental results"
],
"file": [
"4-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"10-Table6-1.png",
"10-Table7-1.png"
]
} | [
"How much improvement do they get?"
] | [
[
"2004.03788-10-Table7-1.png"
]
] | [
"Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak."
] | 771 |
1904.09545 | Good-Enough Compositional Data Augmentation | We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models. Under this protocol, synthetic training examples are constructed by taking real training examples and replacing (possibly discontinuous) fragments with other fragments that appear in at least one similar environment. The protocol is model-agnostic and useful for a variety of tasks. Applied to neural sequence-to-sequence models, it reduces relative error rate by up to 87% on problems from the diagnostic SCAN tasks and 16% on a semantic parsing task. Applied to n-gram language modeling, it reduces perplexity by roughly 1% on small datasets in several languages. | {
"paragraphs": [
[
"This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data:",
"In language processing problems, we often want models to analyze this dataset compositionally and infer that ( SECREF6 ) is also probable but ( UID7 ) is not:",
"This generalization amounts to to an inference about syntactic categories BIBREF0 : because cat and wug are interchangeable in the environment the...sang, they are also likely interchangeable elsewhere. Human learners make judgments like ( SECREF5 ) about novel lexical items BIBREF1 and fragments of novel languages BIBREF2 . But we do not expect such judgments from unstructured sequence models trained to maximize the likelihood of the training data in ( SECREF1 ).",
"A large body of work in natural language processing provides generalization to data like ( SECREF6 ) by adding structure to the learned predictor BIBREF3 , BIBREF4 , BIBREF5 . But on real-world datasets, such models are typically worse than “black-box” function approximators like neural networks even when the black-box models fail to place probability mass on either example in ( SECREF5 ) BIBREF6 . To the extent that we believe ( SECREF6 ) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale. In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that ( SECREF6 ) is assigned nontrivial probability because it already appears in the training dataset.",
"The basic operation underlying our proposal (which we call geca, for “good-enough compositional augmentation”) is depicted in fig:teaser: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second.",
"geca is crude: as a linguistic principle, it is both limited and imprecise. As discussed in Sections UID17 and SECREF5 , it captures a narrow slice of the many phenomena studied under the heading of “compositionality”, while also making a number of incorrect predictions about real languages. Nevertheless, geca appears to be quite effective across a range of learning problems. In semantic parsing, it gives improvements comparable to the data augmentation approach of BIBREF7 on INLINEFORM0 -calculus expressions, better performance than that approach on a different split of the data designed to test generalization more rigorously, and better performance on a different meaning representation language. Outside of semantic parsing, it solves two representative problems from the scan dataset of BIBREF8 that are synthetic but precise in the notion of compositionality they test. Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of languages."
],
[
"Recent years have seen tremendous success at natural language transduction and generation tasks using black-box function approximators, especially recurrent BIBREF9 and attentional BIBREF10 neural models. With enough training data, these models are often more accurate than than approaches built on traditional tools from the computational linguistics literature—formal models like regular transducers or context-free grammars BIBREF11 can be brittle and challenging to efficiently infer from large datasets.",
"However, models equipped with an explicit (symbolic) generative process have at least one significant advantage over the aforementioned black-box approaches: given a grammar, it is straightforward to precisely characterize how that grammar will extrapolate beyond the examples in a given training set to out-of-distribution data. Indeed, it is often possible for researchers to design the form that this extrapolation will take: smoothed n-gram language models guarantee that no memorization is possible beyond a certain length BIBREF12 ; CCG-based semantic parsers can make immediate use of entity lexicons without having ever seen the lexicon entries used in real sentences BIBREF13 .",
"It is not the case, as sometimes claimed BIBREF14 , that black-box neural models are fundamentally incapable of this kind of predictable generalization—the success of these models at capturing long-range structure in text BIBREF15 and controlled algorithmic data BIBREF16 indicate that some representation of hierarchical structure can be learned given enough data. But the precise point at which this transition occurs is not well-characterized; it is evidently beyond the scale available in many real-world problems.",
"How can we improve the behavior of high-quality black-box models in these settings? There are many sophisticated tools available for improving the function approximators or loss functions themselves—regularization BIBREF17 , posterior regularization BIBREF18 , BIBREF19 , explicit stacks BIBREF20 and composition operators BIBREF21 ; these existing proposals tend to be task- and architecture-specific. But to the extent that the generalization problem can be addressed by increasing the scale of the training data, it is natural to ask whether we can address the problem by increasing this scale artificially—in other words, via data augmentation.",
"Previous work BIBREF7 also studied data augmentation and compositionality in specific setting of learning language-to-logical-form mappings, beginning from the principle that data is compositional if it is generated by a synchronous grammar that relates strings to meanings. The specific approach proposed by BIBREF7 is effective but tailored for semantic parsing; it requires access to structured meaning representations with explicit types and bracketings, which are not available in most NLP applications.",
"Here we aim at a notion of compositionality that is simpler and more general: a bias toward identifying recurring fragments seen at training time, and re-using them in environments distinct from the environments in which they were first observed. This view makes no assumptions about the availability of brackets and types, and is synchronous only to the extent that the notion of a fragment is permitted to include content from both the source and target sides. We will find that it is nearly as effective as the approach of BIBREF7 in the settings for which the latter was designed, but also effective on a variety of problems where it cannot be applied."
],
[
"Consider again the example in fig:teaser. Our data augmentation protocol aims to discover substitutable sentence fragments (highlighted), with the fact a pair of fragments appear in some common sub-sentential environment (underlined) taken as evidence that the fragments belong to a common category. To generate a new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment.",
"Why should we expect this procedure to produce well-formed training examples? The existence of syntactic categories, and the expressibility of well-formedness rules in terms of these abstract categories, is one of the foundational principles of generative approaches to syntax BIBREF22 . The observation that sentence context provides a strong signal about a constitutent's category is in turn the foundation of distributional approaches to language processing BIBREF23 . Combining the two gives the outlines of the above procedure.",
"This combination has a productive history in natural language processing: when fragments are single words, it yields class-based language models BIBREF24 ; when fragments are contiguous spans it yields unsupervised parsers BIBREF0 , BIBREF25 . The present data augmentation scenario is distinguished mainly by the fact that we are unconcerned with producing a complete generative model of data, or with recovering the latent structure implied by the presence of nested syntactic categories. We can still synthesize high-precision examples of well-formed sequences by identifying individual substitutions that are likely to be correct without understanding how they fit into the grammar as a whole.",
"Indeed, if we are not concerned with recovering linguistically plausible analyses, we need not limit ourselves to words or contiguous sentence fragments. We can take",
"as evidence that we can use picks...up wherever we can use puts...down. Indeed, given a translation dataset:",
"we can apply the same principle to synthesize I dax. INLINEFORM0 Dajo. based on the common environment ...marvelously INLINEFORM1 ...maravillosamente. From the perspective of a generalized substitution principle, the alignment problem in machine translation is the same as the class induction problem in language modeling, but with sequences featuring large numbers of gappy fragments and a boundary symbol INLINEFORM2 .",
"The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment.",
"The data augmentation operation that defines geca is formally stated as follows: let INLINEFORM0 denote the substitution of the fragment INLINEFORM1 into the template INLINEFORM2 , and INLINEFORM3 be a representation of the environment in which INLINEFORM4 occurs in INLINEFORM5 . Then,",
"",
" If the training data contains sequences INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , and that INLINEFORM3 and INLINEFORM4 , synthesize a new training example INLINEFORM5 . "
],
[
"Naïve implementation of the boxed operation takes INLINEFORM0 time (where INLINEFORM1 is the number of distinct templates in the dataset and INLINEFORM2 the number of distinct fragments). This can be improved to INLINEFORM3 (where INLINEFORM4 is the number of templates that map to the same environment) by building appropriate data structures:",
"[h] ",
" python f2t = dict(default=set()) fragment -> template t2f = dict(default=set()) template -> fragment e2t = dict(default=set()) env -> template for sentence in dataset: for template, fragment in fragments(sentence): add(f2t[fragment], template) add(t2f[template], fragment) add(e2t[env(template)], template)",
"t2t = dict(default=set()) for fragment in keys(f2t)): for template in f2t[fragment]: for template2 in f2t[fragment]: for newtemplate in e2t[env(template2)] add(t2t[template1], template2)",
"for template1, template2 in t2t: for arg in t2a[template1] if arg not in t2a[template2]: yield fill(template2, arg) Sample geca implementation. ",
"Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27 . The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below."
],
[
"We introduced geca, a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common context, and demonstrated that extra training examples generated with geca lead to improvements on both diagnostic and natural datasets for semantic parsing and language modeling. While the approach is surprisingly effective in its current form, we view these results mostly as an invitation to consider more carefully the role played by representations of sentence fragments in larger questions about compositionality in black-box sequence models. The experiments in this paper all rely on exact string matching; future work might take advantage of learned representations of spans and their environments BIBREF32 , BIBREF33 . More generally, the present results underline the extent to which current models fail to learn simple, context-independent notions of reuse, but also how easy it is to make progress towards addressing this problem without fundamental changes in model architecture."
]
],
"section_name": [
"Introduction",
"Background",
"Approach",
"Implementation",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"b9bfbebf21e1d4c479f7160d1fcabae38ec987ad"
],
"answer": [
{
"evidence": [
"The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment."
],
"extractive_spans": [
"fragments are interchangeable if they occur in at least one lexical environment that is exactly the same"
],
"free_form_answer": "",
"highlighted_evidence": [
"The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e537c83af3bd0d87ec5a65ce2b90e280fd1d890a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a15da1c3350402c7277f6ea754f714e123caed2c"
],
"answer": [
{
"evidence": [
"Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27 . The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.",
"Discussion"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Applications section) We use Wikipedia articles\nin five languages\n(Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams\net al. (2017).\nSelect:\nKinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English",
"highlighted_evidence": [
"The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.\n\nDiscussion"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they determine similar environments for fragments in their data augmentation scheme?",
"Do they experiment with language modeling on large datasets?",
"Which languages do they test on?"
],
"question_id": [
"b68d2549431c524a86a46c63960b3b283f61f445",
"7f5059b4b5e84b7705835887f02a51d4d016316a",
"df79d04cc10a01d433bb558d5f8a51bfad29f46b"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Visualization of the proposed approach: two sentence fragments (a–b, highlighted) which appear in similar environments (a–b, underlined) are identified. Additional sentences in which the first fragment appears (c) are used to synthesize new examples (d) by substituting in the second fragment.",
"Table 1: Sequence match accuracies on SCAN datasets, in which the learner must generalize to new compositional uses of a single lexical item (“jump”) or multi-word modifier (“around right”) when mapping instructions to action sequences (SCAN) or vice-versa (NACS, Bastings et al., 2018). While the sequence-to-sequence model is unable to make any correct generalizations at all, applying GECA enables it to succeed most of the time. Scores are averaged across 10 random seeds.",
"Table 2: Perplexities on low-resource language modeling in English, Kinyarwanda, Lao, Na, Pashto and Tok Pisin. Even with a smoothed n-gram model rather than a high-capacity neural model, applying GECA leads to small but consistent improvements in perplexity (4/6 languages).",
"Table 3: Meaning representation accuracies on the GEOQUERY dataset. On λ-calculus expressions GECA approaches the data augmentation approach of Jia and Liang (2016) on the standard split of the data (“question”) and outperforms it on a split designed to test compositionality (“query”). On SQL expressions, GECA leads to substantial improvements on the query split and achieves state-of-the-art results. Scores are averaged across 10 random seeds.1"
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} | [
"Which languages do they test on?"
] | [
[
"1904.09545-Implementation-5"
]
] | [
"Answer with content missing: (Applications section) We use Wikipedia articles\nin five languages\n(Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams\net al. (2017).\nSelect:\nKinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English"
] | 774 |
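
The substitution operation described in the GECA entry above lends itself to a short, self-contained sketch. The Python code below is a minimal reading of the idea, restricted to single-word fragments whose environment is the entire remaining sentence; it is not the authors' released implementation, and all function and variable names here are illustrative assumptions.

from collections import defaultdict

def geca_augment(sentences):
    # Rough sketch of GECA for single-word fragments: two words that fill the
    # same template (a sentence with one gap) are treated as interchangeable,
    # so each may be substituted into any template the other is seen to fill.
    frag_to_tmpl = defaultdict(set)   # word -> templates it fills
    tmpl_to_frag = defaultdict(set)   # template -> words observed in its gap
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            tmpl = tuple(words[:i]) + ("_",) + tuple(words[i + 1:])
            frag_to_tmpl[w].add(tmpl)
            tmpl_to_frag[tmpl].add(w)

    synthesized = set()
    for frags in tmpl_to_frag.values():
        for frag in frags:                        # each word licenses its templates...
            for other_tmpl in frag_to_tmpl[frag]:
                for other_frag in frags:          # ...for every interchangeable word
                    new = " ".join(other_frag if tok == "_" else tok
                                   for tok in other_tmpl)
                    synthesized.add(new)
    return synthesized - set(sentences)

# Example: "the wug danced" is synthesized from the three observed sentences.
print(sorted(geca_augment(["the cat sang", "the wug sang", "the cat danced"])))
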
1908.02322 | DpgMedia2019: A Dutch News Dataset for Partisanship Detection | We present a new Dutch news dataset with labeled partisanship. The dataset contains more than 100K articles that are labeled on the publisher level and 776 articles that were crowdsourced using an internal survey platform and labeled on the article level. In this paper, we document our original motivation, the collection and annotation process, limitations, and applications. | {
"paragraphs": [
[
"In a survey across 38 countries, the Pew Research Center reported that the global public opposed partisanship in news media BIBREF0 . It is, however, challenging to assess the partisanship of news articles on a large scale. We thus made an effort to create a dataset of articles annotated with political partisanship so that content analysis systems can benefit from it.",
"To construct a dataset of news articles labeled with partisanship, it is required that some annotators read each article and decide whether it is partisan. This is an expensive annotation process. Another way to derive a label for an article is by using the partisanship of the publisher of the article. Previous work has used this method BIBREF1 , BIBREF2 , BIBREF3 . This labeling paradigm is premised on that partisan publishers publish more partisan articles and non-partisan publishers publish more non-partisan articles. Although there would be non-partisan articles published by partisan publishers (and vice versa), and thus labeled wrongly, the assumption ensures more information than noise. Once the partisanship of a publisher is known, the labels of all its articles are known, which is fast and cheap. We created a dataset of two parts. The first part contains a large number of articles that were labeled using the partisanship of publishers. The second part contains a few hundreds of articles that were annotated by readers who were asked to read each article and answer survey questions. In the following sections, we describe the collection and annotation of both parts of the dataset."
],
[
"DpgMedia2019 is a Dutch dataset that was collected from the publications within DPG Media. We took 11 publishers in the Netherlands for the dataset. These publishers include 4 national publishers, Algemeen Dagblad (AD), de Volkskrant (VK), Trouw, and Het Parool, and 7 regional publishers, de Gelderlander, Tubantia, Brabants Dagblad, Eindhovens Dagblad, BN/De Stem PZC, and de Stentor. The regional publishers are collectively called Algemeen Dagblad Regionaal (ADR). A summary of the dataset is shown in Table TABREF3 ."
],
[
"We used an internal database that stores all articles written by journalists and ready to be published to collect the articles. From the database, we queried all articles that were published between 2017 and 2019. We filtered articles to be non-advertisement. We also filtered on the main sections so that the articles were not published under the sports and entertainment sections, which we assumed to be less political. After collecting, we found that a lot of the articles were published by several publishers, especially a large overlap existed between AD and ADR. To deal with the problem without losing many articles, we decided that articles that appeared in both AD and its regional publications belonged to AD. Therefore, articles were processed in the following steps:",
"Remove any article that was published by more than one national publisher (VK, AD, Trouw, and Het Parool). This gave us a list of unique articles from the largest 4 publishers.",
"Remove any article from ADR that overlapped with the articles from national publishers.",
"Remove any article that was published by more than one regional publisher (ADR).",
"The process assured that most of the articles are unique to one publisher. The only exceptions were the AD articles, of which some were also published by ADR. This is not ideal but acceptable as we show in the section UID8 that AD and ADR publishers would have the same partisanship labels. In the end, we have 103,812 articles.",
"To our knowledge, there is no comprehensive research about the partisanship of Dutch publishers. We thus adopted the audience-based method to decide the partisanship of publishers. Within the survey that will be explained in section SECREF11 , we asked the annotators to rate their political leanings. The question asked an annotator to report his or her political standpoints to be extreme-left, left, neutral, right, or extreme-right. We mapped extreme-left to -2, left to -1, center to 0, right to 1, extremely-right to 2, and assigned the value to each annotator. Since each annotator is subscribed to one of the publishers in our survey, we calculated the partisanship score of a publisher by averaging the scores of all annotators that subscribed to the publisher. The final score of the 11 publishers are listed in Table TABREF9 , sorted from the most left-leaning to the most right-leaning.",
"We decided to treat VK, Trouw, and Het Parool as partisan publishers and the rest non-partisan. This result largely accords with that from the news media report from the Pew Research Center in 2018 BIBREF4 , which found that VK is left-leaning and partisan while AD is less partisan.",
"Table TABREF10 shows the final publisher-level dataset of dpgMedia2019, with the number of articles and class distribution."
],
[
"To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices.",
"The reason for using this platform was two-fold. First, the platform provided us with annotators with a higher probability to be competent with the task. Since the survey was distributed to subscribers that pay for reading news, it's more likely that they regularly read newspapers and are more familiar with the political issues and parties in the Netherlands. On the other hand, if we use crowdsourcing platforms, we need to design process to select suitable annotators, for example by nationality or anchor questions to test the annotator's ability. Second, the platform gave us more confidence that an annotator had read the article before answering questions. Since the annotators could choose which articles to annotate, it is more likely that they would rate an article that they had read and had some opinions about.",
"The annotation task ran for around two months in February to April 2019. We collected annotations for 1,536 articles from 3,926 annotators.",
"For the first question, where we asked about the intensity of partisanship, more than half of the annotations were non-partisan. About 1% of the annotation indicated an extreme partisanship, as shown in Table TABREF13 . For the polarity of partisanship, most of the annotators found it not applicable or difficult to decide, as shown in Table TABREF14 . For annotations that indicated a polarity, the highest percentage was given to progressive. Progressive and conservative seemed to be more relevant terms in the Netherlands as they are used more than their counterparts, left and right, respectively.",
"As for the self-rated political standpoint of the annotators, nearly half of the annotators identified themselves as left-leaning, while only around 20% were right-leaning. This is interesting because when deciding the polarity of articles, left and progressive ratings were given much more often than right and conservative ones. This shows that these left-leaning annotators were able to identify their partisanship and rate the articles accordingly.",
"We suspected that the annotators would induce bias in ratings based on their political leaning and we might want to normalize it. To check whether this was the case, we grouped annotators based on their political leaning and calculate the percentage of each option being annotated. In Figure FIGREF16 , we grouped options and color-coded political leanings to compare whether there are differences in the annotation between the groups. We observe that the \"extreme-right\" group used less \"somewhat partisan\", \"partisan\", and \"extremely-partisan\" annotations. This might mean that articles that were considered partisan by other groups were considered \"non-partisan\" or \"impossible to decide\" by this group. We didn't observe a significant difference between the groups. Figure FIGREF17 shows the same for the second question. Interestingly, the \"extreme-right\" group gave a lot more \"right\" and slightly more \"progressive\" ratings than other groups. In the end, we decided to use the raw ratings. How to scale the ratings based on self-identified political leaning needs more investigation.",
"The main question that we are interested in is the first question in our survey. In addition to the 5-point Likert scale that an annotator could choose from (non-partisan to extremely partisan), we also provided the option to choose \"impossible to decide\" because the articles could be about non-political topics. When computing inter-rater agreement, this option was ignored. The remaining 5 ratings were treated as ordinal ratings. The initial Krippendorff's alpha was 0.142, using the interval metric. To perform quality control, we devised some filtering steps based on the information we had. These steps are as follows:",
"Remove uninterested annotators: we assumed that annotators that provided no information were not interested in participating in the task. These annotators always rated \"not possible to decide\" for Q1, 'not applicable' or \"unknown\" for Q2, and provide no textual comment for Q3. There were in total 117 uninterested annotators and their answers were discarded.",
"Remove unreliable annotators: as we didn't have \"gold data\" to evaluate reliability, we used the free text that an annotator provided in Q3 to compute a reliability score. The assumption was that if an annotator was able to provide texts with meaningful partisanship description, he or she was more reliable in performing the task. To do this, we collected the text given by each annotator. We filtered out text that didn't answer the question, such as symbols, 'no idea', 'see above', etc. Then we calculated the reliability score of annotator INLINEFORM0 with equation EQREF21 , where INLINEFORM1 is the number of clean texts that annotator INLINEFORM2 provided in total and INLINEFORM3 is the number of articles that annotator INLINEFORM4 rated. DISPLAYFORM0 ",
"We added one to INLINEFORM0 so that annotators that gave no clean texts would not all end up with a zero score but would have different scores based on how many articles they rated. In other words, if an annotator only rated one article and didn't give textual information, we considered he or she reliable since we had little information. However, an annotator that rated ten articles but never gave useful textual information was more likely to be unreliable. The reliability score was used to filter out annotators that rarely gave meaningful text. The threshold of the filtering was decided by the Krippendorff's alpha that would be achieved after discarding the annotators with a score below the threshold.",
"Remove articles with too few annotations: articles with less than 3 annotations were discarded because we were not confident with a label that was derived from less than 3 annotations.",
"Remove unreliable articles: if at least half of the annotations of an article were \"impossible to decide\", we assumed that the article was not about issues of which partisanship could be decided.",
"Finally, we mapped ratings of 1 and 2 to non-partisan, and 3 to 5 to partisan. A majority vote was used to derive the final label. Articles with no majority were discarded. In the end, 766 articles remained, of which 201 were partisan. Table TABREF24 shows the number of articles and the percentage of partisan articles per publisher. The final alpha value is 0.180."
],
[
"In this section, we analyze the properties and relationship of the two parts (publisher-level and article-level) of the datasets. In Table TABREF25 , we listed the length of articles of the two parts. The reason that this is important is to check whether there are apparent differences between the articles in the two parts of the dataset. We see that the lengths are comparable, which is desired.",
"The second analysis is the relationship between publisher and article partisanship. We want to check whether the assumption of partisan publishers publish more partisan articles is valid for our dataset. To do this, we used the article-level labels and calculated the percentage of partisan articles for each publisher. This value was then compared with the publisher partisanship. We calculated Spearsman's correlation between the publisher partisanship derived from the audience and article content. We take the absolute value of the partisanship in table TABREF9 and that in table TABREF24 . The correlation is 0.21. This low correlation resulted from the nature of the task and publishers that were considered. The partisan publishers in DPG Media publish news articles that are reviewed by professional editors. The publishers are often partisan only on a portion of the articles and on certain topics."
],
[
"We identified some limitations during the process, which we describe in this section.",
"When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable.",
"The article-level annotation task was not as well-defined as on a crowdsourcing platform. We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement."
],
[
"This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised."
],
[
"We would like to thank Johannes Kiesel and the colleagues from Factmata for providing us with the annotation questions they used when creating a hyperpartisan news dataset. We would also like to thank Jaron Harambam, Judith Möller for helping us in asking the right questions for our annotations and Nava Tintarev for sharing her insights in the domain.",
"We list the questions we asked in the partisanship annotation survey, in the original Dutch language and an English translation.",
"Translated"
]
],
"section_name": [
"Introduction",
"Dataset description",
"Publisher-level data",
"Article-level data",
"Analysis of the datasets",
"Limitations",
"Dataset Application",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"a17f9fccfe34d6942254a33298124140c61a75e4"
],
"answer": [
{
"evidence": [
"We identified some limitations during the process, which we describe in this section.",
"When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable.",
"The article-level annotation task was not as well-defined as on a crowdsourcing platform. We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement."
],
"extractive_spans": [],
"free_form_answer": "deciding publisher partisanship, risk annotator bias because of short description text provided to annotators",
"highlighted_evidence": [
"We identified some limitations during the process, which we describe in this section.\n\nWhen deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable.\n\nThe article-level annotation task was not as well-defined as on a crowdsourcing platform. We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"bdc9c99ae01c30a9071f5746ea79ccf9c63ea59f"
],
"answer": [
{
"evidence": [
"This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised."
],
"extractive_spans": [
"partisan news detector"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c07456e06b53b72ed83967e5826fadc98b73cb6f"
],
"answer": [
{
"evidence": [
"To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What limitations are mentioned?",
"What examples of applications are mentioned?",
"Did they crowdsource the annotations?"
],
"question_id": [
"182b6d77b51fa83102719a81862891f49c23a025",
"441886f0497dc84f46ed8c32e8fa32983b5db42e",
"62afbf8b1090e56fdd2a2fa2bdb687c3995477f6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Summary of the two parts of dpgMedia2019 dateset.",
"Table 2: Publisher, number of people of whom the political leaning we know, and the computed partisanship score.",
"Table 3: Number of articles per publisher and class distribution of publisher-level part of dpgMedia2019.",
"Table 4: Distribution of annotations of the strength of partisanship.",
"Figure 1: Percentage of annotation grouped by political leaning and annotation for the intensity of partisanship.",
"Figure 2: Percentage of annotation grouped by political leaning and annotation for the polarity of partisanship.",
"Table 5: Distribution of annotations of the polarity of partisanship.",
"Table 7: Number of articles and percentage of partisan articles by publisher.",
"Table 8: Statistics of length of articles."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"4-Table5-1.png",
"5-Table7-1.png",
"5-Table8-1.png"
]
} | [
"What limitations are mentioned?"
] | [
[
"1908.02322-Limitations-1",
"1908.02322-Limitations-2",
"1908.02322-Limitations-0"
]
] | [
"deciding publisher partisanship, risk annotator bias because of short description text provided to annotators"
] | 775 |
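
The article-level filtering pipeline described in the DpgMedia2019 entry above (reliability scoring, a minimum of three annotations per article, discarding articles dominated by "impossible to decide", and a majority vote over binarized ratings) can be sketched as follows. The reliability formula is reconstructed from the surrounding prose because the extracted equation is only a placeholder, and the threshold value and all names here are illustrative assumptions rather than the authors' exact choices.

from collections import Counter, defaultdict

def reliability(n_clean_texts, n_rated):
    # Assumed reconstruction: add one to the clean-text count and normalize by
    # the number of articles the annotator rated.
    return (n_clean_texts + 1) / n_rated

def derive_article_labels(annotations, annotators, min_reliability=0.5):
    # annotations: iterable of (annotator_id, article_id, rating), where rating
    # is 1..5 or None for "impossible to decide".
    # annotators: annotator_id -> (n_clean_texts, n_rated).
    reliable = {a for a, (c, n) in annotators.items()
                if reliability(c, n) >= min_reliability}
    per_article = defaultdict(list)
    for ann, art, rating in annotations:
        if ann in reliable:
            per_article[art].append(rating)

    labels = {}
    for art, ratings in per_article.items():
        if len(ratings) < 3:
            continue                              # too few annotations
        if 2 * sum(r is None for r in ratings) >= len(ratings):
            continue                              # mostly "impossible to decide"
        votes = Counter("partisan" if r >= 3 else "non-partisan"
                        for r in ratings if r is not None)
        ranked = votes.most_common()
        if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
            continue                              # no majority
        labels[art] = ranked[0][0]
    return labels
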
1911.08962 | CAIL2019-SCM: A Dataset of Similar Case Matching in Legal Domain | In this paper, we introduce CAIL2019-SCM, Chinese AI and Law 2019 Similar Case Matching dataset. CAIL2019-SCM contains 8,964 triplets of cases published by the Supreme People's Court of China. CAIL2019-SCM focuses on detecting similar cases, and the participants are required to check which two cases are more similar in the triplets. There are 711 teams who participated in this year's competition, and the best team has reached a score of 71.88. We have also implemented several baselines to help researchers better understand this task. The dataset and more details can be found from this https URL. | {
"paragraphs": [
[
"Similar Case Matching (SCM) plays a major role in legal system, especially in common law legal system. The most similar cases in the past determine the judgment results of cases in common law systems. As a result, legal professionals often spend much time finding and judging similar cases to prove fairness in judgment. As automatically finding similar cases can benefit to the legal system, we select SCM as one of the tasks of CAIL2019.",
"Chinese AI and Law Challenge (CAIL) is a competition of applying artificial intelligence technology to legal tasks. The goal of the competition is to use AI to help the legal system. CAIL was first held in 2018, and the main task of CAIL2018 BIBREF0, BIBREF1 is predicting the judgment results from the fact description. The judgment results include the accusation, applicable articles, and the term of penalty. CAIL2019 contains three different tasks, including Legal Question-Answering, Legal Case Element Prediction, and Similar Case Matching. Furthermore, we will focus on SCM in this paper.",
"More specifically, CAIL2019-SCM contains 8,964 triplets of legal documents. Every legal documents is collected from China Judgments Online. In order to ensure the similarity of the cases in one triplet, all selected documents are related to Private Lending. Every document in the triplet contains the fact description. CAIL2019-SCM requires researchers to decide which two cases are more similar in a triplet. By detecting similar cases in triplets, we can apply this algorithm for ranking all documents to find the most similar document in the database. There are 247 teams who have participated CAIL2019-SCM, and the best team has reached a score of $71.88$, which is about 20 points higher than the baseline. The results show that the existing methods have made great progress on this task, but there is still much room for improvement.",
"In other words, CAIL2019-SCM can benefit the research of legal case matching. Furthermore, there are several main challenges of CAIL2019-SCM: (1) The difference between documents may be small, and then it is hard to decide which two documents are more similar. Moreover, the similarity is defined by legal workers. We must utilize legal knowledge into this task rather than calculate similarity on the lexical level. (2) The length of the documents is quite long. Most documents contain more than 512 characters, and then it is hard for existing methods to capture document level information.",
"In the following parts, we will give more details about CAIL2019-SCM, including related works about SCM, the task definition, the construction of the dataset, and several experiments on the dataset."
],
[
"We first define the task of CAIL2019-SCM here. The input of CAIL2019-SCM is a triplet $(A,B,C)$, where $A,B,C$ are fact descriptions of three cases. Here we define a function $sim$ which is used for measuring the similarity between two cases. Then the task of CAIL2019-SCM is to predict whether $sim(A,B)>sim(A,C)$ or $sim(A,C)>sim(A,B)$."
],
[
"To ensure the quality of the dataset, we have several steps of constructing the dataset. First, we select many documents within the range of Private Lending. However, although all cases are related to Private Lending, they are still various so that many cases are not similar at all. If the cases in the triplets are not similar, it does not make sense to compare their similarities. To produce qualified triplets, we first annotated some crucial elements in Private Lending for each document. The elements include:",
"The properties of lender and borrower, whether they are a natural person, a legal person, or some other organization.",
"The type of guarantee, including no guarantee, guarantee, mortgage, pledge, and others.",
"The usage of the loan, including personal life, family life, enterprise production and operation, crime, and others.",
"The lending intention, including regular lending, transfer loan, and others.",
"Conventional interest rate method, including no interest, simple interest, compound interest, unclear agreement, and others.",
"Interest during the agreed period, including $[0\\%,24\\%]$, $(24\\%,36\\%]$, $(36\\%,\\infty )$, and others.",
"Borrowing delivery form, including no lending, cash, bank transfer, online electronic remittance, bill, online loan platform, authorization to control a specific fund account, unknown or fuzzy, and others.",
"Repayment form, including unpaid, partial repayment, cash, bank transfer, online electronic remittance, bill, unknown or fuzzy, and others.",
"Loan agreement, including loan contract, or borrowing, “WeChat, SMS, phone or other chat records”, receipt, irrigation, repayment commitment, guarantee, unknown or fuzzy and others.",
"After annotating these elements, we can assume that cases with similar elements are quite similar. So when we construct the triplets, we calculate the tf-idf similarity and elemental similarity between cases and select those similar cases to construct our dataset. We have constructed 8,964 triples in total by these methods, and the statistics can be found from Table TABREF13. Then, legal professionals will annotate every triplet to see whether $sim(A,B)>sim(A,C)$ or $sim(A,B)<sim(A,C)$. Furthermore, to ensure the quality of annotation, every document and triplet is annotated by at least three legal professionals to reach an agreement."
],
[
"In this paper, we propose a new dataset, CAIL2019-SCM, which focuses on the task of similar case matching in the legal domain. Compared with existing datasets, CAIL2019-SCM can benefit the case matching in the legal domain to help the legal partitioners work better. Experimental results also show that there is still plenty of room for improvement."
]
],
"section_name": [
"Introduction",
"Overview of Dataset ::: Task Definition",
"Overview of Dataset ::: Dataset Construction and Details",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a445494f73d90d748ad20ef35f171bba599e01ed"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"c4069eec4fc36658322077c1cc620c55c7410ef7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of baselines and scores of top 3 participants on valid and test datasets."
],
"extractive_spans": [],
"free_form_answer": "CNN, LSTM, BERT",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results of baselines and scores of top 3 participants on valid and test datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What was the best team's system?",
"What are the baselines?"
],
"question_id": [
"770b4ec5c9a9706fef89a9aae45bb3e713d6b8ee",
"a379c380ac9f67f824506951444c873713405eed"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset"
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: The number of triplets in different stages of CAIL2019-SCM.",
"Table 2: Results of baselines and scores of top 3 participants on valid and test datasets."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What are the baselines?"
] | [
[
"1911.08962-4-Table2-1.png"
]
] | [
"CNN, LSTM, BERT"
] | 777 |
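
Because each CAIL2019-SCM instance asks only whether sim(A, B) > sim(A, C), a simple lexical baseline for the entry above can be written in a few lines. The sketch below uses character-level tf-idf with cosine similarity (scikit-learn is assumed to be available); it mirrors the tf-idf signal the authors used when constructing triplets, but it is not one of the official CNN/LSTM/BERT baselines reported for the task, and the function names are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def predict_triplets(triplets):
    # triplets: list of (A, B, C) fact descriptions; returns "B" when case B is
    # judged more similar to A than case C is, and "C" otherwise.
    docs = [doc for triplet in triplets for doc in triplet]
    # Character n-grams sidestep the need for a Chinese word segmenter.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 3)).fit(docs)
    predictions = []
    for a, b, c in triplets:
        m = vectorizer.transform([a, b, c])
        sim_ab = cosine_similarity(m[0], m[1])[0, 0]
        sim_ac = cosine_similarity(m[0], m[2])[0, 0]
        predictions.append("B" if sim_ab > sim_ac else "C")
    return predictions

def accuracy(predictions, gold):
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)
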
2002.03438 | Limits of Detecting Text Generated by Large-Scale Language Models | Some consider large-scale language models that can generate long and coherent pieces of text as dangerous, since they may be used in misinformation campaigns. Here we formulate large-scale language model output detection as a hypothesis testing problem to classify text as genuine or generated. We show that error exponents for particular language models are bounded in terms of their perplexity, a standard measure of language generation performance. Under the assumption that human language is stationary and ergodic, the formulation is extended from considering specific language models to considering maximum likelihood language models, among the class of k-order Markov approximations; error probabilities are characterized. Some discussion of incorporating semantic side information is also given. | {
"paragraphs": [
[
"Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models.",
"In establishing fundamental limits of detection, we consider two settings. First, we characterize the error exponent for a particular language model in terms of standard performance metrics such as cross-entropy and perplexity. As far as we know, these informational performance metrics had not previously emerged from a formal operational theorem. Second, we consider not just a setting with a specific language model with given performance metrics, but rather consider a universal setting where we take a generic view of language models as empirical maximum likelihood $k$-order Markov approximations of stationary, ergodic random processes. Results on estimation of such random processes are revisited in the context of the error probability, using a conjectured extension of the reverse Pinsker inequality. In closing, we discuss how the semantics of generated text may be a form of side information in detection."
],
[
"Consider a language $L$ like English, which has tokens drawn from a finite alphabet $\\mathcal {A}$; tokens can be letters, words, or other such symbols. A language model assigns probabilities to sequences of tokens $(a_1,a_2,\\ldots ,a_m)$ so the more likely a sequence is in $L$, the greater its probability. Language models discussed in Sec. SECREF1 estimate this probability $Q$ as a product of each token's probability $q$ given its preceding tokens:",
"a Markov structure. Some language models including BERT BIBREF19 consider tokens on both sides in assessing probabilities (cloze task) and are Markov random field models rather than Markov language models BIBREF20. Using training data, a language model aims to learn a distribution $Q$ that is close to the empirical distribution $P$ of the language $L$.",
"Basic language models can be extended to be conditional language models so as to allow control of style. In certain models, such as CTRL, one can generate text only from a conditioning variable without the need for an initial prompt BIBREF7. An example of text generated from the conditioning text:",
"https://www.cnn.com/2007/09/20/us-president-meets-british-pm",
"is the following:",
"LONDON, England (CNN) – U.S. President George W. Bush met with British Prime Minister Tony Blair on Monday to discuss the war in Iraq, according to a statement from Blair's office.\\n\\nThe meeting was held at 10 Downing Street and lasted about an hour.\\n\\n\"The two leaders discussed Iraq and other international issues of mutual concern,\" said Blair.\\n\\nBush, who is scheduled to meet Wednesday with Russian President Vladimir Putin, will also visit Germany for talks later this week.\\n\\nIn his statement, Blair said, \"We agreed that we should continue our efforts together to bring peace and stability to Iraq. We both reaffirmed our commitment to working closely together, as well as to continuing to work constructively toward achieving lasting security and prosperity throughout the Middle East region.\"\\n\\nBush's trip comes after he visited Britain last week where he spoke out against terrorism while visiting Buckingham Palace.\\n\\nHe has been criticized by some lawmakers over what they say are insufficient military resources being devoted to fighting terrorism.",
"Notwithstanding their limitations BIBREF21, BIBREF22, the standard performance metrics used for assessing language models are the cross-entropy and the perplexity, which quantify how close $Q$ is to $P$. As far as we know, these performance measures have been proposed through the intuitive notion that small values of these quantities seem to correspond, empirically, to higher-quality generated text as judged by people. Within the common task framework BIBREF10, there are leaderboards that assess the perplexity of language models over standard datasets such as WikiText-103 BIBREF23.",
"The cross-entropy of $Q$ with respect to $P$ is defined as:",
"which simplifies, using standard information-theoretic identities, to:",
"where $H(\\cdot )$ with one argument is the Shannon entropy and $D_{\\mathrm {KL}}( \\cdot || \\cdot )$ is the Kullback-Leibler divergence (relative entropy). For a given language $L$ being modeled, the first term $H(P)$ can be thought of as fixed BIBREF24. The second term $D_{\\mathrm {KL}}(P || Q)$ can be interpreted as the excess information rate needed to represent a language using a mismatched probability distribution BIBREF25.",
"Perplexity is also a measure of uncertainty in predicting the next letter and is simply defined as:",
"when entropies are measured in nats, rather than bits.",
"For a given language, we can consider the ratio of perplexity values or the difference of cross-entropy values of two models $Q_1$ and $Q_2$ as a language-independent notion of performance gap:"
],
[
"Recall that the distribution of authentic text is denoted $P$ and the distribution of text generated by the language model is $Q$. Suppose we have access to $n$ tokens of generated text from the language model, which we call $Y_1, Y_2, Y_3, \\ldots , Y_n$. We can then formalize a hypothesis test as:",
"If we assume the observed tokens are i.i.d., that only makes the hypothesis test easier than the non-i.i.d. case seen in realistic text samples, and therefore its performance acts as a bound.",
"There are general characterizations of error probability of hypothesis tests as follows BIBREF26. For the Neyman-Pearson formulation of fixing the false alarm probability at $\\epsilon $ and maximizing the true detection probability, it is known that the error probability satisfies:",
"for $n$ i.i.d. samples, where $\\stackrel{.}{=}$ indicates exponential equality. Thus the error exponent is just the divergence $D_{\\mathrm {KL}}(P || Q))$. For more general settings (including ergodic settings), the error exponent is given by the asymptotic Kullback-Leibler divergence rate, defined as the almost-sure limit of:",
"if the limit exists, where $P_n$ and $Q_n$ are the null and alternate joint densities of $(Y_1,\\ldots ,Y_n)$, respectively, see further details in BIBREF27, BIBREF28.",
"When considering Bayesian error rather than Neyman-Pearson error, for i.i.d. samples, we have the following upper bound:",
"where $C(\\cdot ,\\cdot )$ is Chernoff information. Here we will focus on the Neyman-Pearson formulation rather than the Bayesian one."
],
[
"With the preparation of Sec. SECREF3, we can now establish statistical limits for detection of LM-generated texts. We first consider a given language model, and then introduce a generic model of language models."
],
[
"Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\\mathrm {PPL}(P,Q)$.",
"We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is:",
"and similar results hold for ergodic observations.",
"Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.",
"Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit."
],
[
"Now rather than considering a particular language model, we consider bounding the error probability in detection of the outputs of an empirical maximum likelihood (ML) language model. We specifically consider the empirical ML model among the class of models that are $k$-order Markov approximations of language $L$, which is simply the empirical plug-in estimate.",
"Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\\mathcal {A}$ denoted $X = \\lbrace X_i, -\\infty < i < \\infty \\rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and",
"This is sometimes called the smoothing requirement.",
"We further introduce an additional property of random processes that we assume for language $L$. We define the continuity rate of the process $X$ as:",
"We further let $\\gamma = \\sum _{k=1}^{\\infty } \\gamma (k)$,",
"and",
"If $\\gamma < \\infty $, then the process has summable continuity rate. These specific technical notions of smoothing and continuity are taken from the literature on estimation of stationary, ergodic random processes BIBREF30.",
"As such, the hypothesis test we aim to consider here is between a non-null, stationary, ergodic process with summable continuity rate (genuine language) and its empirical $k$-order Markov approximation based on training data (language model output). We think of the setting where the language model is trained on data with many tokens, a sequence of very long length $m$. For example, the CTRL language model was trained using 140 GB of text BIBREF7.",
"We think of the Markov order $k$ as a large value and so the family of empirical $k$-order Markov approximations encompasses the class of neural language models like GPT-2 and CTRL, which are a fortiori Markov in structure. Empirical perplexity comparisons show that LSTM and similar neural language models have Markov order as small as $k = 13$ BIBREF31. The appropriate Markov order for large-scale neural language models has not been investigated empirically, but is thought to scale with the neural network size.",
"Now we aim to bound the error exponent in hypothesis testing, by first drawing on a bound for the Ornstein $\\bar{d}$-distance between a stationary, ergodic process and its Markov approximation, due to Csiszar and Talata BIBREF30. Then we aim to relate the Ornstein $\\bar{d}$-distance to the Kullback-Leibler divergence (from error exponent expressions), using a generalization of the so-called reverse Pinsker inequality BIBREF32, BIBREF33.",
"Before proceeding, let us formalize a few measures. Let the per-letter Hamming distance between two strings $x_1^m$ and $y_1^m$ be $d_m(x_1^m,y_1^m)$. Then the Ornstein $\\bar{d}$-distance between two random sequences $X_1^m$ and $Y_1^m$ with distributions $P_X$ and $P_Y$ is defined as:",
"where the minimization is over all joint distributions whose marginals equal $P_X$ and $P_Y$.",
"Let $N_m(a_1^k)$ be the number of occurrences of the string $a_1^k$ in the sample $X_1^m$. Then the empirical $k$-order Markov approximation of a random process $X$ based on the sample $X_1^m$ is the stationary Markov chain of order $k$ whose transition probabilities are the following empirical conditional probabilities:",
"We refer to this empirical approximation as $\\hat{X}[k]_1^m$.",
"Although they give more refined finitary versions, let us restate Csiszár and Talata's asymptotic result on estimating Markov approximations of stationary, ergodic processes from data. The asymptotics are in the size of the training set, $m \\rightarrow \\infty $, and we let the Markov order scale logarithmically with $m$.",
"Theorem 1 (BIBREF30) Let $X$ be a non-null stationary ergodic process with summable continuity rate. Then for any $\\nu > 0$, the empirical $(\\nu \\log m)$-order Markov approximation $\\hat{X}$ satisfies:",
"eventually almost surely as $m\\rightarrow \\infty $ if $\\nu < \\tfrac{\\mu }{|\\log p_m|}$.",
"Now we consider Kullback-Leibler divergence. Just as Marton had extended Pinsker's inequality between variational distance and Kullback-Leibler divergence to an inequality between Ornstein's $\\bar{d}$-distance and Kullback-Leibler divergence BIBREF34, BIBREF35 as given in Theorem UNKREF7 below, is it possible to make a similar conversion for the reverse Pinsker inequality when there is a common finite alphabet $\\mathcal {A}$?",
"Theorem 2 (BIBREF35) Let $X$ be a stationary random process from a discrete alphabet $\\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\\mathcal {A}$,",
"for a computable constant $u$.",
"We conjecture that one can indeed convert the reverse Pinsker inequality BIBREF32:",
"for two probability distributions $P$ and $Q$ defined on a common finite alphabet $\\mathcal {A}$, where $Q_{\\min } = \\min _{a\\in \\mathcal {A}} Q(a)$. That is, we make the following conjecture.",
"Conjecture 1 Let $X$ be a stationary random process from a finite alphabet $\\mathcal {A}$. Then for any other random process $Y$ defined on the same alphabet $\\mathcal {A}$,",
"for some constant $\\tilde{K}$.",
"If this generalized reverse Pinsker inequality holds, it implies the following further bound on the Kullback-Leibler divergence and therefore the error exponent of the detection problem for the empirical maximum likelihood Markov language model.",
"Conjecture 2 Let $X$ be a non-null stationary ergodic process with summable continuity rate defined on the finite alphabet $\\mathcal {A}$. Then for any $\\nu > 0$, the empirical $(\\nu \\log m)$-order Markov approximation $\\hat{X}$ satisfies:",
"eventually almost surely as $m\\rightarrow \\infty $ if $\\nu < \\tfrac{\\mu }{|\\log p_m|}$, for some constant $\\hat{K}$.",
"Under the conjecture, we have a precise asymptotic characterization of the error exponent in deciding between genuine text and text generated from the empirical maximum likelihood language model, expressed in terms of basic parameters of the language, and of the training data set."
],
[
"Motivated by the problem of detecting machine-generated misinformation text that may have deleterious societal consequences, we have developed a formal hypothesis testing framework and established limits on the error exponents. For the case of specific language models such as GPT-2 or CTRL, we provide a precise operational interpretation for the perplexity and cross-entropy. For any future large-scale language model, we also conjecture a precise upper bound on the error exponent.",
"It has been said that “in AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it: Why sniff out other people’s fantasy creations when you can design your own? `There's no money to be made out of detecting these things,' [Nasir] Memon said” BIBREF36. Here we have tried to demonstrate that there are, at least, interesting research questions on the detection side, which may also inform practice.",
"As we had considered previously in the context of deepfake images BIBREF17, it is also of interest to understand how error probability in detection parameterizes the dynamics of information spreading processes in social networks, e.g. in determining epidemic thresholds.",
"Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework."
],
[
"Discussions with Bryan McCann, Kathy Baxter, and Miles Brundage are appreciated."
]
],
"section_name": [
"Introduction",
"Problem Formulation and Basics ::: Language Models and their Performance Metrics",
"Problem Formulation and Basics ::: Hypothesis Test and General Error Bounds",
"Limits Theorems",
"Limits Theorems ::: Given Language Model",
"Limits Theorems ::: Optimal Language Model",
"Discussion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"ff0d09b3e36d675898dc6920dee3ae76e8342d75"
],
"answer": [
{
"evidence": [
"Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework."
],
"extractive_spans": [],
"free_form_answer": "No feature is given, only discussion that semantic features are use in practice and yet to be discovered how to embed that knowledge into statistical decision theory framework.",
"highlighted_evidence": [
"Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cec4411151ffb10823c18d31b4607c95c6bfca8e"
],
"answer": [
{
"evidence": [
"Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\\mathrm {PPL}(P,Q)$.",
"We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is:",
"and similar results hold for ergodic observations.",
"Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.",
"Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit."
],
"extractive_spans": [
"Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text."
],
"free_form_answer": "",
"highlighted_evidence": [
"Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\\mathrm {PPL}(P,Q)$.\n\nWe can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is:\n\nand similar results hold for ergodic observations.\n\nSince we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.\n\nThus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a4c3159229420db558ca81f67df37778031c8515"
],
"answer": [
{
"evidence": [
"Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\\mathcal {A}$ denoted $X = \\lbrace X_i, -\\infty < i < \\infty \\rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and",
"This is sometimes called the smoothing requirement."
],
"extractive_spans": [],
"free_form_answer": "It is not completely valid for natural languages because of diversity of language - this is called smoothing requirement.",
"highlighted_evidence": [
"Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\\mathcal {A}$ denoted $X = \\lbrace X_i, -\\infty < i < \\infty \\rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and\n\nThis is sometimes called the smoothing requirement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What semantic features help in detecting whether a piece of text is genuine or generated? of ",
"Which language models generate text that can be easier to classify as genuine or generated?",
"Is the assumption that natural language is stationary and ergodic valid?"
],
"question_id": [
"334f90bb715d8950ead1be0742d46a3b889744e7",
"53c8416f2983e07a7fa33bcb4c4281bbf49c8164",
"5b2480c6533696271ae6d91f2abe1e3a25c4ae73"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"What semantic features help in detecting whether a piece of text is genuine or generated? of ",
"Is the assumption that natural language is stationary and ergodic valid?"
] | [
[
"2002.03438-Discussion-3"
],
[
"2002.03438-Limits Theorems ::: Optimal Language Model-1",
"2002.03438-Limits Theorems ::: Optimal Language Model-2"
]
] | [
"No feature is given, only discussion that semantic features are use in practice and yet to be discovered how to embed that knowledge into statistical decision theory framework.",
"It is not completely valid for natural languages because of diversity of language - this is called smoothing requirement."
] | 778 |
1810.12885 | ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension | We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning. Experiments on this dataset demonstrate that the performance of state-of-the-art MRC systems fall far behind human performance. ReCoRD represents a challenge for future research to bridge the gap between human and machine commonsense reading comprehension. ReCoRD is available at http://nlp.jhu.edu/record. | {
"paragraphs": [
[
"[color=red!20,size=,fancyline,caption=,disable]ben:It is a little weird that RECORD is not spelled out in the abstract, but especially odd that it isn't spelled out in the Introduction. I would remove the footnote, put that content in the Introduction",
"[color=red!20,size=,fancyline,caption=,disable]ben:@kev agree. ... Human and Machine Commonsense Reading Comprehension",
"[color=red!20,size=,fancyline,caption=,disable]ben:Methods in machine reading comprehension (MRC) are driven by the datasets available – such as curated by deepmind-cnn-dailymail, cbt, squad, newsqa, and msmarco – where an MRC task is commonly defined as answering a question given some passage. However ...",
"Machine reading comprehension (MRC) is a central task in natural language understanding, with techniques lately driven by a surge of large-scale datasets BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , usually formalized as a task of answering questions given a passage. An increasing number of analyses BIBREF5 , BIBREF6 , BIBREF7 have revealed that a large portion of questions in these datasets can be answered by simply matching the patterns between the question and the answer sentence in the passage. While systems may match or even outperform humans on these datasets, our intuition suggests that there are at least some instances in human reading comprehension that require more than what existing challenge tasks are emphasizing. [color=red!20,size=,fancyline,caption=,disable]ben:This \"thus\" claim is far too strong. You haven't cited anything that says humans *don't* rely on simple pattern matching, you just rely on an implicit assumption that 'surely humans must be doing something complicated when they read'. If a system performs as well as a human on a task, the conclusion shouldn't immediately be that the task is too easy, it should more subtly be that new datasets are then needed to see if the inference mechanisms hold up, where the creation of the datasets can be based based on an explicitly stated intuition that humans may rely on more than pattern matching. It is a hypothesis at this point in the Introduction, that systems doing well on earlier datasets won't also do well on yours. You expect they will fail, and even design the dataset specifically around their failure cases. [color=red!20,size=,fancyline,caption=,disable]ben:I would say: While systems may match or even outperform humans on these datasets, our intuition suggests that there are at least some instances in human reading comprehension that require more than what existing challenge tasks are stressing. One primary type of questions these datasets lack are the ones that require reasoning over common sense or understanding across multiple sentences in the passage BIBREF2 , BIBREF3 . [color=red!20,size=,fancyline,caption=,disable]ben:This statement is given without citation: why do you claim that common sense is missing? Do you provide an analysis later in this paper that supports it? If so, provide a forward reference. If you can cite earlier work, do so. Otherwise, remove or soften this statement, e.g., \"We hypothesize that one type of question ...\". And then in next sentence, rather than \"To overcome this limitation\", which you haven't proven yet actually exists, you would say: \"To help evaluate this question, we introduce ...\" [color=red!20,size=,fancyline,caption=,disable]ben:rather than \"most of which require\", say \"most of which seem to require some aspect of reasoning beyond immediate pattern matching\". The SWAG / BERT case should be fresh in your mind as you write this introduction, and where-ever you are tempted to declare things in absolute terms. The more you go on the record as THIS DATASET REQUIRES COMMONSENSE then the more you look silly later if someone finds a 'trick' to solve it. A more honest and safer way to put this is to exactly reference the SWAG/BERT issue at some point in this paper, acknowledging that prior claims to have constructed commonsense datasets have been shown to either be false, or to imply that commonsense reasoning can be equated to large scale language modeling. 
You can cite Rachel's Script Induction as Language Modeling paper, JOCI, and the reporting bias article, perhaps all in a footnote, when commenting that researchers have previously raised concerns about the idea that all of common sense can be derived from corpus co-occurrence statistics.",
"To overcome this limitation, we introduce a large-scale dataset for reading comprehension, ReCoRD (), which consists of over 120,000 examples, most of which require deep commonsense reasoning. ReCoRD is an acronym for the Reading Comprehension with Commonsense Reasoning Dataset.",
"fig:example shows a ReCoRD example: the passage describes a lawsuit claiming that the band “Led Zeppelin” had plagiarized the song “Taurus” to their most iconic song, “Stairway to Heaven”. The cloze-style query asks what does “Stairway to Heaven” sound similar to. To find the correct answer, we need to understand from the passage that “a copyright infringement case alleges that `Stairway to Heaven' was taken from `Taurus'”, and from the bullet point that “these two songs are claimed similar”. Then based on the commonsense knowledge that “if two songs are claimed similar, it is likely that (parts of) these songs sound almost identical”, we can reasonably infer that the answer is “Taurus”. [color=purple!20,size=,fancyline,caption=,disable]kev:This example is good, but you might need to make sure the reader reads the whole passage first or else it may be hard to follow. Maybe add a few more sentences to explain Figure 1 in the paragraph here.",
"Differing from most of the existing MRC datasets, all queries and passages in ReCoRD are automatically mined from news articles, which maximally reduces the human elicitation bias BIBREF8 , BIBREF9 , BIBREF10 , and the data collection method we propose is cost-efficient. [color=purple!20,size=,fancyline,caption=,disable]kev:You should have one of these comparison tables that lists multiple MRC datasets and compares different features Further analysis shows that a large portion of ReCoRD requires commonsense reasoning.",
"Experiments on ReCoRD demonstrate that human readers are able to achieve a high performance at 91.69 F1, whereas the state-of-the-art MRC models fall far behind at 46.65 F1. Thus, ReCoRD presents a real challenge for future research to bridge the gap between human and machine commonsense reading comprehension. [color=red!20,size=,fancyline,caption=,disable]ben:this is a bulky URL: I will pay the small fee to register some domain name that is more slick than this [color=red!20,size=,fancyline,caption=,disable]ben:about the leaderboard on the website: I think it a little misleading to have Google Brain and IBM Watson, etc. as the names on the leaderboard, if it is really you running their code. Better would be \"JHU (modification of Google Brain system)\", \"JHU (modification of IBM Watson system)\", ... ."
],
[
"A program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows. – mccarthy59",
"Commonsense Reasoning in MRC As illustrated by the example in fig:example, the commonsense knowledge “if two songs are claimed similar, it is likely that (parts of) these songs sound almost identica” is not explicitly described in the passage, but is necessary to acquire in order to generate the answer. Human is able to infer the answer because the commonsense knowledge is commonly known by nearly all people. Our goal is to evaluate whether a machine is able to learn such knowledge. However, since commonsense knowledge is massive and mostly implicit, defining an explicit free-form evaluation is challenging BIBREF11 . Motivated by mccarthy59, we instead evaluate a machine's ability of commonsense reasoning – a reasoning process requiring commonsense knowledge; that is, if a machine has common sense, it can deduce for itself the likely consequences or details of anything it is told and what it already knows rather than the unlikely ones. To formalize it in MRC, given a passage $\\mathbf {p}$ (i.e., “anything it is told” and “what it already knows”), and a set of consequences or details $\\mathcal {C}$ which are factually supported by the passage $\\mathbf {p}$ with different likelihood, if a machine $\\mathbf {M}$ has common sense, it can choose the most likely consequence or detail $\\mathbf {c}^*$ from $\\mathcal {C}$ , i.e., ",
"$$\\mathbf {c}^* = \\operatornamewithlimits{arg\\,max}_{\\mathbf {c} \\in \\mathcal {C}}P(\\mathbf {c}\\mid \\mathbf {p},\\mathbf {M}).$$ (Eq. 2) ",
"[color=purple!20,size=,fancyline,caption=,disable]kev:What are the properties of $o$ ? What can be a consequence? Be more specific or give examples.",
"Task Definition With the above discussion, we propose a specific task to evaluate a machine's ability of commonsense reasoning in MRC: as shown in fig:example, given a passage $\\mathbf {p}$ describing an event, a set of text spans $\\mathbf {E}$ marked in $\\mathbf {p}$ , and a cloze-style query $Q(\\mathbf {X})$ with a missing text span indicated by $\\mathbf {X}$ , a machine $\\mathbf {M}$ is expected to act like human, reading the passage $\\mathbf {p}$ and then using its hidden commonsense knowledge to choose a text span $\\mathbf {e}\\in \\mathbf {E}$ that best fits $\\mathbf {X}$ , i.e., ",
"$$\\mathbf {e}^* = \\operatornamewithlimits{arg\\,max}_{\\mathbf {e} \\in \\mathbf {E}}P(Q(\\mathbf {e})\\mid \\mathbf {p},\\mathbf {M}).$$ (Eq. 3) ",
"Once the cloze-style query $Q(\\mathbf {X})$ is filled in by a text span $\\mathbf {e}$ , the resulted statement $Q(\\mathbf {e})$ becomes a consequence or detail $\\mathbf {c}$ as described in eq:csr-in-mrc, which is factually supported by the passage with certain likelihood.",
"[color=purple!20,size=,fancyline,caption=,disable]kev:There's a disconnect between this paragraph and the previous one. How do you jump from $o$ to Q(e) and the ineqality to argmax? Also, I'm not sure if \"cloze\" is defined anywhere: you might need a one-sentence explanation in case the reader is not familiar."
],
[
"[color=purple!20,size=,fancyline,caption=,disable]kev:First add motivation about general philosophy of data collection We describe the framework for automatically generating the dataset, ReCoRD, for our task defined in eq:task, which consists of passages with text spans marked, cloze-style queries, and reference answers. We collect ReCoRD in four stages as shown in Figure 2 : (1) curating CNN/Daily Mail news articles, (2) generating passage-query-answers triples based on the news articles, (3) filtering out the queries that can be easily answered by state-of-the-art MRC models, and (4) filtering out the queries ambiguous to human readers."
],
[
"We choose to create ReCoRD by exploiting news articles, because the structure of news makes it a good source for our task: normally, the first few paragraphs of a news article summarize the news event, which can be used to generate passages of the task; and the rest of the news article provides consequences or details of the news event, which can be used to generate queries of the task. In addition, news providers such as CNN and Daily Mail supplement their articles with a number of bullet points BIBREF12 , BIBREF13 , BIBREF0 , which outline the highlights of the news and hence form a supplemental source for generating passages.",
"We first downloaded CNN and Daily Mail news articles using the script provided by BIBREF0 , and then sampled 148K articles from CNN and Daily Mail. In these articles, named entities and their coreference information have been annotated by a Google NLP pipeline, and will be used in the second stage of our data collection. Since these articles can be easily downloaded using the public script, we are concerned about potential cheating if using them as the source for generating the dev./test datasets. Therefore, we crawled additional 22K news articles from the CNN and Daily Mail websites. These crawled articles have no overlap with the articles used in BIBREF0 . We then ran the state-of-the-art named entity recognition model BIBREF14 and the end-to-end coreference resolution model BIBREF15 provided by AllenNLP BIBREF16 to annotate the crawled articles. Overall, we have collected 170K CNN/Daily Mail news articles with their named entities and coreference information annotated."
],
[
"All passages, queries and answers in ReCoRD were automatically generated from the curated news articles. fig:example-for-stage2 illustrates the generation process. (1) we split each news article into two parts as described in sec:news-curation: the first few paragraphs which summarize the news event, and the rest of the news which provides the details or consequences of the news event. These two parts make a good source for generating passages and queries of our task respectively. (2) we enriched the first part of news article with the bullet points provided by the news editors. The first part of news article, together with the bullet points, is considered as a candidate passage. To ensure that the candidate passages are informative enough, we required the first part of news article to have at least 100 tokens and contain at least four different entities. (3) for each candidate passage, the second part of its corresponding news article was split into sentences by Stanford CoreNLP BIBREF17 . Then we selected the sentences that satisfy the following conditions as potential details or consequences of the news event described by the passage:",
"[itemsep=0pt,topsep=6pt,leftmargin=10pt]",
"Sentences should have at least 10 tokens, as longer sentences contain more information and thus are more likely to be inferrable details or consequences.",
"Sentences should not be questions, as we only consider details or consequences of a news event, not questions.",
"Sentences should not have 3-gram overlap with the corresponding passage, so they are less likely to be paraphrase of sentences in the passage.",
"Sentences should have at least one named entity, so that we can replace it with $\\mathbf {X}$ to generate a cloze-style query.",
"All named entities in sentences should have precedents in the passage according to coreference, so that the sentences are not too disconnected from the passage, and the correct entity can be found in the passage to fill in $\\mathbf {X}$ .",
"Finally, we generated queries by replacing entities in the selected sentences with $\\mathbf {X}$ . We only replaced one entity in the selected sentence each time, and generated one cloze-style query. Based on coreference, the precedents of the replaced entity in the passage became reference answers to the query. The passage-query-answers generation process matched our task definition in sec:task, and therefore created queries that require some aspect of reasoning beyond immediate pattern matching. In total, we generated 770k (passage, query, answers) triples."
],
[
"As discussed in BIBREF5 , BIBREF6 , BIBREF18 , BIBREF7 , existing MRC models mostly learn to predict the answer by simply paraphrasing questions into declarative forms, and then matching them with the sentences in the passages. To overcome this limitation, we filtered out triples whose queries can be easily answered by the state-of-the-art MRC architecture, Stochastic Answer Networks (SAN) BIBREF19 . We choose SAN because it is competitive on existing MRC datasets, and it has components widely used in many MRC architectures such that low bias was anticipated in the filtering (which is confirmed by evaluation in sec:evaluation). We used SAN to perform a five-fold cross validation on all 770k triples. The SAN models correctly answered 68% of these triples. We excluded those triples, and only kept 244k triples that could not be answered by SAN. These triples contain queries which could not be answered by simple paraphrasing, and other types of reasoning such as commonsense reasoning and multi-sentence reasoning are needed. [color=purple!20,size=,fancyline,caption=,disable]kev:Briefly mention why you use SAN, i.e. it's competitive on current benchmarks like SQuAD. Also mention whether this may cause some bias in the filtering, compared to using some other system, and why your methodology is still ok."
],
[
"Since the first three stages of data collection were fully automated, the resulted triples could be noisy and ambiguous to human readers. Therefore, we employed crowdworkers to validate these triples. We used Amazon Mechanical Turk for validation. Crowdworkers were required to: 1) have a 95% HIT acceptance rate, 2) a minimum of 50 HITs, 3) be located in the United States, Canada, or Great Britain, and 4) not be granted the qualification of poor quality (which we will explain later in this section). Workers were asked to spend at least 30 seconds on each assignment, and paid $3.6 per hour on average.",
"fig:hit shows the crowdsourcing web interface. Each HIT corresponds to a triple in our data collection. In each HIT assignment, we first showed the expandable instructions for first-time workers, to help them better understand our task (see the sec:hit-instructions). Then we presented workers with a passage in which the named entities are highlighted and clickable. After reading the passage, workers were given a supported statement with a placeholder (i.e., a cloze-style query) indicating a missing entity. Based on their understanding of the events that might be inferred from the passage, workers were asked to find the correct entity in the passage that best fits the placeholder. If workers thought the answer is not obvious, they were allowed to guess one, and were required to report that case in the feedback box. Workers were also encouraged to write other feedback.",
"To ensure quality and prevent spamming, we used the reference answers in the triples to compute workers' average performance after every 1000 submissions. While there might be coreference or named entity recognition errors in the reference answers, as reported in BIBREF20 (also confirmed by our analysis in sec:data-analysis), they only accounted for a very small portion of all the reference answers. Thus, the reference answers could be used for comparing workers' performance. Specifically, if a worker's performance was significantly lower than the average performance of all workers, we blocked the worker by granting the qualification of poor quality. In practice, workers were able to correctly answer about 50% of all queries. We blocked workers if their average accuracy was lower than 20%, and then republished their HIT assignments. Overall, 2,257 crowdworkers have participated in our task, and 51 of them have been granted the qualification of poor quality.",
"Train / Dev. / Test Splits Among all the 244k triples collected from the third stage, we first obtained one worker answer for each triple. Compared to the reference answers, workers correctly answered queries in 122k triples. We then selected around 100k correctly-answered triples as the training set, restricting the origins of these triples to the news articles used in BIBREF0 . As for the development and test sets, we solicited another worker answer to further ensure their quality. Therefore, each of the rest 22k triples has been validated by two workers. We only kept 20k triples that were correctly answered by both workers. The origins of these triples are either articles used in BIBREF0 or articles crawled by us (as described in sec:news-curation), with a ratio of 3:7. Finally, we randomly split the 20k triples into development and test sets, with 10k triples for each set. tab:statistics summarizes the statistics of our dataset, ReCoRD."
]
],
"section_name": [
"Introduction",
"Task Motivation",
"Data Collection",
"News Article Curation",
"Passage-Query-Answers Generation",
"Machine Filtering",
"Human Filtering"
]
} | {
"answers": [
{
"annotation_id": [
"a5ae90fac304b5061638c5ddf3e4856df80e9986"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Performance of various methods and human."
],
"extractive_spans": [],
"free_form_answer": "DocQA, SAN, QANet, ASReader, LM, Random Guess",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Performance of various methods and human."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"somewhat"
],
"question": [
"Which models do they try out?"
],
"question_id": [
"a516b37ad9d977cb9d4da3897f942c1c494405fe"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"commonsense"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 2: The overview of data collection stages.",
"Figure 3: Passage-query-answers generation from a CNN news article.",
"Figure 4: The crowdsourcing web interface.",
"Table 1: Statistics of ReCoRD",
"Table 2: An analysis of types of reasoning needed in 100 random samples from the dev. set of ReCoRD.",
"Table 3: An analysis of specific types of commonsense reasoning in 75 random sampled queries illustrated in Table 2 which requires common sense reasoning. A query may require multiple types of commonsense reasoning. .",
"Table 4: Performance of various methods and human.",
"Figure 5: The Venn diagram of correct predictions from various methods and human on the development set.",
"Figure 7: Performance of three analyzed methods on 75% of the random samples with specific commonsense reasoning types labeled.",
"Table 5: The out-of-candidate-entities (OOC) rate of three analyzed methods.",
"Figure 6: Performance of three analyzed methods on the 100 random samples with reasoning types labeled.(CSR stands for commonsense reasoning, and MSR stands for multi-sentence reasoning.)",
"Figure 8: Amazon Mechanical Turk HIT Instructions."
],
"file": [
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Figure5-1.png",
"9-Figure7-1.png",
"9-Table5-1.png",
"9-Figure6-1.png",
"14-Figure8-1.png"
]
} | [
"Which models do they try out?"
] | [
[
"1810.12885-8-Table4-1.png"
]
] | [
"DocQA, SAN, QANet, ASReader, LM, Random Guess"
] | 779 |
1910.11235 | Rethinking Exposure Bias In Language Modeling | Exposure bias describes the phenomenon that a language model trained under the teacher forcing schema may perform poorly at the inference stage when its predictions are conditioned on its previous predictions unseen from the training corpus. Recently, several generative adversarial networks (GANs) and reinforcement learning (RL) methods have been introduced to alleviate this problem. Nonetheless, a common issue in RL and GANs training is the sparsity of reward signals. In this paper, we adopt two simple strategies, multi-range reinforcing, and multi-entropy sampling, to amplify and denoise the reward signal. Our model produces an improvement over competing models with regards to BLEU scores and road exam, a new metric we designed to measure the robustness against exposure bias in language models. | {
"paragraphs": [
[
"Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it is susceptible to aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5.",
"A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this objective, existing literature have introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noises in the critic's feedback, these methods are reported to risk compromising the generation quality, specifically in terms of precision.",
"In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias."
],
[
"As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix.",
"In recent RL-inspired works, BIBREF10 built on the REINFORCE algorithm to directly optimize the test-time evaluation metric score. BIBREF11 employed a similar approach by training a critic network to predict the metric score that the actor's generated sequence of tokens would obtain. In both cases, the reliance on a metric to accurately reflect the quality of generated samples becomes a major limitation. Such metrics are often unavailable and difficult to design by nature.",
"In parallel, adversarial training was introduced into language modeling by SeqGAN BIBREF8. This model consists of a generator pre-trained under MLE and a discriminator pre-trained to discern the generator's distribution from the real data. Follow-up works based on SeqGAN alter their training objectives or model architectures to enhance the guidance signal's informativeness. RankGAN replaces the absolute binary reward with a relative ranking score BIBREF12. LeakGAN allows the discriminator to “leak” its internal states to the generator at intermediate steps BIBREF13. BIBREF14 models a reward function using inverse reinforcement learning (IRL). While much progress have been made, we surprisingly observed that SeqGAN BIBREF8 shows more stable results in road exam in Section SECREF20. Therefore, we aim to amplify and denoise the reward signal in a direct and simple fashion."
],
[
"Problem Re-Formulation: Actor-Critic methods (ACs) consider language modeling as a generalized Markov Decision Process (MDP) problem, where the actor learns to optimize its policy guided by the critic, while the critic learns to optimize its value function based on the actor's output and external reward information.",
"As BIBREF15 points out, GAN methods can be seen as a special case of AC where the critic aims to distinguish the actor's generation from real data and the actor is optimized in an opposite direction to the critic.",
"Actor-Critic Training: In this work, we use a standard single-layer LSTM as the actor network. The training objective is to maximize the model's expected end rewards with policy gradient BIBREF16:",
"",
"Then, We use a CNN as the critic to predict the expected rewards for current generated prefix:",
"",
"In practice, we perform a Monte-Carlo (MC) search with roll-out policy following BIBREF8 to sample complete sentences starting from each location in a predicted sequence and compute their end rewards. Empirically, we found out that the maximum, instead of average, of rewards in the MC search better represents each token's actor value and yields better results during training. Therefore, we compute the action value by:",
"",
"In RL and GANs training, two major factors behind the unstable performance are the large variance and the update correlation during the sampling process BIBREF17, BIBREF18. We address these problems using the following strategies:",
"Multi-Range Reinforcing: Our idea of multi-range supervision takes inspiration from deeply-supervised nets (DSNs) BIBREF19. Under deep supervision, intermediate layers of a deep neural network have their own training objectives and receive direct supervision simultaneously with the final decision layer. By design, lower layers in a CNN have smaller receptive fields, allowing them to make better use of local patterns. Our “multi-range\" modification enables the critic to focus on local n-gram information in the lower layers while attending to global structural information in the higher layers. This is a solution to the high variance problem, as the actor can receive amplified reward with more local information compared to BIBREF8.",
"Multi-Entropy Sampling: Language GANs can be seen an online RL methods, where the actor is updated from data generated by its own policy with strong correlation. Inspired by BIBREF20, we empirically find that altering the entropy of the actor's sample distribution during training is beneficial to the AC network's robust performance. In specific, we alternate the temperature $\\tau $ to generate samples under different behavior policies. During the critic's training, the ground-truth sequences are assigned a perfect target value of 1. The samples obtained with $\\tau < 1$ are supposed to contain lower entropy and to diverge less from the real data, that they receive a higher target value close to 1. Those obtained with $\\tau > 1$ contain higher entropy and more errors that their target values are lower and closer to 0. This mechanism decorrelates updates during sequential sampling by sampling multiple diverse entropy distributions from actor synchronously."
],
[
"Table TABREF5 demonstrates an ablation study on the effectiveness of multi-range reinforcing (MR) and multi-entropy sampling (ME). We observe that ME improves $\\text{BLEU}_{\\text{F5}}$ (precision) significantly while MR further enhances $\\text{BLEU}_{\\text{F5}}$ (precision) and $\\text{BLEU}_{\\text{F5}}$ (recall). Detailed explanations of these metrics can be found in Section SECREF4.",
""
],
[
"We adopt three variations of BLEU metric from BIBREF14 to reflect precision and recall.",
"$\\textbf {BLEU}_{\\textbf {F}}$, or forward BLEU, is a metric for precision. It uses the real test dataset as references to calculate how many n-grams in the generated samples can be found in the real data.",
"$\\textbf {BLEU}_{\\textbf {B}}$, or backward BLEU, is a metric for recall. This metric takes both diversity and quality into computation. A model with severe mode collapse or diverse but incorrect outputs will receive poor scores in $\\text{BLEU}_{\\text{B}}$.",
"$\\textbf {BLEU}_{\\textbf {HA}}$ is the harmonic mean of $\\text{BLEU}_{\\text{F}}$ and $\\text{BLEU}_{\\text{B}}$, given by:",
"",
""
],
[
"Road Exam is a novel test we propose as a direct evaluation of exposure bias. In this test, a sentence prefix of length $K$, either taken from the training or testing dataset, is fed into the model under assessment to perform a sentence completion task. Thereby, the model is directed onto either a seen or an unseen “road\" to begin its generation. Because precision is the primary concern, we set $\\tau =0.5$ to sample high-confidence sentences from each model's distribution. We compare $\\text{BLEU}_{\\text{F}}$ of each model on both seen and unseen completion tasks and over a range of prefix lengths. By definition, a model with exposure bias should perform worse in completing sentences with unfamiliar prefix. The sentence completion quality should decay more drastically as the the unfamiliar prefix grows longer."
],
[
"We evaluate on two datasets: EMNLP2017 WMT News and Google-small, a subset of Google One Billion Words .",
"EMNLP2017 WMT News is provided in BIBREF21, a benchmarking platform for text generation models. We split the entire dataset into a training set of 195,010 sentences, a validation set of 83,576 sentences, and a test set of 10,000 sentences. The vocabulary size is 5,254 and the average sentence length is 27.",
"Google-small is sampled and pre-processed from its the Google One Billion Words. It contains a training set of 699,967 sentences, a validation set of 200,000 sentences, and a test set of 99,985 sentences. The vocabulary size is 61,458 and the average sentence length is 29."
],
[
""
],
[
"We implement a standard single-layer LSTM as the generator (actor) and a eight-layer CNN as the discriminator (critic). The LSTM has embedding dimension 32 and hidden dimension 256. The CNN consists of 8 layers with filter size 3, where the 3rd, 5th, and 8th layers are directly connected to the output layer for multi-range supervision. Other parameters are consistent with BIBREF21."
],
[
"Adam optimizer is deployed for both critic and actor with learning rate $10^{-4}$ and $5 \\cdot 10^{-3}$ respectively. The target values for the critic network are set to [0, 0.2, 0.4, 0.6, 0.8] for samples generated by the RNN with softmax temperatures [0.5, 0.75, 1.0, 1.25, 1.5]."
],
[
"Table TABREF9 and Table TABREF10 compare models on EMNLP2017 WMT News and Google-small. Our model outperforms the others in $\\text{BLEU}_{\\text{F5}}$, $\\text{BLEU}_{\\text{B5}}$, and $\\text{BLEU}_{\\text{HA5}}$, indicating a high diversity and quality in its sample distribution. It is noteworthy that, LeakGAN and our model are the only two models to demonstrate improvements on $\\text{BLEU}_{\\text{B5}}$ over the teacher forcing baseline. The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs.",
"Figure FIGREF16 demonstrates the road exam results on EMWT News. All models decrease in sampling precision (reflected via $\\text{BLEU}_{\\text{F4}}$) as the fed-in prefix length ($K$) increases, but the effect is stronger on the unseen test data, revealing the existence of exposure bias. Nonetheless, our model trained under ME and MR yields the best sentence quality and a relatively moderate performance decline.",
"Although TF and SS demonstrate higher $\\text{BLEU}_{\\text{F5}}$ performance with shorter prefixes, their sentence qualities drop drastically on the test dataset with longer prefixes. On the other hand, GANs begin with lower $\\text{BLEU}_{\\text{F4}}$ precision scores but demonstrate less performance decay as the prefix grows longer and gradually out-perform TF. This robustness against unseen prefixes exhibits that supervision from a learned critic can boost a model's stability in completing unseen sequences.",
"The better generative quality in TF and the stronger robustness against exposure bias in GANs are two different objectives in language modeling, but they can be pursued at the same time. Our model's improvement in both perspectives exhibit one possibility to achieve the goal."
],
[
"We have presented multi-range reinforcing and multi-entropy sampling as two training strategies built upon deeply supervised nets BIBREF19 and multi-entropy samplingBIBREF20. The two easy-to-implement strategies help alleviate the reward sparseness in RL training and tackle the exposure bias problem."
],
[
"The authors are grateful for the supports by NSF IIS-1618477, NSF IIS-1717431, and a grant from Samsung Research America."
]
],
"section_name": [
"Introduction",
"Related Works",
"Model Description",
"Model Description ::: Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling",
"Model Evaluation ::: Modeling Capacity & Sentence Quality",
"Model Evaluation ::: Exposure Bias Attacks",
"Experiment ::: Datasets",
"Experiment ::: Implementation Details",
"Experiment ::: Implementation Details ::: Network Architecture:",
"Experiment ::: Implementation Details ::: Training Settings:",
"Experiment ::: Discussion",
"Conclusion",
"Conclusion ::: Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"ab83847d8bc78453886985c448a54085630702aa"
],
"answer": [
{
"evidence": [
"In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias."
],
"extractive_spans": [
"a new metric to reveal a model's robustness against exposure bias"
],
"free_form_answer": "",
"highlighted_evidence": [
" In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"eec13437d20cc6fe98222a46407f23f9cf4288bb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results on EMNLP2017 WMT News dataset. The 95 % confidence intervals from multiple trials are reported."
],
"extractive_spans": [],
"free_form_answer": "TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results on EMNLP2017 WMT News dataset. The 95 % confidence intervals from multiple trials are reported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the road exam metric?",
"What are the competing models?"
],
"question_id": [
"5450f27ccc0406d3bffd08772d8b59004c2716da",
"12ac76b77f22ed3bcb6430bcd0b909441d79751b"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Performance of alternative architectures on EMNLP2017 WMT News Dataset. Higher is better.",
"Table 2: Results on EMNLP2017 WMT News dataset. The 95 % confidence intervals from multiple trials are reported.",
"Table 3: Results on the Google-small dataset. The 95 % confidence intervals from multiple trials are reported. † This dataset was not tested in (Guo et al., 2017) and we are unable to train LeakGAN on this dataset using the official code due to its training complexity (taking 10+ hours per epoch).",
"Figure 1: EMNLP2017 WMT News Road Exam based on prefixes from training and testing datasets [Higher is better]. In each experiment, the data source for the prefixes is used as the reference to calculate BLEUF4."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure1-1.png"
]
} | [
"What are the competing models?"
] | [
[
"1910.11235-4-Table2-1.png"
]
] | [
"TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN."
] | 791 |
1706.04115 | Zero-Shot Relation Extraction via Reading Comprehension | We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task. | {
"paragraphs": [
[
"Relation extraction systems populate knowledge bases with facts from an unstructured text corpus. When the type of facts (relations) are predefined, one can use crowdsourcing BIBREF0 or distant supervision BIBREF1 to collect examples and train an extraction model for each relation type. However, these approaches are incapable of extracting relations that were not specified in advance and observed during training. In this paper, we propose an alternative approach for relation extraction, which can potentially extract facts of new types that were neither specified nor observed a priori.",
"We show that it is possible to reduce relation extraction to the problem of answering simple reading comprehension questions. We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ . For example, the relation $educated\\_at(x,y)$ can be mapped to “Where did $x$ study?” and “Which university did $x$ graduate from?”. Given a particular entity $x$ (“Turing”) and a text that mentions $x$ (“Turing obtained his PhD from Princeton”), a non-null answer to any of these questions (“Princeton”) asserts the fact and also fills the slot $y$ . Figure 1 illustrates a few more examples.",
"This reduction enables new ways of framing the learning problem. In particular, it allows us to perform zero-shot learning: define new relations “on the fly”, after the model has already been trained. More specifically, the zero-shot scenario assumes access to labeled data for $N$ relation types. This data is used to train a reading comprehension model through our reduction. However, at test time, we are asked about a previously unseen relation type $R_{N+1}$ . Rather than providing labeled data for the new relation, we simply list questions that define the relation's slot values. Assuming we learned a good reading comprehension model, the correct values should be extracted.",
"Our zero-shot setup includes innovations both in data and models. We use distant supervision for a relatively large number of relations (120) from Wikidata BIBREF2 , which are easily gathered in practice via the WikiReading dataset BIBREF3 . We also introduce a crowdsourcing approach for gathering and verifying the questions for each relation. This process produced about 10 questions per relation on average, yielding a dataset of over 30,000,000 question-sentence-answer examples in total. Because questions are paired with relation types, not instances, this overall procedure has very modest costs.",
"The key modeling challenge is that most existing reading-comprehension problem formulations assume the answer to the question is always present in the given text. However, for relation extraction, this premise does not hold, and the model needs to reliably determine when a question is not answerable. We show that a recent state-of-the-art neural approach for reading comprehension BIBREF4 can be directly extended to model answerability and trained on our new dataset. This modeling approach is another advantage of our reduction: as machine reading models improve with time, so should our ability to extract relations.",
"Experiments demonstrate that our approach generalizes to new paraphrases of questions from the training set, while incurring only a minor loss in performance (4% relative F1 reduction). Furthermore, translating relation extraction to the realm of reading comprehension allows us to extract a significant portion of previously unseen relations, from virtually zero to an F1 of 41%. Our analysis suggests that our model is able to generalize to these cases by learning typing information that occurs across many relations (e.g. the answer to “Where” is a location), as well as detecting relation paraphrases to a certain extent. We also find that there are many feasible cases that our model does not quite master, providing an interesting challenge for future work."
],
[
"We are interested in a particularly harsh zero-shot learning scenario: given labeled examples for $N$ relation types during training, extract relations of a new type $R_{N+1}$ at test time. The only information we have about $R_{N+1}$ are parametrized questions.",
"This setting differs from prior art in relation extraction. Bronstein2015 explore a similar zero-shot setting for event-trigger identification, in which $R_{N+1}$ is specified by a set of trigger words at test time. They generalize by measuring the similarity between potential triggers and the given seed set using unsupervised methods. We focus instead on slot filling, where questions are more suitable descriptions than trigger words.",
"Open information extraction (open IE) BIBREF5 is a schemaless approach for extracting facts from text. While open IE systems need no relation-specific training data, they often treat different phrasings as different relations. In this work, we hope to extract a canonical slot value independent of how the original text is phrased.",
"Universal schema BIBREF6 represents open IE extractions and knowledge-base facts in a single matrix, whose rows are entity pairs and columns are relations. The redundant schema (each knowledge-base relation may overlap with multiple natural-language relations) enables knowledge-base population via matrix completion techniques. Verga2017 predict facts for entity pairs that were not observed in the original matrix; this is equivalent to extracting seen relation types with unseen entities (see Section \"Unseen Entities\" ). Rocktaschel2015 and Demeester2016 use inference rules to predict hidden knowledge-base relations from observed natural-language relations. This setting is akin to generalizing across different manifestations of the same relation (see Section \"Unseen Question Templates\" ) since a natural-language description of each target relation appears in the training data. Moreover, the information about the unseen relations is a set of explicit inference rules, as opposed to implicit natural-language questions.",
"Our zero-shot scenario, in which no manifestation of the test relation is observed during training, is substantially more challenging (see Section \"Unseen Relations\" ). In universal-schema terminology, we add a new empty column (the target knowledge-base relation), plus a few new columns with a single entry each (reflecting the textual relations in the sentence). These columns share no entities with existing columns, making the rest of the matrix irrelevant. To fill the empty column from the others, we match their descriptions. Toutanova2015 proposed a similar approach that decomposes natural-language relations and computes their similarity in a universal schema setting; however, they did not extend their method to knowledge-base relations, nor did they attempt to recover out-of-schema relations as we do."
],
[
"We consider the slot-filling challenge in relation extraction, in which we are given a knowledge-base relation $R$ , an entity $e$ , and a sentence $s$ . For example, consider the relation $occupation$ , the entity “Steve Jobs”, and the sentence “Steve Jobs was an American businessman, inventor, and industrial designer”. Our goal is to find a set of text spans $A$ in $s$ for which $R(e,a)$ holds for each $a \\in A$ . In our example, $A=\\lbrace \\textnormal {businessman},\\textnormal {inventor}, \\textnormal {industrial designer}\\rbrace $ . The empty set is also a valid answer ( $A = \\emptyset $ ) when $e$0 does not contain any phrase that satisfies $e$1 . We observe that given a natural-language question $e$2 that expresses $e$3 (e.g. “What did Steve Jobs do for a living?”), solving the reading comprehension problem of answering $e$4 from $e$5 is equivalent to solving the slot-filling challenge.",
"The challenge now becomes one of querification: translating $R(e,?)$ into $q$ . Rather than querify $R(e,?)$ for every entity $e$ , we propose a method of querifying the relation $R$ . We treat $e$ as a variable $x$ , querify the parametrized query $R(x,?)$ (e.g. $occupation(x,?)$ ) as a question template $q_x$ (“What did $q$0 do for a living?”), and then instantiate this template with the relevant entities, creating a tailored natural-language question for each entity $q$1 (“What did Steve Jobs do for a living?”). This process, schema querification, is by an order of magnitude more efficient than querifying individual instances because annotating a relation type automatically annotates all of its instances.",
"Applying schema querification to $N$ relations from a pre-existing relation-extraction dataset converts it into a reading-comprehension dataset. We then use this dataset to train a reading-comprehension model, which given a sentence $s$ and a question $q$ returns a set of text spans $A$ within $s$ that answer $q$ (to the best of its ability).",
"In the zero-shot scenario, we are given a new relation $R_{N+1}(x,y)$ at test-time, which was neither specified nor observed beforehand. For example, the $deciphered(x,y)$ relation, as in “Turing and colleagues came up with a method for efficiently deciphering the Enigma”, is too domain-specific to exist in common knowledge-bases. We then querify $R_{N+1}(x,y)$ into $q_x$ (“Which code did $x$ break?”) or $q_y$ (“Who cracked $y$ ?”), and run our reading-comprehension model for each sentence in the document(s) of interest, while instantiating the question template with different entities that might participate in this relation. Each time the model returns a non-null answer $deciphered(x,y)$1 for a given question $deciphered(x,y)$2 , it extracts the relation $deciphered(x,y)$3 .",
"Ultimately, all we need to do for a new relation is define our information need in the form of a question. Our approach provides a natural-language API for application developers who are interested in incorporating a relation-extraction component in their programs; no linguistic knowledge or pre-defined schema is needed. To implement our approach, we require two components: training data and a reading-comprehension model. In Section \"Dataset\" , we construct a large relation-extraction dataset and querify it using an efficient crowdsourcing procedure. We then adapt an existing state-of-the-art reading-comprehension model to suit our problem formulation (Section \"Model\" )."
],
[
"To collect reading-comprehension examples as in Figure 2 , we first gather labeled examples for the task of relation-slot filling. Slot-filling examples are similar to reading-comprehension examples, but contain a knowledge-base query $R(e,?)$ instead of a natural-language question; e.g. $spouse(\\textnormal {Angela Merkel}, ?)$ instead of “Who is Angela Merkel married to?”. We collect many slot-filling examples via distant supervision, and then convert their queries into natural language."
],
[
"Given a sentence $s$ and a question $q$ , our algorithm either returns an answer span $a$ within $s$ , or indicates that there is no answer.",
"The task of obtaining answer spans to natural-language questions has been recently studied on the SQuAD dataset BIBREF8 , BIBREF12 , BIBREF13 , BIBREF14 . In SQuAD, every question is answerable from the text, which is why these models assume that there exists a correct answer span. Therefore, we modify an existing model in a way that allows it to decide whether an answer exists. We first give a high-level description of the original model, and then describe our modification.",
"We start from the BiDAF model BIBREF4 , whose input is two sequences of words: a sentence $s$ and a question $q$ . The model predicts the start and end positions ${\\bf y}^{start}, {\\bf y}^{end}$ of the answer span in $s$ . BiDAF uses recurrent neural networks to encode contextual information within $s$ and $q$ alongside an attention mechanism to align parts of $q$ with $s$ and vice-versa.",
"The outputs of the BiDAF model are the confidence scores of ${\\bf y}^{start}$ and ${\\bf y}^{end}$ , for each potential start and end. We denote these scores as ${\\bf z}^{start}, {\\bf z}^{end} \\in \\mathbb {R}^N$ , where $N$ is the number of words in the sentence $s$ . In other words, ${\\bf z}^{start}_i$ indicates how likely the answer is to start at position $i$ of the sentence (the higher the more likely); similarly, ${\\bf z}^{end}_i$ indicates how likely the answer is to end at that index. Assuming the answer exists, we can transform these confidence scores into pseudo-probability distributions ${\\bf p}^{start}, {\\bf p}^{end}$ via softmax. The probability of each $i$ -to- ${\\bf y}^{end}$0 -span of the context can therefore be defined by: ",
"$$P(a = s_{i...j}) = {\\bf p}^{start}_i {\\bf p}^{end}_j$$ (Eq. 13) ",
"where ${\\bf p}_i$ indicates the $i$ -th element of the vector ${\\bf p}_i$ , i.e. the probability of the answer starting at $i$ . Seo:16 obtain the span with the highest probability during post-processing. To allow the model to signal that there is no answer, we concatenate a trainable bias $b$ to the end of both confidences score vectors ${\\bf z}^{start}, {\\bf z}^{end}$ . The new score vectors ${\\tilde{\\bf z}}^{start}, {\\tilde{\\bf z}}^{end} \\in \\mathbb {R}^{N+1}$ are defined as ${\\tilde{\\bf z}}^{start} = [{\\bf z}^{start}; b]$ and similarly for ${\\tilde{\\bf z}}^{end}$ , where $[;]$ indicates row-wise concatenation. Hence, the last elements of $i$0 and $i$1 indicate the model's confidence that the answer has no start or end, respectively. We apply softmax to these augmented vectors to obtain pseudo-probability distributions, $i$2 . This means that the probability the model assigns to a null answer is: ",
"$$P(a = \\emptyset ) = {\\tilde{\\bf p}}^{start}_{N+1} {\\tilde{\\bf p}}^{end}_{N+1}.$$ (Eq. 14) ",
"If $P(a = \\emptyset )$ is higher than the probability of the best span, $\\arg \\max _{i,j \\le N} P(a = s_{i...j})$ , then the model deems that the question cannot be answered from the sentence. Conceptually, adding the bias enables the model to be sensitive to the absolute values of the raw confidence scores ${\\bf z}^{start}, {\\bf z}^{end}$ . We are essentially setting and learning a threshold $b$ that decides whether the model is sufficiently confident of the best candidate answer span.",
"While this threshold provides us with a dynamic per-example decision of whether the instance is answerable, we can also set a global confidence threshold $p_{min}$ ; if the best answer's confidence is below that threshold, we infer that there is no answer. In Section \"Unseen Relations\" we use this global threshold to get a broader picture of the model's performance."
],
[
"To understand how well our method can generalize to unseen data, we design experiments for unseen entities (Section \"Unseen Entities\" ), unseen question templates (Section \"Unseen Question Templates\" ), and unseen relations (Section \"Unseen Relations\" )."
],
[
"We show that our reading-comprehension approach works well in a typical relation-extraction setting by testing it on unseen entities and texts.",
"We partitioned our dataset along entities in the question, and randomly clustered each entity into one of three groups: train, dev, or test. For instance, Alan Turing examples appear only in training, while Steve Jobs examples are exclusive to test. We then sampled 1,000,000 examples for train, 1,000 for dev, and 10,000 for test. This partition also ensures that the sentences at test time are different from those in train, since the sentences are gathered from each entity's Wikipedia article.",
"Table 1 shows that our model generalizes well to new entities and texts, with little variance in performance between KB Relation, NL Relation, Multiple Templates, and Question Ensemble. Single Template performs significantly worse than these variants; we conjecture that simpler relation descriptions (KB Relation & NL Relation) allow for easier parameter tying across different examples, whereas learning from multiple questions allows the model to acquire important paraphrases. All variants of our model outperform off-the-shelf relation extraction systems (RNN Labeler and Miwa & Bansal) in this setting, demonstrating that reducing relation extraction to reading comprehension is indeed a viable approach for our Wikipedia slot-filling task. An analysis of 50 examples that Multiple Templates mispredicted shows that 36% of errors can be attributed to annotation errors (chiefly missing entries in Wikidata), and an additional 42% result from inaccurate span selection (e.g. “8 February 1985” instead of “1985”), for which our model is fully penalized. In total, only 18% of our sample were pure system errors, suggesting that our model is very close to the performance ceiling of this setting (slightly above 90% F1)."
],
[
"We test our method's ability to generalize to new descriptions of the same relation, by holding out a question template for each relation during training.",
"We created 10 folds of train/dev/test samples of the data, in which one question template for each relation was held out for the test set, and another for the development set. For instance, “What did $x$ do for a living?” may appear only in the training set, while “What is $x$ 's job?” is exclusive to the test set. Each split was stratified by sampling $N$ examples per question template ( $N=1000,10,50$ for train, dev, test, respectively). This process created 10 training sets of 966,000 examples with matching development and test sets of 940 and 4,700 examples each.",
"We trained and tested Multiple Templates on each one of the folds, yielding performance on unseen templates. We then replicated the existing test sets and replaced the unseen question templates with templates from the training set, yielding performance on seen templates. Revisiting our example, we convert test-set occurrences of “What is $x$ 's job?” to “What did $x$ do for a living?”.",
"Table 2 shows that our approach is able to generalize to unseen question templates. Our system's performance on unseen questions is nearly as strong as for previously observed templates (losing roughly 3.5 points in F1)."
],
[
"We examine a pure zero-shot setting, where test-time relations are unobserved during training.",
"We created 10 folds of train/dev/test samples, partitioned along relations: 84 relations for train, 12 dev, and 24 test. For example, when $educated\\_at$ is allocated to test, no $educated\\_at$ examples appear in train. Using stratified sampling of relations, we created 10 training sets of 840,000 examples each with matching dev and test sets of 600 and 12,000 examples per fold.",
"Table 3 shows each system's performance; Figure 4 extends these results for variants of our model by applying a global threshold on the answers' confidence scores to generate precision/recall curves (see Section \"Model\" ). As expected, representing knowledge-base relations as indicators (KB Relation and Miwa & Bansal) is insufficient in a zero-shot setting; they must be interpreted as natural-language expressions to allow for some generalization. The difference between using a single question template (Single Template) and the relation's name (NL Relation) appears to be minor. However, training on a variety of question templates (Multiple Templates) substantially increases performance. We conjecture that multiple phrasings of the same relation allows our model to learn answer-type paraphrases that occur across many relations (see Section \"Analysis\" ). There is also some advantage to having multiple questions at test time (Question Ensemble)."
],
[
"To understand how our method extracts unseen relations, we analyzed 100 random examples, of which 60 had answers in the sentence and 40 did not (negative examples).",
"For negative examples, we checked whether a distractor – an incorrect answer of the correct answer type – appears in the sentence. For example, the question “Who is John McCain married to?” does not have an answer in “John McCain chose Sarah Palin as his running mate”, but “Sarah Palin” is of the correct answer type. We noticed that 14 negative examples (35%) contain distractors. When pairing these examples with the results from the unseen relations experiment in Section \"Unseen Relations\" , we found that our method answered 2/14 of the distractor examples incorrectly, compared to only 1/26 of the easier examples. It appears that while most of the negative examples are easy, a significant portion of them are not trivial.",
"For positive examples, we observed that some instances can be solved by matching the relation in the sentence to that in the question, while others rely more on the answer's type. Moreover, we notice that each cue can be further categorized according to the type of information needed to detect it: (1) when part of the question appears verbatim in the text, (2) when the phrasing in the text deviates from the question in a way that is typical of other relations as well (e.g. syntactic variability), (3) when the phrasing in the text deviates from the question in a way that is unique to this relation (e.g. lexical variability). We name these categories verbatim, global, and specific, respectively. Figure 5 illustrates all the different types of cues we discuss in our analysis.",
"We selected the most important cue for solving each instance. If there were two important cues, each one was counted as half. Table 4 shows their distribution. Type cues appear to be somewhat more dominant than relation cues (58% vs. 42%). Half of the cues are relation-specific, whereas global cues account for one third of the cases and verbatim cues for one sixth. This is an encouraging result, because we can potentially learn to accurately recognize verbatim and global cues from other relations. However, our method was only able to exploit these cues partially.",
"We paired these examples with the results from the unseen relations experiment in Section \"Unseen Relations\" to see how well our method performs in each category. Table 5 shows the results for the Multiple Templates setting. On one hand, the model appears agnostic to whether the relation cue is verbatim, global, or specific, and is able to correctly answer these instances with similar accuracy (there is no clear trend due to the small sample size). For examples that rely on typing information, the trend is much clearer; our model is much better at detecting global type cues than specific ones.",
"Based on these observations, we think that the primary sources of our model's ability to generalize to new relations are: global type detection, which is acquired from training on many different relations, and relation paraphrase detection (of all types), which probably relies on its pre-trained word embeddings."
],
[
"We showed that relation extraction can be reduced to a reading comprehension problem, allowing us to generalize to unseen relations that are defined on-the-fly in natural language. However, the problem of zero-shot relation extraction is far from solved, and poses an interesting challenge to both the information extraction and machine reading communities. As research into machine reading progresses, we may find that more tasks can benefit from a similar approach. To support future work in this avenue, we make our code and data publicly available."
],
[
"The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364), gifts from Google, Tencent, and Nvidia, and an Allen Distinguished Investigator Award. We also thank Mandar Joshi, Victoria Lin, and the UW NLP group for helpful conversations and comments on the work."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Dataset",
"Model",
"Experiments",
"Unseen Entities",
"Unseen Question Templates",
"Unseen Relations",
"Analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"ab8874c415895b2ce530469541af91e8e7ff8f30"
],
"answer": [
{
"evidence": [
"We show that it is possible to reduce relation extraction to the problem of answering simple reading comprehension questions. We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ . For example, the relation $educated\\_at(x,y)$ can be mapped to “Where did $x$ study?” and “Which university did $x$ graduate from?”. Given a particular entity $x$ (“Turing”) and a text that mentions $x$ (“Turing obtained his PhD from Princeton”), a non-null answer to any of these questions (“Princeton”) asserts the fact and also fills the slot $y$ . Figure 1 illustrates a few more examples."
],
"extractive_spans": [],
"free_form_answer": "The relation R(x,y) is mapped onto a question q whose answer is y",
"highlighted_evidence": [
"We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
""
],
"paper_read": [
"somewhat"
],
"question": [
"How is the input triple translated to a slot-filling task?"
],
"question_id": [
"0038b073b7cca847033177024f9719c971692042"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"reading comprehension"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Common knowledge-base relations defined by natural-language question templates.",
"Figure 2: Examples from our reading-comprehension dataset. Each instance contains a relation R, a question q, a sentence s, and an answer set A. The question explicitly mentions an entity e, which also appears in s. For brevity, answers are underlined instead of being displayed in a separate column.",
"Table 1: Performance on unseen entities.",
"Table 3: Performance on unseen relations.",
"Table 2: Performance on seen/unseen questions.",
"Figure 4: Precision/Recall for unseen relations.",
"Figure 5: The different types of discriminating cues we observed among positive examples.",
"Table 5: Our method’s accuracy on subsets of examples pertaining to different cue types. Results in italics are based on a sample of less than 10.",
"Table 4: The distribution of cues by type, based on a sample of 60."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"7-Table1-1.png",
"7-Table3-1.png",
"7-Table2-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png",
"8-Table5-1.png",
"8-Table4-1.png"
]
} | [
"How is the input triple translated to a slot-filling task?"
] | [
[
"1706.04115-Introduction-1"
]
] | [
"The relation R(x,y) is mapped onto a question q whose answer is y"
] | 792 |
1909.00107 | Behavior Gated Language Models | Most current language modeling techniques only exploit co-occurrence, semantic and syntactic information from the sequence of words. However, a range of information such as the state of the speaker and dynamics of the interaction might be useful. In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling. We propose the augmentation of language models with an additional module which analyzes the behavioral state of the current context. This behavioral information is used to gate the outputs of the language model before the final word prediction output. We show that the addition of behavioral context in language models achieves lower perplexities on behavior-rich datasets. We also confirm the validity of the proposed models on a variety of model architectures and improve on previous state-of-the-art models with generic domain Penn Treebank Corpus. | {
"paragraphs": [
[
"Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4.",
"On the other hand, many work have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8.",
"The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents.",
"Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model.",
"In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena."
],
[
"In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented."
],
[
"The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. In other words, given a sequence of words $ \\mathbf {x} \\hspace{2.77771pt}{=}\\hspace{2.77771pt}x_1, x_2, \\ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \\mid \\mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM.",
"Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model."
],
[
"The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20.",
"In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\\mathbf {B} \\mid \\mathbf {x}) $ where $ \\mathbf {B} \\hspace{2.77771pt}{=}\\hspace{2.77771pt}\\lbrace b_{i}\\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3."
],
[
"Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information."
],
[
"We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\\mathbf {y} \\mid \\mathbf {x}, \\mathbf {z}) $ where $ \\mathbf {z} \\equiv f( P(\\mathbf {B}\\mid \\mathbf {x}))$ for an RNN function $f(\\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former."
],
[
"For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.",
"Couples Therapy Corpus: This corpus comprises of dyadic conversations between real couples seeking marital counseling. The dataset consists of audio, video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises of approximately 0.83 million words with 10,000 unique entries of which 0.5 million is used for training (0.24m for dev and 88k for test).",
"Cancer Couples Interaction Dataset: This dataset was gathered as part of a observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173."
],
[
"In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises of articles from Wall Street Journal it is not expected to contain substantial expressions of behavior."
],
[
"The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50."
],
[
"We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result."
],
[
"We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset."
],
[
"We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data."
],
[
"To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasizes the effectiveness of behavior gated language models."
],
[
"Although the proposed model is motivated and targeted towards behavior related datasets, the hypothesis should theoretically extend towards any human generated corpora. To assess this, we also train models on a non-behavior-rich database, the Penn Tree Bank Corpus. We experiment with both the medium and large architectures proposed by BIBREF1. The perplexity results on PTB are presented in Table TABREF17. All language models showed an improvement in perplexity through the addition of behavior gates. It can also be observed that LSTM-Medium with behavior gating gives similar performance to baseline LSTM-Large even though the latter has more than three times the number of parameters."
],
[
"Finally we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark over various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results with AWD-LSTM augmented with behavior-gating providing a relative improvement of (1.42% on valid) 0.66% in perplexity (Table TABREF17)."
],
[
"In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors namely acceptance, blame, negativity, positivity and sadness using a 5 class multi-label behavior classification model. The behavior states are used as gating mechanism for a typical RNN based language model. We show through our experiments that the proposed technique improves language modeling perplexity specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with Penn Tree Bank Corpus to underline the affect of behavior states in language modeling.",
"In future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such scenario, we could derive both the past and the future behavior states from the ASR which could then be used to gate the language model using two pass re-scoring strategies. We expect the behavior states to be less prone to errors made by ASR over a sufficiently long context and hence believe the future behavior states to provide further improvements."
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Language Model",
"Methodology ::: Behavior Model",
"Methodology ::: Behavior Gated Language Model ::: Motivation",
"Methodology ::: Behavior Gated Language Model ::: Proposed Model",
"Experimental Setup ::: Data ::: Behavior Related Corpora",
"Experimental Setup ::: Data ::: Penn Tree Bank Corpus",
"Experimental Setup ::: Behavior Model",
"Experimental Setup ::: Hyperparameters",
"Results",
"Results ::: Behavior Related Corpora ::: Couple's Therapy Corpus",
"Results ::: Behavior Related Corpora ::: Cancer Couples Interaction Dataset",
"Results ::: Penn Tree Bank Corpus",
"Results ::: Penn Tree Bank Corpus ::: Previous state-of-the-art architectures",
"Conclusion & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"abaa6523cfa49f8e8cd10181498eec4d65e57de9"
],
"answer": [
{
"evidence": [
"For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.",
"We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data."
],
"extractive_spans": [
"Couples Therapy Corpus (CoupTher) BIBREF21"
],
"free_form_answer": "",
"highlighted_evidence": [
"For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.",
"We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f841b15f2a85c9da8e0049bcd972feb4b5cedf77"
],
"answer": [
{
"evidence": [
"The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50."
],
"extractive_spans": [],
"free_form_answer": "pre-trained to identify the presence of behavior from a sequence of word using the Couples Therapy Corpus",
"highlighted_evidence": [
"The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"On which dataset is model trained?",
"How is module that analyzes behavioral state trained?"
],
"question_id": [
"21cbcd24863211b02b436f21deaf02125f34da4c",
"37bc8763eb604c14871af71cba904b7b77b6e089"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: RNN language model.",
"Figure 2: Behavior gated language model.",
"Table 1: Language model test perplexities on Couples Therapy and Cancer Couples Interaction Dataset.",
"Table 2: Language model perplexities on validation and test sets of Penn Treebank."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"How is module that analyzes behavioral state trained?"
] | [
[
"1909.00107-Experimental Setup ::: Behavior Model-0"
]
] | [
"pre-trained to identify the presence of behavior from a sequence of word using the Couples Therapy Corpus"
] | 795 |
1711.03438 | Open-World Knowledge Graph Completion | Knowledge Graphs (KGs) have been applied to many tasks including Web search, link prediction, recommendation, natural language processing, and entity linking. However, most KGs are far from complete and are growing at a rapid pace. To address these problems, Knowledge Graph Completion (KGC) has been proposed to improve KGs by filling in its missing connections. Unlike existing methods which hold a closed-world assumption, i.e., where KGs are fixed and new entities cannot be easily added, in the present work we relax this assumption and propose a new open-world KGC task. As a first attempt to solve this task we introduce an open-world KGC model called ConMask. This model learns embeddings of the entity's name and parts of its text-description to connect unseen entities to the KG. To mitigate the presence of noisy text descriptions, ConMask uses a relationship-dependent content masking to extract relevant snippets and then trains a fully convolutional neural network to fuse the extracted snippets with entities in the KG. Experiments on large data sets, both old and new, show that ConMask performs well in the open-world KGC task and even outperforms existing KGC models on the standard closed-world KGC task. | {
"paragraphs": [
[
"Knowledge Graphs (KGs) are a special type of information network that represents knowledge using RDF-style triples $\\langle h$ , $r$ , $t\\rangle $ , where $h$ represents some head entity and $r$ represents some relationship that connects $h$ to some tail entity $t$ . In this formalism a statement like “Springfield is the capital of Illinois” can be represented as $\\langle $ Springfield, capitalOf, Illinois $\\rangle $ . Recently, a variety of KGs, such as DBPedia BIBREF0 , and ConceptNet BIBREF1 , have been curated in the service of fact checking BIBREF2 , question answering BIBREF3 , entity linking BIBREF4 , and for many other tasks BIBREF5 . Despite their usefulness and popularity, KGs are often noisy and incomplete. For example, DBPedia, which is generated from Wikipedia's infoboxes, contains $4.6$ million entities, but half of these entities contain less than 5 relationships.",
"Based on this observation, researchers aim to improve the accuracy and reliability of KGs by predicting the existence (or probability) of relationships. This task is often called Knowledge Graph Completion (KGC). Continuing the example from above, suppose the relationship capitalOf is missing between Indianapolis and Indiana; the KGC task might predict this missing relationship based on the topological similarity between this part of the KG and the part containing Springfield and Illinois. Progress in vector embeddings originating with word2vec has produced major advancements in the KGC task. Typical embedding-based KGC algorithms like TransE BIBREF6 and others learn low-dimensional representations (i.e., embeddings) for entities and relationships using topological features. These models are able to predict the existence of missing relationships thereby “completing” the KG.",
"Existing KGC models implicitly operate under the Closed-World Assumption BIBREF7 in which all entities and relationships in the KG cannot be changed – only discovered. We formally define the Closed-word KGC task as follows:",
"Definition 1 Given an incomplete Knowledge Graph $\\mathcal {G}=(\\mathbf {E},\\mathbf {R},\\mathbf {T})$ , where $\\mathbf {E}$ , $\\mathbf {R}$ , and $\\mathbf {T}$ are the entity set, relationship set, and triple set respectively, Closed-World Knowledge Graph Completion completes $\\mathcal {G}$ by finding a set of missing triples $\\mathbf {T^\\prime }=\\lbrace \\langle h,r,t\\rangle | h\\in \\mathbf {E}, r \\in \\mathbf {R}, t \\in \\mathbf {E}, \\langle h,r,t\\rangle \\notin \\mathbf {T}\\rbrace $ in the incomplete Knowledge Graph $\\mathcal {G}$ .",
"Closed-world KGC models heavily rely on the connectivity of the existing KG and are best able to predict relationships between existing, well-connected entities. Unfortunately, because of their strict reliance on the connectivity of the existing KG, closed-world KGC models are unable to predict the relationships of poorly connected or new entities. Therefore, we assess that closed-world KGC is most suitable for fixed or slowly evolving KGs.",
"However, most real-world KGs evolve quickly with new entities and relationships being added by the minute. For example, in the 6 months between DBPedia's October 2015 release and its April 2016 release $36,340$ new English entities were added – a rate of 200 new entities per day. Recall that DBPedia merely tracks changes to Wikipedia infoboxes, so these updates do not include newly added articles without valid infobox data. Because of the accelerated growth of online information, repeatedly re-training closed-world models every day (or hour) has become impractical.",
"In the present work we borrow the idea of open-world assumption from probabilistic database literature BIBREF8 and relax the closed-world assumption to develop an Open-World Knowledge Graph Completion model capable of predicting relationships involving unseen entities or those entities that have only a few connections. Formally we define the open-world KGC task as follows:",
"Definition 2 Given an incomplete Knowledge Graph $\\mathcal {G}=(\\mathbf {E},\\mathbf {R},\\mathbf {T})$ , where $\\mathbf {E}$ , $\\mathbf {R}$ , and $\\mathbf {T}$ are the entity set, relationship set, and triple set respectively, Open-World Knowledge Graph Completion completes $\\mathcal {G}$ by finding a set of missing triples $\\mathbf {T^\\prime }=\\lbrace \\langle h,r,t\\rangle | \\langle h,r,t\\rangle \\notin \\mathbf {T}, h \\in \\mathbf {E}^i, t\\in \\mathbf {E}^i, r \\in \\mathbf {R} \\rbrace $ in the incomplete Knowledge Graph $\\mathcal {G}$ where $\\mathbf {E}^i$ is an entity superset.",
"In Defn. \"Closed-World Knowledge Graph Completion\" we relax the constraint on the triple set $\\mathbf {T^\\prime }$ so that triples in $\\mathbf {T^\\prime }$ can contain entities that are absent from the original entity set $\\mathbf {E}$ .",
"Closed-world KGC models learn entity and relationship embedding vectors by updating an initially random vector based on the KG's topology. Therefore, any triple $\\langle h,r,t\\rangle \\in \\mathbf {T^\\prime }$ such that $h\\notin \\mathbf {E}$ or $t\\notin \\mathbf {E}$ will only ever be represented by its initial random vector because its absence does not permit updates from any inference function. In order to predict the missing connections for unseen entities, it is necessary to develop alternative features to replace the topological features used by closed-world models.",
"Text content is a natural substitute for the missing topological features of disconnected or newly added entities. Indeed, most KGs such as FreeBase BIBREF9 , DBPedia BIBREF0 , and SemMedDB BIBREF10 were either directly extracted from BIBREF11 , BIBREF12 , or are built in parallel to some underlying textual descriptions. However, open-world KGC differs from the standard information extraction task because 1) Rather than extracting triples from a large text corpus, the goal of open-world KGC is to discover missing relationships; and 2) Rather than a pipeline of independent subtasks like Entity Linking BIBREF13 and Slotfilling BIBREF14 , etc., open-world KGC is a holistic task that operates as a single model.",
"Although it may seem intuitive to simply include an entity's description into an existing KGC model, we find that learning useful vector embeddings from unstructured text is much more challenging than learning topology-embeddings as in the closed-world task. First, in closed-world KGC models, each entity will have a unique embedding, which is learned from its directly connected neighbors; whereas open-world KGC models must fuse entity embeddings with the word embeddings of the entity's description. These word embeddings must be updated by entities sharing the same words regardless of their connectivity status. Second, because of the inclusion of unstructured content, open-world models are likely to include noisy or redundant information.",
"With respect to these challenges, the present work makes the following contributions:",
"Before introduce the ConMask model, we first present preliminary material by describing relevant KGC models. Then we describe the methodology, data sets, and a robust case study of closed-world and open-world KGC tasks. Finally, we draw conclusions and offer suggestions for future work."
],
[
"A variety of models have been developed to solve the closed-world KGC task. The most fundamental and widely used model is a translation-based Representation Learning (RL) model called TransE BIBREF6 . TransE assumes there exists a simple function that can translate the embedding of the head entity to the embedding of some tail entity via some relationship: ",
"$$\\mathbf {h} + \\mathbf {r} = \\mathbf {t},$$ (Eq. 5) ",
"where $\\mathbf {h}$ , $\\mathbf {r}$ and $\\mathbf {t}$ are embeddings of head entity, relationship, and tail entity respectively. Based on this function, many other KGC models improve the expressive power of Eq. 5 by introducing more relationship-dependent parameters. TransR BIBREF15 , for example, augments Eq. 5 to $\\mathbf {h}\\mathbf {M}_{r} + \\mathbf {r} = \\mathbf {t}\\mathbf {M}_{r}$ where $\\mathbf {M}_{r}$ is a relationship-dependent entity embedding transformation.",
"In order to train the KGC models, TransE defines an energy-based loss function as ",
"$$\\mathcal {L}(\\mathbf {T}) = \\Sigma _{\\langle h,r,t\\rangle \\in \\mathbf {T}}[\\gamma + E(\\langle h,r,t\\rangle ) - E(\\langle h^\\prime , r^\\prime , t^\\prime \\rangle )]_{+},$$ (Eq. 6) ",
"where the energy function $E(\\langle h,r,t\\rangle ) = \\parallel \\mathbf {h} + \\mathbf {r} - \\mathbf {t}\\parallel _{L_{n}}$ measures the closeness of the given triple, $\\langle h,r,t\\rangle $ is some triple that exists in the triple set $\\mathbf {T}$ of an incomplete KG $\\mathcal {G}$ , and $\\langle h^\\prime , r^\\prime , t^\\prime \\rangle $ is a “corrupted” triple derived by randomly replacing one part of $\\langle h,r,t\\rangle $ so that it does not exist in $\\mathbf {T}$ .",
"In other recent work, ProjE BIBREF16 considered closed-world KGC to be a type of ranking task and applied a list-wise ranking loss instead of Eq. 6 . Other closed-world models such as PTransE BIBREF17 and dORC BIBREF18 maintain a simple translation function and use complex topological features like extended-length paths and “one-relation-circle” structures to improve predictive performance.",
"Unlike topology-based models, which have been studied extensively, there has been little work that utilizes text information for KGC. Neural Tensor Networks (NTN) BIBREF19 uses the averaged word embedding of an entity to initialize the entity representations. DKRL BIBREF20 uses the combined distance between topology-embeddings and text-embeddings as its energy function. Jointly BIBREF21 combines the topology-embeddings and text-embeddings first using a weighted sum and then calculates the $L_{n}$ distance between the translated head entity and tail entity. However, gains in predictive performance from these joint-learning models are rather small compared to advances in topology-based models.",
"Furthermore, the aforementioned models are all closed-world KGC models, which can only learn meaningful representations for entities that are present during training and are well connected within the KG. These models have no mechanism by which new entities can be connected with the existing KG as required in open-world KGC.",
"In the present work, we present an open-world KGC model called ConMask that uses primarily text features to learn entity and relationship embeddings. Compared to topology-based and joint-learning models, ConMask can generate representations for unseen entities if they share the same vocabulary with entities seen during training. To properly handle one-to-many and many-to-one relationships, we also apply a relationship-dependent content masking layer to generate entity embeddings."
],
[
"In this section we describe the architecture and the modelling decisions of the ConMask model. To illustrate how this model works, we begin by presenting an actual example as well as the top-ranked target entity inferred by the ConMask model:",
"Example Task: Complete triple $\\langle $ Ameen Sayani, residence, ? $\\rangle $ , where Ameen Sayani is absent from the KG.",
"Snippet of Entity Description: “... Ameen Sayani was introduced to All India Radio, Bombay, by his brother Hamid Sayani. Ameen participated in English programmes there for ten years ...”.",
"Predicted Target Entity: Mumbai.",
"In this example, if a human reader were asked to find the residence of Ameen Sayani, a popular radio personality in India, from the entity description, then the human reader is unlikely to read the entire text from beginning to end. Instead, the reader might skim the description looking for contextual clues such as family or work-related information. Here, Ameen's workplace All India Radio is located in Bombay, so the human reader may infer that Ameen is a resident of Bombay. A human reader may further reason that because Bombay has recently changed its name to Mumbai, then Mumbai would be the (correct) target entity.",
"Here and throughout the present work, we denote the missing entity as the target entity, which can be either the head or the tail of a triple.",
"We decompose the reasoning process described above into three steps: 1) Locating information relevant to the task, 2) Implicit reasoning based on the context and the relevant text, and 3) Resolving the relevant text to the proper target entity. The ConMask model is designed to mimic this process. Thus, ConMask consists of three components:",
"ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words. From the relevant text, ConMask then uses fully convolutional network (FCN) to extract word-based embeddings. Finally, it compares the extracted embeddings to existing entities in the KG to resolve a ranked list of target entities. The overall structure of ConMask is illustrated in Fig. 1 . Later subsections describe the model in detail."
],
[
"In open-world KGC, we cannot rely solely on the topology of the KG to guide our model. Instead, it is natural to consider extracting useful information from text in order to infer new relationships in the KG. The task of extracting relationships among entities from text is often called relation extraction BIBREF22 . Recent work in this area tends to employ neural networks such as CNN BIBREF21 or abstract meaning representations (AMRs) BIBREF23 to learn a unified kernel to remove noise and extract the relationship-agnostic entity representations. For open-world KGC, it may be possible to create a model with relationship-dependent CNN kernels. But this type of model would significantly increase the number of parameters and may overfit on rare relationships.",
"In the proposed ConMask model we developed an alternative approach called relationship-dependent content masking. The goal is to pre-process the input text in order to select small relevant snippets based on the given relationship – thereby masking irrelevant text. The idea of content masking is inspired by the attention mechanism used by recurrent neural network (RNN) models BIBREF24 , which is widely applied to NLP tasks. In a typical attention-based RNN model, each output stage of a recurrent cell is assigned an attention score.",
"In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description. We formally define relationship-dependent content masking as: ",
"$$\\tau (\\phi (e), \\psi (r)) = \\mathbf {W}_{\\phi (e)} \\circ f_{w}(\\mathbf {W}_{\\phi (e)}, \\mathbf {W}_{\\psi (r)}),$$ (Eq. 13) ",
"where $e$ is an entity, $r$ is some relationship, $\\phi $ and $\\psi $ are the description and name mapping functions respectively that return a word vector representing the description or the name of an entity or relationship. $\\mathbf {W}_{\\phi (e)} \\in \\mathbb {R}^{|\\phi (e)|\\times k}$ is the description matrix of $e$ in which each row represents a $k$ dimensional embedding for a word in $\\phi (e)$ in order, $\\mathbf {W}_{\\psi (r)} \\in \\mathbb {R}^{|\\psi (r)|\\times k}$ is the name matrix of $r$ in which each row represents a $r$0 dimensional embedding for a word in the title of relationship $r$1 , $r$2 is row-wise product, and $r$3 calculates the masking weight for each row, i.e., the embedding of each word, in $r$4 .",
"The simplest way to generate these weights is by calculating a similarity score between each word in entity description $\\phi (e)$ and the words in relationship name $\\psi (r)$ . We call this simple function Maximal Word-Relationship Weights (MWRW) and define it as: ",
"$$\\begin{adjustbox}{max width=0.92}\nf_{w}^{\\textrm {MWRW}}\\left(\\mathbf {W}_{\\phi (e)}, \\mathbf {W}_{\\psi (r)}\\right)_{[i]} = \\mathsf {max}_j\\left(\\frac{\\sum \\limits _m^k \\mathbf {W}_{\\phi (e)[i,m]} \\mathbf {W}_{\\psi (r)[j,m]}}{\\sqrt{\\sum \\limits _m^k \\mathbf {W}_{\\phi (e)[i,m]}^2}\\sqrt{\\sum \\limits _m^k \\mathbf {W}_{\\psi (r)[j,m]}^2}}\\right),\n\\end{adjustbox}$$ (Eq. 14) ",
"where the weight of the $i^{\\textrm {th}}$ word in $\\phi (e)$ is the largest cosine similarity score between the $i^{\\textrm {th}}$ word embedding in $\\mathbf {W}_{\\phi (e)}$ and the word embedding matrix of $\\psi (r)$ in $\\mathbf {W}_{\\psi (r)}$ .",
"This function assigns a lower weight to words that are not relevant to the given relationship and assigns higher scores to the words that appear in the relationship or are semantically similar to the relationship. For example, when inferring the target of the partial triple $\\langle $ Michelle Obama, AlmaMater, ? $\\rangle $ , MWRW will assign high weights to words like Princeton, Harvard, and University, which include the words that describe the target of the relationship. However the words that have the highest scores do not always represent the actual target but, instead, often represent words that are similar to the relationship name itself. A counter-example is shown in Fig. 2 , where, given the relationship spouse, the word with the highest MWRW score is married. Although spouse is semantically similar to married, it does not answer the question posed by the partial triple. Instead, we call words with high MWRW weights indicator words because the correct target-words are usually located nearby. In the example-case, we can see that the correct target Barack Obama appears after the indicator word married.",
"In order to assign the correct weights to the target words, we improve the content masking by using Maximal Context-Relationship Weights (MCRW) to adjust the weights of each word based on its context: ",
"$$\\begin{adjustbox}{max width=0.92}\nf_{w}\\left(\\mathbf {W}_{\\phi (e)}, \\mathbf {W}_{\\psi (r)}\\right)_{[i]} = \\max \\left(f_{w}^{\\textrm {MWRW}}\\left(\\mathbf {W}_{\\phi (e)}, \\mathbf {W}_{\\psi (r)}\\right)_{[i-k_m:i]}\\right),\n\\end{adjustbox}$$ (Eq. 15) ",
"in which the weight of the $i^{th}$ word in $\\phi (e)$ equals the maximum MWRW score of the $i^{th}$ word itself and previous $k_m$ words. From a neural network perspective, the re-weighting function $f_w$ can also be viewed as applying a row-wise max reduction followed by a 1-D max-pooling with a window size of $k_m$ on the matrix product of $\\mathbf {W}_{\\phi (e)}$ and $\\mathbf {W}_{\\psi (r)}^{T}$ .",
"To recap, the relationship-dependent content masking process described here assigns importance weights to words in an entity's description based on the similarity between each word's context and the given relationship. After non-relevant content is masked, the model needs to learn a single embedding vector from the masked content matrix to compare with the embeddings of candidate target entities."
],
[
"Here we describe how ConMask extracts word-based entity embeddings. We call this process the target fusion function $\\xi $ , which distills an embedding using the output of Eq. 13 .",
"Initially, we looked for solutions to this problem in recurrent neural networks (RNNs) of various forms. Despite their popularity in NLP-related tasks, recent research has found that RNNs are not good at performing “extractive” tasks BIBREF25 . RNNs do not work well in our specific setting because the input of the Target Fusion is a masked content matrix, which means most of the stage inputs would be zero and hence hard to train.",
"In this work we decide to use fully convolutional neural network (FCN) as the target fusion structure. A CNN-based structure is well known for its ability to capture peak values using convolution and pooling. Therefore FCN is well suited to extract useful information from the weighted content matrix. Our adaptation of FCNs yields the target fusion function $\\xi $ , which generates a $k$ -dimensional embedding using the output of content masking $\\tau (\\phi (e),$ $\\psi (r))$ where $e$ is either a head or tail entity from a partial triple.",
"Figure 3 shows the overall architecture of the target fusion process and its dependent content masking process. The target fusion process has three FCN layers. In each layer, we first use two 1-D convolution operators to perform affine transformation, then we apply $sigmoid$ as the activation function to the convoluted output followed by batch normalization BIBREF26 and max-pooling. The last FCN layer uses mean-pooling instead of max-pooling to ensure the output of the target fusion layer always return a single $k$ -dimensional embedding.",
"Note that the FCN used here is different from the one that typically used in computer vision tasks BIBREF27 . Rather than reconstructing the input, as is typical in CV, the goal of target fusion is to extract the embedding w.r.t given relationship, therefore we do not have the de-convolution operations. Another difference is that we reduce the number of embeddings by half after each FCN layer but do not increase the number of channels, i.e., the embedding size. This is because the input weighted matrix is a sparse matrix with a large portion of zero values, so we are essentially fusing peak values from the input matrix into a single embedding representing the target entity."
],
[
"Although it is possible to use target fusion to generate all entity embeddings used in ConMask, such a process would result in a large number of parameters. Furthermore, because the target fusion function is an extraction function it would be odd to apply it to entity names where no extraction is necessary. So, we also employ a simple semantic averaging function $\\eta (\\mathbf {W}) = \\frac{1}{k_{l}}\\Sigma _{i}^{k_{l}}\\mathbf {W}_{[i,:]}$ that combines word embeddings to represent entity names and for generating background representations of other textual features, where $\\mathbf {W} \\in \\mathcal {R}^{k_l\\times k}$ is the input embedding matrix from the entity description $\\phi (\\cdot )$ or the entity or relationship name $\\psi (\\cdot )$ .",
"To recap: at this point in the model we have generated entity embeddings through the content masking and target fusion operations. The next step is to define a loss function that finds one or more entities in the KG that most closely match the generated embedding."
],
[
"To speed up the training and take to advantage of the performance boost associated with a list-wise ranking loss function BIBREF16 , we designed a partial list-wise ranking loss function that has both positive and negative target sampling: ",
"$$\\mathcal {L}(h,r,t)={\\left\\lbrace \\begin{array}{ll}\n\\sum \\limits _{h_+ \\in E^+} -\\frac{\\log (S(h_+, r, t, E^+\\cup E^-))}{|E^+|}, p_c > 0.5\\\\\n\\sum \\limits _{t_+ \\in E^+} -\\frac{\\log (S(h, r, t_+, E^+\\cup E^-))}{|E^+|}, p_c \\le 0.5\\\\\n\\end{array}\\right.},$$ (Eq. 21) ",
"where $p_c$ is the corruption probability drawn from an uniform distribution $U[0,1]$ such that when $p_c > 0.5$ we keep the input tail entity $t$ but do positive and negative sampling on the head entity and when $p_c \\le 0.5$ we keep the input head entity $h$ intact and do sampling on the tail entity. $E^+$ and $E^-$ are the sampled positive and negative entity sets drawn from the positive and negative target distribution $P_+$ and $P_-$ respectively. Although a type-constraint or frequency-based distribution may yield better results, here we follow the convention and simply apply a simple uniform distribution for both $U[0,1]$0 and $U[0,1]$1 . When $U[0,1]$2 , $U[0,1]$3 is a uniform distribution of entities in $U[0,1]$4 and $U[0,1]$5 is an uniform distribution of entities in $U[0,1]$6 . On the other hand when $U[0,1]$7 , $U[0,1]$8 is an uniform distribution of entities in $U[0,1]$9 and $p_c > 0.5$0 is an uniform distribution of entities in $p_c > 0.5$1 . The function $p_c > 0.5$2 in Eq. 21 is the softmax normalized output of ConMask: ",
"$$S(h,r,t,E^\\pm ) = {\\left\\lbrace \\begin{array}{ll}\n\\frac{\\exp (\\textrm {ConMask}(h,r,t))}{\\sum \\limits _{e\\in E^\\pm }\\exp (\\textrm {ConMask}(e, r, t))}, p_c > 0.5 \\\\\n\\frac{\\exp (\\textrm {ConMask}(h,r,t))}{ \\sum \\limits _{e\\in E^\\pm }\\exp (\\textrm {ConMask}(h, r, e))}, p_c \\le 0.5 \\\\\n\\end{array}\\right.}.$$ (Eq. 22) ",
"Note that Eq. 21 is actually a generalized form of the sampling process used by most existing KGC models. When $|E_+|=1$ and $|E_-|=1$ , the sampling method described in Eq. 21 is the same as the triple corruption used by TransE BIBREF6 , TransR BIBREF15 , TransH BIBREF28 , and many other closed-world KGC models. When $|E_+| = |\\lbrace t|\\langle h,r,t\\rangle \\in \\mathbf {T}\\rbrace |$ , which is the number of all true triples given a partial triple $\\langle h$ , $r$ , ? $\\rangle $ , Eq. 21 is the same as ProjE_listwise BIBREF16 ."
],
[
"The previous section described the design decisions and modelling assumptions of ConMask. In this section we present the results of experiments performed on old and new data sets in both open-world and closed-world KGC tasks."
],
[
"Training parameters were set empirically but without fine-tuning. We set the word embedding size $k=200$ , maximum entity content and name length $k_c=k_n=512$ . The word embeddings are from the publicly available pre-trained 200-dimensional GloVe embeddings BIBREF29 . The content masking window size $k_m=6$ , number of FCN layers $k_{fcn}=3$ where each layer has 2 convolutional layers and a BN layer with a moving average decay of $0.9$ followed by a dropout with a keep probability $p=0.5$ . Max-pooling in each FCN layer has a pool size and stride size of 2. The mini-batch size used by ConMask is $k_b=200$ . We use Adam as the optimizer with a learning rate of $10^{-2}$ . The target sampling set sizes for $|E_+|$ and $|E_-|$ are 1 and 4 respectively. All open-world KGC models were run for at most 200 epochs. All compared models used their default parameters.",
"ConMask is implemented in TensorFlow. The source code is available at https://github.com/bxshi/ConMask."
],
[
"The Freebase 15K (FB15k) data set is widely used in KGC. But FB15k is fraught with reversed- or synonym-triples BIBREF30 and does not provide sufficient textual information for content-based KGC methods to use. Due to the limited text content and the redundancy found in the FB15K data set, we introduce two new data sets DBPedia50k and DBPedia500k for both open-world and closed-world KGC tasks. Statistics of all data sets are shown in Tab. 2 .",
"The methodology used to evaluate the open-world and closed-world KGC tasks is similar to the related work. Specifically, we randomly selected $90\\%$ of the entities in the KG and induced a KG subgraph using the selected entities, and from this reduced KG, we further removed $10\\%$ of the relationships, i.e., graph-edges, to create KG $_\\textrm {train}$ . All other triples not included in KG $_\\textrm {train}$ are held out for the test set."
],
[
"For the open-world KGC task, we generated a test set from the $10\\%$ of entities that were held out of KG $_\\textrm {train}$ . This held out set has relationships that connect the test entities to the entities in KG $_\\textrm {train}$ . So, given a held out entity-relationship partial triple (that was not seen during training), our goal is to predict the correct target entity within KG $_\\textrm {train}$ .",
"To mitigate the excessive cost involved in computing scores for all entities in the KG, we applied a target filtering method to all KGC models. Namely, for a given partial triple $\\langle h$ , $r$ , ? $\\rangle $ or $\\langle $ ?, $r$ , $t \\rangle $ , if a target entity candidate has not been connected via relationship $r$ before in the training set, then it is skipped, otherwise we use the KGC model to calculate the actual ranking score. Simply put, this removes relationship-entity combinations that have never before been seen and are likely to represent nonsensical statements. The experiment results are shown in Tab. 1 .",
"As a naive baseline we include the target filtering baseline method in Tab. 1 , which assigns random scores to all the entities that pass the target filtering. Semantic Averaging is a simplified model which uses contextual features only. DKRL is a two-layer CNN model that generates entity embeddings with entity description BIBREF20 . We implemented DKRL ourselves and removed the structural-related features so it can work under open-world KGC settings.",
"We find that the extraction features in ConMask do boost mean rank performance by at least $60\\%$ on both data sets compared to the extraction-free Semantic Averaging. Interestingly, the performance boost on the larger DBPedia500k data set is more significant than the smaller DBPedia50k, which indicates that the extraction features are able to find useful textual information from the entity descriptions."
],
[
"Because the open-world assumption is less restrictive than the closed-world assumption, it is possible for ConMask to perform closed-world tasks, even though it was not designed to do so. So in Tab. 3 we also compare the ConMask model with other closed-world methods on the standard FB15k data set as well as the two new data sets. Results from TransR are missing from the DBPedia500k data set because the model did not complete training after 5 days.",
"We find that ConMask sometimes outperforms closed-world methods on the closed-world task. ConMask especially shows improvement on the DBPedia50k data set; this is probably because the random sampling procedure used to create DBPedia50k generates a sparse graph. closed-world KGC models, which rely exclusively on structural features, have a more difficult time with sub-sampled KGs."
],
[
"In this section we elaborate on some actual prediction results and show examples that highlight the strengths and limitations of the ConMask model.",
"Table 4 shows 4 KGC examples. In each case, ConMask was provided the head and the relationship and asked to predict the tail entity. In most cases ConMask successfully ranks the correct entities within the top-3 results. Gabrielle Stanton's notableWork is an exception. Although Stanton did work on Star Trek, DBPedia indicates that her most notable work is actually The Vampire Diaries, which ranked $4^{\\textrm {th}}$ . The reason for this error is because the indicator word for The Vampire Diaries was “consulting producer”, which was not highly correlated to the relationship name “notable work” from the model's perspective.",
"Another interesting result was the prediction given from the partial triple $\\langle $ The Time Machine, writer, ? $\\rangle $ . The ConMask model ranked the correct screenwriter David Duncan as the $2^{\\textrm {nd}}$ candidate, but the name “David Duncan” does not actually appear in the film's description. Nevertheless, the ConMask model was able to capture the correct relationship because the words “The Time Machine” appeared in the description of David Duncan as one of his major works.",
"Although ConMask outperforms other KGC models on metrics such as Mean Rank and MRR, it still has some limitations and room for improvement. First, due to the nature of the relationship-dependent content masking, some entities with names that are similar to the given relationships, such as the Language-entity in the results of the languageFamily-relationship and the Writer-entity in the results of the writer-relationship, are ranked with a very high score. In most cases the correct target entity will be ranked above relationship-related entities. Yet, these entities still hurt the overall performance. It may be easy to apply a filter to modify the list of predicted target entities so that entities that are same as the relationship will be rearranged. We leave this task as a matter for future work."
],
[
"In the present work we introduced a new open-world Knowledge Graph Completion model ConMask that uses relationship-dependent content masking, fully convolutional neural networks, and semantic averaging to extract relationship-dependent embeddings from the textual features of entities and relationships in KGs. Experiments on both open-world and closed-world KGC tasks show that the ConMask model has good performance in both tasks. Because of problems found in the standard KGC data sets, we also released two new DBPedia data sets for KGC research and development.",
"The ConMask model is an extraction model which currently can only predict relationships if the requisite information is expressed in the entity's description. The goal for future work is to extend ConMask with the ability to find new or implicit relationships."
]
],
"section_name": [
"Introduction",
"Closed-World Knowledge Graph Completion",
"ConMask: A Content Masking Model for Open-World KGC",
"Relationship-Dependent Content Masking",
"Target Fusion",
"Semantic Averaging",
"Loss Function",
"Experiments",
"Settings",
"Data Sets",
"Open-World Entity Prediction",
"Closed-World Entity Prediction",
"Discussion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"ac0fcfdf25c515b998e5011f339d836506eaf468"
],
"answer": [
{
"evidence": [
"In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description. We formally define relationship-dependent content masking as:",
"ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words. From the relevant text, ConMask then uses fully convolutional network (FCN) to extract word-based embeddings. Finally, it compares the extracted embeddings to existing entities in the KG to resolve a ranked list of target entities. The overall structure of ConMask is illustrated in Fig. 1 . Later subsections describe the model in detail."
],
"extractive_spans": [],
"free_form_answer": "The model does not add new relations to the knowledge graph.",
"highlighted_evidence": [
"In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description.",
"ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Can the model add new relations to the knowledge graph, or just new entities?"
],
"question_id": [
"a81941f933907e4eb848f8aa896c78c1157bff20"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction"
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 1: Illustration of the ConMaskmodel for Open-World Knowledge Graph Completion.",
"Figure 2: Relationship-dependent Content Masking heat map for the description of Michelle Obama given relationship type spouse. Stop-words are removed. Higher weights show in darker color.",
"Figure 3: Architecture of the target fusion and relationship-dependent content masking process in ConMask. kc is the length of the entity description and kn is the length of the relationship name. This figure is best viewed in color.",
"Table 1: Open-world Entity prediction results on DBPedia50k and DBPedia500k. For Mean Rank (MR) lower is better. For HITS@10 and Mean Reciprocal Rank (MRR) higher is better.",
"Table 2: Data set statistics.",
"Table 3: Closed-world KGC on head and tail prediction. For HITS@10 higher is better. For Mean Rank (MR) lower is better.",
"Table 4: Entity prediction results on DBPedia50k data set. Top-3 predicted tails are shown with the correct answer in bold."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png"
]
} | [
"Can the model add new relations to the knowledge graph, or just new entities?"
] | [
[
"1711.03438-ConMask: A Content Masking Model for Open-World KGC-7"
]
] | [
"The model does not add new relations to the knowledge graph."
] | 796 |
1909.05890 | Determining the Scale of Impact from Denial-of-Service Attacks in Real Time Using Twitter | Denial of Service (DoS) attacks are common in on-line and mobile services such as Twitter, Facebook and banking. As the scale and frequency of Distributed Denial of Service (DDoS) attacks increase, there is an urgent need for determining the impact of the attack. Two central challenges of the task are to get feedback from a large number of users and to get it in a timely manner. In this paper, we present a weakly-supervised model that does not need annotated data to measure the impact of DoS issues by applying Latent Dirichlet Allocation and symmetric Kullback-Leibler divergence on tweets. The weakly-supervised module has a limitation: it assumes that the event detected in a time window is a DoS attack event. This becomes less of a problem as tweets about more non-attack events are collected, since such events become less likely to be identified as new. Another way to remove that limitation is an optional classification layer, trained on manually annotated DoS attack tweets, that filters out non-attack tweets and can be used to increase precision at the expense of recall. Experimental results show that we can learn weakly-supervised models that can achieve comparable precision to supervised ones and can be generalized across entities in the same industry. | {
"paragraphs": [
[
"Denial of Service attacks are explicit attempts to stop legitimate users from accessing specific network systems BIBREF0. Attackers try to exhaust network resources like bandwidth, or server resources like CPU and memory. As a result, the targeted system slows down or becomes unusable BIBREF1. On-line service providers like Bank Of America, Facebook and Reddit are often the target of such attacks and the frequency and scale of those attacks has increased rapidly in recent years BIBREF2.",
"To address this problem, there is ample previous work on methods to detect and handle Denial of Service attacks, especially Distributed Denial of Service attacks. D-WARD BIBREF3 is a scheme that tries to locate a DDoS attacks at the source by monitoring inbound and outbound traffic of a network and comparing it with predefined \"normal\" values. Some IP Traceback mechanisms BIBREF4 were developed to trace back to the attack source from the victim's end. Still other methods try to deploy a defensive scheme in an entire network to detect and respond to an attack at intermediate sub-networks. Watchers BIBREF5 is an example of this approach.",
"Despite all the new models and techniques to prevent or handle cyber attacks, DDoS attacks keep evolving. Services are still being attacked frequently and brought down from time to time. After a service is disrupted, it is crucial for the provider to assess the scale of the outage impact.",
"In this paper, we present a novel approach to solve this problem. No matter how complex the network becomes or what methods the attackers use, a denial of service attack always results in legitimate users being unable to access the network system or slowing down their access and they are usually willing to reveal this information on social media plaforms. Thus legitimate user feedback can be a reliable indicator about the severity level of the service outage. Thus we split this problem into two parts namely by first isolating the tweet stream that is likely related to a DoS attack and then measuring the impact of attack by analyzing the extracted tweets.",
"A central challenge to measure the impact is how to figure out the scale of the effect on users as soon as possible so that appropriate action can be taken. Another difficulty is given the huge number of users of a service, how to effectively get and process the user feedback. With the development of Social Networks, especially micro blogs like Twitter, users post many life events in real time which can help with generating a fast response. Another advantage of social networks is that they are widely used. Twitter claims that they had 313 million monthly active users in the second quarter of 2016 BIBREF6. This characteristic will enlarge the scope of detection and is extremely helpful when dealing with cross domain attacks because tweets from multiple places can be leveraged. The large number of users of social networks will also guarantee the sensitivity of the model. However, because of the large number of users, a huge quantity of tweets will be generated in a short time, making it difficult to manually annotate the tweets, which makes unsupervised or weakly-supervised models much more desirable.",
"In the Twitter data that we collected there are three kinds of tweets. Firstly are tweets that are actually about a cyberattack. For example, someone tweeted \"Can't sign into my account for bank of America after hackers infiltrated some accounts.\" on September 19, 2012 when a attack on the website happened. Secondly are tweets about some random complaints about an entity like \"Death to Bank of America!!!! RIP my Hello Kitty card... \" which also appeared on that day. Lastly are tweets about other things related to the bank. For example, another tweet on the same day is \"Should iget an account with bank of america or welsfargo?\".",
"To find out the scale of impact from an attack, we must first pick out the tweets that are about the attack. Then using the ratio and number of attack tweets, an estimation of severity can be generated. To solve the problem of detecting Denial of Service attacks from tweets, we constructed a weakly-supervised Natural Language Processing (NLP) based model to process the feeds. More generally, this is a new event detection model. We hypothesize that new topics are attack topics. The hypothesis would not always hold and this issue will be handled by a later module. The first step of the model is to detect topics in one time window of the tweets using Latent Dirichlet Allocation BIBREF7. Then, in order to get a score for each of the topics, the topics in the current time window are compared with the topics in the previous time window using Symmetric Kullback-Leibler Divergence (KL Divergence) BIBREF8. After that, a score for each tweet in the time window is computed using the distribution of topics for the tweet and the score of the topics. We're looking for tweets on new topics through time. While the experiments show promising results, precision can be further increased by adding a layer of a supervised classifier trained with attack data at the expense of recall.",
"Following are the contributions in this paper:",
"A dataset of annotated tweets extracted from Twitter during DoS attacks on a variety organizations from differing domains such as banking (like Bank Of America) and technology.",
"A weakly-supervised approach to identifying detect likely DoS service related events on twitter in real-time.",
"A score to measure impact of the DoS attack based on the frequency of user complaints about the event.",
"The rest of this paper is organized as follows: In section 2, previous work regarding DDoS attack detection and new event detection will be discussed. In section 3, we describe the how the data was collected. We also present the model we created to estimate the impact of DDoS attacks from Twitter feeds. In section 4, the experiments are described and the results are provided. In section 5 we discuss some additional questions. Finally, section 6 concludes our paper and describes future work."
],
[
"Denial of Service (DoS) attacks are a major threat to Internet security, and detecting them has been a core task of the security community for more than a decade. There exists significant amount of prior work in this domain. BIBREF9, BIBREF10, BIBREF11 all introduced different methods to tackle this problem. The major difference between this work and previous ones are that instead of working on the data of the network itself, we use the reactions of users on social networks to identify an intrusion.",
"Due to the widespread use of social networks, they have become an important platform for real-world event detection in recent years BIBREF12. BIBREF13 defined the task of new event detection as \"identifying the first story on topics of interest through constantly monitoring news streams\". Atefeh et al. BIBREF14 provided a comprehensive overview of event detection methods that have been applied to twitter data. We will discuss some of the approaches that are closely related to our work. Weng et al. BIBREF15 used a wavelet-signal clustering method to build a signal for individual words in the tweets that was dependent high frequency words that repeated themselves. The signals were clustered to detect events. Sankaranarayanan et al. BIBREF16 presented an unsupervised news detection method based on naive Bayes classifiers and on-line clustering. BIBREF17 described an unsupervised method to detect general new event detection using Hierarchical divisive clustering. Phuvipadawat et al. BIBREF18 discussed a pipeline to collect, cluster, rank tweets and ultimately track events. They computed the similarity between tweets using TF-IDF. The Stanford Named Entity Recognizer was used to identify nouns in the tweets providing additional features while computing the TF-IDF score. Petrović et al. BIBREF19 tried to detect events on a large web corpus by applying a modified locality sensitive hashing technique and clustering documents (tweets) together. Benson et al. BIBREF20 created a graphical model that learned a latent representation for twitter messages, ultimately generating a canonical value for each event. Tweet-scan BIBREF21 was a method to detect events in a specific geo-location. After extracting features such as name, time and location from the tweet, the method used DB-SCAN to cluster the tweets and Hierarchical Dirichlet Process to model the topics in the tweets. Badjatiya et. al. BIBREF22 applied deep neural networks to detect events. They showed different architectures such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (LSTM based) and FastText outperform standard n-gram and TF-IDF models. Burel et al. BIBREF23 created a Dual-CNN that had an additional channel to model the named entities in tweets apart from the pretrained word vectors from GloVe BIBREF24 or Word2Vec BIBREF25.",
"Thus most event detection models can be grouped into three main categories of methods i.e. TF-IDF based methods, approaches that model topics in tweets and deep neural network based algorithms. One of the main challenges against applying a neural network model is the the requirement of a large annotated corpus of tweets. Our corpus of tweets is comparatively small. Hence we build our pipeline by modeling the topics learned from tweets.",
"The previous work that is most similar to ours was BIBREF26. We both used Latent Dirichlet Allocation (LDA) to get the topics of the document, the difference was they only run LDA on the hash-tag of the tweets while we try to get the topics in the tweets by running it on the whole document.",
"Latent Dirichlet Allocation BIBREF7 was a method to get topics from a corpus. In our work, we used the technique to acquire the values of some of the variables in our equation. A variation of it, Hierarchically Supervised Latent Dirichlet Allocation BIBREF27 was used in the evaluation."
],
[
"Figure FIGREF4 outlines the entire pipeline of the model from preprocessing tweets to modeling them and finally detecting / ranking future tweets that are related to a DoS issue and measuring its severity."
],
[
"To collect the tweets, we first gathered a list of big DDoS attacks happened from 2012 to 2014. Then for each attack on the list, we collected all the tweets from one week before the attack to the attack day that contains the name of the entity attacked."
],
[
"The following preprocessing procedure were applied to the corpus of tweets:",
"Remove all the meta-data like time stamp, author, and so on. These meta-data could provide useful information, but only the content of the tweet was used for now.",
"Lowercase all the text",
"Use an English stop word list to filter out stop words.",
"The last two steps are commonly used technique when preprocessing text."
],
[
"Now we try to find out a quantitative representation of the corpus. To do that, the preprocessed tweets about one attack will be divided into two groups. One is on the attack day and the other is the tweets one week before it. The first set will be called $D_a$ and the other one $D_b$. This step will create two separate LDA models for $D_a$ and $D_b$ using the Genism library BIBREF28. The first Model will be called $M_a$ and the other one $M_b$.",
"Latent Dirichlet allocation (LDA) is a generative probabilistic topic modeling model. Figure FIGREF11 is its plate notation. The meaning of different parameters $M$, $N$, $\\alpha $, $\\beta $, $\\theta $, $z$ and $w$ is also described there.",
"We used the LDA algorithm implemented by the Gensim library. One of the most important parameters of the LDA algorithm is the number of topics $N_t$ in the corpus. To determine that we introduced the following formula:",
"where $N_d$ is the number of tweets in the corpus. $\\alpha $ is a constant and we used $\\alpha $=10 in our experiments. The logic behind the equation is discussed in section 5."
],
[
"Then we want to find out how the new topics are different from the history topics or, in other words, how topics in $M_a$ differ from topics in $M_b$. We define the Symmetric Kullback-Leibler divergence for topic $T_j$ in Model $M_a$ as:",
"Where n is the number of topics in Model $M_b$, $T_m^{^{\\prime }}$ is the $m^{th}$ topic in Model $M_b$ and $D_kl (X,Y)$ is the original Kullback-Leibler Divergence for discrete probability distributions which defined as :",
"Where $X(i)$ and $Y(i)$ are the probability of token $i$ in topics $X$ and $Y$ respectively. This is similar to the Jensen-Shannon divergence.",
"So for each topic $T_j$ in Model $M_a$ its difference to topics in $M_b$ is determined by its most similar topic in $M_b$.",
"The topics from the attack day model $M_a$ are ranked by their Symmetric Kullback-Leibler divergence to topics from the non-attack day model $M_b$. An example of selected attack topics is provided in section 4.3."
],
[
"This subsection is about how to find specific tweets that are about a network attack. The tweets are selected based on the relative score $S$. The score for tweet $t_i$ is defined as:",
"Where $n$ is the number of topics on the attack day, $P_{i,j}$ is the probability that topic $j$ appears in tweet $t_i$ in the attack day LDA model, and $SKL_j$ is the Symmetric Kullback-Leibler divergence for topic $j$. The higher the score the more likely it is related to an attack event."
],
[
"Because annotated data is not needed, the model we described before can be regarded as a weakly-supervised model to detect new events on twitter in a given time period. To label tweets as attack tweets, one assumption must be true, which is that the new event in that time period is a cyber attack. Unfortunately, that is usually not true. Thus, an optional classifier layer can be used to prevent false positives.",
"By using a decision tree model we want to find out whether the weakly-supervised part of the model can simplify the problem enough that a simple classification algorithm like a decision tree can have a good result. Additionally, it is easy to find out the reasoning underline a decision tree model so that we will know what the most important features are.",
"The decision tree classifier is trained on the bag of words of collected tweets and the labels are manually annotated. We limit the minimum samples in each leaf to be no less than 4 so that the tree won't overfit. Other than that, a standard Classification and Regression Tree (CART) BIBREF29 implemented by scikit-learn BIBREF30 was used. The classifier was only trained on the training set (tweets about Bank of America on 09/19/2012), so that the test results do not overestimate accuracy."
],
[
"The definition of severity varies from different network services and should be studied case by case.",
"For the sake of completeness, we propose this general formula:",
"In the equation above, $\\beta $ is a parameter from 0 to 1 which determines the weight of the two parts. $N_{attack}$ is the number of attack tweets found. $N_{all}$ means the number of all tweets collected in the time period. And $N_{user}$ is the number of twitter followers of the network service.",
"An interesting future work is to find out the quantitative relation between SeverityLevel score and the size of the actual DDoS attack."
],
[
"In this section we experimentally study the proposed attack tweet detection models and report the evaluation results."
],
[
"We used precision and recall for evaluation:",
"Precision: Out of all of the tweets that are marked as attack tweets, the percentage of tweets that are actually attack tweets. Or true positive over true positive plus false positive.",
"Recall: Out of all of the actual attack tweets, the percentage of tweets that are labeled as attack tweets. Or true positive over true positive plus false negative."
],
[
"We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Then the collected tweets were preprocessed as mentioned in the preprocessing section.",
"The following attacks were used in the dataset:",
"Bank of America attack on 09/19/2012.",
"Wells Fargo Bank attack on 09/19/2012.",
"Wells Fargo Bank attack on 09/25/2012.",
"PNC Bank attack on 09/19/2012.",
"PNC Bank attack on 09/26/2012."
],
[
"Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models mentioned in the approach section.",
"The top, bottom 4 attack topics and their top 10 words are shown in table 1 and 2.",
"As shown in table 1, there are roughly 4 kinds of words in the attack topics. First is the name of the entity we are watching. In this case, it is Bank of America. Those words are in every tweet, so they get very high weight in the topics, while not providing useful information. Those words can be safely discarded or added to the stop word list. The second type of words are general cybersecurity words like website, outage, hackers, slowdown and so on. Those words have the potential to become an indicator. When topics with those words appears, it is likely that there exists an attack. The third kind are words related to the specific attack but not attacks in general. Those words can provide details about the attack, but it is hard to identify them without reading the full tweets. In our example, the words movie and sacrilegious are in this group. That is because the DDoS attack on Bank of America was in response to the release of a controversial sacrilegious film. The remaining words are non-related words. The higher the weights of them in a topic, the less likely the topic is actually about a DDoS attack.",
"The results showed that except the 3rd topic, the top 4 topics have high weight on related words and the number of the forth type of words are smaller than the first three types of words. There are no high weight words related to security in the bottom 4 topics. We can say that the high SKL topics are about cyber attacks."
],
[
"In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in section 3.3, the whole dataset was divided into two parts. $D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack.",
"The 5 tweets that have the highest relative score in the dataset are:",
"jiwa mines and miner u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp... http://bit.ly/p5xpmz",
"u.s. bancorp, pnc latest bank websites to face access issues: (reuters) - some u.s. bancorp and pnc financial...",
"@pncvwallet nothing pnc sucks fat d ur lucky there's 3 pnc's around me or your bitchassness wouldnt have my money",
"business us bancorp, pnc latest bank websites to face access issues - reuters news",
"forex business u.s. bancorp, pnc latest bank websites to face access issues http://dlvr.it/2d9ths",
"The precision when labeling the first x ranked tweets as attack tweet is shown in the figure FIGREF39. The x-axis is the number of ranked tweets treated as attack tweets. And the y-axis is the corresponding precision. The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. Supervised LDA achieved 96.44 percent precision with 10 fold cross validation.",
"The result shows that if the model is set to be more cautious about labeling a tweet as an attack tweet, a small x value, higher precision, even comparable to supervised model can be achieved. However as the x value increases the precision drops eventually.",
"Figure FIGREF40 shows the recall of the same setting. We can find out that the recall increases as the model becomes more bold, at the expense of precision.",
"Figure FIGREF41 is the detection error trade-off graph to show the relation between precision and recall more clearly (missed detection rate is the precision)."
],
[
"In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classification on PNC and Wells Fargo data.",
"Figures FIGREF43 and FIGREF44 will show the precision and recall of the model in this experiment setting. A detection error trade-off graph (Figure FIGREF45) is also provided.",
"The result is similar to the whole dataset setting from the previous section. The smaller the x value is, the higher the precision and lower the recall, vice versa. The precision is also comparable to the supervised model when a small x is chosen. This shows that the model generalized well."
],
[
"Using the result from last section, we choose to label the first 40 tweets as attack tweets. The number 40 can be decided by either the number of tweets labeled as attack tweets by the decision tree classifier or the number of tweets that have a relative score S higher than a threshold. The PNC and Wells Fargo bank have 308.3k followers combined as of July 2018. According to eqution (5) from section 3.6, the severity Level can be computed.",
"The score would have a range from 6.78 * $10^{-2}$ to 1.30 * $10^{-3}$, depending on the value of $\\beta $. This means that it could be a fairly important event because more than six percent of tweets mentioning the banks are talking about the DDoS attack. However it could also be a minor attack because only a tiny portion of the people following those banks are complaining about the outage. The value of $\\beta $ should depend on the provider's own definition of severity."
],
[
"This model has two parameters that need to be provided. One is $\\alpha $ which is needed to determine the number of topics parameter $N_t$, and the other is whether to use the optional decision tree filter.",
"Figures FIGREF49 and FIGREF50 provide experimental results on the model with different combinations of parameters. We selected four combinations that have the best and worst performance. All of the results can be found in appendix. The model was trained on Bank of America tweets and tested on PNC and Wells Fargo tweets like in section 4.5. In the figure, different lines have different values of $\\alpha $ which ranges from 5 to 14 and the x axis is the number of ranked tweets labeled as attack tweets which have a range of 1 to 100 and the y-axis is the precision or recall of the algorithm and should be a number from 0 to 1.",
"The results shows the decision tree layer increases precision at the cost of recall. The model's performance differs greatly with different $\\alpha $ values while there lacks a good way to find the optimal one."
],
[
"In this section, we will discuss two questions.",
"Firstly, we want to briefly discuss how good humans do on this task. What we find out is though humans perform well on most of the tweets, some tweets have proven to be challenging without additional information. In this experiment, we asked 18 members of our lab to classify 34 tweets picked from human annotated ones. There are only two tweets which all the 18 answers agree with each other. And there are two tweets that got exactly the same number of votes on both sides. The two tweets are \"if these shoes get sold out before i can purchase them, i'ma be so mad that i might just switch banks! @bankofamerica fix yourself!\" and \"nothing's for sure, but if i were a pnc accountholder, i'd get my online banking business done today: http://lat.ms/uv3qlo\".",
"The second question we want to talk about is how to find out the optimal number of topics in each of the two LDA models. As shown in the parameter tuning section, the number of topics parameter greatly affects the performance of the model. We've tried several ways to figure out the number of topics. First a set number of topics for different corpora. We tried 30 different topic numbers on the Bank of America dataset and chose the best one, and then tested it on the PNC data. The result shows that this method does not perform well on different datasets. We think it is because the number of topics should be a function of the number of documents or number of words in the corpus. Then we tried to let the model itself determines the parameter. There are some LDA variations that can do automatic number of topic inference. The one we chose is the Hierarchical Dirichlet Process (HDP) mixture model, which is a nonparametric Bayesian approach to clustering grouped data and a natural nonparametric generalization of Latent Dirichlet Allocation BIBREF31. However it does not perform very well. Its precision is shown in figure FIGREF51 and recall is shown in figure FIGREF52.",
"We think the reason for this kind of performance might be that tweets, with the restriction of 140 characters, have very different properties than usual documents like news or articles. The last method is what was proposed in this paper. An $\\alpha $ equals 10 is what we chose and did a good job on the experiments. But it is only an empirical result."
],
[
"In this paper, we proposed a novel weakly-supervised model with optional supervised classifier layer to determine the impact of a Denial-of-Service attack in real time using twitter. The approach computes an anomaly score based on the distribution of new topics and their KL divergence to the historical topics. Then we tested the model on same and different entities to check the model's performance and how well it generalize. Our experiment result showed that the model achieved decent result on finding out tweets related to a DDoS attack even comparable to a supervised model baseline. And it could generalize to different entities within the same domain. Using the attack tweets, we could get an estimation of the impact of the attack with a proposed formula.",
"There remain some interesting open questions for future research. For example, it is important to figure out a way to find out the optimal number of topics in the dataset. We would also be interested to see how well this model will perform on other kind of event detection task if the optional classifier layer changes accordingly."
],
[
"Figures FIGREF53 and FIGREF54 provide all of the experimental results on the model with different combinations of parameters."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Data Collection",
"Approach ::: Preprocessing",
"Approach ::: Create LDA Models",
"Approach ::: The attack topics",
"Approach ::: The attack tweets",
"Approach ::: Optional Classifier Layer",
"Approach ::: Measure the Severity",
"Experiments",
"Experiments ::: Term Definition",
"Experiments ::: Experiment Dataset",
"Experiments ::: The Attack Topics",
"Experiments ::: The Attack Tweets",
"Experiments ::: Generalization",
"Experiments ::: Impact Estimation",
"Experiments ::: Parameter Tuning",
"Discussion",
"Conclusion",
"Additional Result for Parameter Tuning"
]
} | {
"answers": [
{
"annotation_id": [
"cfb64aff2c8d3f39e15691725dd954036154698a"
],
"answer": [
{
"evidence": [
"In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in section 3.3, the whole dataset was divided into two parts. $D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack."
],
"extractive_spans": [],
"free_form_answer": "The dataset contains about 590 tweets about DDos attacks.",
"highlighted_evidence": [
"$D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"b6ed3b495b38a4dd41019934cb1eca370f8c9dce"
],
"answer": [
{
"evidence": [
"We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Then the collected tweets were preprocessed as mentioned in the preprocessing section.",
"Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models mentioned in the approach section.",
"In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classification on PNC and Wells Fargo data."
],
"extractive_spans": [],
"free_form_answer": "Tweets related to a Bank of America DDos attack were used as training data. The test datasets contain tweets related to attacks to Bank of America, PNC and Wells Fargo.",
"highlighted_evidence": [
"We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset.",
"Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment.",
"In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"d65ae274777ddbc4d22e60663888efff5f728b7f"
],
"answer": [
{
"evidence": [
"The precision when labeling the first x ranked tweets as attack tweet is shown in the figure FIGREF39. The x-axis is the number of ranked tweets treated as attack tweets. And the y-axis is the corresponding precision. The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. Supervised LDA achieved 96.44 percent precision with 10 fold cross validation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"Do twitter users tend to tweet about the DOS attack when it occurs? How much data supports this assumption?",
"What is the training and test data used?",
"Was performance of the weakly-supervised model compared to the performance of a supervised model?"
],
"question_id": [
"68ff2a14e6f0e115ef12c213cf852a35a4d73863",
"0b54032508c96ff3320c3db613aeb25d42d00490",
"86be8241737dd8f7b656a3af2cd17c8d54bf1553"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Workflow to process tweets gathered and build a model to rank future tweets that likely to be related to a DoS attack. The ranked tweets are used to measure the severity of the attack.",
"Figure 2: Plate notation of LDA [4]. The outer box denotes documents in the corpus andM is the number of documents. The inner box denotes the repeated choice of topics and wordswithin a documentwhereN is the number ofwords in a document. α is the parameter of the Dirichlet prior on the per-document topic distributions. β is the parameter of the Dirichlet prior on the per-topic word distribution. θ is the topic distribution. z is the topic of wordw in the document.",
"Table 1: Top 4 Attack topics from the Bank of America data with their Symmetric Kullback-Leibler divergence",
"Table 2: Bottom 4 Attack topics from the Bank of America data with their Symmetric Kullback-Leibler divergence",
"Figure 3: Precision, positive predictive value, of the model when labeling the first x ranked tweets as attack tweet using all of the tweets collected. The straight line is the result of a supervised LDA model as a baseline.",
"Figure 5: Detection error trade-off graph when labeling the different number of ranked tweets as attack tweet using all of the tweets collected.",
"Figure 4: Recall, true positive rate, of the model when labeling the first x ranked tweets as attack tweet using all of the tweets collected.",
"Figure 6: Precision, positive predictive value, of the model when labeling the first x ranked tweets as attack tweet. The model was trained on Bank of America data and tested on PNC and Wells Fargo data. The straight line is the result of a supervised LDA model as a baseline.",
"Figure 7: Recall, true positive rate, of the model when labeling the first x ranked tweets as attack tweet. The model was trained on Bank of America data and tested on PNC and Wells Fargo data.",
"Figure 8: Detection error trade-off graph when labeling the different number of ranked tweets as attack tweet. The model was trained on Bank of America data and tested on PNC and Wells Fargo data.",
"Figure 9: Selected precision, positive predictive value, of the models with different parameter combinations. α is a parameter used to find out number of topics in the corpus. The model was trained on Bank of America data and tested on PNC andWells Fargo data.",
"Figure 10: Selected recall, true positive rate, of the models with different parameter combinations. α is a parameter used to find out number of topics in the corpus. The model was trained on Bank of America data and tested on PNC and Wells Fargo data.",
"Figure 11: Precision, positive predictive value, of the Hierarchical Dirichlet process model when labeling the first x ranked tweets as attack tweet using all of the tweets collected. The straight line is the result of a supervised LDA model as a baseline.",
"Figure 12: Recall, true positive rate, of the Hierarchical Dirichlet process model when labeling the first x ranked tweets as attack tweet using all of the tweets collected.",
"Figure 13: Precision, positive predictive value, of the models with different parameter combinations. α is a parameter used to find out number of topics in the corpus. The model was trained on Bank of America data and tested on PNC and Wells Fargo data.",
"Figure 14: Recall, true positive rate, of the models with different parameter combinations. α is a parameter used to find out number of topics in the corpus. The model was trained on Bank of America data and tested on PNC and Wells Fargo data."
],
"file": [
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure3-1.png",
"6-Figure5-1.png",
"6-Figure4-1.png",
"6-Figure6-1.png",
"7-Figure7-1.png",
"7-Figure8-1.png",
"8-Figure9-1.png",
"8-Figure10-1.png",
"9-Figure11-1.png",
"9-Figure12-1.png",
"10-Figure13-1.png",
"10-Figure14-1.png"
]
} | [
"Do twitter users tend to tweet about the DOS attack when it occurs? How much data supports this assumption?",
"What is the training and test data used?"
] | [
[
"1909.05890-Experiments ::: The Attack Tweets-0"
],
[
"1909.05890-Experiments ::: The Attack Topics-0",
"1909.05890-Experiments ::: Generalization-0",
"1909.05890-Experiments ::: Experiment Dataset-0"
]
] | [
"The dataset contains about 590 tweets about DDos attacks.",
"Tweets related to a Bank of America DDos attack were used as training data. The test datasets contain tweets related to attacks to Bank of America, PNC and Wells Fargo."
] | 801 |
1909.01247 | Introducing RONEC -- the Romanian Named Entity Corpus | We present RONEC - the Named Entity Corpus for the Romanian language. The corpus contains over 26000 entities in ~5000 annotated sentences, belonging to 16 distinct classes. The sentences have been extracted from a copyright-free newspaper, covering several styles. This corpus represents the first initiative in the Romanian language space specifically targeted for named entity recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free to use and extend at github.com/dumitrescustefan/ronec. | {
"paragraphs": [
[
"Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allows our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted.",
"We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text.",
"A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews.",
"We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free.",
"The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the \"fragmentary support\" category, just above the last, \"weak/none\" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian."
],
[
"We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities:"
],
[
"ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations)."
],
[
"Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors."
],
[
"The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated.",
"In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus."
],
[
"The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL.",
"It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples).",
"The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters.",
"The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18.",
"The corpus is available in two formats: BRAT and CoNLL-U Plus."
],
[
"As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus.",
"Example (raw/untokenized) sentences:",
"Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24.",
"I s-a decernat Premiul Nobel pentru literatură pe anul 1959.",
"Example annotation format:",
"T1 ORDINAL 21 26 a 2-a",
"T2 ORGANIZATION 50 63 Vardar Skopje",
"T3 ORGANIZATION 66 82 S.C. Pick Szeged",
"T4 NUMERIC_VALUE 116 118 24",
"T5 NUMERIC_VALUE 121 123 24",
"T6 DATETIME 175 184 anul 1959"
],
[
"The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. The CoNLL-U format annotates one word per line with 10 distinct \"columns\" (tab separated):",
"nolistsep",
"ID: word index;",
"FORM: unmodified word from the sentence;",
"LEMMA: the word's lemma;",
"UPOS: Universal part-of-speech tag;",
"XPOS: Language-specific part-of-speech tag;",
"FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension;",
"HEAD: Head of the current word, which is either a value of ID or zero;",
"DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one;",
"DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs;",
"MISC: Miscellaneous annotations such as space after word.",
"The CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus another extra column named RONEC:CLASS. This column has the following format: nolistsep",
"[noitemsep]",
"each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes)",
"the first word belonging to an entity also contains its class (e.g. word \"John\" in entity \"John Smith\" will be marked as \"1:PERSON\")",
"a non-entity word is marked with an asterisk *",
"Table TABREF37 shows the CoNLL-U Plus format where for example \"a 2-a\" is an ORDINAL entity spanning 3 words. The first word \"a\" is marked in this last column as \"1:ORDINAL\" while the following words just with the id \"1\".",
"The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10; (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11."
],
[
"For the English language, we found two \"categories\" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18.",
"In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading."
],
[
"Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in \"an individual\" we don't mark \"an\"). Positions are not marked unless they directly refer to the person: \"The presidential counselor has advised ... that a new counselor position is open.\", here we mark \"presidential counselor\" because it refers to a person and not the \"counselor\" at the end of the sentence as it refers only to a position.",
"",
"Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani.",
"green!55!blueThe second place was won by Otilia Aionesei, a 17 year old student.",
"",
"Ministrul bulgar pentru afaceri europene, Meglena Kuneva ...",
"green!55!blueThe Bulgarian Minister for European Affairs, Meglena Kuneva ...",
""
],
[
"These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REl_POL are adjectives.",
"",
"avionul american",
"green!55!bluethe American airplane",
"",
"Grupul olandez",
"green!55!bluethe Dutch group",
"",
"Grecii iși vor alege președintele.",
"green!55!blueThe Greeks will elect their president.",
""
],
[
"Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures.",
"",
"Universitatea Politehnica București a decis ...",
"green!55!blueThe Politehnic University of Bucharest has decided ...",
"",
"Adobe Inc. a lansat un nou produs.",
"green!55!blueAdobe Inc. has launched a new product.",
""
],
[
"Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city).",
"",
"Armin van Buuren s-a născut în Leiden.",
"green!55!blueArmin van Buuren was born in Leiden.",
"",
"U.S.A. ramane indiferentă amenințărilor Coreei de Nord.",
"green!55!blueU.S.A. remains indifferent to North Korea's threats.",
""
],
[
"Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, \"continents\" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs.",
"",
"Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm.",
"green!55!blueOn DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm.",
"",
"Produsele comercializate în Europa de Est au o calitate inferioară celor din vest.",
"green!55!blueProducts sold in East Europe have a lower quality than those sold in the west.",
""
],
[
"Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or \"micro\"-structures such as an apartment (as it a unit of an apartment building). However, larger, named functional structures can still be marked (such as \"terminal X\" of an airport).",
"",
"Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările.",
"green!55!blueRepairs on one lane have commenced on the A2 highway, while on A1 no works have started yet.",
"",
"Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal.",
"green!55!blueHenri Coandă Airport could be extended with a new terminal.",
""
],
[
"Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product.",
"",
"Mașina cumpărată este o Mazda.",
"green!55!blueThe bought car is a Mazda.",
"",
"S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo.",
"green!55!blue5 Ford Taurus and 2 Volvo buses have been acquired.",
""
],
[
"Named events: Storms (e.g.:\"Sandy\"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs), matches (e.g. \"Steaua-Rapid\" will be marked as two separate ORGs even if they refer to a football match between the two teams, but the match is not specific). Events have to be significant, with at least national impact, not local.",
"",
"Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale.",
"green!55!blueThe Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict.",
""
],
[
"This class represents all languages.",
"",
"Românii din România vorbesc română.",
"green!55!blueRomanians from Romania speak Romanian.",
"",
"În Moldova se vorbește rusa și româna.",
"green!55!blueIn Moldavia they speak Russian and Romanian.",
""
],
[
"Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws.",
"",
"Accesul la Mona Lisa a fost temporar interzis vizitatorilor.",
"green!55!blueAccess to Mona Lisa was temporarily forbidden to visitors.",
"",
"În această seară la Vrei sa Fii Miliardar vom avea un invitat special.",
"green!55!blueThis evening in Who Wants To Be A Millionaire we will have a special guest.",
""
],
[
"Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. \"between 20-22 hours\") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: \"last summer\", \"September\", \"Wednesday\", \"three days\"); Ages are marked as DATETIME as well. Prepositions are not included.",
"",
"Te rog să vii aici în cel mult o oră, nu mâine sau poimâine.",
"green!55!bluePlease come here in one hour at most, not tomorrow or the next day.",
"",
"Actul s-a semnat la orele 16.",
"green!55!blueThe paper was signed at 16 hours.",
"",
"August este o lună secetoasă.",
"green!55!blueAugust is a dry month.",
"",
"Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent.",
"green!55!blueOn the 20th of March, between 20-22 hours, electricity will be cut-off.",
""
],
[
"Periods/time intervals. Periods have to be very well marked in text. If a period is not like \"a-b\" then it is a DATETIME.",
"",
"Spectacolul are loc între 1 și 3 Aprilie.",
"green!55!blueThe show takes place between 1 and 3 April.",
"",
"În prima jumătate a lunii iunie va avea loc evenimentul de două zile.",
"green!55!blueIn the first half of June the two-day event will take place.",
""
],
[
"Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as \"an amount of money\", \"he received a coin\".",
"",
"Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR.",
"green!55!blueThe mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR.",
""
],
[
"Measurements, such as weight, distance, etc. Any type of quantity belongs in this class.",
"",
"Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate.",
"green!55!blueThe car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city.",
""
],
[
"Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL.",
"",
"Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%.",
"green!55!blueThe XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%.",
""
],
[
"The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, \"second grade\" does not involve a direct order relation; it indicates just a succession in grades in a school system.",
"",
"Primul loc a fost ocupat de echipa Germaniei.",
"green!55!blueThe first place was won by Germany's team.",
"",
"",
"",
"The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps:",
"nolistsep",
"Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes.",
"We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class&start-stop), then it would go unchanged; otherwise mark the entity (longest span) as CONFLICTED.",
"Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA has risen to over 85%.",
"Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed.",
"Finally, one of the authors went through the full corpus one more time, correcting disagreements.",
"We would like to make a few notes regarding classes and inter-annotator agreements:",
"nolistsep",
"[noitemsep]",
"Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes.",
"The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME.",
"WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event.",
"MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this \"specificity\" has created some confusion between these classes, just like with DATETIME and PERIOD.",
"The ORDINAL class is a bit ambiguous, because, even though it ranks \"higher\" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns.",
"PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well \"documented\" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted.",
"Finally, we would like to address the \"semantic scope\" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns \"she\", job position titles, common nouns such as \"father\", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs."
],
[
"We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copy-right free text, and is released as open-source, free to use and extend.",
"We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U PLus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion script to and from the two formats so they could easily be exported between.",
"Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8."
]
],
"section_name": [
"Introduction",
"Introduction ::: Related corpora",
"Introduction ::: Related corpora ::: ROCO corpus",
"Introduction ::: Related corpora ::: ROMBAC corpus",
"Introduction ::: Related corpora ::: CoRoLa corpus",
"Corpus Description",
"Corpus Description ::: BRAT format",
"Corpus Description ::: CoNLL-U Plus format",
"Classes and Annotation Methodology",
"Classes and Annotation Methodology ::: PERSON",
"Classes and Annotation Methodology ::: NAT_REL_POL",
"Classes and Annotation Methodology ::: ORGANIZATION",
"Classes and Annotation Methodology ::: GPE",
"Classes and Annotation Methodology ::: LOC",
"Classes and Annotation Methodology ::: FACILITY",
"Classes and Annotation Methodology ::: PRODUCT",
"Classes and Annotation Methodology ::: EVENT",
"Classes and Annotation Methodology ::: LANGUAGE",
"Classes and Annotation Methodology ::: WORK_OF_ART",
"Classes and Annotation Methodology ::: DATETIME",
"Classes and Annotation Methodology ::: PERIOD",
"Classes and Annotation Methodology ::: MONEY",
"Classes and Annotation Methodology ::: QUANTITY",
"Classes and Annotation Methodology ::: NUMERIC_VALUE",
"Classes and Annotation Methodology ::: ORDINAL",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"bd2ff7ba5e4d527d3df008764eb2e03b5d8b51d8"
],
"answer": [
{
"evidence": [
"The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"b76cedd3b51419da98379892e6a068a09a0bbd6e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Stylistic domains and examples (bold marks annotated entities)"
],
"extractive_spans": [],
"free_form_answer": "current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Stylistic domains and examples (bold marks annotated entities)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"bd57487bb0e88b1d8e5aae234b1ee72a13404f7d"
],
"answer": [
{
"evidence": [
"The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18."
],
"extractive_spans": [
"inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Did they experiment with the corpus?",
"What writing styles are present in the corpus?",
"How did they determine the distinct classes?"
],
"question_id": [
"bb169a0624aefe66d3b4b1116bbd152d54f9e31b",
"0d7de323fd191a793858386d7eb8692cc924b432",
"ca8e023d142d89557714d67739e1df54d7e5ce4b"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Stylistic domains and examples (bold marks annotated entities)",
"Table 2: Corpus statistics: Each entity is marked with a class and can span one or more words",
"Table 3: CoNLL-U Plus format for the first 20 tokens of sentence ”Tot ı̂n cadrul etapei a 2-a, a avut loc ı̂ntâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a ı̂ncheiat la egalitate, 24 - 24.” (bold marks entities). The format is a text file containing a token per line annotated with 11 tab-separated columns, with an empty line marking the start of a new sentence. Please note that only column #11 is human annotated (and the target of this work), the rest of the morpho-syntactic annotations have been automatically generated with NLP-Cube (Boros, et al., 2018)."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What writing styles are present in the corpus?"
] | [
[
"1909.01247-3-Table1-1.png"
]
] | [
"current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials."
] | 803 |
1909.01515 | Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs | Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples. In this work, we propose a Meta Relational Learning (MetaR) framework to do the common but challenging few-shot link prediction in KGs, namely predicting new triples about a relation by only observing a few associative triples. We solve few-shot link prediction by focusing on transferring relation-specific meta information to make model learn the most important knowledge and learn faster, corresponding to relation meta and gradient meta respectively in MetaR. Empirically, our model achieves state-of-the-art results on few-shot link prediction KG benchmarks. | {
"paragraphs": [
[
"A knowledge graph is composed by a large amount of triples in the form of $(head\\; entity,\\, relation,\\, tail\\; entity)$ ( $(h, r, t)$ in short), encoding knowledge and facts in the world. Many KGs have been proposed BIBREF0 , BIBREF1 , BIBREF2 and applied to various applications BIBREF3 , BIBREF4 , BIBREF5 .",
"Although with huge amount of entities, relations and triples, many KGs still suffer from incompleteness, thus knowledge graph completion is vital for the development of KGs. One of knowledge graph completion tasks is link prediction, predicting new triples based on existing ones. For link prediction, KG embedding methods BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 are promising ways. They learn latent representations, called embeddings, for entities and relations in continuous vector space and accomplish link prediction via calculation with embeddings.",
"The effectiveness of KG embedding methods is promised by sufficient training examples, thus results are much worse for elements with a few instances during training BIBREF10 . However, few-shot problem widely exists in KGs. For example, about 10% of relations in Wikidata BIBREF0 have no more than 10 triples. Relations with a few instances are called few-shot relations. In this paper, we devote to discuss few-shot link prediction in knowledge graphs, predicting tail entity $t$ given head entity $h$ and relation $r$ by only observing $K$ triples about $r$ , usually $K$ is small. Figure 1 depicts an example of 3-shot link prediction in KGs.",
"To do few-shot link prediction, BIBREF11 made the first trial and proposed GMatching, learning a matching metric by considering both learned embeddings and one-hop graph structures, while we try to accomplish few-shot link prediction from another perspective based on the intuition that the most important information to be transferred from a few existing instances to incomplete triples should be the common and shared knowledge within one task. We call such information relation-specific meta information and propose a new framework Meta Relational Learning (MetaR) for few-shot link prediction. For example, in Figure 1 , relation-specific meta information related to the relation CEOof or CountryCapital will be extracted and transferred by MetaR from a few existing instances to incomplete triples.",
"The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction.",
"Compared with GMatching BIBREF11 which relies on a background knowledge graph, our MetaR is independent with them, thus is more robust as background knowledge graphs might not be available for few-shot link prediction in real scenarios.",
"We evaluate MetaR with different settings on few-shot link prediction datasets. MetaR achieves state-of-the-art results, indicating the success of transferring relation-specific meta information in few-shot link prediction tasks. In summary, main contributions of our work are three-folds:"
],
[
"One target of MetaR is to learn the representation of entities fitting the few-shot link prediction task and the learning framework is inspired by knowledge graph embedding methods. Furthermore, using loss gradient as one kind of meta information is inspired by MetaNet BIBREF12 and MAML BIBREF13 which explore methods for few-shot learning by meta-learning. From these two points, we regard knowledge graph embedding and meta-learning as two main kinds of related work."
],
[
"Knowledge graph embedding models map relations and entities into continuous vector space. They use a score function to measure the truth value of each triple $(h, r, t)$ . Same as knowledge graph embedding, our MetaR also need a score function, and the main difference is that representation for $r$ is the learned relation meta in MetaR rather than embedding of $r$ as in normal knowledge graph embedding methods.",
"One line of work is started by TransE BIBREF6 with distance score function. TransH BIBREF14 and TransR BIBREF15 are two typical models using different methods to connect head, tail entities and their relations. DistMult BIBREF9 and ComplEx BIBREF8 are derived from RESCAL BIBREF7 , trying to mine latent semantics in different ways. There are also some others like ConvE BIBREF16 using convolutional structure to score triples and models using additional information such as entity types BIBREF17 and relation paths BIBREF18 . BIBREF19 comprehensively summarize the current popular knowledge graph embedding methods.",
"Traditional embedding models are heavily rely on rich training instances BIBREF20 , BIBREF11 , thus are limited to do few-shot link prediction. Our MetaR is designed to fill this vulnerability of existing embedding models."
],
[
"Meta-learning seeks for the ability of learning quickly from only a few instances within the same concept and adapting continuously to more concepts, which are actually the rapid and incremental learning that humans are very good at.",
"Several meta-learning models have been proposed recently. Generally, there are three kinds of meta-learning methods so far: (1) Metric-based meta-learning BIBREF21 , BIBREF22 , BIBREF23 , BIBREF11 , which tries to learn a matching metric between query and support set generalized to all tasks, where the idea of matching is similar to some nearest neighbors algorithms. Siamese Neural Network BIBREF21 is a typical method using symmetric twin networks to compute the metric of two inputs. GMatching BIBREF11 , the first trial on one-shot link prediction in knowledge graphs, learns a matching metric based on entity embeddings and local graph structures which also can be regarded as a metric-based method. (2) Model-based method BIBREF24 , BIBREF12 , BIBREF25 , which uses a specially designed part like memory to achieve the ability of learning rapidly by only a few training instances. MetaNet BIBREF12 , a kind of memory augmented neural network (MANN), acquires meta information from loss gradient and generalizes rapidly via its fast parameterization. (3) Optimization-based approach BIBREF13 , BIBREF26 , which gains the idea of learning faster by changing the optimization algorithm. Model-Agnostic Meta-Learning BIBREF13 abbreviated as MAML is a model-agnostic algorithm. It firstly updates parameters of task-specific learner, and meta-optimization across tasks is performed over parameters by using above updated parameters, it's like “a gradient through a gradient\".",
"As far as we know, work proposed by BIBREF11 is the first research on few-shot learning for knowledge graphs. It's a metric-based model which consists of a neighbor encoder and a matching processor. Neighbor encoder enhances the embedding of entities by their one-hop neighbors, and matching processor performs a multi-step matching by a LSTM block."
],
[
"In this section, we present the formal definition of a knowledge graph and few-shot link prediction task. A knowledge graph is defined as follows:",
"Definition 3.1 (Knowledge Graph $\\mathcal {G}$ ) A knowledge graph $\\mathcal {G} = \\lbrace \\mathcal {E}, \\mathcal {R}, \\mathcal {TP}\\rbrace $ . $\\mathcal {E}$ is the entity set. $\\mathcal {R}$ is the relation set. And $\\mathcal {TP} = \\lbrace (h, r, t)\\in \\mathcal {E} \\times \\mathcal {R} \\times \\mathcal {E}\\rbrace $ is the triple set.",
"And a few-shot link prediction task in knowledge graphs is defined as:",
"Definition 3.2 (Few-shot link prediction task $\\mathcal {T}$ ) With a knowledge graph $\\mathcal {G} = \\lbrace \\mathcal {E}, \\mathcal {R}, \\mathcal {TP}\\rbrace $ , given a support set $\\mathcal {S}_r = \\lbrace (h_i, t_i)\\in \\mathcal {E} \\times \\mathcal {E} | (h_i, r, t_i) \\in \\mathcal {TP} \\rbrace $ about relation $r\\in \\mathcal {R}$ , where $|\\mathcal {S}_r | = K$ , predicting the tail entity linked with relation $r$ to head entity $h_j$ , formulated as $r:(h_j, ?)$ , is called K-shot link prediction.",
"As defined above, a few-shot link prediction task is always defined for a specific relation. During prediction, there usually is more than one triple to be predicted, and with support set $\\mathcal {S}_r$ , we call the set of all triples to be predicted as query set $\\mathcal {Q}_r = \\lbrace r:(h_j, ?)\\rbrace $ .",
"The goal of a few-shot link prediction method is to gain the capability of predicting new triples about a relation $r$ with only observing a few triples about $r$ . Thus its training process is based on a set of tasks $\\mathcal {T}_{train}=\\lbrace \\mathcal {T}_{i}\\rbrace _{i=1}^{M}$ where each task $\\mathcal {T}_{i} = \\lbrace \\mathcal {S}_i, \\mathcal {Q}_i\\rbrace $ corresponds to an individual few-shot link prediction task with its own support and query set. Its testing process is conducted on a set of new tasks $\\mathcal {T}_{test} = \\lbrace \\mathcal {T}_{j}\\rbrace _{j=1}^{N}$ which is similar to $\\mathcal {T}_{train}$ , other than that $\\mathcal {T}_{j} \\in \\mathcal {T}_{test}$ should be about relations that have never been seen in $\\mathcal {T}_{train}$ .",
"Table 1 gives a concrete example of the data during learning and testing for few-shot link prediction."
],
[
"To make one model gain the few-shot link prediction capability, the most important thing is transferring information from support set to query set and there are two questions for us to think about: (1) what is the most transferable and common information between support set and query set and (2) how to learn faster by only observing a few instances within one task. For question (1), within one task, all triples in support set and query set are about the same relation, thus it is naturally to suppose that relation is the key common part between support and query set. For question (2), the learning process is usually conducted by minimizing a loss function via gradient descending, thus gradients reveal how the model's parameters should be changed. Intuitively, we believe that gradients are valuable source to accelerate learning process.",
"Based on these thoughts, we propose two kinds of meta information which are shared between support set and query set to deal with above problems:",
"In order to extract relation meta and gradient mate and incorporate them with knowledge graph embedding to solve few-shot link prediction, our proposal, MetaR, mainly contains two modules:",
"The overview and algorithm of MetaR are shown in Figure 2 and Algorithm \"Method\" . Next, we introduce each module of MetaR via one few-shot link prediction task $\\mathcal {T}_r = \\lbrace \\mathcal {S}_r, \\mathcal {Q}_r\\rbrace $ .",
"[tb] 1 Learning of MetaR [1] Training tasks $\\mathcal {T}_{train}$ Embedding layer $emb$ ; Parameter of relation-meta learner $\\phi $ not done Sample a task $\\mathcal {T}_r={\\lbrace \\mathcal {S}_r, \\mathcal {Q}_r\\rbrace }$ from $\\mathcal {T}_{train}$ Get $\\mathit {R}$ from $\\mathcal {S}_{r}$ (Equ. 18 , Equ. 19 ) Compute loss in $\\mathcal {S}_{r}$ (Equ. 22 ) Get $\\mathit {G}$ by gradient of $\\mathit {R}$ (Equ. 23 ) Update $emb$0 by $emb$1 (Equ. 24 ) Compute loss in $emb$2 (Equ. 26 ) Update $emb$3 and $emb$4 by loss in $emb$5 "
],
[
"To extract the relation meta from support set, we design a relation-meta learner to learn a mapping from head and tail entities in support set to relation meta. The structure of this relation-meta learner can be implemented as a simple neural network.",
"In task $\\mathcal {T}_r$ , the input of relation-meta learner is head and tail entity pairs in support set $\\lbrace (h_i, t_i) \\in \\mathcal {S}_r\\rbrace $ . We firstly extract entity-pair specific relation meta via a $L$ -layers fully connected neural network, ",
"$$\\begin{aligned}\n\\mathbf {x}^0 &= \\mathbf {h}_i \\oplus \\mathbf {t}_i \\\\\n\\mathbf {x}^l &= \\sigma ({\\mathbf {W}^{l}\\mathbf {x}^{l-1} + b^l}) \\\\\n\\mathit {R}_{(h_i, t_i)} &= {\\mathbf {W}^{L}\\mathbf {x}^{L-1} + b^L}\n\n\\end{aligned}$$ (Eq. 18) ",
"where $\\mathbf {h}_i \\in \\mathbb {R}^{d}$ and $\\mathbf {t}_i \\in \\mathbb {R}^{d}$ are embeddings of head entity $h_i$ and tail entity $t_i$ with dimension $d$ respectively. $L$ is the number of layers in neural network, and $l \\in \\lbrace 1, \\dots , L-1 \\rbrace $ . $\\mathbf {W}^l$ and $\\mathbf {b}^l$ are weights and bias in layer $l$ . We use LeakyReLU for activation $\\mathbf {t}_i \\in \\mathbb {R}^{d}$0 . $\\mathbf {t}_i \\in \\mathbb {R}^{d}$1 represents the concatenation of vector $\\mathbf {t}_i \\in \\mathbb {R}^{d}$2 and $\\mathbf {t}_i \\in \\mathbb {R}^{d}$3 . Finally, $\\mathbf {t}_i \\in \\mathbb {R}^{d}$4 represent the relation meta from specific entity pare $\\mathbf {t}_i \\in \\mathbb {R}^{d}$5 and $\\mathbf {t}_i \\in \\mathbb {R}^{d}$6 .",
"With multiple entity-pair specific relation meta, we generate the final relation meta in current task via averaging all entity-pair specific relation meta in current task, ",
"$$\\mathit {R}_{\\mathcal {T}_r} = \\frac{\\sum _{i=1}^{K}\\mathit {R}_{(h_i, t_i)}}{K}$$ (Eq. 19) "
],
[
"As we want to get gradient meta to make a rapid update on relation meta, we need a score function to evaluate the truth value of entity pairs under specific relations and also the loss function for current task. We apply the key idea of knowledge graph embedding methods in our embedding learner, as they are proved to be effective on evaluating truth value of triples in knowledge graphs.",
"In task $\\mathcal {T}_r$ , we firstly calculate the score for each entity pairs $(h_i, t_i)$ in support set $\\mathcal {S}_r$ as follows: ",
"$$s_{(h_i, t_i)} = \\Vert \\mathbf {h}_i + {\\mathit {R}_{\\mathcal {T}_r}} - \\mathbf {t}_i \\Vert $$ (Eq. 21) ",
"where $\\Vert \\mathbf {x}\\Vert $ represents the L2 norm of vector $\\mathbf {x}$ . We design the score function inspired by TransE BIBREF6 which assumes the head entity embedding $\\mathbf {h}$ , relation embedding $\\mathbf {r}$ and tail entity embedding $\\mathbf {t}$ for a true triple $(h, r, t)$ satisfying $\\mathbf {h} + \\mathbf {r} = \\mathbf {t}$ . Thus the score function is defined according to the distance between $\\mathbf {h} + \\mathbf {r} $ and $\\mathbf {t}$ . Transferring to our few-show link prediction task, we replace the relation embedding $\\mathbf {r}$ with relation meta $\\mathbf {x}$0 as there is no direct general relation embeddings in our task and $\\mathbf {x}$1 can be regarded as the relation embedding for current task $\\mathbf {x}$2 .",
"With score function for each triple, we set the following loss, ",
"$$L(\\mathcal {S}_r) = \\sum _{(h_i, t_i)\\in \\mathcal {S}_r} [\\gamma +s_{(h_i, t_i)}-s_{(h_i, t_i^{\\prime })}]_{+}$$ (Eq. 22) ",
"where $[x]_{+}$ represents the positive part of $x$ and $\\gamma $ represents margin which is a hyperparameter. $s_{(h_i, t_i^{\\prime })}$ is the score for negative sample $(h_i, t_i^{\\prime })$ corresponding to current positive entity pair $(h_i, t_i) \\in \\mathcal {S}_r$ , where $(h_i, r, t_i^{\\prime }) \\notin \\mathcal {G}$ .",
" $L(\\mathcal {S}_r)$ should be small for task $\\mathcal {T}_r$ which represents the model can properly encode truth values of triples. Thus gradients of parameters indicate how should the parameters be updated. Thus we regard the gradient of $\\mathit {R}_{\\mathcal {T}_r}$ based on $L(\\mathcal {S}_r)$ as gradient meta $\\mathit {G}_{\\mathcal {T}_r}$ : ",
"$$\\vspace{-2.84526pt}\n\\mathit {G}_{\\mathcal {T}_r} = \\nabla _{\\mathit {R}_{\\mathcal {T}_r}} L(\\mathcal {S}_r)$$ (Eq. 23) ",
"Following the gradient update rule, we make a rapid update on relation meta as follows: ",
"$$\\mathit {R}^\\prime _{\\mathcal {T}_r} = \\mathit {R}_{\\mathcal {T}_r} - \\beta \\mathit {G}_{\\mathcal {T}_r}$$ (Eq. 24) ",
"where $\\beta $ indicates the step size of gradient meta when operating on relation meta.",
"When scoring the query set by embedding learner, we use updated relation meta. After getting the updated relation meta $\\mathit {R}^\\prime $ , we transfer it to samples in query set $\\mathcal {Q}_r = \\lbrace (h_j, t_j) \\rbrace $ and calculate their scores and loss of query set, following the same way in support set: ",
"$$s_{(h_j, t_j)} = \\Vert \\mathbf {h}_j + \\mathit {R}_{\\mathcal {T}_r}^\\prime - \\mathbf {t}_j \\Vert $$ (Eq. 25) ",
"$$L(\\mathcal {Q}_r) = \\sum _{(h_j, t_j)\\in \\mathcal {Q}_r}[\\gamma +s_{(h_j, t_j)}-s_{(h_j, t_j^{\\prime })}]_{+}$$ (Eq. 26) ",
"where $L(\\mathcal {Q}_r)$ is our training objective to be minimized. We use this loss to update the whole model."
],
[
"During training, our objective is to minimize the following loss $L$ which is the sum of query loss for all tasks in one minibatch: ",
"$$L = \\sum _{(\\mathcal {S}_r, \\mathcal {Q}_r)\\in \\mathcal {T}_{train}} L(\\mathcal {Q}_r)$$ (Eq. 28) "
],
[
"With MetaR, we want to figure out following things: 1) can MetaR accomplish few-shot link prediction task and even perform better than previous model? 2) how much relation-specific meta information contributes to few-shot link prediction? 3) is there any requirement for MetaR to work on few-shot link prediction? To do these, we conduct the experiments on two few-shot link prediction datasets and deeply analyze the experiment results ."
],
[
"We use two datasets, NELL-One and Wiki-One which are constructed by BIBREF11 . NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0 respectively. Furthermore, because these two benchmarks are firstly tested on GMatching which consider both learned embeddings and one-hop graph structures, a background graph is constructed with relations out of training/validation/test sets for obtaining the pre-train entity embeddings and providing the local graph for GMatching.",
"Unlike GMatching using background graph to enhance the representations of entities, our MetaR can be trained without background graph. For NELL-One and Wiki-One which have background graph originally, we can make use of such background graph by fitting it into training tasks or using it to train embeddings to initialize entity representations. Overall, we have three kinds of dataset settings, shown in Table 3 . For setting of BG:In-Train, in order to make background graph included in training tasks, we sample tasks from triples in background graph and original training set, rather than sampling from only original training set.",
"Note that these three settings don't violate the task formulation of few-shot link prediction in KGs. The statistics of NELL-One and Wiki-One are shown in Table 2 .",
"We use two traditional metrics to evaluate different methods on these datasets, MRR and Hits@N. MRR is the mean reciprocal rank and Hits@N is the proportion of correct entities ranked in the top N in link prediction."
],
[
"During training, mini-batch gradient descent is applied with batch size set as 64 and 128 for NELL-One and Wiki-One respectively. We use Adam BIBREF27 with the initial learning rate as 0.001 to update parameters. We set $\\gamma = 1$ and $\\beta = 1$ . The number of positive and negative triples in query set is 3 and 10 in NELL-One and Wiki-One. Trained model will be applied on validation tasks each 1000 epochs, and the current model parameters and corresponding performance will be recorded, after stopping, the model that has the best performance on Hits@10 will be treated as final model. For number of training epoch, we use early stopping with 30 patient epochs, which means that we stop the training when the performance on Hits@10 drops 30 times continuously. Following GMatching, the embedding dimension of NELL-One is 100 and Wiki-One is 50. The sizes of two hidden layers in relation-meta learner are 500, 200 and 250, 100 for NELL-One and Wiki-One."
],
[
"The results of two few-shot link prediction tasks, including 1-shot and 5-shot, on NELL-One and Wiki-One are shown in Table 4 . The baseline in our experiment is GMatching BIBREF11 , which made the first trial on few-shot link prediction task and is the only method that we can find as baseline. In this table, results of GMatching with different KG embedding initialization are copied from the original paper. Our MetaR is tested on different settings of datasets introduced in Table 3 .",
"In Table 4 , our model performs better with all evaluation metrics on both datasets. Specifically, for 1-shot link prediction, MetaR increases by 33%, 28.1%, 29.2% and 27.8% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, and 41.4%, 18.8%, 37.9% and 62.2% on Wiki-One, with average improvement of 29.53% and 40.08% respectively. For 5-shot, MetaR increases by 29.9%, 40.5%, 32.6% and 17.5% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One with average improvement of 30.13%.",
"Thus for the first question we want to explore, the results of MetaR are no worse than GMatching, indicating that MetaR has the capability of accomplishing few-shot link prediction. In parallel, the impressive improvement compared with GMatching demonstrates that the key idea of MetaR, transferring relation-specific meta information from support set to query set, works well on few-shot link prediction task.",
"Furthermore, compared with GMatching, our MetaR is independent with background knowledge graphs. We test MetaR on 1-shot link prediction in partial NELL-One and Wiki-One which discard the background graph, and get the results of 0.279 and 0.348 on Hits@10 respectively. Such results are still comparable with GMatching in fully datasets with background."
],
[
"We have proved that relation-specific meta information, the key point of MetaR, successfully contributes to few-shot link prediction in previous section. As there are two kinds of relation-specific meta information in this paper, relation meta and gradient meta, we want to figure out how these two kinds of meta information contribute to the performance. Thus, we conduct an ablation study with three settings. The first one is our complete MetaR method denoted as standard. The second one is removing the gradient meta by transferring un-updated relation meta directly from support set to query set without updating it via gradient meta, denoted as -g. The third one is removing the relation meta further which makes the model rebase to a simple TransE embedding model, denoted as -g -r. The result under the third setting is copied from BIBREF11 . It uses the triples from background graph, training tasks and one-shot training triples from validation/test set, so it's neither BG:Pre-Train nor BG:In-Train. We conduct the ablation study on NELL-one with metric Hit@10 and results are shown in Table 5 .",
"Table 5 shows that removing gradient meta decreases 29.3% and 15% on two dataset settings, and further removing relation meta continuous decreases the performance with 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly and relation meta contributes more than gradient meta. Without gradient meta and relation meta, there is no relation-specific meta information transferred in the model and it almost doesn't work. This also illustrates that relation-specific meta information is important and effective for few-shot link prediction task."
],
[
"We have proved that both relation meta and gradient meta surely contribute to few-shot link prediction. But is there any requirement for MetaR to ensure the performance on few-shot link prediction? We analyze this from two points based on the results, one is the sparsity of entities and the other is the number of tasks in training set.",
"The sparsity of entities We notice that the best result of NELL-One and Wiki-One appears in different dataset settings. With NELL-One, MetaR performs better on BG:In-Train dataset setting, while with Wiki-One, it performs better on BG:Pre-Train. Performance difference between two dataset settings is more significant on Wiki-One.",
"Most datasets for few-shot task are sparse and the same with NELL-One and Wiki-One, but the entity sparsity in these two datasets are still significantly different, which is especially reflected in the proportion of entities that only appear in one triple in training set, $82.8$ % and $37.1$ % in Wiki-One and NELL-One respectively. Entities only have one triple during training will make MetaR unable to learn good representations for them, because entity embeddings heavily rely on triples related to them in MetaR. Only based on one triple, the learned entity embeddings will include a lot of bias. Knowledge graph embedding method can learn better embeddings than MetaR for those one-shot entities, because entity embeddings can be corrected by embeddings of relations that connect to it, while they can't in MetaR. This is why the best performance occurs in BG:Pre-train setting on Wiki-One, pre-train entity embeddings help MetaR overcome the low-quality on one-shot entities.",
"The number of tasks From the comparison of MetaR's performance between with and without background dataset setting on NELL-One, we find that the number of tasks will affect MetaR's performance significantly. With BG:In-Train, there are 321 tasks during training and MetaR achieves 0.401 on Hits@10, while without background knowledge, there are 51, with 270 less, and MetaR achieves 0.279. This makes it reasonable that why MetaR achieves best performance on BG:In-Train with NELL-One. Even NELL-One has $37.1$ % one-shot entities, adding background knowledge into dataset increases the number of training tasks significantly, which complements the sparsity problem and contributes more to the task.",
"Thus we conclude that both the sparsity of entities and number of tasks will affect performance of MetaR. Generally, with more training tasks, MetaR performs better and for extremely sparse dataset, pre-train entity embeddings are preferred."
],
[
"We propose a meta relational learning framework to do few-shot link prediction in KGs, and we design our model to transfer relation-specific meta information from support set to query set. Specifically, using relation meta to transfer common and important information, and using gradient meta to accelerate learning. Compared to GMatching which is the only method in this task, our method MetaR gets better performance and it is also independent with background knowledge graphs. Based on experimental results, we analyze that the performance of MetaR will be affected by the number of training tasks and sparsity of entities. We may consider obtaining more valuable information about sparse entities in few-shot link prediction in KGs in the future."
],
[
"We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204/61473260, national key research program YS2018YFB140004, and Alibaba CangJingGe(Knowledge Engine) Research Plan."
]
],
"section_name": [
"Introduction",
"Related Work",
"Knowledge Graph Embedding",
"Meta-Learning",
"Task Formulation",
"Method",
"Relation-Meta Learner",
"Embedding Learner",
"Training Objective",
"Experiments",
"Datasets and Evaluation Metrics",
"Implementation",
"Results",
"Ablation Study",
"Facts That Affect MetaR's Performance",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"b86a2ecd94a4604430c31bc44796ad533288c017"
],
"answer": [
{
"evidence": [
"The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction."
],
"extractive_spans": [],
"free_form_answer": "high-order representation of a relation, loss gradient of relation meta",
"highlighted_evidence": [
"Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"c7dae865f6dc51c5ebda1573b52a038b181403c6"
],
"answer": [
{
"evidence": [
"We use two datasets, NELL-One and Wiki-One which are constructed by BIBREF11 . NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0 respectively. Furthermore, because these two benchmarks are firstly tested on GMatching which consider both learned embeddings and one-hop graph structures, a background graph is constructed with relations out of training/validation/test sets for obtaining the pre-train entity embeddings and providing the local graph for GMatching."
],
"extractive_spans": [],
"free_form_answer": "NELL-One, Wiki-One",
"highlighted_evidence": [
"We use two datasets, NELL-One and Wiki-One which are constructed by BIBREF11 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What meta-information is being transferred?",
"What datasets are used to evaluate the approach?"
],
"question_id": [
"4226a1830266ed5bde1b349205effafe7a0e2337",
"5fb348b2d7b012123de93e79fd46a7182fd062bd"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example of 3-shot link prediction in KGs. One task represents observing only three instances of one specific relation and conducting link prediction on this relation. Our model focuses on extracting relationspecific meta information by a kind of relational learner which is shared across tasks and transferring this meta information to do link prediction within one task.",
"Table 1: The training and testing examples of 1-shot link prediction in KGs.",
"Figure 2: Overview of MetaR. Tr = {Sr,Qr}, RTr and R ′ Tr represent relation meta and updated relation meta, and GTr represents gradient meta.",
"Table 2: Statistic of datasets. Fit denotes fitting background into training tasks (Y) or not (N), # Train, # Dev and # Test denote the number of relations in training, validation and test set.",
"Table 3: Three forms of datasets in our experiments.",
"Table 4: Results of few-shot link prediction in NELL-One and Wiki-One. Bold numbers are the best results of all and underline numbers are the best results of GMatching. The contents of (bracket) after MetaR illustrate the form",
"Table 5: Results of ablation study on Hits@10 of 1-shot link prediction in NELL-One."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png"
]
} | [
"What meta-information is being transferred?",
"What datasets are used to evaluate the approach?"
] | [
[
"1909.01515-Introduction-4"
],
[
"1909.01515-Datasets and Evaluation Metrics-0"
]
] | [
"high-order representation of a relation, loss gradient of relation meta",
"NELL-One, Wiki-One"
] | 807 |
1908.06151 | The Transference Architecture for Automatic Post-Editing | In automatic post-editing (APE) it makes sense to condition post-editing (pe) decisions on both the source (src) and the machine translated text (mt) as input. This has led to multi-source encoder based APE approaches. A research challenge now is the search for architectures that best support the capture, preparation and provision of src and mt information and its integration with pe decisions. In this paper we present a new multi-source APE model, called transference. Unlike previous approaches, it (i) uses a transformer encoder block for src, (ii) followed by a decoder block, but without masking for self-attention on mt, which effectively acts as second encoder combining src -> mt, and (iii) feeds this representation into a final decoder block generating pe. Our model outperforms the state-of-the-art by 1 BLEU point on the WMT 2016, 2017, and 2018 English--German APE shared tasks (PBSMT and NMT). We further investigate the importance of our newly introduced second encoder and find that a too small amount of layers does hurt the performance, while reducing the number of layers of the decoder does not matter much. | {
"paragraphs": [
[
"The performance of state-of-the-art MT systems is not perfect, thus, human interventions are still required to correct machine translated texts into publishable quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5.",
"To provide awareness of errors in $mt$ originating from $src$, attention mechanisms BIBREF6 allow modeling of non-local dependencies in the input or output sequences, and importantly also global dependencies between them (in our case $src$, $mt$ and $pe$). The transformer architecture BIBREF7 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. Such multi-head attention allows to jointly attend to information at different positions from different representation subspaces, e.g. utilizing and combining information from $src$, $mt$, and $pe$.",
"In this paper, we present a multi-source neural APE architecture called transference. Our model contains a source encoder which encodes $src$ information, a second encoder ($enc_{src \\rightarrow mt}$) which takes the encoded representation from the source encoder ($enc_{src}$), combines this with the self-attention-based encoding of $mt$ ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder ($enc_{src \\rightarrow mt}$) can also be viewed as a standard transformer decoding block, however, without masking, which acts as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \\rightarrow mt}$ is doing), then corrections to the MT output are applied based on the encountered errors (in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \\rightarrow mt}$ to produce the final translation).",
"The paper makes the following contributions: (i) we propose a new multi-encoder model for APE that consists only of standard transformer encoding and decoding blocks, (ii) by using a mix of self- and cross-attention we provide a representation of both $src$ and $mt$ for the decoder, allowing it to better capture errors in $mt$ originating from $src$; this advances the state-of-the-art in APE in terms of BLEU and TER, and (iii), we analyze the effect of varying the number of encoder and decoder layers BIBREF8, indicating that the encoders contribute more than decoders in transformer-based neural APE."
],
[
"Recent advances in APE research are directed towards neural APE, which was first proposed by Pal:2016:ACL and junczysdowmunt-grundkiewicz:2016:WMT for the single-source APE scenario which does not consider $src$, i.e. $mt \\rightarrow pe$. In their work, junczysdowmunt-grundkiewicz:2016:WMT also generated a large synthetic training dataset through back translation, which we also use as additional training data. Exploiting source information as an additional input can help neural APE to disambiguate corrections applied at each time step; this naturally leads to multi-source APE ($\\lbrace src, mt\\rbrace \\rightarrow pe$). A multi-source neural APE system can be configured either by using a single encoder that encodes the concatenation of $src$ and $mt$ BIBREF9 or by using two separate encoders for $src$ and $mt$ and passing the concatenation of both encoders' final states to the decoder BIBREF10. A few approaches to multi-source neural APE were proposed in the WMT 2017 APE shared task. Junczysdowmunt:2017:WMT combine both $mt$ and $src$ in a single neural architecture, exploring different combinations of attention mechanisms including soft attention and hard monotonic attention. Chatterjee-EtAl:2017:WMT2 built upon the two-encoder architecture of multi-source models BIBREF10 by means of concatenating both weighted contexts of encoded $src$ and $mt$. Varis-bojar:2017:WMT compared two multi-source models, one using a single encoder with concatenation of $src$ and $mt$ sentences, and a second one using two character-level encoders for $mt$ and $src$ along with a character-level decoder.",
"Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \\rightarrow mt$ and another for $src \\rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \\rightarrow mt$ and $src \\rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders."
],
[
"We propose a multi-source transformer model called transference ($\\lbrace src,mt\\rbrace _{tr} \\rightarrow pe$, Figure FIGREF1), which takes advantage of both the encodings of $src$ and $mt$ and attends over a combination of both sequences while generating the post-edited sentence. The second encoder, $enc_{src \\rightarrow mt}$, makes use of the first encoder $enc_{src}$ and a sub-encoder $enc_{mt}$ for considering $src$ and $mt$. Here, the $enc_{src}$ encoder and the $dec_{pe}$ decoder are equivalent to the original transformer for neural MT. Our $enc_{src \\rightarrow mt}$ follows an architecture similar to the transformer's decoder, the difference being that no masked multi-head self-attention is used to process $mt$.",
"One self-attended encoder for $src$, $\\mathbf {s}$ = $(s_1, s_2, \\ldots , s_k)$, returns a sequence of continuous representations, $enc_{src}$, and a second self-attended sub-encoder for $mt$, $\\mathbf {m}$ = $(m_1, m_2, \\ldots , m_l)$, returns another sequence of continuous representations, $enc_{mt}$. Self-attention at this point provides the advantage of aggregating information from all of the words, including $src$ and $mt$, and successively generates a new representation per word informed by the entire $src$ and $mt$ context. The internal $enc_{mt}$ representation performs cross-attention over $enc_{src}$ and prepares a final representation ($enc_{src \\rightarrow mt}$) for the decoder ($dec_{pe}$). The decoder then generates the $pe$ output in sequence, $\\mathbf {p}$ = $(p_1, p_2, \\ldots , p_n)$, one word at a time from left to right by attending to previously generated words as well as the final representations ($enc_{src \\rightarrow mt}$) generated by the encoder.",
"To summarize, our multi-source APE implementation extends Vaswani:NIPS2017 by introducing an additional encoding block by which $src$ and $mt$ communicate with the decoder.",
"Our proposed approach differs from the WMT 2018 PBSMT winner system in several ways: (i) we use the original transformer's decoder without modifications; (ii) one of our encoder blocks ($enc_{src \\rightarrow mt}$) is identical to the transformer's decoder block but uses no masking in the self-attention layer, thus having one self-attention layer and an additional cross-attention for $src \\rightarrow mt$; and (iii) in the decoder layer, the cross-attention is performed between the encoded representation from $enc_{src \\rightarrow mt}$ and $pe$.",
"Our approach also differs from the WMT 2018 NMT winner system: (i) $wmt18^{nmt}_{best}$ concatenates the encoded representation of two encoders and passes it as the key to the attention layer of the decoder, and (ii), the system additionally employs sequence-level loss functions based on maximum likelihood estimation and minimum risk training in order to avoid exposure bias during training.",
"The main intuition is that our $enc_{src \\rightarrow mt}$ attends over the $src$ and $mt$ and informs the $pe$ to better capture, process, and share information between $src$-$mt$-$pe$, which efficiently models error patterns and the corresponding corrections. Our model performs better than past approaches, as the experiment section will show."
],
[
"We explore our approach on both APE sub-tasks of WMT 2018, where the 1st-stage MT system to which APE is applied is either a phrase-based statistical machine translation (PBSMT) or a neural machine translation (NMT) model.",
"For the PBSMT task, we compare against four baselines: the raw SMT output provided by the 1st-stage PBSMT system, the best-performing systems from WMT APE 2018 ($\\mathbf {wmt18^{smt}_{best}}$), which are a single model and an ensemble model by junczysdowmunt-grundkiewicz:2018:WMT, as well as a transformer trying to directly translate from $src$ to $pe$ (Transformer ($\\mathbf {src \\rightarrow pe}$)), thus performing translation instead of APE. We evaluate the systems using BLEU BIBREF12 and TER BIBREF13.",
"For the NMT task, we consider two baselines: the raw NMT output provided by the 1st-stage NMT system and the best-performing system from the WMT 2018 NMT APE task ($\\mathbf {wmt18^{nmt}_{best}}$) BIBREF14.",
"Apart from the multi-encoder transference architecture described above ($\\lbrace src,mt\\rbrace _{tr} \\rightarrow pe$) and ensembling of this architecture, two simpler versions are also analyzed: first, a `mono-lingual' ($\\mathbf {mt \\rightarrow pe}$) APE model using only parallel $mt$–$pe$ data and therefore only a single encoder, and second, an identical single-encoder architecture, however, using the concatenated $src$ and $mt$ text as input ($\\mathbf {\\lbrace src+mt\\rbrace \\rightarrow pe}$) BIBREF9."
],
[
"For our experiments, we use the English–German WMT 2016 BIBREF4, 2017 BIBREF5 and 2018 BIBREF15 APE task data. All these released APE datasets consist of English–German triplets containing source English text ($src$) from the IT domain, the corresponding German translations ($mt$) from a 1st-stage MT system, and the corresponding human-post-edited version ($pe$). The sizes of the datasets (train; dev; test), in terms of number of sentences, are (12,000; 1,000; 2,000), (11,000; 0; 2,000), and (13,442; 1,000; 1,023), for the 2016 PBSMT, the 2017 PBSMT, and the 2018 NMT data, respectively. One should note that for WMT 2018, we carried out experiments only for the NMT sub-task and ignored the data for the PBSMT task.",
"Since the WMT APE datasets are small in size, we use `artificial training data' BIBREF16 containing 4.5M sentences as additional resources, 4M of which are weakly similar to the WMT 2016 training data, while 500K are very similar according to TER statistics.",
"For experimenting on the NMT data, we additionally use the synthetic eScape APE corpus BIBREF17, consisting of $\\sim $7M triples. For cleaning this noisy eScape dataset containing many unrelated language words (e.g. Chinese), we perform the following two steps: (i) we use the cleaning process described in tebbifakhr-EtAl:2018:WMT, and (ii) we use the Moses BIBREF18 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then use the Moses tokenizer BIBREF18 to tokenize the eScape corpus with `no-escape' option. Finally, we apply true-casing. The cleaned version of the eScape corpus contains $\\sim $6.5M triplets."
],
[
"To build models for the PBSMT tasks from 2016 and 2017, we first train a generic APE model using all the training data (4M + 500K + 12K + 11K) described in Section SECREF2. Afterwards, we fine-tune the trained model using the 500K artificial and 23K (12K + 11K) real PE training data. We use the WMT 2016 development data (dev2016) containing 1,000 triplets to validate the models during training. To test our system performance, we use the WMT 2016 and 2017 test data (test2016, test2017) as two sub-experiments, each containing 2,000 triplets ($src$, $mt$ and $pe$). We compare the performance of our system with the four different baseline systems described above: raw MT, $wmt18^{smt}_{best}$ single and ensemble, as well as Transformer ($src \\rightarrow pe$).",
"Additionally, we check the performance of our model on the WMT 2018 NMT APE task (where unlike in previous tasks, the 1st-stage MT system is provided by NMT): for this, we explore two experimental setups: (i) we use the PBSMT task's APE model as a generic model which is then fine-tuned to a subset (12k) of the NMT data ($\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{generic, smt}}_{{}}$). One should note that it has been argued that the inclusion of SMT-specific data could be harmful when training NMT APE models BIBREF11. (ii), we train a completely new generic model on the cleaned eScape data ($\\sim $6.5M) along with a subset (12K) of the original training data released for the NMT task ($\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{generic, nmt}}_{{}}$). The aforementioned 12K NMT data are the first 12K of the overall 13.4K NMT data. The remaining 1.4K are used as validation data. The released development set (dev2018) is used as test data for our experiment, alongside the test2018, for which we could only obtain results for a few models by the WMT 2019 task organizers. We also explore an additional fine-tuning step of $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{generic, nmt}}_{{}}$ towards the 12K NMT data (called $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{}}$), and a model averaging the 8 best checkpoints of $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{}}$, which we call $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{avg}}$.",
"Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k."
],
[
"We follow a similar hyper-parameter setup for all reported systems. All encoders (for $\\lbrace src,mt\\rbrace _{tr} \\rightarrow pe$), and the decoder, are composed of a stack of $N_{src} = N_{mt} = N_{pe} = 6$ identical layers followed by layer normalization. The learning rate is varied throughout the training process, and increasing for the first training steps $warmup_{steps} = 8000$ and afterwards decreasing as described in BIBREF7. All remaining hyper-parameters are set analogously to those of the transformer's base model, except that we do not perform checkpoint averaging. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords. After each epoch, the training data is shuffled. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between $mt$ and $pe$ in all our experiments."
],
[
"The results of our four models, single-source ($\\mathbf {mt \\rightarrow pe}$), multi-source single encoder ($\\mathbf {\\lbrace src + pe\\rbrace \\rightarrow pe}$), transference ($\\mathbf {\\lbrace src,mt\\rbrace ^{smt}_{tr} \\rightarrow pe}$), and ensemble, in comparison to the four baselines, raw SMT, $\\mathbf {wmt18^{smt}_{best}}$ BIBREF11 single and ensemble, as well as Transformer ($\\mathbf {src \\rightarrow pe}$), are presented in Table TABREF5 for test2016 and test2017. Table TABREF9 reports the results obtained by our transference model ($\\mathbf {\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{}}_{{}}}$) on the WMT 2018 NMT data for dev2018 (which we use as a test set) and test2018, compared to the baselines raw NMT and $\\mathbf {wmt18^{nmt}_{best}}$."
],
[
"The raw SMT output in Table TABREF5 is a strong black-box PBSMT system (i.e., 1st-stage MT). We report its performance observed with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The original PBSMT system scores over 62 BLEU points and below 25 TER on test2016 and test2017.",
"Using a Transformer ($src \\rightarrow pe$), we test if APE is really useful, or if potential gains are only achieved due to the good performance of the transformer architecture. While we cannot do a full training of the transformer on the data that the raw MT engine was trained on due to the unavailability of the data, we use our PE datasets in an equivalent experimental setup as for all other models. The results of this system (Exp. 1.2 in Table TABREF5) show that the performance is actually lower across both test sets, -5.52/-9.43 absolute points in BLEU and +5.21/+7.72 absolute in TER, compared to the raw SMT baseline.",
"We report four results from $\\mathbf {wmt18^{smt}_{best}}$, (i) $wmt18^{smt}_{best}$ ($single$), which is the core multi-encoder implementation without ensembling but with checkpoint averaging, (ii) $wmt18^{smt}_{best}$ ($x4$) which is an ensemble of four identical `single' models trained with different random initializations. The results of $wmt18^{smt}_{best}$ ($single$) and $wmt18^{smt}_{best}$ ($x4$) (Exp. 1.3 and 1.4) reported in Table TABREF5 are from junczysdowmunt-grundkiewicz:2018:WMT. Since their training procedure slightly differs from ours, we also trained the $wmt18^{smt}_{best}$ system using exactly our experimental setup in order to make a fair comparison. This yields the baselines (iii) $wmt18^{smt,generic}_{best}$ ($single$) (Exp. 1.5), which is similar to $wmt18^{smt}_{best}$ ($single$), however, the training parameters and data are kept in line with our transference general model (Exp. 2.3) and (iv) $wmt18^{smt,ft}_{best}$ ($single$) (Exp. 1.6), which is also trained maintaining the equivalent experimental setup compared to the fine tuned version of the transference general model (Exp. 3.3). Compared to both raw SMT and Transformer ($src \\rightarrow pe$) we see strong improvements for this state-of-the-art model, with BLEU scores of at least 68.14 and TER scores of at most 20.98 across the PBSMT testsets. $wmt18^{smt}_{best}$, however, performs better in its original setup (Exp. 1.3 and 1.4) compared to our experimental setup (Exp. 1.5 and 1.6)."
],
[
"The two transformer architectures $\\mathbf {mt \\rightarrow pe}$ and $\\mathbf {\\lbrace src+mt\\rbrace \\rightarrow pe}$ use only a single encoder. Table TABREF5 shows that $\\mathbf {mt \\rightarrow pe}$ (Exp. 2.1) provides better performance (+4.42 absolute BLEU on test2017) compared to the original SMT, while $\\mathbf {\\lbrace src+mt\\rbrace \\rightarrow pe}$ (Exp. 2.2) provides further improvements by additionally using the $src$ information. $\\mathbf {\\lbrace src+mt\\rbrace \\rightarrow pe}$ improves over $\\mathbf {mt \\rightarrow pe}$ by +1.62/+1.35 absolute BLEU points on test2016/test2017. After fine-tuning, both single encoder transformers (Exp. 3.1 and 3.2 in Table TABREF5) show further improvements, +0.87 and +0.31 absolute BLEU points, respectively, for test2017 and a similar improvement for test2016."
],
[
"In contrast to the two models above, our transference architecture uses multiple encoders. To fairly compare to $wmt18^{smt}_{best}$, we retrain the $wmt18^{smt}_{best}$ system with our experimental setup (cf. Exp. 1.5 and 1.6 in Table TABREF5). $wmt18^{smt,generic}_{best}$ (single) is a generic model trained on all the training data; which is afterwards fine-tuned with 500K artificial and 23K real PE data ($wmt18^{smt,ft}_{best}$ (single)). It is to be noted that in terms of performance the data processing method described in junczysdowmunt-grundkiewicz:2018:WMT reported in Exp. 1.3 is better than ours (Exp. 1.6). The fine-tuned version of the $\\lbrace src,mt\\rbrace ^{smt}_{tr} \\rightarrow pe$ model (Exp. 3.3 in Table TABREF5) outperforms $wmt18^{smt}_{best}$ (single) (Exp. 1.3) in BLEU on both test sets, however, the TER score for test2016 increases. One should note that $wmt18^{smt}_{best}$ (single) follows the transformer base model, which is an average of five checkpoints, while our Exp. 3.3 is not. When ensembling the 4 best checkpoints of our $\\lbrace src,mt\\rbrace ^{smt}_{tr} \\rightarrow pe$ model (Exp. 4.1), the result beats the $wmt18^{smt}_{best}$ (x4) system, which is an ensemble of four different randomly initialized $wmt18^{smt}_{best}$ (single) systems. Our $\\mathbf {ensemble^{smt} (x3)}$ combines two $\\lbrace src,mt\\rbrace ^{smt}_{tr} \\rightarrow pe$ (Exp. 2.3) models initialized with different random weights with the ensemble of the fine-tuned transference model Exp3.3$^{smt}_{ens4ckpt}$(Exp. 4.1). This ensemble provides the best results for all datasets, providing roughly +1 BLEU point and -0.5 TER when comparing against $wmt18^{smt}_{best}$ (x4).",
"The results on the WMT 2018 NMT datasets (dev2018 and test2018) are presented in Table TABREF9. The raw NMT system serves as one baseline against which we compare the performance of the different models. We evaluate the system hypotheses with respect to the ground truth ($pe$), i.e., the post-edited version of $mt$. The baseline original NMT system scores 76.76 BLEU points and 15.08 TER on dev2018, and 74.73 BLEU points and 16.84 TER on test2018.",
"For the WMT 2018 NMT data we first test our $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{generic,smt}}_{{}}$ model, which is the model from Exp. 3.3 fine-tuned towards NMT data as described in Section SECREF3. Table TABREF9 shows that our PBSMT APE model fine-tuned towards NMT (Exp. 7) can even slightly improve over the already very strong NMT system by about +0.3 BLEU and -0.1 TER, although these improvements are not statistically significant.",
"The overall results improve when we train our model on eScape and NMT data instead of using the PBSMT model as a basis. Our proposed generic transference model (Exp. 8, $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{generic,nmt}}_{{}}$ shows statistically significant improvements in terms of BLEU and TER compared to the baseline even before fine-tuning, and further improvements after fine-tuning (Exp. 9, $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{}}$). Finally, after averaging the 8 best checkpoints, our $\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{avg}}$ model (Exp. 10) also shows consistent improvements in comparison to the baseline and other experimental setups. Overall our fine-tuned model averaging the 8 best checkpoints achieves +1.02 absolute BLEU points and -0.69 absolute TER improvements over the baseline on test2018. Table TABREF9 also shows the performance of our model compared to the winner system of WMT 2018 ($wmt18^{nmt}_{best}$) for the NMT task BIBREF14. $wmt18^{nmt}_{best}$ scores 14.78 in TER and 77.74 in BLEU on the dev2018 and 16.46 in TER and 75.53 in BLEU on the test2018. In comparison to $wmt18^{nmt}_{best}$, our model (Exp. 10) achieves better scores in TER on both the dev2018 and test2018, however, in terms of BLEU our model scores slightly lower for dev2018, while some improvements are achieved on test2018.",
"The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder."
],
[
"In Table TABREF11, we analyze and compare the best performing SMT ($ensemble^{smt} (x3)$) and NMT ($\\lbrace src,mt\\rbrace ^{nmt}_{tr} \\rightarrow pe^{{ft}}_{{avg}}$) model outputs with the original MT outputs on the WMT 2017 (SMT) APE test set and on the WMT 2018 (NMT) development set. Improvements are measured in terms of number of words which need to be (i) inserted (In), (ii) deleted (De), (iii) substituted (Su), and (iv) shifted (Sh), as per TER BIBREF13, in order to turn the MT outputs into reference translations. Our model provides promising results by significantly reducing the required number of edits (24% overall for PBSMT task and 3.6% for NMT task) across all edit operations, thereby leading to reduced post-editing effort and hence improving human post-editing productivity.",
"When comparing PBSMT to NMT, we see that stronger improvements are achieved for PBSMT, probably because the raw SMT is worse than the raw NMT. For PBSMT, similar results are achieved for In, De, and Sh, while less gains are obtained in terms of Su. For NMT, In is improved most, followed by Su, De, and last Sh. For shifts in NMT, the APE system even creates further errors, instead of reducing them, which is an issue we aim to prevent in the future."
],
[
"The proposed transference architecture ($\\lbrace src,mt\\rbrace ^{smt}_{tr} \\rightarrow pe$, Exp. 2.3) shows slightly worse results than $wmt18^{smt}_{best}$ (single) (Exp. 1.3) before fine-tuning, and roughly similar results after fine-tuning (Exp. 3.3). After ensembling, however, our transference model (Exp. 4.2) shows consistent improvements when comparing against the best baseline ensemble $wmt18^{smt}_{best}$ (x4) (Exp. 1.4). Due to the unavailability of the sentence-level scores of $wmt18^{smt}_{best}$ (x4), we could not test if the improvements (roughly +1 BLEU, -0.5 TER) are statistically significant. Interestingly, our approach of taking the model optimized for PBSMT and fine-tuning it to the NMT task (Exp. 7) does not hurt the performance as was reported in the previous literature BIBREF11. In contrast, some small, albeit statistically insignificant improvements over the raw NMT baseline were achieved. When we train the transference architecture directly for the NMT task (Exp. 8), we get slightly better and statistically significant improvements compared to raw NMT. Fine-tuning this NMT model further towards the actual NMT data (Exp. 9), as well as performing checkpoint averaging using the 8 best checkpoints improves the results even further.",
"The reasons for the effectiveness of our approach can be summarized as follows. (1) Our $enc_{src \\rightarrow mt}$ contains two attention mechanisms: one is self-attention and another is cross-attention. The self-attention layer is not masked here; therefore, the cross-attention layer in $enc_{src \\rightarrow mt}$ is informed by both previous and future time-steps from the self-attended representation of $mt$ ($enc_{mt}$) and additionally from $enc_{src}$. As a result, each state representation of $enc_{src \\rightarrow mt}$ is learned from the context of $src$ and $mt$. This might produce better representations for $dec_{pe}$ which can access the combined context. In contrast, in $wmt18^{smt}_{best}$, the $dec_{pe}$ accesses representations from $src$ and $mt$ independently, first using the representation from $mt$ and then using that of $src$. (2) The position-wise feed-forward layer in our $enc_{src \\rightarrow mt}$ of the transference model requires processing information from two attention modules, while in the case of $wmt18^{smt}_{best}$, the position-wise feed-forward layer in $dec_{pe}$ needs to process information from three attention modules, which may increase the learning difficulty of the feed-forward layer. (3) Since $pe$ is a post-edited version of $mt$, sharing the same language, $mt$ and $pe$ are quite similar compared to $src$. Therefore, attending over a fine-tuned representation from $mt$ along with $src$, which is what we have done in this work, might be a reason for the better results than those achieved by attending over $src$ directly. Evaluating the influence of the depth of our encoders and decoder show that while the decoder depth appears to have limited importance, reducing the encoder depth indeed hurts performance which is in line with domhan-2018-much."
],
[
"In this paper, we presented a multi-encoder transformer-based APE model that repurposes the standard transformer blocks in a simple and effective way for the APE task: first, our transference architecture uses a transformer encoder block for $src$, followed by a decoder block without masking on $mt$ that effectively acts as a second encoder combining $src \\rightarrow mt$, and feeds this representation into a final decoder block generating $pe$. The proposed model outperforms the best-performing system of WMT 2018 on the test2016, test2017, dev2018, and test2018 data and provides a new state-of-the-art in APE.",
"Taking a departure from traditional transformer-based encoders, which perform self-attention only, our second encoder also performs cross-attention to produce representations for the decoder based on both $src$ and $mt$. We also show that the encoder plays a more pivotal role than the decoder in transformer-based APE, which could also be the case for transformer-based generation tasks in general. Our architecture is generic and can be used for any multi-source task, e.g., multi-source translation or summarization, etc."
]
],
"section_name": [
"Introduction",
"Related Research",
"Transference Model for APE",
"Experiments",
"Experiments ::: Data",
"Experiments ::: Experiment Setup",
"Experiments ::: Hyper-parameter Setup",
"Results",
"Results ::: Baselines",
"Results ::: Single-Encoder Transformer for APE",
"Results ::: Transference Transformer for APE",
"Results ::: Analysis of Error Patterns",
"Results ::: Discussion",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"b9ca56e28bc96cc457a1d083731e3959d1e78778"
],
"answer": [
{
"evidence": [
"Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k.",
"The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.",
"FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (±X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder."
],
"extractive_spans": [
"Exp. 5.1"
],
"free_form_answer": "",
"highlighted_evidence": [
"Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. ",
"The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.",
"FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (±X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"fab85e57ab7cf64eea51fd8daaeec88dc5ca43e5"
],
"answer": [
{
"evidence": [
"The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.",
"FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (±X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder."
],
"extractive_spans": [],
"free_form_answer": "comparing to the results from reducing the number of layers in the decoder, the BLEU score was 69.93 which is less than 1% in case of test2016 and in case of test2017 it was less by 0.2 %. In terms of TER it had higher score by 0.7 in case of test2016 and 0.1 in case of test2017. ",
"highlighted_evidence": [
"Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder.",
"FLOAT SELECTED: Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (±X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e1ea054e476335bec3cd9d5c81e2c0c43f02dedc"
],
"answer": [
{
"evidence": [
"Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \\rightarrow mt$ and another for $src \\rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \\rightarrow mt$ and $src \\rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders."
],
"extractive_spans": [
"pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders",
"tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics.",
"shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. ",
"The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$."
],
"free_form_answer": "",
"highlighted_evidence": [
"Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \\rightarrow mt$ and another for $src \\rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \\rightarrow mt$ and $src \\rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders.",
"Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \\rightarrow mt$ and another for $src \\rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \\rightarrow mt$ and $src \\rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What experiment result led to conclussion that reducing the number of layers of the decoder does not matter much?",
"How much is performance hurt when using too small amount of layers in encoder?",
"What was previous state of the art model for automatic post editing?"
],
"question_id": [
"f9c5799091e7e35a8133eee4d95004e1b35aea00",
"04012650a45d56c0013cf45fd9792f43916eaf83",
"7889ec45b996be0b8bf7360d08f84daf3644f115"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The transference model architecture for APE ({src,mt}tr → pe).",
"Table 1: Evaluation results on the WMT APE test set 2016, and test set 2017 for the PBSMT task; (±X) value is the improvement over wmt18smtbest (x4). The last section of the table shows the impact of increasing and decreasing the depth of the encoders and the decoder.",
"Table 2: Evaluation results on the WMT APE 2018 development set for the NMT task (Exp. 10 results were obtained by the WMT 2019 task organizers).",
"Table 3: % of error reduction in terms of different edit operations achieved by our best systems compared to the raw MT baselines."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
} | [
"How much is performance hurt when using too small amount of layers in encoder?"
] | [
[
"1908.06151-Results ::: Transference Transformer for APE-4",
"1908.06151-6-Table1-1.png"
]
] | [
"comparing to the results from reducing the number of layers in the decoder, the BLEU score was 69.93 which is less than 1% in case of test2016 and in case of test2017 it was less by 0.2 %. In terms of TER it had higher score by 0.7 in case of test2016 and 0.1 in case of test2017. "
] | 810 |
1802.00273 | Emerging Language Spaces Learned From Massively Multilingual Corpora | Translations capture important information about languages that can be used as implicit supervision in learning linguistic properties and semantic representations. In an information-centric view, translated texts may be considered as semantic mirrors of the original text and the significant variations that we can observe across various languages can be used to disambiguate a given expression using the linguistic signal that is grounded in translation. Parallel corpora consisting of massive amounts of human translations with a large linguistic variation can be applied to increase abstractions and we propose the use of highly multilingual machine translation models to find language-independent meaning representations. Our initial experiments show that neural machine translation models can indeed learn in such a setup and we can show that the learning algorithm picks up information about the relation between languages in order to optimize transfer learning with shared parameters. The model creates a continuous language space that represents relationships in terms of geometric distances, which we can visualize to illustrate how languages cluster according to language families and groups. Does this open the door for new ideas of data-driven language typology with promising models and techniques in empirical cross-linguistic research? | {
"paragraphs": [
[
"Our primary goal is to learn meaning representations of sentences and sentence fragments by looking at the distributional information that is available in parallel corpora of human translations. The basic idea is to use translations into other languages as “semantic mirrors” of the original text, assuming that they represent the same meaning but with different symbols, wordings and linguistic structures. For this we discard any meaning diversions that may happen in translation due to target audience adaptation or other processes that may influence the semantics of the translated texts. We also assume that the material can be divided into meaningful and self-contained units, Bible verses in our case, and focus on the global data-driven model that hopefully can cope with instances that violate our assumptions.",
"Our model is based on the intuition that the huge amount of variation and the cross-lingual differences in language ambiguity make it possible to learn semantic distinctions purely from data. The translations are, thus, used as a naturally occurring signal (or cross-lingual grounding) that can be applied as a form of implicit supervision for the learning procedure, mapping sentences to semantic representations that resolve language-internal ambiguities. With this approach we hope to take a step forward in one of the main goals in artificial intelligence, namely the task of natural language understanding. In this paper, however, we emphasise the use of such models in the discovery of linguistic properties and relationships between languages in particular. Having that in mind, the study may open new directions for collaborations between language technology and general linguistics. But before coming back to this, let us first look at related work and the general principles of distributional semantics with cross-lingual grounding.",
"The use of translations for disambiguation has been explored in various studies. Dyvik BIBREF0 proposes to use word translations to discover lexical semantic fields, Carpuat et al. BIBREF1 discuss the use of parallel corpora for word sense disambiguation, van der Plas and Tiedemann BIBREF2 present work on the extraction of synonyms and Villada and Tiedemann BIBREF3 explore multilingual word alignments to identify idiomatic expressions.",
"The idea of cross-lingual disambiguation is simple. The following example illustrates the effect of disambiguation of idiomatic uses of “put off” through translation into German:",
"Using the general idea of the distributional hypothesis that “you shall know a word by the company it keeps” BIBREF4 , we can now explore how cross-lingual context can serve as the source of information that defines the semantics of given sentences. As common in the field of distributional semantics, we will apply semantic vector space models that describe the meaning of a word or text by mapping it onto a position (a real-valued vector) in some high-dimensional Euclidean space. Various models and algorithms have been proposed in the literature (see, e.g., BIBREF5 , BIBREF6 ) and applied to a number of practical tasks. Predictive models based on neural network classifiers and neural language models BIBREF7 , BIBREF8 have superseded models that are purely based on co-occurrence counts (see BIBREF9 for a comparison of common approaches). Semantic vector spaces show even interesting algebraic properties that reflect semantic compositionality, support vector-based reasoning and can be mapped across languages BIBREF10 , BIBREF11 . Multilingual models have been proposed as well BIBREF12 , BIBREF13 . Neural language models are capable of integrating multiple languages BIBREF14 , which makes it possible to discover relations between them based on the language space learned purely from the data.",
"Our framework will be neural machine translation (NMT) that applies an encoder-decoder architecture, which runs sequentially through a string of input symbols (for example words in a sentence) to map the information to dense vector representations, which will then be used to decode that information in another language. Figure 1 illustrates the general principle with respect to the classical Vauquois triangle of machine translation BIBREF15 .",
"Translation models are precisely the kind of machinery that tries to transfer the meaning expressed in one language into another by analysing (understanding) the input and generating the output. NMT tries to learn that mapping from data and, thus, learns to “understand” some source language in order to produce proper translations in a target language from given examples. Our primary hypothesis is that we can increase the level of abstraction by including a larger diversity in the training data that pushes the model to improve compression of the growing variation and complexity of the task. We will test this hypothesis by training multilingual models over hundreds or even almost a thousand languages to force the MT model to abstract over a large proportion of the World's linguistic diversity.",
"As a biproduct of multilingual models with shared parameters, we will obtain a mapping of languages to a continuous vector space depicting relations between individual languages by means of geometric distances. In this paper, we present our initial findings when training such a model with over 900 languages from a collection of Bible translations and focus on the ability of the model to pick up genetic relations between languages when being forced to cover many languages in one single model.",
"In the following, we will first present the basic architecture of the neural translation model together with the setup for training multilingual models. After that we will discuss our experimental results before concluding the paper with some final comments and prospects for future work."
],
[
"Neural machine translation typically applies an end-to-end network architecture that includes one or several layers for encoding an input sentence into an internal dense real-valued vector representation and another layer for decoding that representation into the output of the target language. Various variants of that model have been proposed in the recent literature BIBREF16 , BIBREF17 with the same general idea of compressing a sentence into a representation that captures all necessary aspects of the input to enable proper translation in the decoder. An important requirement is that the model needs to support variable lengths of input and output. This is achieved using recurrent neural networks (RNNs) that naturally support sequences of arbitrary lengths. A common architecture is illustrated in Figure 1 :",
"Discrete input symbols are mapped via numeric word representations (embeddings $E$ ) onto a hidden layer ( $C$ ) of context vectors ( $h$ ), in this case by a bidirectional RNN that reads the sequence in a forward and a reverse mode. The encoding function is often modeled by special memory units and all model parameters are learned during training on example translations. In the simplest case, the final representation (returned after running through the encoding layer) is sent to the decoder, which unrolls the information captured by that internal representation. Note that the illustration in Figure 1 includes an important addition to the model, a so-called attention mechanism. Attention makes it possible to focus on particular regions from the encoded sentence when decoding BIBREF17 and, with this, the representation becomes much more flexible and dynamic and greatly improves the translation of sentences with variable lengths.",
"All parameters of the network are trained on large collections of human translations (parallel corpora) typically by some form of gradient descent (iterative function optimisation) that is backpropagated through the network. The attractive property of such a model is the ability to learn representations that reflect semantic properties of the input language through the task of translation. However, one problem is that translation models can be “lazy” and avoid abstractions if the mapping between source and target language does not require any deep understanding. This is where the idea of multilinguality comes into the picture: If the learning algorithm is confronted with a large linguistic variety then it has to generalize and to forget about language-pair-specific shortcuts. Covering substantial amounts of the world's linguistic diversity as we propose pushes the limits of the approach and strong abstractions in $C$ can be expected. Figure 2 illustrates the intuition behind that idea.",
"Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training. This ability gives a hint of some kind of vector-based “interlingua”, which is precisely what we are looking for. However, the original paper only looks at a small number of languages and we will scale it up to a larger variation using significantly more languages to train on. More details will be given in the following section."
],
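The language-flag mechanism of Johnson et al. described above amounts to a very small preprocessing step: a token naming the desired target language is prepended to every source sentence, so that one shared encoder-decoder can serve all language pairs at once. The sketch below is an illustration only; the flag format "<2xxx>" and the example verse translations are our own choices, not the exact setup used in the paper.

```python
# Minimal sketch of language-flag preprocessing for multilingual NMT training data.
# The flag tells the shared decoder which target language to produce.
def add_language_flag(source_sentence: str, target_lang: str) -> str:
    """Prepend a target-language flag token to a tokenized source sentence."""
    return f"<2{target_lang}> {source_sentence}"

# Toy English-source examples (Genesis 1:1) with two different target languages.
training_pairs = [
    ("in the beginning God created the heaven and the earth", "por",
     "no princípio criou Deus os céus e a terra"),
    ("in the beginning God created the heaven and the earth", "swe",
     "i begynnelsen skapade Gud himmel och jord"),
]

for src, tgt_lang, tgt in training_pairs:
    print(add_language_flag(src, tgt_lang), "=>", tgt)
# <2por> in the beginning ... => no princípio criou Deus os céus e a terra
# <2swe> in the beginning ... => i begynnelsen skapade Gud himmel och jord
```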
[
"Our question is whether we can use a standard NMT model with a much larger coverage of the linguistic diversity of the World in order to maximise the variation signalling semantic distinctions that can be picked up by the learning procedures. Figure 3 illustrates our setup based on a model trained on over 900 languages from the multilingual Bible corpus BIBREF23 .",
"We trained the model in various batches and observed the development of the model in terms of translation quality on some small heldout data. The heldout data refers to an unseen language pair, Swedish-Portuguese in our case (in both directions). We selected those languages in order to see the capabilities of the system to translate between rather distant languages for which a reasonable number of closely related languages are in the data collection to improve knowledge transfer.",
"The results demonstrate so far that the network indeed picks up the information about the language to be produced. The decoder successfully switches to the selected language and produces relatively fluent Bible-style text. The adequacy of the translation, however, is rather limited and this is most probably due to the restricted capacity of the network with such a load of information to be covered. Nevertheless, it is exciting to see that such a diverse material can be used in one single model and that it learns to share parameters across all languages. One of the most interesting effects that we can observe is the emerging language space that relates to the language flags in the data. In Figure 4 we plot the language space (using t-SNE BIBREF24 for projecting to two dimensions) coloured by language family for the ten language families / groups with most members in our data set.",
"We can see that languages roughly cluster according to the family they belong to. Note that this is purely learned from the data based on the objective to translate between all of them with a single model. The training procedure learns to map closely related languages near to each other in order to increase knowledge transfer between them. This development is very encouraging and demonstrates the ability of the neural network model to optimise parameter sharing to make most out of the model's capacity.",
"An interesting question coming out of this study is whether such multilingual translation models can be used to learn linguistic properties of the languages involved. Making it possible to measure the distance between individual languages in the emerging structures could be useful in data-driven language typology and other cross-linguistic studies. The results so far, do not reveal a lot of linguistically interesting relations besides the projection of languages onto a global continuous space with real-values distances between them. Nevertheless, quantifying the distance is potentially valuable and provides a more fine-grained relation than discrete relations coming from traditional family trees. It is, however, still an open question what kind of properties are represented by the language embeddings and further studies are necessary to see whether specific linguistic features can be identified and isolated from the distributed representations. There is a growing interest in interpretability of emerging structures and related work already demonstrates the ability of predicting typological features with similar language representations BIBREF25 .",
"Massively parallel data sets make it now possible to study specific typological structures with computational models, for example tense and aspect as in BIBREF26 , and we intend to follow up our initial investigations of NMT-based representations in future research along those lines. We also plan to consider other domains than the one of religious texts but it is difficult to obtain the same coverage of the linguistic space with different material. Unbalanced mixtures will be an option but difficult to train. Resources like the Universal Declarations of Human Rights are an option but, unfortunately, very sparse.",
"Another direction is to explore the inter-lingual variations and language developments using, for example, the alternative translations that exist for some languages in the Bible corpus. However, even here the data is rather sparse and it remains to be seen how reliable any emerging pattern will be. Crucial for the success will be a strong collaboration with scholars from the humanities, which shows the important role of digital humanities as a field."
],
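A plot like the language space described above (Figure 4) can be produced by projecting the learned language-flag embeddings to two dimensions with t-SNE and colouring the points by language family. The snippet below is a hedged sketch: the random arrays merely stand in for the real embedding matrix and family labels extracted from the trained model.

```python
# Hypothetical sketch of projecting language-flag embeddings with t-SNE.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
lang_embeddings = rng.normal(size=(900, 512))   # placeholder for the learned language embeddings
lang_families = rng.integers(0, 10, size=900)   # placeholder family ids for the 10 largest families

coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(lang_embeddings)

plt.figure(figsize=(8, 6))
points = plt.scatter(coords[:, 0], coords[:, 1], c=lang_families, cmap="tab10", s=10)
plt.legend(*points.legend_elements(), title="family", fontsize=7)
plt.title("Language embeddings projected with t-SNE")
plt.tight_layout()
plt.show()
```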
[
"In this paper, we present our experiments with highly multilingual translation models. We trained neural MT models on Bible translations of over 900 languages in order to see whether the system is capable of sharing parameters across a large diverse sample of the World's languages. Our motivation is to learn language-independent meaning representations using translations as implicit semantic supervision and cross-lingual grounding. Our pilot study demonstrates that such a model can pick up the relationship between languages purely from the data and the translation objective. We hypothesise that such a data-driven setup can be interesting for cross-linguistic studies and language typology. In the future, we would like to investigate the emerging language space in more detail also in connection with alternative network architectures and training procedures. We believe that empirical methods like this one based on automatic representation learning will have significant impact on studies in linguistics providing an objective way of investigating properties and structures of human languages emerging from data and distributional patterns."
],
[
"We would like to thank the anonymous reviewers for their valuable comments and suggestions as well as the Academy of Finland for the support of the research presented in the paper with project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence."
]
],
"section_name": [
"Introduction and Motivation",
"Multilingual Neural Machine Translation",
"Experiments and Results",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"bb5632389b7d2c76ee4cfe635e4870c5f53a9aa0"
],
"answer": [
{
"evidence": [
"Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training. This ability gives a hint of some kind of vector-based “interlingua”, which is precisely what we are looking for. However, the original paper only looks at a small number of languages and we will scale it up to a larger variation using significantly more languages to train on. More details will be given in the following section."
],
"extractive_spans": [],
"free_form_answer": "Multilingual Neural Machine Translation Models",
"highlighted_evidence": [
"Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2b413669fd1e681656c8d43a27df86e649065edf"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What neural machine translation models can learn in terms of transfer learning?"
],
"question_id": [
"41e300acec35252e23f239772cecadc0ea986071"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Fig. 1. Conceptual illustrations of neural machine translation and abstractions to meaning representations.",
"Fig. 2. Multilingual Neural MT and training data with language flags.",
"Fig. 3. Experimental setup: Bible translations paired with English as either source or target language are used to train one single multilingual NMT model including language flags on the source language side. Language flags are mapped onto a continuous vector space of language embeddings.",
"Fig. 4. Continuous language space that emerges from multilingual NMT (t-SNE plot)."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png"
]
} | [
"What neural machine translation models can learn in terms of transfer learning?"
] | [
[
"1802.00273-Multilingual Neural Machine Translation-3"
]
] | [
"Multilingual Neural Machine Translation Models"
] | 811 |
2003.09520 | TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish Corpus | This article describes the constitution process of the first morpho-syntactically annotated Tunisian Arabish Corpus (TArC). Arabish, also known as Arabizi, is a spontaneous coding of Arabic dialects in Latin characters and arithmographs (numbers used as letters). This code-system was developed by Arabic-speaking users of social media in order to facilitate the writing in the Computer-Mediated Communication (CMC) and text messaging informal frameworks. There is variety in the realization of Arabish amongst dialects, and each Arabish code-system is under-resourced, in the same way as most of the Arabic dialects. In the last few years, the focus on Arabic dialects in the NLP field has considerably increased. Taking this into consideration, TArC will be a useful support for different types of analyses, computational and linguistic, as well as for NLP tools training. In this article we will describe preliminary work on the TArC semi-automatic construction process and some of the first analyses we developed on TArC. In addition, in order to provide a complete overview of the challenges faced during the building process, we will present the main Tunisian dialect characteristics and their encoding in Tunisian Arabish. | {
"paragraphs": [
[
"Arabish is the romanization of Arabic Dialects (ADs) used for informal messaging, especially in social networks. This writing system provides an interesting ground for linguistic research, computational as well as sociolinguistic, mainly due to the fact that it is a spontaneous representation of the ADs, and because it is a linguistic phenomenon in constant expansion on the web. Despite such potential, little research has been dedicated to Tunisian Arabish (TA). In this paper we describe the work we carried to develop a flexible and multi-purpose TA resource. This will include a TA corpus, together with some tools that could be useful for analyzing the corpus and for its extension with new data.",
"First of all, the resource will be useful to give an overview of the TA. At the same time, it will be a reliable representation of the Tunisian dialect (TUN) evolution over the last ten years: the collected texts date from 2009 to present. This selection was done with the purpose to observe to what extent the TA orthographic system has evolved toward a writing convention. Therefore, the TArC will be suitable for phonological, morphological, syntactic and semantic studies, both in the linguistic and the Natural Language Processing (NLP) domains. For these reasons, we decided to build a corpus which could highlight the structural characteristics of TA through different annotation levels, including Part of Speech (POS) tags and lemmatization. In particular, to facilitate the match with the already existing tools and studies for the Arabic language processing, we provide a transcription in Arabic characters at token level, following the Conventional Orthography for Dialectal Arabic guidelines CODA* (CODA star) BIBREF0 and taking into account the specific guidelines for TUN (CODA TUN) BIBREF1. Furthermore, even if the translation is not the main goal of this research, we have decided to provide an Italian translation of the TArC’s texts. Even though in the last few years ADs have received an increasing attention by the NLP community, many aspects have not been studied yet and one of these is the Arabish code-system. The first reason for this lack of research is the relatively recent widespread of its use: before the advent of the social media, Arabish usage was basically confined to text messaging. However, the landscape has changed considerably, and particularly thanks to the massive registration of users on Facebook since 2008. At that time, in Tunisia there were still no Arabic keyboards, neither for Personal Computers, nor for phones, so Arabic-speaking users designed TA for writing in social media (Table TABREF14). A second issue that has held back the study of Arabish is its lack of a standard orthography, and the informal context of use. It is important to note that also the ADs lack a standard code-system, mainly because of their oral nature. In recent years the scientific community has been active in producing various sets of guidelines for dialectal Arabic writing in Arabic characters: CODA (Conventional Orthography for Dialectal Arabic) BIBREF2.",
"The remainder of the paper is organized as follows: section SECREF2 is an overview of NLP studies on TUN and TA; section SECREF3 describes TUN and TA; section SECREF4 presents the TArC corpus building process; section SECREF5 explains preliminary experiments with a semi-automatic transcription and annotation procedure, adopted for a faster and simpler construction of the TArC corpus; conclusions are drawn in section SECREF6"
],
[
"In this section, we provide an overview of work done on automatic processing of TUN and TA. As briefly outlined above, many studies on TUN and TA aim at solving the lack of standard orthography. The first Conventional Orthography for Dialectal Arabic (CODA) was for Egyptian Arabic BIBREF2 and it was used by bies2014transliteration for Egyptian Arabish transliteration into Arabic script. The CODA version for TUN (CODA TUN) was developed by DBLP:conf/lrec/ZribiBMEBH14, and was used in many studies, like boujelbane2015traitements. Such work presents a research on automatic word recognition in TUN. Narrowing down to the specific field of TA, CODA TUN was used in masmoudi2015arabic to realize a TA-Arabic script conversion tool, implemented with a rule-based approach. The most extensive CODA is CODA*, a unified set of guidelines for 28 Arab city dialects BIBREF0. For the present research, CODA* is considered the most convenient guideline to follow due to its extensive applicability, which will support comparative studies of corpora in different ADs. As we already mentioned, there are few NLP tools available for Arabish processing in comparison to the amount of NLP tools realized for Arabic. Considering the lack of spelling conventions for Arabish, previous effort has focused on automatic transliteration from Arabish to Arabic script, e.g. chalabi2012romanized, darwish2013arabizi, and al2014automatic. These three work are based on a character-to-character mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model. A different method is presented in younes2018sequence, in which the authors present a sequence-to-sequence-based approach for TA-Arabic characters transliteration in both directions BIBREF3, BIBREF4.",
"Regardless of the great number of work done on TUN automatic processing, there are not a lot of TUN corpora available for free BIBREF5. To the best of our knowledge there are only five TUN corpora freely downloadable: one of these is the PADIC PADIC, composed of 6,400 sentences in six Arabic dialects, translated in Modern Standard Arabic (MSA), and annotated at sentence level. Two other corpora are the Tunisian Dialect Corpus Interlocutor (TuDiCoI) Tudicoi and the Spoken Tunisian Arabic Corpus (STAC) stac, which are both morpho-syntactically annotated. The first one is a spoken task-oriented dialogue corpus, which gathers a set of conversations between staff and clients recorded in a railway station. TuDiCoI consists of 21,682 words in client turns BIBREF7. The STAC is composed of 42,388 words collected from audio files downloaded from the web (as TV channels and radio stations files) BIBREF8. A different corpus is the TARIC Taric, which contains 20 hours of TUN speech, transcribed in Arabic characters BIBREF9. The last one is the TSAC Tsac, containing 17k comments from Facebook, manually annotated to positive and negative polarities BIBREF10. This corpus is the only one that contains TA texts as well as texts in Arabic characters. As far as we know there are no available corpora of TA transcribed in Arabic characters which are also morpho-syntactically annotated. In order to provide an answer to the lack of resources for TA, we decided to create TArC, a corpus entirely dedicated to the TA writing system, transcribed in CODA TUN and provided with a lemmatization level and POS tag annotation."
],
[
"The Tunisian dialect (TUN) is the spoken language of Tunisian everyday life, commonly referred to as الدَّارِجَة, ad-dārija, العَامِّيَّة, al-‘āmmiyya, or التُّونْسِي, . According to the traditional diatopic classification, TUN belongs to the area of Maghrebi Arabic, of which the other main varieties are Libyan, Algerian, Moroccan and the Ḥassānīya variety of Mauritania BIBREF11. Arabish is the transposition of ADs, which are mainly spoken systems, into written form, thus turning into a quasi-oral system (this topic will be discussed in section SECREF12). In addition, Arabish is not realized through Arabic script and consequently it is not subject to the Standard Arabic orthographic rules. As a result, it is possible to consider TA as a faithful written representation of the spoken TUN BIBREF12."
],
[
"The following list provides an excerpt of the principal features of TUN, which, through the TArC, would be researched in depth among many others.",
"At the phonetic level, some of the main characteristics of TUN, and Maghrebi Arabic in general, are the following:",
"1em0pt * Strong influence of the Berber substratum, to which it is possible to attribute the conservative phonology of TUN consonants.",
"1em0pt * Presence of new emphatic phonemes, above all [ṛ], [ḷ], [ḅ].",
"* Realization of the voiced post-alveolar affricate [ʤ] as fricative .",
"* Overlapping of the pharyngealized voiced alveolar stop , <ض>, with the fricative , <ظ>.",
"* Preservation of a full glottal stop mainly in cases of loans from Classical Arabic (CA) or exclamations and interjections of frequent use. * Loss of short vowels in open syllables.",
"* Monophthongization. In TUN <بَيت>, , house, becomes meaning room.",
"* Palatalization of ā: Imāla, <إمالة>, literally inclination. (In TUN the phenomenon is of medium intensity.) Thereby the word <باب>, , door, becomes .",
"* Metathesis. (Transposition of the first vowel of the word. It occurs when non-conjugated verbs or names without suffix begin with the sequence CCvC, where C stands for ungeminated consonant, and v for short vowel. When a suffix is added to this type of name, or a verb of this type is conjugated, the first vowel changes position giving rise to the CvCC sequence.) In TUN it results in: (he) has understood:",
"<فْهِم>, , (she) has understood: <فِهْمِت>, or leg: <رْجِل>, , my leg: <رِجْلِي>, .",
"Regarding the morpho-syntactic level, TUN presents:",
"1em0pt * Addition of the prefix /-n/ to first person verbal morphology in muḍāri' (imperfective).",
"* Realization of passive-reflexive verbs through the morpheme /-t/ prefixed to the verb as in the example:",
"<سوريّة مالحَفْصيّة تْتِلْبِس>, , the shirts of Ḥafṣiya are not bad, (lit: they dress).",
"* Loss of gender distinction at the 2nd and 3rd persons, at verbal and pronominal level.",
"* Disappearance of the dual form from verbal and pronominal inflexion. There is a residual of pseudo-dual in some words fixed in time in their dual form.",
"* Loss of relative pronouns flexion and replacement with the invariable form <اِلّي>, .",
"* Use of presentatives /ṛā-/ and /hā-/ with the meaning of here, look, as in the example in TUN: <راني مَخْنوق>, ṛ, here I am asphyxiated (by problems), or in <هاك دَبَّرْتْها>, , here you are, finding it (the solution) hence: you were lucky.",
"* Presence of circumfix negation marks, such as < <ما>, + verb + <ش>, >. The last element of this structure must be omitted if there is another negation, such as the Tunisian adverb <عُمْر>, , never, as in the structure: < + personal pronoun suffix + + perfect verb>. This construction is used to express the concept of never having done the action in question, as in the example: <عُمري ما كُنْت نِتْصَوُّر...>, , I never imagined that....",
"Instead, to deny an action pointing out that it will never repeat itself again, a structure widely used is <[ma] + + + imperfective verb>, where the element within the circumfix marks is a grammaticalized element of verbal origin from CA: <عاد>, , meaning to go back, to reoccur, which gives the structure a sense of denied repetitiveness, as in the sentence:",
"<هو ما عادِش يَرْجَع>, ,",
"he will not come back.",
"Finally, to deny the nominal phrase, in TUN both the <موش>, , and the circumfix marks are frequently used. For the negative form of the verb to be in the present, circumfix marks can be combined with the personal suffix pronoun, placed between the marks, as in <مَانِيش>, , I am not.",
"Within the negation marks we can also find other types of nominal structures, such as: < + (mind) + personal pronoun suffix>, which has a value equivalent to the verb be aware of, as in the example:",
"<ما في باليش>, , I did not know."
],
[
"As previously mentioned, we consider Arabish a quasi-oral system. With quasi-orality it is intended the form of communication typical of Computer-Mediated Communication (CMC), characterized by informal tones, dependence on context, lack of attention to spelling and especially the ability to create a sense of collectivity BIBREF15.",
"TA and TUN have not a standard orthography, with the exception of the CODA TUN. Nevertheless, TA is a spontaneous code-system used since more than ten years, and is being conventionalized by its daily usage.",
"From the table TABREF14, where the coding scheme of TA is illustrated, it is possible to observe that there is no one-to-one correspondence between TA and TUN characters and that often Arabish presents overlaps in the encoding possibilities. The main issue is represented by the not proper representation by TA of the emphatic phones: , and .",
"On the other hand, being TA not codified through the Arabic alphabet, it can well represent the phonetic realization of TUN, as shown by the following examples:",
"* The Arabic alphabet is generally used for formal conversations in Modern Standard Arabic (MSA), the Arabic of formal situations, or in that of Classical Arabic (CA), the Arabic of the Holy Qur’ān, also known as ‘The Beautiful Language’. Like MSA and CA, also Arabic Dialects (ADs) can be written in the Arabic alphabet, but in this case it is possible to observe a kind of hypercorrection operated by the speakers in order to respect the writing rules of MSA. For example, in TUN texts written in Arabic script, it is possible to find a ‘silent vowel’ (namely an epenthetic alif <ا>) written at the beginning of those words starting with the sequence ‘#CCv’, which is not allowed in MSA. * Writing TUN in Arabic script, the Code-Mixing or Switching in foreign language will be unnaturally reduced.",
"* As described in table TABREF14, the Arabic alphabet is provided with three short vowels, which correspond to the three long ones: , , , but TUN presents a wider range of vowels. Indeed, regarding the early presented characteristics of TUN, the TA range of vowels offers better possibility to represent most of the TUN characteristics outlined in the previous subsection, in particular:",
"[nosep]",
"Palatalization.",
"Vowel metathesis.",
"Monophthongization."
],
[
"In order to analyze the TA system, we have built a TA Corpus based on social media data, considering this as the best choice to observe the quasi-oral nature of the TA system."
],
[
"The corpus collection procedure is composed of the following steps:",
"Thematic categories detection.",
"Match of categories with sets of semantically related TA keywords.",
"Texts and metadata extraction.",
"Step UNKREF20. In order to build a Corpus that was as representative as possible of the linguistic system, it was considered useful to identify wide thematic categories that could represent the most common topics of daily conversations on CMC.",
"In this regard, two instruments with a similar thematic organization have been employed:",
"[nosep]",
"‘A Frequency Dictionary of Arabic’",
"BIBREF16 In particular its ‘Thematic Vocabulary List’ (TVL).",
"‘Loanword Typology Meaning List’",
"A list of 1460 meanings (LTML) BIBREF17.",
"The TVL consists of 30 groups of frequent words, each one represented by a thematic word. The second consists of 23 groups of basic meanings sorted by representative word heading. Considering that the boundaries between some categories are very blurred, some categories have been merged, such as Body and Health, (see table TABREF26). Some others have been eliminated, being not relevant for the purposes of our research, e.g. Colors, Opposites, Male names. In the end, we obtained 15 macro-categories listed in table TABREF26.",
"Step UNKREF21. Aiming at easily detect texts and the respective seed URLs, without introducing relevant query biases, we decided to avoid using the category names as query keywords BIBREF18. Therefore, we associated to each category a set of TA keywords belonging to the basic Tunisian vocabulary. We found that a semantic category with three meanings was enough to obtain a sufficient number of keywords and URLs for each category. For example, to the category Family the meanings: son, wedding, divorce have been associated in all their TA variants, obtaining a set of 11 keywords (table TABREF26).",
"Step UNKREF22. We collected about 25,000 words and the related metadata as first part of our corpus, which are being semi-automatically transcribed into Arabic characters (see next sections). We planned to increase the size of the corpus at a later time. Regarding the metadata, we have extracted the information published by users, focusing on the three types of information generally used in ethnographic studies:",
"Gender: Male (M) and Female (F).",
"Age range: [10-25], [25-35], [35-50], [50-90].",
"City of origin."
],
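Step 2 of the collection procedure above can be pictured as a simple mapping from thematic categories to sets of TA keywords used as search queries for seed URLs. The category names follow the paper's tables, but the keyword spellings below are invented examples for illustration, not the actual keyword lists used for TArC.

```python
# Illustrative category-to-keyword mapping for seed-URL collection (example spellings only).
CATEGORY_KEYWORDS = {
    "Family": ["weld", "wild", "ould",     # son (spelling variants)
               "3ers", "3ors", "aars",     # wedding
               "tla9", "tlaq", "talaq"],   # divorce
    "Body and Health": ["sa7a", "saha", "mrid", "mridh", "tbib"],
}

def queries_for(category: str) -> list[str]:
    """Return the Arabish keywords used as search queries for a thematic category."""
    return CATEGORY_KEYWORDS.get(category, [])

print(queries_for("Family"))
```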
[
"In order to create our corpus, we applied a word-level annotation. This phase was preceded by some data pre-processing steps, in particular tokenization. Each token has been associated with its annotations and metadata (table TABREF32). In order to obtain the correspondence between Arabish and Arabic morpheme transcriptions, tokens were segmented into morphemes. This segmentation was carried out completely manually for a first group of tokens. In its final version, each token is associated with a total of 11 different annotations, corresponding to the number of the annotation levels we chose. An excerpt of the corpus after tokens annotation is depicted in table TABREF32.",
"For the sake of clarity, in table TABREF32 we show:",
"* The A column, Cor, indicates the tokens source code. For example, the code 3fE, which stands for 3rab fi Europe, is the forum from which the text was extracted.",
"* The B column, Textco, is the publication date of the text.",
"* The C column, Par, is the row index of the token in the paragraph.",
"* The D column, W, is the index of the token in the sentence. When W corresponds to a range of numbers, it means that the token has been segmented in to its components, specified in the rows below.",
"* The E column, Arabi, corresponds to the token transcription in Arabish.",
"* The F column, Tra, is the transcription into Arabic characters.",
"* The G column, Ita, is the translation to Italian.",
"* The H column, Lem, corresponds to the lemma.",
"* The I column, POS, is the Part-Of-Speech tag of the token. The tags that have been used for the POS tagging are conform to the annotation system of Universal Dependencies.",
"* The last three columns (J, K, L) contain the metadata: Var, Age, Gen.",
"Since TA is a spontaneous orthography of TUN, we considered important to adopt the CODA* guidelines as a model to produce a unified lemmatization for each token (column Lem in table TABREF32). In order to guarantee accurate transcription and lemmatization, we annotated manually the first 6,000 tokens with all the annotation levels.",
"Some annotation decisions were taken before this step, with regard to specific TUN features:",
"* Foreign words. We transcribed the Arabish words into Arabic characters, except for Code-Switching terms. In order to not interrupt the sentences continuity we decide to transcribe Code-Mixing terms into Arabic script. However, at the end of the corpus creation process, these words will be analyzed, making the distinction between acclimatized loans and Code-Mixing.",
"The first ones will be transcribed into Arabic characters also in Lem, as shown in table TABREF33. The second ones will be lemmatized in the foreign language, mostly French, as shown in table TABREF34.",
"* Typographical errors. Concerning typos and typical problems related to the informal writing habits in the web, such as repeated characters to simulate prosodic features of the language, we have not maintained all these characteristics in the transcription (column Tra). Logically, these were neither included in Lem, according to the CODA* conventions, as shown in table TABREF34.",
"* Phono-Lexical exceptions. We used the grapheme",
"<ڨ>, , only in loanword transcription and lemmatization. As can be seen in table TABREF35, the Hilalian phoneme [g] of the Turkish loanword gawriyya, has been transcribed and lemmatized with the grapheme <ق>, .",
"* Glottal stop. As explained in CODA TUN, real initial and final glottal stops have almost disappeared in TUN. They remain in some words that are treated as exceptions, e.g. <أسئلة>, , question BIBREF1. Indeed, we transcribe the glottal stops only when it is usually pronounced, and if it does not, we do not write the glottal stops at the beginning of the word or at the end, neither in the transcription, nor in the lemmas.",
"* Negation Marks. CODA TUN proposes to keep the MSA rule of maintaining a space between the first negation mark and the verb, in order to uniform CODA TUN to the first CODA BIBREF2. However, as DBLP:conf/lrec/ZribiBMEBH14 explains, in TUN this rule does not make really sense, but it should be done to preserve the consistency among the various CODA guidelines. Indeed, in our transcriptions we report what has been produced in Arabish following CODA TUN rules, while in lemmatization we report the verb lemma. At the same time we segment the negative verb in its minor parts: the circumfix negation marks and the conjugated verb. For the first one, we describe the negative morphological structure in the Tra and Lem columns, as in table TABREF36. For the second one, as well as the other verbs, we provide transcription and lemmatization."
],
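The annotation layout described above (columns Cor, Textco, Par, W, Arabi, Tra, Ita, Lem, POS, Var, Age, Gen) can be mirrored by a small record type. The field names follow the table headers, but the example values below are invented for illustration and are not taken from the corpus.

```python
# Sketch of one TArC annotation row; values are illustrative, not real corpus data.
from dataclasses import dataclass

@dataclass
class TArcToken:
    cor: str      # source code of the text (e.g. a forum identifier such as "3fE")
    textco: str   # publication date of the text
    par: int      # row index of the token in the paragraph
    w: str        # token index in the sentence (a range when the token is segmented)
    arabi: str    # Arabish form of the token
    tra: str      # transcription into Arabic script following CODA*
    ita: str      # Italian translation
    lem: str      # lemma
    pos: str      # Universal Dependencies POS tag
    var: str      # metadata: city of origin
    age: str      # metadata: age range
    gen: str      # metadata: gender

example = TArcToken(cor="3fE", textco="2010", par=1, w="3",
                    arabi="labes", tra="لاباس", ita="bene", lem="لاباس",
                    pos="INTJ", var="Bnz", age="[25-35]", gen="M")
print(example)
```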
[
"In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20. Since transcribing Arabish into Arabic is by far the most important information to study the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script. In order to proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data sets in a 10-fold cross validation setting with 9-1 proportions for training and test, respectively. As we explained in the previous section, French tokens were removed from the data. More precisely, whole sentences containing non-transcribable French tokens (code-switching) were removed from the data. Since at this level there is no way for predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create some noise for an automatic, probabilistic model. After removing sentences with French tokens, the data reduced to roughly 5,000 tokens. We chose this amount of tokens for annotation blocks in our incremental annotation procedure.",
"We note that by combining sentence, paragraph and token index in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens roughly 300 sentences could be reconstructed, which are far too few to be used for training a neural model. Instead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters, and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model learns thus to map Arabish characters into Arabic morphemes.",
"The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data. This result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. This can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous and from the third. Adding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up.",
"Our goal concerning transcription, is to have the 25,000 tokens mentioned in section SECREF19 annotated automatically and manually corrected. These data will constitute our gold annotated data, and they will be used to automatically transcribe further data."
],
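The incremental, semi-automatic transcription procedure described above can be summarised as a simple loop: train a character-to-morpheme sequence model on the gold data, pre-transcribe the next block of roughly 5,000 tokens, correct it manually, and fold the corrected block back into the training data. The functions passed in below (train_seq2seq, transcribe, manual_correction) are placeholders standing in for the sequential neural model and the human correction pass, not real APIs.

```python
# Hedged sketch of the incremental annotation loop used for semi-automatic transcription.
def to_char_sequence(arabish_token: str) -> list[str]:
    """Split an Arabish token into characters, the input units of the seq2seq model."""
    return list(arabish_token)

def incremental_annotation(gold_block, unannotated_blocks,
                           train_seq2seq, transcribe, manual_correction):
    training_data = list(gold_block)                 # ~5,000 manually transcribed tokens
    for block in unannotated_blocks:                 # further blocks of ~5,000 tokens each
        model = train_seq2seq(training_data)         # maps character sequences to Arabic morphemes
        predictions = [transcribe(model, to_char_sequence(tok)) for tok in block]
        corrected = manual_correction(block, predictions)   # much faster than annotating from scratch
        training_data.extend(corrected)              # grow the gold data for the next round
    return training_data
```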
[
"In this paper we presented TArC, the first Tunisian Arabish Corpus annotated with morpho-syntactic information. We discussed the decisions taken in order to highlight the phonological and morphological features of TUN through the TA corpus structure. Concerning the building process, we have shown the steps undertaken and our effort intended to make the corpus as representative as possible of TA. We therefore described the texts collection stage, as well as the corpus building and the semi-automatic procedure adopted for transcribing TA into Arabic script, taking into account CODA* and CODA TUN guidelines. At the present stage of research, TArC consists of 25.000 tokens, however our work is in progress and for future research we plan to enforce the semi-automatic transcription, which has already shown encouraging results (accuracy = 70%). We also intend to realize a semi-automatic TA Part-Of-Speech tagger. Thus, we aim to develop tools for TA processing and, in so doing, we strive to complete the annotation levels (transcription, POS tag, lemmatization) semi-automatically in order to increase the size of the corpus, making it available for linguistic analyses on TA and TUN."
],
[
"lrec2020W-xample-kc"
]
],
"section_name": [
"Introduction",
"Related Work",
"Characteristics of Tunisian Arabic and Tunisian Arabish",
"Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabic",
"Characteristics of Tunisian Arabic and Tunisian Arabish ::: Tunisian Arabish",
"Tunisian Arabish Corpus",
"Tunisian Arabish Corpus ::: Text collection",
"Tunisian Arabish Corpus ::: Corpus Creation",
"Incremental and Semi-Automatic Transcription",
"Conclusions",
"Language Resource References"
]
} | {
"answers": [
{
"annotation_id": [
"c619eef6420c7ee194d82aba158f7972d309afe3"
],
"answer": [
{
"evidence": [
"In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20. Since transcribing Arabish into Arabic is by far the most important information to study the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script. In order to proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data sets in a 10-fold cross validation setting with 9-1 proportions for training and test, respectively. As we explained in the previous section, French tokens were removed from the data. More precisely, whole sentences containing non-transcribable French tokens (code-switching) were removed from the data. Since at this level there is no way for predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create some noise for an automatic, probabilistic model. After removing sentences with French tokens, the data reduced to roughly 5,000 tokens. We chose this amount of tokens for annotation blocks in our incremental annotation procedure.",
"We note that by combining sentence, paragraph and token index in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens roughly 300 sentences could be reconstructed, which are far too few to be used for training a neural model. Instead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters, and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model learns thus to map Arabish characters into Arabic morphemes.",
"The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data. This result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. This can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous and from the third. Adding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up."
],
"extractive_spans": [],
"free_form_answer": "Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus",
"highlighted_evidence": [
"In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20.",
"Instead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters, and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model learns thus to map Arabish characters into Arabic morphemes.",
"With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. ",
"Manual transcription plus a"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"e6268e8e642910a62facc570c44f46c4985403e3"
],
"answer": [
{
"evidence": [
"The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data. This result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. This can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous and from the third. Adding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"How does the semi-automatic construction process work?",
"Does the paper report translation accuracy for an automatic translation model for Tunisian to Arabish words?"
],
"question_id": [
"cf63a4f9fe0f71779cf5a014807ae4528279c25a",
"8829f738bcdf05b615072724223dbd82463e5de6"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"dialects",
"dialects"
],
"topic_background": [
"research",
"research"
]
} | {
"caption": [
"Table 1: Arabish code-system for TUN",
"Table 2: Example of the fifteen thematic categories",
"Table 3: An Excerpt of the TArC structure. In the column Var, \"Bnz\" stands for \"Bizerte\" a northern city in Tunisia. Glosses: w1:how, w2:do you(pl) see, w3-4:the life, w5-6:at the, w7:outside, w8:?",
"Table 7: Circumfix negation marks in the corpus. Glosses: w14-15:we could not",
"Table 4: Loanword example in the corpus. Glosses: w4:we were, w5:happy, w6:, , w7:thanks",
"Table 5: Prosody example in the corpus. Glosses: w1:recipe, w2:pâté, w3:homemade, w4:and, w5:delicious",
"Table 6: Phono-Lexical exceptions in the corpus. Glosses: w1:divorced, w2:from, w3:European(f)"
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table7-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"6-Table6-1.png"
]
} | [
"How does the semi-automatic construction process work?"
] | [
[
"2003.09520-Incremental and Semi-Automatic Transcription-0",
"2003.09520-Incremental and Semi-Automatic Transcription-1",
"2003.09520-Incremental and Semi-Automatic Transcription-2"
]
] | [
"Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus"
] | 820 |
1706.02027 | Question Answering and Question Generation as Dual Tasks | We study the problem of joint question answering (QA) and question generation (QG) in this paper. Our intuition is that QA and QG have intrinsic connections and these two tasks could improve each other. On one side, the QA model judges whether the generated question of a QG model is relevant to the answer. On the other side, the QG model provides the probability of generating a question given the answer, which is a useful evidence that in turn facilitates QA. In this paper we regard QA and QG as dual tasks. We propose a training framework that trains the models of QA and QG simultaneously, and explicitly leverages their probabilistic correlation to guide the training process of both models. We implement a QG model based on sequence-to-sequence learning, and a QA model based on recurrent neural network. As all the components of the QA and QG models are differentiable, all the parameters involved in these two models could be conventionally learned with back propagation. We conduct experiments on three datasets. Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets. | {
"paragraphs": [
[
"Question answering (QA) and question generation (QG) are two fundamental tasks in natural language processing BIBREF0 , BIBREF1 . Both tasks involve reasoning between a question sequence $q$ and an answer sentence $a$ . In this work, we take answer sentence selection BIBREF2 as the QA task, which is a fundamental QA task and is very important for many applications such as search engine and conversational bots. The task of QA takes a question sentence $q$ and a list of candidate answer sentences as the input, and finds the top relevant answer sentence from the candidate list. The task of QG takes a sentence $a$ as input, and generates a question sentence $q$ which could be answered by $a$ .",
"It is obvious that the input and the output of these two tasks are (almost) reverse, which is referred to as “duality” in this paper. This duality connects QA and QG, and potentially could help these two tasks to improve each other. Intuitively, QA could improve QG through measuring the relevance between the generated question and the answer. This QA-specific signal could enhance the QG model to generate not only literally similar question string, but also the questions that could be answered by the answer. In turn, QG could improve QA by providing additional signal which stands for the probability of generating a question given the answer.",
"Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$ . Given a question-answer pair $\\langle q, a \\rangle $ , the joint probability $P(q, a)$ can be computed in two equivalent ways. ",
"$$P(q, a) = P(a) P(q|a) = P(q)P(a|q)$$ (Eq. 1) ",
"The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model. Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.",
"Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks. There might be different ways of exploiting the duality of QA and QG. In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks. Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\\theta _{qa}$ and the QG model parameterized by $\\theta _{qg}$ by minimizing their loss functions subject to the following constraint. ",
"$$P_a(a) P(q|a;\\theta _{qg}) = P_q(q)P(a|q;\\theta _{qa})$$ (Eq. 3) ",
" $P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.",
"We examine the effectiveness of our training criterion by applying it to strong neural network based QA and QG models. Specifically, we implement a generative QG model based on sequence-sequence learning, which takes an answer sentence as input and generates a question sentence in an end-to-end fashion. We implement a discriminative QA model based on recurrent neural network, where both question and answer are represented as continuous vector in a sequential way. As every component in the entire framework is differentiable, all the parameters could be conventionally trained through back propagation. We conduct experiments on three datasets BIBREF2 , BIBREF3 , BIBREF4 . Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets."
],
[
"In this section, we first formulate the task of QA and QG, and then present the proposed algorithm for jointly training the QA and QG models. We also describe the connections and differences between this work and existing studies."
],
[
"This work involves two tasks, namely question answering (QA) and question generation (QG). There are different kinds of QA tasks in natural language processing community. In this work, we take answer sentence selection BIBREF2 as the QA task, which takes a question $q$ and a list of candidate answer sentences $A = \\lbrace a_1, a_2, ... , a_{|A|}\\rbrace $ as input, and outputs one answer sentence $a_i$ from the candidate list which has the largest probability to be the answer. This QA task is typically viewed as a ranking problem. Our QA model is abbreviated as $f_{qa}(a,q;\\theta _{qa})$ , which is parameterized by $\\theta _{qa}$ and the output is a real-valued scalar.",
"The task of QG takes a sentence $a$ as input, and outputs a question $q$ which could be answered by $a$ . In this work, we regard QG as a generation problem and develop a generative model based on sequence-to-sequence learning. Our QG model is abbreviated as $P_{qg}(q|a;\\theta _{qg})$ , which is parameterized by $\\theta _{qg}$ and the output is the probability of generating a natural language question $q$ ."
],
[
"We describe the proposed algorithm in this subsection. Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG. Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1.",
"The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\\theta _{qa}), label)$ , where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not. Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero. The details about the QA model will be presented in the next section.",
"For each correct question-answer pair, the QG specific objective is to minimize the following loss function, ",
"$$l_{qg}(q, a) = -log P_{qg}(q|a;\\theta _{qg})$$ (Eq. 6) ",
"where $a$ is the correct answer of $q$ . The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer. The QG model will be described in the following section.",
"[tb] Algorithm Description Input: Language models $P_a(a)$ and $P_q(q)$ for answer and question, respectively; hyper parameters $\\lambda _q$ and $\\lambda _a$ ; optimizer $opt$ Output: QA model $f_{qa}(a,q)$ parameterized by $\\theta _{qa}$ ; QG model $P_{qg}(q|a)$ parameterized by $\\theta _{qg}$ Randomly initialize $\\theta _{qa}$ and $P_q(q)$0 Get a minibatch of positive QA pairs $P_q(q)$1 , where $P_q(q)$2 is the answer of $P_q(q)$3 ; Get a minibatch of negative QA pairs $P_q(q)$4 , where $P_q(q)$5 is not the answer of $P_q(q)$6 ;",
"Calculate the gradients for $\\theta _{qa}$ and $\\theta _{qg}$ .",
"$$\\nonumber G_{qa} = \\triangledown _{\\theta _{qa}} &\\frac{1}{m}\\sum _{i = 1}^{m}[l_{qa}(f_{qa}(a^p_i,q^p_i;\\theta _{qa}), 1) \\\\\n&\\nonumber + l_{qa}(f_{qa}(a^n_i,q^n_i;\\theta _{qa}),0) \\\\\n& +\\lambda _al_{dual}(a^p_i,q^p_i;\\theta _{qa}, \\theta _{qg})]$$ (Eq. 7) ",
"$$\\nonumber G_{qg} = \\triangledown _{\\theta _{qg}} &\\frac{1}{m}\\sum _{i = 1}^{m}[\\ l_{qg}(q^p_i,a^p_i) \\\\& + \\lambda _ql_{dual}(q^p_i,a^p_i;\\theta _{qa}, \\theta _{qg})]$$ (Eq. 8) ",
" Update $\\theta _{qa}$ and $\\theta _{qg}$ $\\theta _{qa} \\leftarrow opt(\\theta _{qa}, G_{qa})$ , $\\theta _{qg} \\leftarrow opt(\\theta _{qg}, G_{qg})$ models converged",
"The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation 3 . Specifically, given a correct $\\langle q, a \\rangle $ pair, we would like to minimize the following loss function, ",
"$$ \\nonumber l_{dual}(a,q;\\theta _{qa}, \\theta _{qg}) &= [logP_a(a) + log P(q|a;\\theta _{qg}) \\\\\n& - logP_q(q) - logP(a|q;\\theta _{qa})]^2$$ (Eq. 9) ",
" where $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language model. $P(a|q;\\theta _{qg})$ could also be easily calculated with the markov chain rule: $P(q|a;\\theta _{qg}) = \\prod _{t=1}^{|q|} P(q_t|q_{<t}, a;\\theta _{qg})$ , where the function $P(q_t|q_{<t}, a;\\theta _{qg})$ is the same with the decoder of the QG model (detailed in the following section).",
"However, the conditional probability $P(a|q;\\theta _{qa})$ is different from the output of the QA model $f_{qa}(a,q;\\theta _{qa})$ . To address this, given a question $q$ , we sample a set of answer sentences $A^{\\prime }$ , and derive the conditional probability $P(a|q;\\theta _{qa})$ based on our QA model with the following equation. ",
"$$\\nonumber &P(a|q;\\theta _{qa}) = \\\\\n&\\dfrac{exp(f_{qa}(a,q;\\theta _{qa}))}{exp(f_{qa}(a,q;\\theta _{qa})) + \\sum _{a^{\\prime } \\in A^{\\prime }} exp(f_{qa}(a^{\\prime },q;\\theta _{qa}))}$$ (Eq. 10) ",
"In this way, we learn the models of QA and QG by minimizing the weighted combination between the original loss functions and the regularization term."
],
[
"Our work differs from BIBREF5 in that they regard reading comprehension (RC) as the main task, and regard question generation as the auxiliary task to boost the main task RC. In our work, the roles of QA and QG are the same, and our algorithm enables QA and QG to improve the performance of each other simultaneously. Our approach differs from Generative Domain-Adaptive Nets BIBREF5 in that we do not pretrain the QA model. Our QA and QG models are jointly learned from random initialization. Moreover, our QA task differs from RC in that the answer in our task is a sentence rather than a text span from a sentence.",
"Our approach is inspired by dual learning BIBREF6 , BIBREF7 , which leverages the duality between two tasks to improve each other. Different from the dual learning BIBREF6 paradigm, our framework learns both models from scratch and does not need task-specific pretraining. The recently introduced supervised dual learning BIBREF7 has been successfully applied to image recognition, machine translation and sentiment analysis. Our work could be viewed as the first work that leveraging the idea of supervised dual learning for question answering. Our approach differs from Generative Adversarial Nets (GAN) BIBREF8 in two respects. On one hand, the goal of original GAN is to learn a powerful generator, while the discriminative task is regarded as the auxiliary task. The roles of the two tasks in our framework are the same. On the other hand, the discriminative task of GAN aims to distinguish between the real data and the artificially generated data, while we focus on the real QA task."
],
[
"We describe the details of the question answer (QA) model in this section. Overall, a QA model could be formulated as a function $f_{qa}(q, a;\\theta _{qa})$ parameterized by $\\theta _{qa}$ that maps a question-answer pair to a scalar. In the inference process, given a $q$ and a list of candidate answer sentences, $f_{qa}(q, a;\\theta _{qa})$ is used to calculate the relevance between $q$ and every candidate $a$ . The top ranked answer sentence is regarded as the output.",
"We develop a neural network based QA model. Specifically, we first represent each word as a low dimensional and real-valued vector, also known as word embedding BIBREF9 , BIBREF10 , BIBREF11 . Afterwards, we use recurrent neural network (RNN) to map a question of variable length to a fixed-length vector. To avoid the problem of gradient vanishing, we use gated recurrent unit (GRU) BIBREF12 as the basic computation unit. The approach recursively calculates the hidden vector $h_{t}$ based on the current word vector $e^q_t$ and the output vector $h_{t-1}$ in the last time step, ",
"$$&z_i = \\sigma (W_{z}e^q_{i} + U_{z}{h}_{i-1}) \\\\\n&r_i = \\sigma (W_{r}e^q_{i} + U_{r}{h}_{i-1}) \\\\\n&\\widetilde{h}_i = \\tanh (W_{h}e^q_{i} + U_{h}(r_i \\odot {h}_{i-1})) \\\\\n&{h}_{i} = z_i \\odot \\widetilde{h}_i + (1-z_i) \\odot {h}_{i-1}$$ (Eq. 12) ",
" where $z_i$ and $r_i$ are update and reset gates of s, $\\odot $ stands for element-wise multiplication, $\\sigma $ is sigmoid function. We use a bi-directional RNN to get the meaning of a question from both directions, and use the concatenation of two last hidden states as the final question vector $v_q$ . We compute the answer sentence vector $v_a$ in the same way.",
"After obtaining $v_q$ and $v_a$ , we implement a simple yet effective way to calculate the relevance between question-sentence pair. Specifically, we represent a question-answer pair as the concatenation of four vectors, namely $v(q, a) = [v_q; v_a; v_q \\odot v_a ; e_{c(q,a)}]$ , where $\\odot $ means element-wise multiplication, $c(q,a)$ is the number of co-occurred words in $q$ and $a$ . We observe that incorporating the embedding of the word co-occurrence $e^c_{c(q,a)}$ could empirically improve the QA performance. We use an additional embedding matrix $L_c \\in \\mathbb {R}^{d_c \\times |V_c|}$ , where $d_c$ is the dimension of word co-occurrence vector and $v_a$0 is vocabulary size. The values of $v_a$1 are jointly learned during training. The output scalar $v_a$2 is calculated by feeding $v_a$3 to a linear layer followed by $v_a$4 . We feed $v_a$5 to a $v_a$6 layer and use negative log-likelihood as the QA specific loss function. The basic idea of this objective is to classify whether a given question-answer is correct or not. We also implemented a ranking based loss function $v_a$7 , whose basic idea is to assign the correct QA pair a higher score than a randomly select QA pair. However, our empirical results showed that the ranking loss performed worse than the negative log-likelihood loss function. We use log-likelihood as the QA loss function in the experiment."
],
[
"We describe the question generation (QG) model in this section. The model is inspired by the recent success of sequence-to-sequence learning in neural machine translation. Specifically, the QG model first calculates the representation of the answer sentence with an encoder, and then takes the answer vector to generate a question in a sequential way with a decoder. We will present the details of the encoder and the decoder, respectively.",
"The goal of the encoder is to represent a variable-length answer sentence ${a}$ as a fixed-length continuous vector. The encoder could be implemented with different neural network architectures such as convolutional neural network BIBREF13 , BIBREF14 and recurrent neural network (RNN) BIBREF15 , BIBREF16 . In this work, we use bidirectional RNN based on GRU unit, which is consistent with our QA model as described in Section 3. The concatenation of the last hidden vectors from both directions is used as the output of the encoder, which is also used as the initial hidden state of the decoder.",
"The decoder takes the output of the encoder and generates the question sentence. We implement a RNN based decoder, which works in a sequential way and generates one question word at each time step. The decoder generates a word $q_{t}$ at each time step $t$ based on the representation of $a$ and the previously predicted question words $q_{<t}=\\lbrace q_1,q_2,...,q_{t-1}\\rbrace $ . This process is formulated as follows. ",
"$$p(q|a)=\\prod ^{|q|}_{t=1}p(q_{t}|q_{<t},a)$$ (Eq. 14) ",
"Specifically, we use an attention-based architecture BIBREF17 , which selectively finds relevant information from the answer sentence when generating the question word. Therefore, the conditional probability is calculated as follows. ",
"$$p(q_{t}|q_{<t},a)=f_{dec}(q_{t-1},s_{t}, c_t)$$ (Eq. 15) ",
"where $s_{t}$ is the hidden state of GRU based RNN at time step $t$ , and $c_t$ is the attention state at time step $t$ . The attention mechanism assigns a probability/weight to each hidden state in the encoder at one time step, and calculates the attention state $c_t$ through weighted averaging the hidden states of the encoder: $c_{t}=\\sum ^{|a|}_{i=1}\\alpha _{\\langle t,i\\rangle }h_i$ . When calculating the attention weight of $h_i$ at time step $t$ , we also take into account of the attention distribution in the last time step. Potentially, the model could remember which contexts from answer sentence have been used before, and does not repeatedly use these words to generate the question words. ",
"$$\\alpha _{\\langle t,i\\rangle }=\\frac{\\exp {[z(s_{t},h_i,\\sum ^{N}_{j=1}\\alpha _{\\langle t-1,j\\rangle }h_j)]}}{\\sum ^{H}_{i^{\\prime }=1}\\exp {[z(s_{t},h_{i^{\\prime }},\\sum ^{N}_{j=1}\\alpha _{\\langle t-1,j\\rangle }h_{j})]}}$$ (Eq. 16) ",
"Afterwards, we feed the concatenation of $s_t$ and $c_t$ to a linear layer followed by a $softmax$ function. The output dimension of the $softmax$ layer is equal to the number of top frequent question words (e.g. 30K or 50K) in the training data. The output values of the $softmax$ layer form the probability distribution of the question words to be generated. Furthermore, we observe that question sentences typically include informative but low-frequency words such as named entities or numbers. These low-frequency words are closely related to the answer sentence but could not be well covered in the target vocabulary. To address this, we add a simple yet effective post-processing step which replaces each “unknown word” with the most relevant word from the answer sentence. Following BIBREF18 , we use the attention probability as the relevance score of each word from the answer sentence. Copying mechanism BIBREF19 , BIBREF20 is an alternative solution that adaptively determines whether the generated word comes from the target vocabulary or from the answer sentence.",
"Since every component of the QG model is differentiable, all the parameters could be learned in an end-to-end way with back propagation. Given a question-answer pair $\\langle q,a\\rangle $ , where $a$ is the correct answer of the question $q$ , the training objective is to minimize the following negative log-likelihood. ",
"$$l_{qg}(q,a)=-\\sum ^{|q|}_{t=1}\\log [p(y_t|y_{<t},a)]$$ (Eq. 17) ",
"In the inference process, we use beam search to get the top- $K$ confident results, where $K$ is the beam size. The inference process stops when the model generates the symbol $\\langle eos \\rangle $ which stands for the end of sentence."
],
[
"We describe the experimental setting and report empirical results in this section."
],
[
"We conduct experiments on three datasets, including MARCO BIBREF4 , SQUAD BIBREF3 , and WikiQA BIBREF2 .",
"The MARCO and SQUAD datasets are originally developed for the reading comprehension (RC) task, the goal of which is to answer a question with a text span from a document. Despite our QA task (answer sentence selection) is different from RC, we use these two datasets because of two reasons. The first reason is that to our knowledge they are the QA datasets that contains largest manually labeled question-answer pairs. The second reason is that, we could derive two QA datasets for answer sentence selection from the original MARCO and SQUAD datasets, with an assumption that the answer sentences containing the correct answer span are correct, and vice versa. We believe that our training framework could be easily applied to RC task, but we that is out of the focus of this work.",
"We also conduct experiments on WikiQA BIBREF2 , which is a benchmark dataset for answer sentence selection. Despite its data size is relatively smaller compared with MARCO and SQUAD, we still apply our algorithm on this data and report empirical results to further compare with existing algorithms.",
"It is worth to note that a common characteristic of MARCO and SQUAD is that the ground truth of the test is invisible to the public. Therefore, we randomly split the original validation set into the dev set and the test set. The statistics of SQUAD and MARCO datasets are given in Table 1 . We use the official split of the WikiQA dataset. We apply exactly the same model to these three datasets.",
"We evaluate our QA system with three standard evaluation metrics: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision@1 (P@1) BIBREF23 . It is hard to find a perfect way to automatically evaluate the performance of a QG system. In this work, we use BLEU-4 BIBREF24 score as the evaluation metric, which measures the overlap between the generated question and the ground truth."
],
[
"We train the parameters of the QA model and the QG model simultaneously. We randomly initialize the parameters in both models with a combination of the fan-in and fan-out BIBREF25 . The parameters of word embedding matrices are shared in the QA model and the QG model. In order to learn question and answer specific word meanings, we use two different embedding matrices for question words and answer words. The vocabularies are the most frequent 30K words from the questions and answers in the training data. We set the dimension of word embedding as 300, the hidden length of encoder and decoder in the QG model as 512, the hidden length of GRU in the QA model as 100, the dimension of word co-occurrence embedding as 10, the vocabulary size of the word co-occurrence embedding as 10, the hidden length of the attention layer as 30. We initialize the learning rate as 2.0, and use AdaDelta BIBREF26 to adaptively decrease the learning rate. We use mini-batch training, and empirically set the batch size as 64. The sampled answer sentences do not come from the same passage. We get 10 batches (640 instances) and sort them by answer length for accelerating the training process. The negative samples come from these 640 instances, which are from different passages.",
"In this work, we use smoothed bigram language models as $p_a(a)$ and $p_q(q)$ . We also tried trigram language model but did not get improved performance. Alternatively, one could also implement neural language model and jointly learn the parameters in the training process."
],
[
"We first report results on the MARCO and SQUAD datasets. As the dataset is splitted by ourselves, we do not have previously reported results for comparison. We compare with the following four baseline methods. It has been proven that word co-occurrence is a very simple yet effective feature for this task BIBREF2 , BIBREF22 , so the first two baselines are based on the word co-occurrence between a question sentence and the candidate answer sentence. WordCnt and WgtWordCnt use unnormalized and normalized word co-occurrence. The ranker in these two baselines are trained with with FastTree, which performs better than SVMRank and linear regression in our experiments. We also compare with CDSSM BIBREF21 , which is a very strong neural network approach to model the semantic relatedness of a sentence pair. We further compare with ABCNN BIBREF22 , which has been proven very powerful in various sentence matching tasks. Basic QA is our QA model which does not use the duality between QA and QG. Our ultimate model is abbreviated as Dual QA.",
"The QA performance on MARCO and SQUAD datasets are given in Table 2 . We can find that CDSSM performs better than the word co-occurrence based method on MARCO dataset. On the SQUAD dataset, Dual QA achieves the best performance among all these methods. On the MARCO dataset, Dual QA performs comparably with ABCNN.",
"We can find that Dual QA still yields better accuracy than Basic QA, which shows the effectiveness of the joint training algorithm. It is interesting that word co-occurrence based method (WgtWordCnt) is very strong and hard to beat on the MARCO dataset. Incorporating sophisticated features might obtain improved performance on both datasets, however, this is not the focus of this work and we leave it to future work.",
"Results on the WikiQA dataset is given in Table 3 . On this dataset, previous studies typically report results based on their deep features plus the number of words that occur both in the question and in the answer BIBREF2 , BIBREF22 . We also follow this experimental protocol. We can find that our basic QA model is simple yet effective. The Dual QA model achieves comparably to strong baseline methods.",
"To give a quantitative evaluation of our training framework on the QG model, we report BLEU-4 scores on MARCO and SQUAD datasets. The results of our QG model with or without using joint training are given in Table 5 . We can find that, despite the overall BLEU-4 scores are relatively low, using our training algorithm could improve the performance of the QG model.",
"We would like to investigate how the joint training process improves the QA and QG models. To this end, we analyze the results of development set on the SQUAD dataset. We randomly sample several cases that the Basic QA model gets the wrong answers while the Dual QA model obtains the correct results. Examples are given in Table 4 . From these examples, we can find that the questions generated by Dual QG tend to have more word overlap with the correct question, despite sometimes the point of the question is not correct. For example, compared with the Basic QG model, the Dual QG model generates more informative words, such as “green” in the first example, “purpose” in the second example, and “how much” in the third example. We believe this helps QA because the QA model is trained to assign a higher score to the question which looks similar with the generated question. It also helps QG because the QA model is trained to give a higher score to the real question-answer pair, so that generating more answer-alike words gives the generated question a higher QA score.",
"Despite the proposed training framework obtains some improvements on QA and QG, we believe the work could be further improved from several directions. We find that our QG model not always finds the point of the reference question. This is not surprising because the questions from these two reading comprehension datasets only focus on some spans of a sentence, rather than the entire sentence. Therefore, the source side (answer sentence) carries more information than the target side (question sentence). Moreover, we do not use the answer position information in our QG model. Accordingly, the model may pay attention to the point which is different from the annotator's direction, and generates totally different questions. We are aware of incorporating the position of the answer span could get improved performance BIBREF29 , however, the focus of this work is a sentence level QA task rather than reading comprehension. Therefore, despite MARCO and SQUAD are of large scale, they are not the desirable datasets for investigating the duality of our QA and QG tasks. Pushing forward this area also requires large scale sentence level QA datasets."
],
[
"We would like to discuss our understanding about the duality of QA and QG, and also present our observations based on the experiments.",
"In this work, “duality” means that the QA task and the QG task are equally important. This characteristic makes our work different from Generative Domain-Adaptive Nets BIBREF5 and Generative Adversarial Nets (GAN) BIBREF8 , both of which have a main task and regard another task as the auxiliary one. There are different ways to leverage the “duality” of QA and QG to improve both tasks. We categorize them into two groups. The first group is about the training process and the second group is about the inference process. From this perspective, dual learning BIBREF6 is a solution that leverages the duality in the training process. In particular, dual learning first pretrains the models for two tasks separately, and then iteratively fine-tunes the models. Our work also belongs to the first group. Our approach uses the duality as a regularization item to guide the learning of QA and QG models simultaneously from scratch. After the QA and QG models are trained, we could also use the duality to improve the inference process, which falls into the second group. The process could be conducted on separately trained models or the models that jointly trained with our approach. This is reasonable because the QA model could directly add one feature to consider $q$ and $q^{\\prime }$ , where $q^{\\prime }$ is the question generated by the QG model. The first example in Table 4 also motivates this direction. Similarly, the QA model could give each $\\langle q^{\\prime }, a \\rangle $ a score which could be assigned to each generated question $q^{\\prime }$ . In this work we do not apply the duality in the inference process. We leave it as a future plan.",
"This work could be improved by refining every component involved in our framework. For example, we use a simple yet effective QA model, which could be improved by using more complex neural network architectures BIBREF30 , BIBREF22 or more external resources. We use a smoothed language model for both question and answer sentences, which could be replaced by designed neural language models whose parameters are jointly learned together with the parameters in QA and QG models. The QG model could be improved as well, for example, by developing more complex neural network architectures to take into account of more information about the answer sentence in the generation process.",
"In addition, it is also very important to investigate an automatic evaluation metric to effectively measure the performance of a QG system. BLEU score only measures the literal similarity between the generated question and the ground truth. However, it does not measure whether the question really looks like a question or not. A desirable evaluation system should also have the ability to judge whether the generated question could be answered by input sentence, even if the generated question use totally different words to express the meaning."
],
[
"Our work relates to existing studies on question answering (QA) and question generation (QG).",
"There are different types of QA tasks including text-level QA BIBREF31 , knowledge based QA BIBREF32 , community based QA BIBREF33 and the reading comprehension BIBREF3 , BIBREF4 . Our work belongs to text based QA where the answer is a sentence. In recent years, neural network approaches BIBREF30 , BIBREF31 , BIBREF22 show promising ability in modeling the semantic relation between sentences and achieve strong performances on QA tasks.",
"Question generation also draws a lot of attentions in recent years. QG is very necessary in real application as it is always time consuming to create large-scale QA datasets. In literature, BIBREF34 use Minimal Recursion Semantics (MRS) to represent the meaning of a sentence, and then realize the MSR structure into a natural language question. BIBREF35 present a overgenerate-and-rank framework consisting of three stages. They first transform a sentence into a simpler declarative statement, and then transform the statement to candidate questions by executing well-defined syntactic transformations. Finally, a ranker is used to select the questions of high-quality. BIBREF36 focus on generating questions from a topic. They first get a list of texts related to the topic, and then generate questions by exploiting the named entity information and the predicate argument structures of the texts. BIBREF37 propose an ontology-crowd-relevance approach to generate questions from novel text. They encode the original text in a low-dimensional ontology, and then align the question templates obtained via crowd-sourcing to that space. A final ranker is used to select the top relevant templates. There also exists some studies on generating questions from knowledge base BIBREF38 , BIBREF39 . For example, BIBREF39 develop a neural network approach which takes a knowledge fact (including a subject, an object, and a predicate) as input, and generates the question with a recurrent neural network. Recent studies also investigate question generation for the reading comprehension task BIBREF40 , BIBREF29 . The approaches are typically based on the encoder-decoder framework, which could be conventionally learned in an end-to-end way. As the answer is a text span from the sentence/passage, it is helpful to incorporate the position of the answer span BIBREF29 . In addition, the computer vision community also pays attention to generating natural language questions about an image BIBREF41 ."
],
[
"We focus on jointly training the question answering (QA) model and the question generation (QG) model in this paper. We exploit the “duality” of QA and QG tasks, and introduce a training framework to leverage the probabilistic correlation between the two tasks. In our approach, the “duality” is used as a regularization term to influence the learning of QA and QG models. We implement simple yet effective QA and QG models, both of which are neural network based approaches. Experimental results show that the proposed training framework improves both QA and QG on three datasets."
]
],
"section_name": [
"Introduction",
"The Proposed Framework",
"Task Definition and Notations",
"Algorithm Description",
"Relationships with Existing Studies",
"The Question Answering Model",
"The Question Generation Model",
"Experiment",
"Experimental Setting",
"Implementation Details",
"Results and Analysis",
"Discussion",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c9e859ba02a47030b30a5a3623d17e1fbec90c8f"
],
"answer": [
{
"evidence": [
"Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$ . Given a question-answer pair $\\langle q, a \\rangle $ , the joint probability $P(q, a)$ can be computed in two equivalent ways.",
"$$P(q, a) = P(a) P(q|a) = P(q)P(a|q)$$ (Eq. 1)",
"The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model. Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.",
"Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks. There might be different ways of exploiting the duality of QA and QG. In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks. Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\\theta _{qa}$ and the QG model parameterized by $\\theta _{qg}$ by minimizing their loss functions subject to the following constraint.",
"$$P_a(a) P(q|a;\\theta _{qg}) = P_q(q)P(a|q;\\theta _{qa})$$ (Eq. 3)",
"$P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.",
"We describe the proposed algorithm in this subsection. Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG. Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1.",
"The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\\theta _{qa}), label)$ , where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not. Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero. The details about the QA model will be presented in the next section.",
"For each correct question-answer pair, the QG specific objective is to minimize the following loss function,",
"$$l_{qg}(q, a) = -log P_{qg}(q|a;\\theta _{qg})$$ (Eq. 6)",
"where $a$ is the correct answer of $q$ . The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer. The QG model will be described in the following section.",
"The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation 3 . Specifically, given a correct $\\langle q, a \\rangle $ pair, we would like to minimize the following loss function,",
"$$ \\nonumber l_{dual}(a,q;\\theta _{qa}, \\theta _{qg}) &= [logP_a(a) + log P(q|a;\\theta _{qg}) \\\\ & - logP_q(q) - logP(a|q;\\theta _{qa})]^2$$ (Eq. 9)",
"where $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language model. $P(a|q;\\theta _{qg})$ could also be easily calculated with the markov chain rule: $P(q|a;\\theta _{qg}) = \\prod _{t=1}^{|q|} P(q_t|q_{<t}, a;\\theta _{qg})$ , where the function $P(q_t|q_{<t}, a;\\theta _{qg})$ is the same with the decoder of the QG model (detailed in the following section)."
],
"extractive_spans": [],
"free_form_answer": "The framework jointly learns parametrized QA and QG models subject to the constraint in equation 2. In more detail, they minimize QA and QG loss functions, with a third dual loss for regularization.",
"highlighted_evidence": [
"Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$ . Given a question-answer pair $\\langle q, a \\rangle $ , the joint probability $P(q, a)$ can be computed in two equivalent ways.\r\n\r\n$$P(q, a) = P(a) P(q|a) = P(q)P(a|q)$$ (Eq. 1)\r\n\r\nThe conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model. Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.\r\n\r\nBased on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks. There might be different ways of exploiting the duality of QA and QG. In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks. Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\\theta _{qa}$ and the QG model parameterized by $\\theta _{qg}$ by minimizing their loss functions subject to the following constraint.\r\n\r\n$$P_a(a) P(q|a;\\theta _{qg}) = P_q(q)P(a|q;\\theta _{qa})$$ (Eq. 3)\r\n\r\n$P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.",
"Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG.",
"The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\\theta _{qa}), label)$ , where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not.",
"For each correct question-answer pair, the QG specific objective is to minimize the following loss function,\r\n\r\n$$l_{qg}(q, a) = -log P_{qg}(q|a;\\theta _{qg})$$ (Eq. 6)\r\n\r\nwhere $a$ is the correct answer of $q$ .",
"The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation 3 . Specifically, given a correct $\\langle q, a \\rangle $ pair, we would like to minimize the following loss function,\r\n\r\n$$ \\nonumber l_{dual}(a,q;\\theta _{qa}, \\theta _{qg}) &= [logP_a(a) + log P(q|a;\\theta _{qg}) \\\\ & - logP_q(q) - logP(a|q;\\theta _{qa})]^2$$ (Eq. 9)\r\n\r\nwhere $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f386218d4ab0b6b3022da887e8aa18a9ad3bbae2"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"somewhat"
],
"question": [
"What does \"explicitly leverages their probabilistic correlation to guide the training process of both models\" mean?"
],
"question_id": [
"3d49b678ff6b125ffe7fb614af3e187da65c6f65"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of the MARCO, SQUAD and WikiQA datasets for answer sentence selection.",
"Table 2: QA Performance on the MARCO and SQUAD datasets.",
"Table 3: QA performance on the WikiQA dataset.",
"Table 4: Sampled examples from the SQUAD dataset.",
"Table 5: QG performance (BLEU-4 scores) on MARCO, SQUAD and WikiQA datasets."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png"
]
} | [
"What does \"explicitly leverages their probabilistic correlation to guide the training process of both models\" mean?"
] | [
[
"1706.02027-Introduction-4",
"1706.02027-Algorithm Description-4",
"1706.02027-Algorithm Description-0",
"1706.02027-Algorithm Description-1"
]
] | [
"The framework jointly learns parametrized QA and QG models subject to the constraint in equation 2. In more detail, they minimize QA and QG loss functions, with a third dual loss for regularization."
] | 824 |
1910.07134 | Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task | This paper describes the Notre Dame Natural Language Processing Group's (NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We investigated the impact of auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) to the Transformer network (Vaswani et al., 2017) with the goal of substantially reducing the number of parameters in the model. Our method was able to eliminate more than 25% of the model's parameters while suffering a decrease of only 1.1 BLEU. | {
"paragraphs": [
[
"The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance.",
"Auto-sizing, first introduced by BIBREF1, uses group regularizers to encourage parameter sparsity. When applied over neurons, it can delete neurons in a network and shrink the total number of parameters. A nice advantage of auto-sizing is that it is independent of model architecture; although we apply it to the Transformer network in this task, it can easily be applied to any other neural architecture.",
"NDNLP's submission to the 2019 WNGT Efficiency shared task uses a standard, recommended baseline Transformer network. Following BIBREF2, we investigate the application of auto-sizing to various portions of the network. Differing from their work, the shared task used a significantly larger training dataset from WMT 2014 BIBREF4, as well as the goal of reducing model size even if it impacted translation performance. Our best system was able to prune over 25% of the parameters, yet had a BLEU drop of only 1.1 points. This translates to over 25 million parameters pruned and saves almost 100 megabytes of disk space to store the model."
],
[
"Auto-sizing is a method that encourages sparsity through use of a group regularizer. Whereas the most common applications of regularization will act over parameters individually, a group regularizer works over groupings of parameters. For instance, applying a sparsity inducing regularizer to a two-dimensional parameter tensor will encourage individual values to be driven to 0.0. A sparsity-inducing group regularizer will act over defined sub-structures, such as entire rows or columns, driving the entire groups to zero. Depending on model specifications, one row or column of a tensor in a neural network can correspond to one neuron in the model.",
"Following the discussion of BIBREF1 and BIBREF2, auto-sizing works by training a neural network while using a regularizer to prune units from the network, minimizing:",
"$W$ are the parameters of the model and $R$ is a regularizer. Here, as with the previous work, we experiment with two regularizers:",
"The optimization is done using proximal gradient descent BIBREF5, which alternates between stochastic gradient descent steps and proximal steps:"
],
[
"The Transformer network BIBREF3 is a sequence-to-sequence model in which both the encoder and the decoder consist of stacked self-attention layers. The multi-head attention uses two affine transformations, followed by a softmax layer. Each layer has a position-wise feed-forward neural network (FFN) with a hidden layer of rectified linear units. Both the multi-head attention and the feed-forward neural network have residual connections that allow information to bypass those layers. In addition, there are also word and position embeddings. Figure FIGREF1, taken from the original paper, shows the architecture. NDNLP's submission focuses on the $N$ stacked encoder and decoder layers.",
"The Transformer has demonstrated remarkable success on a variety of datasets, but it is highly over-parameterized. For example, the baseline Transformer model has more than 98 million parameters, but the English portion of the training data in this shared task has only 116 million tokens and 816 thousand types. Early NMT models such as BIBREF6 have most of their parameters in the embedding layers, but the transformer has a larger percentage of the model in the actual encoder and decoder layers. Though the group regularizers of auto-sizing can be applied to any parameter matrix, here we focus on the parameter matrices within the encoder and decoder layers.",
"We note that there has been some work recently on shrinking networks through pruning. However, these differ from auto-sizing as they frequently require an arbitrary threshold and are not included during the training process. For instance, BIBREF7 prunes networks based off a variety of thresholds and then retrains a model. BIBREF8 also look at pruning, but of attention heads specifically. They do this through a relaxation of an $\\ell _0$ regularizer in order to make it differentiable. This allows them to not need to use a proximal step. This method too starts with pre-trained model and then continues training. BIBREF9 also look at pruning attention heads in the transformer. However, they too use thresholding, but only apply it at test time. Auto-sizing does not require a thresholding value, nor does it require a pre-trained model.",
"Of particular interest are the large, position-wise feed-forward networks in each encoder and decoder layer:",
"",
"$W_1$ and $W_2$ are two large affine transformations that take inputs from $D$ dimensions to $4D$, then project them back to $D$ again. These layers make use of rectified linear unit activations, which were the focus of auto-sizing in the work of BIBREF1. No theory or intuition is given as to why this value of $4D$ should be used.",
"Following BIBREF2, we apply the auto-sizing method to the Transformer network, focusing on the two largest components, the feed-forward layers and the multi-head attentions (blue and orange rectangles in Figure FIGREF1). Remember that since there are residual connections allowing information to bypass the layers we are auto-sizing, information can still flow through the network even if the regularizer drives all the neurons in a layer to zero – effectively pruning out an entire layer."
],
[
"All of our models are trained using the fairseq implementation of the Transformer BIBREF10. For the regularizers used in auto-sizing, we make use of an open-source, proximal gradient toolkit implemented in PyTorch BIBREF2. For each mini-batch update, the stochastic gradient descent step is handled with a standard PyTorch forward-backward call. Then the proximal step is applied to parameter matrices."
],
[
"We used the originally proposed transformer architecture – with six encoder and six decoder layers. Our model dimension was 512 and we used 8 attention heads. The feed-forward network sub-components were of size 2048. All of our systems were run using subword units (BPE) with 32,000 merge operations on concatenated source and target training data BIBREF11. We clip norms at 0.1, use label smoothed cross-entropy with value 0.1, and an early stopping criterion when the learning rate is smaller than $10^{-5}$. We used the Adam optimizer BIBREF12, a learning rate of $10^{-4}$, and dropout of 0.1. Following recommendations in the fairseq and tensor2tensor BIBREF13 code bases, we apply layer normalization before a sub-component as opposed to after. At test time, we decoded using a beam of 5 with length normalization BIBREF14 and evaluate using case-sensitive, tokenized BLEU BIBREF15.",
"For the auto-sizing experiments, we looked at both $\\ell _{2,1}$ and $\\ell _{\\infty ,1}$ regularizers. We experimented over a range of regularizer coefficient strengths, $\\lambda $, that control how large the proximal gradient step will be. Similar to BIBREF1, but differing from BIBREF16, we use one value of $\\lambda $ for all parameter matrices in the network. We note that different regularization coefficient values are suited for different types or regularizers. Additionally, all of our experiments use the same batch size, which is also related to $\\lambda $."
],
[
"We applied auto-sizing to the sub-components of the encoder and decoder layers, without touching the word or positional embeddings. Recall from Figure FIGREF1, that each layer has multi-head attention and feed-forward network sub-components. In turn, each multi-head attention sub-component is comprised of two parameter matrices. Similarly, each feed-forward network has two parameter matrices, $W_1$ and $W_2$. We looked at three main experimental configurations:",
"All: Auto-sizing is applied to every multi-head attention and feed-forward network sub-component in every layer of the encoder and decoder.",
"Encoder: As with All, auto-sizing is applied to both multi-head attention and feed-forward network sub-components, but only in the encoder layers. The decoder remains the same.",
"FFN: Auto-sizing applied only to the feed-forward network sub-components $W_1$ and $W_2$, but not to the multi-head portions. This too is applied to both the encoder and decoder."
],
[
"Our results are presented in Table TABREF6. The baseline system has 98.2 million parameters and a BLEU score of 27.9 on newstest2015. It takes up 375 megabytes on disk. Our systems that applied auto-sizing only to the feed-forward network sub-components of the transformer network maintained the best BLEU scores while also pruning out the most parameters of the model. Overall, our best system used $\\ell _{2,1}=1.0$ regularization for auto-sizing and left 73.1 million parameters remaining. On disk, the model takes 279 megabytes to store – roughly 100 megabytes less than the baseline. The performance drop compared to the baseline is 1.1 BLEU points, but the model is over 25% smaller.",
"Applying auto-sizing to the multi-head attention and feed-forward network sub-components of only the encoder also pruned a substantial amount of parameters. Though this too resulted in a smaller model on disk, the BLEU scores were worse than auto-sizing just the feed-forward sub-components. Auto-sizing the multi-head attention and feed-forward network sub-components of both the encoder and decoder actually resulted in a larger model than the encoder only, but with a lower BLEU score. Overall, our results suggest that the attention portion of the transformer network is more important for model performance than the feed-forward networks in each layer."
],
[
"In this paper, we have investigated the impact of using auto-sizing on the transformer network of the 2019 WNGT efficiency task. We were able to delete more than 25% of the parameters in the model while only suffering a modest BLEU drop. In particular, focusing on the parameter matrices of the feed-forward networks in every layer of the encoder and decoder yielded the smallest models that still performed well.",
"A nice aspect of our proposed method is that the proximal gradient step of auto-sizing can be applied to a wide variety of parameter matrices. Whereas for the transformer, the largest impact was on feed-forward networks within a layer, should a new architecture emerge in the future, auto-sizing can be easily adapted to the trainable parameters.",
"Overall, NDNLP's submission has shown that auto-sizing is a flexible framework for pruning parameters in a large NMT system. With an aggressive regularization scheme, large portions of the model can be deleted with only a modest impact on BLEU scores. This in turn yields a much smaller model on disk and at run-time."
],
[
"This research was supported in part by University of Southern California, subcontract 67108176 under DARPA contract HR0011-15-C-0115."
]
],
"section_name": [
"Introduction",
"Auto-sizing",
"Auto-sizing the Transformer",
"Experiments",
"Experiments ::: Settings",
"Experiments ::: Auto-sizing sub-components",
"Experiments ::: Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"d28bef9548d5b694da9250fb31bb1da6a28c6d2b"
],
"answer": [
{
"evidence": [
"The Transformer network BIBREF3 is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance."
],
"extractive_spans": [],
"free_form_answer": "efficiency task aimed at reducing the number of parameters while minimizing drop in performance",
"highlighted_evidence": [
"For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task BIBREF0, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero"
],
"paper_read": [
"no"
],
"question": [
"What is WNGT 2019 shared task?"
],
"question_id": [
"aaed6e30cf16727df0075b364873df2a4ec7605b"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Architecture of the Transformer (Vaswani et al., 2017). We apply the auto-sizing method to the feed-forward (blue rectangles) and multi-head attention (orange rectangles) in all N layers of the encoder and decoder. Note that there are residual connections that can allow information and gradients to bypass any layer we are auto-sizing. Following the robustness recommendations, we instead layer norm before.",
"Figure 2: Auto-sizing FFN network. For a row in the parameter matrix W1 that has been driven completely to 0.0 (shown in white), the corresponding column in W2 (shown in blue) no longer has any impact on the model. Both the column and the row can be deleted, thereby shrinking the model.",
"Table 1: Comparison of BLEU scores and model sizes on newstest2014 and newstest2015. Applying auto-sizing to the feed-forward neural network sub-components of the transformer resulted in the most amount of pruning while still maintaining good BLEU scores."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png"
]
} | [
"What is WNGT 2019 shared task?"
] | [
[
"1910.07134-Introduction-0"
]
] | [
"efficiency task aimed at reducing the number of parameters while minimizing drop in performance"
] | 834 |
2004.03061 | Information-Theoretic Probing for Linguistic Structure | The success of neural networks on a diverse set of NLP tasks has led researchers to question how much do these networks actually know about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotation in that linguistic task from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that such models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic formalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate. The empirical portion of our paper focuses on obtaining tight estimates for how much information BERT knows about parts of speech in a set of five typologically diverse languages that are often underrepresented in parsing research, plus English, totaling six languages. We find BERT accounts for only at most 5% more information than traditional, type-based word embeddings. | {
"paragraphs": [
[
"Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6.",
"In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means.",
"Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints.",
"In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. We interpret this to mean that part-of-speech labeling is not a very informative probing task.",
"We also remark that formulating probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT BIBREF13 and ELMo BIBREF14, contain the same amount of information about the linguistic property of interest as the original sentence. This follows naturally from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all."
],
[
"Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS."
],
[
"Let $S$ be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary $\\mathcal {V}$ is finite and, thus, the values $S$ can take are in $\\mathcal {V}^*$. We write $\\mathbf {s}\\in S$ as $\\mathbf {s}= w_1 \\cdots w_{|\\mathbf {s}|}$ for a specific sentence, where each $w_i \\in \\mathcal {V}$ is a specific word in the sentence and the position $i \\in \\mathbb {N}^{+}$. We also define the random variable $W$ that ranges over the vocabulary $\\mathcal {V}$. We define both a sentence-level random variable $S$ and a word-level random variable $W$ since each will be useful in different contexts during our exposition.",
"Next, let $T$ be a random variable whose possible values are the analyses $t$ that we want to consider for word $w_i$ in its sentential context, $\\mathbf {s}= w_1 \\cdots w_i \\cdots w_{|\\mathbf {s}|}$. In this work, we will focus on predicting the part-of-speech tag of the $i^\\text{th}$ word $w_i$. We denote the set of values $T$ can take as the set $\\mathcal {T}$. Finally, let $R$ be a representation-valued random variable for the $i^\\text{th}$ word $w_i$ in a sentence derived from the entire sentence $\\mathbf {s}$. We write $\\mathbf {r}\\in \\mathbb {R}^d$ for a value of $R$. While any given value $\\mathbf {r}$ is a continuous vector, there are only a countable number of values $R$ can take. To see this, note there are only a countable number of sentences in $\\mathcal {V}^*$.",
"Next, we assume there exists a true distribution $p(t, \\mathbf {s}, i)$ over analyses $t$ (elements of $\\mathcal {T}$), sentences $\\mathbf {s}$ (elements of $\\mathcal {V}^*$), and positions $i$ (elements of $\\mathbb {N}^{+}$). Note that the conditional distribution $p(t \\mid \\mathbf {s}, i)$ gives us the true distribution over analyses $t$ for the $i^{\\text{th}}$ word in the sentence $\\mathbf {s}$. We will augment this distribution such that $p$ is additionally a distribution over $\\mathbf {r}$, i.e.,",
"where we define the augmentation as a Dirac's delta function",
"Since contextual embeddings are a deterministic function of a sentence $\\mathbf {s}$, the augmented distribution in eq:true has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) $p$. While we do not have access to $p$, we assume the data in our corpus were drawn according to it. Note that $W$—the random variable over possible words—is distributed according to the marginal distribution",
"where we define the deterministic distribution"
],
[
"The task of supervised probing is an attempt to ascertain how much information a specific representation $\\mathbf {r}$ tells us about the value of $t$. This is naturally expressed as the mutual information, a quantity from information theory:",
"where we define the entropy, which is constant with respect to the representations, as",
"and where we define the conditional entropy as",
"the point-wise conditional entropy inside the sum is defined as",
"Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq:true."
],
[
"The desired conditional entropy, $\\mathrm {H}(T \\mid R)$ is not readily available, but with a model $q_{{\\theta }}(\\mathbf {t}\\mid \\mathbf {r})$ in hand, we can upper-bound it by measuring their empirical cross entropy",
"where $\\mathrm {H}_{q_{{\\theta }}}(T \\mid R)$ is the cross-entropy we obtain by using $q_{{\\theta }}$ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information",
"This bound gets tighter, the more similar (in the sense of the KL divergence) $q_{{\\theta }}(\\cdot \\mid \\mathbf {r})$ is to the true distribution $p(\\cdot \\mid \\mathbf {r})$."
],
[
"If we accept mutual information as a natural measure for how much representations encode a target linguistic task (§SECREF6), then the best estimate of that mutual information is the one where the probe $q_{{\\theta }}(t \\mid \\mathbf {r})$ is best at the target task. In other words, we want the best probe $q_{{\\theta }}(t \\mid \\mathbf {r})$ such that we get the tightest bound to the actual distribution $p(t\\mid \\mathbf {r})$. This paints the question posed by hewitt-liang-2019-designing, who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy. From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure."
],
[
"To place the performance of a probe in perspective, hewitt-liang-2019-designing develop the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$. Similar to hewitt-liang-2019-designing's control tasks, the goal of a control function $\\mathbf {c}(\\cdot )$ is to place the mutual information $\\mathrm {I}(T; R)$ in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality BIBREF7, which states that, for any function $\\mathbf {c}(\\cdot )$, we have",
"In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.”"
],
[
"We will focus on type-level control functions in this paper; these functions have the effect of decontextualizing the embeddings. Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption about the embeddings.",
"Assumption 1 Every contextualized embedding is unique, i.e., for any pair of sentences $\\mathbf {s}, \\mathbf {s}^{\\prime } \\in \\mathcal {V}^*$, we have $(\\mathbf {s}\\ne \\mathbf {s}^{\\prime }) \\mid \\mid (i \\ne j) \\Rightarrow \\textsc {bert} (\\mathbf {s})_i \\ne \\textsc {bert} (\\mathbf {s}^{\\prime })_j$ for all $i \\in \\lbrace 1, \\ldots |\\mathbf {s}|\\rbrace $ and $j \\in \\lbrace 1, \\ldots , |\\mathbf {s}^{\\prime }|\\rbrace $.",
"We note that ass:one is mild. Contextualized word embeddings map words (in their context) to $\\mathbb {R}^d$, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\\mathbb {R}^d$ that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. ass:one yields the following corollary.",
"Corollary 1 There exists a function $\\emph {\\texttt {id} } : \\mathbb {R}^d \\rightarrow V$ that maps a contextualized embedding to its word type. The function $\\emph {\\texttt {id} }$ is not a bijection since multiple embeddings will map to the same type.",
"Using cor:one, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function $\\mathbf {e}: V \\rightarrow \\mathbb {R}^d$ that maps a word to a word embedding. This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText BIBREF15. We can then construct a control function as the composition of the look-up function $\\mathbf {e}$ and the id function $\\texttt {id} $. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type level) word-embedding will contain no more information than a contextualized (token level) one, such as BERT and ELMo. Specifically, we have",
"This result is intuitive and, perhaps, trivial—context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in sec:gain."
],
[
"We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $\\mathbf {c}(\\cdot )$. We term how much more information the contextualized embeddings have about a task than a control variable the gain, which we define as",
"The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $\\mathbf {c}$. We will empirically estimate this value in sec:experiments.",
"Interestingly enough, the gain has a straightforward interpretation.",
"Proposition 1 The gain function is equal to the following conditional mutual information",
"The jump from the first to the second equality follows since $R$ encodes all the information about $T$ provided by $\\mathbf {c}(R)$ by construction. prop:interpretation gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations."
],
[
"The gain, as defined in eq:gain, is intractable to compute. In this section we derive a pair of variational bounds on $\\mathcal {G}(T, R, \\mathbf {e})$—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and a lower-bound on eq:gain. We begin by approximating the gain in the following manner",
"these cross-entropies can be empirically estimated. We will assume access to a corpus $\\lbrace (t_i, \\mathbf {r}_i)\\rbrace _{i=1}^N$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, \\mathbf {r}_i) \\sim p(\\cdot , \\cdot )$ from the true distribution. This yields a second approximation that is tractable:",
"This approximation is exact in the limit $N \\rightarrow \\infty $ by the law of large numbers.",
"We note the approximation given in eq:approx may be either positive or negative and its estimation error follows from eq:entestimate",
"where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative, by the data-processing inequality, but we have yet to devise a remedy.",
"We justify the approximation in eq:approx with a pair of variational bounds. The following two corollaries are a result of thm:variationalbounds in appendix:a.",
"Corollary 2 We have the following upper-bound on the gain",
"Corollary 3 We have the following lower-bound on the gain",
"The conjunction of cor:upper and cor:lower suggest a simple procedure for finding a good approximation: We choose $q_{{\\theta }1}(\\cdot \\mid r)$ and $q_{{\\theta }2}(\\cdot \\mid r)$ so as to minimize eq:upper and maximize eq:lower, respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in sec:experiments."
],
[
"In sec:control-functions we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in sec:context that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in sec:control-functions-meaning, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech."
],
[
"To start, we note the following corollary",
"Corollary 4 It directly follows from ass:one that $\\textsc {bert} $ is a bijection between sentences $\\mathbf {s}$ and sequences of embeddings $\\langle \\mathbf {r}_1, \\ldots , \\mathbf {r}_{|\\mathbf {s}|} \\rangle $. As $\\textsc {bert} $ is a bijection, it has an inverse, which we will denote as $\\textsc {bert}^{-1} $.",
"Theorem 1 The function $\\textsc {bert} (S)$ cannot provide more information about $T$ than the sentence $S$ itself.",
"This implies $\\mathrm {I}(T ; S) = \\mathrm {I}(T; \\textsc {bert} (S))$. We remark this is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While thm:bert is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, thm:bert is a cynical statement: the endeavour of finding syntax in contextualized embeddings sentences is nonsensical. This is because, under ass:one, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself."
],
[
"Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Actually, the same reasoning used in cor:one could be used to devise a function $\\texttt {id} _s(\\mathbf {r})$ which led from a single representation back to the whole sentence. For a type-level control function $\\mathbf {c}$, by the data-processing inequality, we have that $\\mathrm {I}(T; W) \\ge \\mathrm {I}(T; \\mathbf {c}(R))$. Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is $\\mathrm {I}(T; S) - \\mathrm {I}(T; W) = \\mathrm {I}(T; S \\mid W)$. This quantity is interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word). Therefore, a perfect probe would provide insights about language and not about the actual representations, which are no more than a means to an end."
],
[
"We do acknowledge another interpretation of the work of hewitt-liang-2019-designing inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to formalize, and indeed, we know of no attempt to do so; it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). Indeed, a MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units BIBREF16.",
"Although for perfect probes the above results should hold, in practice $\\texttt {id} (\\cdot )$ and $\\mathbf {c}(\\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible.",
"We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind."
],
[
"While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here."
],
[
"hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language.",
"What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words."
],
[
"hewitt-liang-2019-designing propose that probes should be optimised to maximise accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximising selectivity leads to the selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization; however, memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property."
],
[
"hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, a MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, [§3.6]hewitt-liang-2019-designing point out that, in their experiments, the MLP-1 model frequently mislabels the word with suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information has the model learned. Our analysis leads us to believe that, on contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidence by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies for the second example discussed in section [§4.2]hewitt-liang-2019-designing where a less selective probe appears to be less faithful. The authors show that the representations on ELMo's second layer fail to outperform its word type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract."
],
[
"We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank."
],
[
"We will consider three different control functions. Each is defined as the composition $\\mathbf {c}= \\mathbf {e}\\circ \\texttt {id} $ with a different look-up function. These look-up functions are",
"$\\mathbf {e}_\\textit {fastText}$ returns a language specific fastText embedding BIBREF15;",
"$\\mathbf {e}_\\textit {onehot}$ returns a one-hot embedding;",
"$\\mathbf {e}_\\textit {random}$ returns a fixed random embedding.",
"All of these functions are type level in that they remove the influence of the context on the word."
],
[
"As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as",
"an $m$-layer neural network with the non-linearity $\\sigma (\\cdot ) = \\mathrm {ReLU}(\\cdot )$. The initial projection matrix is $W^{(1)} \\in \\mathbb {R}^{r_1 \\times d}$ and the final projection matrix is $W^{(m)} \\in \\mathbb {R}^{|\\mathcal {T}| \\times r_{m-1}}$, where $r_i=\\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \\in \\mathbb {R}^{r_i \\times r_{i-1}}$, so we half the number of hidden states in each layer. We optimize over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy in the test set is then used as our entropy estimate."
],
[
"We know $\\textsc {bert} $ can generate text in many languages, here we assess how much does it actually know about syntax in those languages. And how much more does it know than simple type-level baselines. tab:results-full presents this results, showing how much information $\\textsc {bert} $, fastText and onehot embeddings encode about POS tagging. We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.",
"$\\textsc {bert} $ presents negative gains in some of the analysed languages. Although this may seem to contradict the information processing inequality, it is actually caused by the difficulty of approximating $\\texttt {id} $ and $\\mathbf {c}(\\cdot )$ with a finite training set—causing $\\mathrm {KL}_{q_{{\\theta }1}}(T \\mid R)$ to be larger than $\\mathrm {KL}_{q_{{\\theta }2}}(T \\mid \\mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract.",
"Finally, when put into perspective, multilingual $\\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\\%$ additional information."
],
[
"We proposed an information-theoretic formulation of probing: we define probing as the task of estimating conditional mutual information. We introduce control functions, which allows us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus—instead of focusing on probe size or information, we should look at ease of extraction going forward.",
"On another note, we apply our formalization to evaluate multilingual $\\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\\%$ in all languages), it only encodes at most $5\\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings."
],
[
"Theorem 2 The estimation error between $\\mathcal {G}_{q_{{\\theta }}}(T, R, \\mathbf {e})$ and the true gain can be upper- and lower-bounded by two distinct Kullback–Leibler divergences.",
"We first find the error given by our estimate",
"Making use of this error, we trivially find an upper-bound on the estimation error as",
"which follows since KL divergences are never negative. Analogously, we find a lower-bound as"
],
[
"In this section, we present accuracies for the models trained using $\\textsc {bert} $, fastText and onehot embeddings, and the full results on random embeddings. tab:results-extra shows that both BERT and fastText present high accuracies in all languages, except Tamil. Onehot and random results are considerably worse, as expected, since they could not do more than take random guesses (e.g. guessing the most frequent label in the training test) in any word which was not seen during training."
]
],
"section_name": [
"Introduction",
"Word-Level Syntactic Probes for Contextual Embeddings",
"Word-Level Syntactic Probes for Contextual Embeddings ::: Notation",
"Word-Level Syntactic Probes for Contextual Embeddings ::: Probing as Mutual Information",
"Word-Level Syntactic Probes for Contextual Embeddings ::: Bounding Mutual Information",
"Word-Level Syntactic Probes for Contextual Embeddings ::: Bounding Mutual Information ::: Bigger Probes are Better.",
"Control Functions",
"Control Functions ::: Type-Level Control Functions",
"Control Functions ::: How Much Information Did We Gain?",
"Control Functions ::: Approximating the Gain",
"Understanding Probing Information-Theoretically",
"Understanding Probing Information-Theoretically ::: You Know Nothing, BERT",
"Understanding Probing Information-Theoretically ::: What Do Control Functions Mean?",
"Understanding Probing Information-Theoretically ::: Discussion: Ease of Extraction",
"A Critique of Control Tasks",
"A Critique of Control Tasks ::: Structure and Randomness",
"A Critique of Control Tasks ::: What's Wrong with Memorization?",
"A Critique of Control Tasks ::: What Low-Selectivity Means",
"Experiments",
"Experiments ::: Control Functions",
"Experiments ::: Probe Architecture",
"Experiments ::: Results",
"Conclusion",
"Variational Bounds",
"Further Results"
]
} | {
"answers": [
{
"annotation_id": [
"d71c4a5588eca8bc2cd48de9e8207f61cb95bd73"
],
"answer": [
{
"evidence": [
"On another note, we apply our formalization to evaluate multilingual $\\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\\%$ in all languages), it only encodes at most $5\\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.",
"We know $\\textsc {bert} $ can generate text in many languages, here we assess how much does it actually know about syntax in those languages. And how much more does it know than simple type-level baselines. tab:results-full presents this results, showing how much information $\\textsc {bert} $, fastText and onehot embeddings encode about POS tagging. We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.",
"Finally, when put into perspective, multilingual $\\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\\%$ additional information."
],
"extractive_spans": [],
"free_form_answer": "It is observed some variability - but not significant. Bert does not seem to gain much more syntax information than with type level information.",
"highlighted_evidence": [
"On another note, we apply our formalization to evaluate multilingual $\\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\\%$ in all languages), it only encodes at most $5\\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.",
"We see that—in all analysed languages—type level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.",
"Finally, when put into perspective, multilingual $\\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\\%$ additional information."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dabf2a5ff959035448153abb722a1adcb6dcdba3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Was any variation in results observed based on language typology?",
"Does the work explicitly study the relationship between model complexity and linguistic structure encoding?"
],
"question_id": [
"cbbcafffda7107358fa5bf02409a01e17ee56bfd",
"1e59263f7aa7dd5acb53c8749f627cf68683adee"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Amount of information BERT, fastText or one-hot embeddings share with a POS probing task. H(T ) is estimated with a plug-in estimator from same treebanks we use to train the POS labelers.",
"Table 2: Amount of information BERT, fastText or one-hot embeddings share with a dependency arc labeling task. H(T ) is again estimated with a plug-in estimator from same treebanks we use to train our models.",
"Table 3: Accuracies of the models trained on BERT, fastText, one-hot and random embeddings for the POS tagging task.",
"Table 4: Accuracies of the models trained on BERT, fastText, one-hot and random embeddings for the dependency labeling task."
],
"file": [
"8-Table1-1.png",
"9-Table2-1.png",
"14-Table3-1.png",
"14-Table4-1.png"
]
} | [
"Was any variation in results observed based on language typology?"
] | [
[
"2004.03061-Conclusion-1",
"2004.03061-Experiments ::: Results-2",
"2004.03061-Experiments ::: Results-0"
]
] | [
"It is observed some variability - but not significant. Bert does not seem to gain much more syntax information than with type level information."
] | 838 |
1704.04521 | Translation of Patent Sentences with a Large Vocabulary of Technical Terms Using Neural Machine Translation | Neural machine translation (NMT), a new approach to machine translation, has achieved promising results comparable to those of traditional approaches such as statistical machine translation (SMT). Despite its recent success, NMT cannot handle a larger vocabulary because training complexity and decoding complexity proportionally increase with the number of target words. This problem becomes even more serious when translating patent documents, which contain many technical terms that are observed infrequently. In NMTs, words that are out of vocabulary are represented by a single unknown token. In this paper, we propose a method that enables NMT to translate patent sentences comprising a large vocabulary of technical terms. We train an NMT system on bilingual data wherein technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Further, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using SMT. We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT score and that of the NMT rescoring of the translated sentences with technical term tokens. Our experiments on Japanese-Chinese patent sentences show that the proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over traditional SMT systems and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique. | {
"paragraphs": [
[
"Neural machine translation (NMT), a new approach to solving machine translation, has achieved promising results BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . An NMT system builds a simple large neural network that reads the entire input source sentence and generates an output translation. The entire neural network is jointly trained to maximize the conditional probability of a correct translation of a source sentence with a bilingual corpus. Although NMT offers many advantages over traditional phrase-based approaches, such as a small memory footprint and simple decoder implementation, conventional NMT is limited when it comes to larger vocabularies. This is because the training complexity and decoding complexity proportionally increase with the number of target words. Words that are out of vocabulary are represented by a single unknown token in translations, as illustrated in Figure 1 . The problem becomes more serious when translating patent documents, which contain several newly introduced technical terms.",
"There have been a number of related studies that address the vocabulary limitation of NMT systems. Jean el al. Jean15 provided an efficient approximation to the softmax to accommodate a very large vocabulary in an NMT system. Luong et al. Luong15 proposed annotating the occurrences of a target unknown word token with positional information to track its alignments, after which they replace the tokens with their translations using simple word dictionary lookup or identity copy. Li et al. Li16 proposed to replace out-of-vocabulary words with similar in-vocabulary words based on a similarity model learnt from monolingual data. Sennrich et al. Sennrich16 introduced an effective approach based on encoding rare and unknown words as sequences of subword units. Luong and Manning Luong16 provided a character-level and word-level hybrid NMT model to achieve an open vocabulary, and Costa-jussà and Fonollosa Jussa16 proposed a NMT system based on character-based embeddings.",
"However, these previous approaches have limitations when translating patent sentences. This is because their methods only focus on addressing the problem of unknown words even though the words are parts of technical terms. It is obvious that a technical term should be considered as one word that comprises components that always have different meanings and translations when they are used alone. An example is shown in Figure 1 , wherein Japanese word “”(bridge) should be translated to Chinese word “” when included in technical term “bridge interface”; however, it is always translated as “”.",
"In this paper, we propose a method that enables NMT to translate patent sentences with a large vocabulary of technical terms. We use an NMT model similar to that used by Sutskever et al. Sutskever14, which uses a deep long short-term memories (LSTM) BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation. We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Similar to Sutskever et al. Sutskever14, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation (SMT). We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences that have been rescored with the technical term tokens. Our experiments on Japanese-Chinese patent sentences show that our proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over a traditional SMT system and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique."
],
[
"Japanese-Chinese parallel patent documents were collected from the Japanese patent documents published by the Japanese Patent Office (JPO) during 2004-2012 and the Chinese patent documents published by the State Intellectual Property Office of the People's Republic of China (SIPO) during 2005-2010. From the collected documents, we extracted 312,492 patent families, and the method of Utiyama and Isahara Uchiyama07bs was applied to the text of the extracted patent families to align the Japanese and Chinese sentences. The Japanese sentences were segmented into a sequence of morphemes using the Japanese morphological analyzer MeCab with the morpheme lexicon IPAdic, and the Chinese sentences were segmented into a sequence of words using the Chinese morphological analyzer Stanford Word Segment BIBREF8 trained using the Chinese Penn Treebank. In this study, Japanese-Chinese parallel patent sentence pairs were ordered in descending order of sentence-alignment score and we used the topmost 2.8M pairs, whose Japanese sentences contain fewer than 40 morphemes and Chinese sentences contain fewer than 40 words."
],
[
"NMT uses a single neural network trained jointly to maximize the translation performance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF5 . Given a source sentence $x$ $=(x_1,\\ldots ,x_N)$ and target sentence $y$ $=(y_1,\\ldots ,y_M)$ , an NMT system uses a neural network to parameterize the conditional distributions ",
"$$p(y_l \\mid y_{< l},\\mbox{$x$}) \\nonumber $$ (Eq. 6) ",
"for $1 \\le l \\le M$ . Consequently, it becomes possible to compute and maximize the log probability of the target sentence given the source sentence ",
"$$\\log p(\\mbox{$y$} \\mid \\mbox{$x$}) = \\sum _{l=1}^{M} \\log p(y_l|y_{< l},\\mbox{$x$})$$ (Eq. 7) ",
"In this paper, we use an NMT model similar to that used by Sutskever et al. Sutskever14. It uses two separate deep LSTMs to encode the input sequence and output the translation. The encoder, which is implemented as a recurrent neural network, reads the source sentence one word at a time and then encodes it into a large vector that represents the entire source sentence. The decoder, another recurrent neural network, generates a translation on the basis of the encoded vector one word at a time.",
"One important difference between our NMT model and the one used by Sutskever et al. Sutskever14 is that we added an attention mechanism. Recently, Bahdanau et al. Bahdanau15 proposed an attention mechanism, a form of random access memory, to help NMT cope with long input sequences. Luong et al. Luong15b proposed an attention mechanism for different scoring functions in order to compare the source and target hidden states as well as different strategies for placing the attention. In this paper, we utilize the attention mechanism proposed by Bahdanau et al. Bahdanau15, wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states."
],
[
"Figure 2 illustrates the procedure of the training model with parallel patent sentence pairs, wherein technical terms are replaced with technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\\ldots $ .",
"In the step 1 of Figure 2 , we align the Japanese technical terms, which are automatically extracted from the Japanese sentences, with their Chinese translations in the Chinese sentences. Here, we introduce the following two steps to identify technical term pairs in the bilingual Japanese-Chinese corpus:",
" According to the approach proposed by Dong et al. Dong15b, we identify Japanese-Chinese technical term pairs using an SMT phrase translation table. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , the Chinese translation candidates collected from the phrase translation table are matched against the Chinese sentence $S_C$ of the parallel sentence pair. Of those found in $S_C$ , $t_C$ with the largest translation probability $P(t_C\\mid t_J)$ is selected, and the bilingual technical term pair $\\langle t_J,t_C\\rangle $ is identified.",
"For the Japanese technical terms whose Chinese translations are not included in the results of Step UID11 , we then use an approach based on SMT word alignment. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , a sequence of Chinese words is selected using SMT word alignment, and we use the Chinese translation $t_C$ for the Japanese technical term $t_J$ .",
"As shown in the step 2 of Figure 2 , in each of the Japanese-Chinese parallel patent sentence pairs, occurrences of technical term pairs $\\langle t_J^{\\ 1},t_C^1 \\rangle $ , $\\langle t_J^2,t_C^2\\rangle $ , $\\ldots $ , $\\langle t_J^k,t_C^k\\rangle $ are then replaced with technical term tokens $\\langle TT_{1},TT_{1} \\rangle $ , $\\langle TT_{2},TT_{2} \\rangle $ , $\\ldots $ , $\\langle TT_{k},TT_{k} \\rangle $ . Technical term pairs $\\langle t_J^{1},t_C^1 \\rangle $ , $\\langle t_J^2,t_C^2\\rangle $ , $\\ldots $ , $\\langle t_J^k,t_C^k\\rangle $ are numbered in the order of occurrence of the Japanese technical terms $t_J^i$ ( $i=1,2,\\ldots ,k$ ) in each Japanese sentence $S_J$ . Here, note that in all the parallel sentence pairs $\\langle S_J, S_C\\rangle $ , technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\\ldots $ that are identical throughout all the parallel sentence pairs are used in this procedure. Therefore, for example, in all the Japanese patent sentences $S_J$ , the Japanese technical term $t_J^1$ that appears earlier than the other Japanese technical terms in $S_J$ is replaced with $TT_{1}$ . We then train the NMT system on a bilingual corpus in which the technical term pairs are replaced by “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) tokens, and obtain an NMT model in which the technical terms are represented as technical term tokens."
],
[
"Figure 3 illustrates the procedure for producing Chinese translations via decoding the Japanese sentence using the method proposed in this paper. In the step 1 of Figure 3 , when given an input Japanese sentence, we first automatically extract the technical terms and replace them with the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ). Consequently, we have an input sentence in which the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) represent the positions of the technical terms and a list of extracted Japanese technical terms. Next, as shown in the step 2-N of Figure 3 , the source Japanese sentence with technical term tokens is translated using the NMT model trained according to the procedure described in Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" , whereas the extracted Japanese technical terms are translated using an SMT phrase translation table in the step 2-S of Figure 3 . Finally, in the step 3, we replace the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) of the sentence translation with SMT the technical term translations."
],
[
"As shown in the step 1 of Figure 4 , similar to the approach of NMT rescoring provided in Sutskever et al.Sutskever14, we first obtain 1,000-best translation list of the given Japanese sentence using the SMT system. Next, in the step 2, we then replace the technical terms in the translation sentences with technical term tokens “ $TT_{i}$ ” ( $i = 1,2,3,\\ldots $ ), which must be the same with the tokens of their source Japanese technical terms in the input Japanese sentence. The technique used for aligning Japanese technical terms with their Chinese translations is the same as that described in Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" . In the step 3 of Figure 4 , the 1,000-best translations, in which technical terms are represented as tokens, are rescored using the NMT model trained according to the procedure described in Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" . Given a Japanese sentence $S_J$ and its 1,000-best Chinese translations $S_C^{\\ n}$ ( $n=1,2,\\ldots ,\\ 1,000$ ) translated by the SMT system, NMT score of each translation sentence pair $\\langle S_J, S_C^n \\rangle $ is computed as the log probability $\\log p(S_C^n \\mid S_J)$ of Equation ( 7 ). Finally, we rerank the 1,000-best translation list on the basis of the average SMT and NMT scores and output the translation with the highest final score."
],
[
"We evaluated the effectiveness of the proposed NMT system in translating the Japanese-Chinese parallel patent sentences described in Section \"Japanese-Chinese Patent Documents\" . Among the 2.8M parallel sentence pairs, we randomly extracted 1,000 sentence pairs for the test set and 1,000 sentence pairs for the development set; the remaining sentence pairs were used for the training set.",
"According to the procedure of Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" , from the Japanese-Chinese sentence pairs of the training set, we collected 6.5M occurrences of technical term pairs, which are 1.3M types of technical term pairs with 800K unique types of Japanese technical terms and 1.0M unique types of Chinese technical terms. Out of the total 6.5M occurrences of technical term pairs, 6.2M were replaced with technical term tokens using the phrase translation table, while the remaining 300K were replaced with technical term tokens using the word alignment. We limited both the Japanese vocabulary (the source language) and the Chinese vocabulary (the target language) to 40K most frequently used words.",
"Within the total 1,000 Japanese patent sentences in the test set, 2,244 occurrences of Japanese technical terms were identified, which correspond to 1,857 types."
],
[
"For the training of the SMT model, including the word alignment and the phrase translation table, we used Moses BIBREF9 , a toolkit for a phrase-based SMT models.",
"For the training of the NMT model, our training procedure and hyperparameter choices were similar to those of Sutskever et al. Sutskever14. We used a deep LSTM neural network comprising three layers, with 512 cells in each layer, and a 512-dimensional word embedding. Similar to Sutskever et al. (2014), we reversed the words in the source sentences and ensure that all sentences in a minibatch are roughly the same length. Further training details are given below:",
"All of the LSTM's parameter were initialized with a uniform distribution ranging between -0.06 and 0.06.",
"We set the size of a minibatch to 128.",
"We used the stochastic gradient descent, beginning at a learning rate of 0.5. We computed the perplexity of the development set using the currently produced NMT model after every 1,500 minibatches were trained and multiplied the learning rate by 0.99 when the perplexity did not decrease with respect to the last three perplexities. We trained our model for a total of 10 epoches.",
"Similar to Sutskever et al. Sutskever14, we rescaled the normalized gradient to ensure that its norm does not exceed 5.",
"We implement the NMT system using TensorFlow, an open source library for numerical computation. The training time was around two days when using the described parameters on an 1-GPU machine."
],
[
"We calculated automatic evaluation scores for the translation results using two popular metrics: BLEU BIBREF10 and RIBES BIBREF11 . As shown in Table 1 , we report the evaluation scores, on the basis of the translations by Moses BIBREF9 , as the baseline SMT and the scores based on translations produced by the equivalent NMT system without our proposed approach as the baseline NMT. As shown in Table 1 , the two versions of the proposed NMT systems clearly improve the translation quality when compared with the baselines. When compared with the baseline SMT, the performance gain of the proposed system is approximately 3.1 BLEU points if translations are produced by the proposed NMT system of Section \"NMT Rescoring of 1,000-best SMT Translations\" or 2.3 RIBES points if translations are produced by the proposed NMT system of Section \"NMT Decoding and SMT Technical Term Translation\" . When compared with the result of decoding with the baseline NMT, the proposed NMT system of Section \"NMT Decoding and SMT Technical Term Translation\" achieved performance gains of 0.8 RIBES points. When compared with the result of reranking with the baseline NMT, the proposed NMT system of Section \"NMT Rescoring of 1,000-best SMT Translations\" can still achieve performance gains of 0.6 BLEU points. Moreover, when the output translations produced by NMT decoding and SMT technical term translation described in Section \"NMT Decoding and SMT Technical Term Translation\" with the output translations produced by decoding with the baseline NMT, the number of unknown tokens included in output translations reduced from 191 to 92. About 90% of remaining unknown tokens correspond to numbers, English words, abbreviations, and symbols.",
"In this study, we also conducted two types of human evaluation according to the work of Nakazawa et al. Nakazawa15: pairwise evaluation and JPO adequacy evaluation. During the procedure of pairwise evaluation, we compare each of translations produced by the baseline SMT with that produced by the two versions of the proposed NMT systems, and judge which translation is better, or whether they are with comparable quality. The score of pairwise evaluation is defined by the following formula, where $W$ is the number of better translations compared to the baseline SMT, $L$ the number of worse translations compared to the baseline SMT, and $T$ the number of translations having their quality comparable to those produced by the baseline SMT: ",
"$$score=100 \\times \\frac{W-L}{W+L+T} \\nonumber $$ (Eq. 34) ",
"The score of pairwise evaluation ranges from $-$ 100 to 100. In the JPO adequacy evaluation, Chinese translations are evaluated according to the quality evaluation criterion for translated patent documents proposed by the Japanese Patent Office (JPO). The JPO adequacy criterion judges whether or not the technical factors and their relationships included in Japanese patent sentences are correctly translated into Chinese, and score Chinese translations on the basis of the percentage of correctly translated information, where the score of 5 means all of those information are translated correctly, while that of 1 means most of those information are not translated correctly. The score of the JPO adequacy evaluation is defined as the average over the whole test sentences. Unlike the study conducted Nakazawa et al. BIBREF12 , we randomly selected 200 sentence pairs from the test set for human evaluation, and both human evaluations were conducted using only one judgement. Table 2 shows the results of the human evaluation for the baseline SMT, the baseline NMT, and the proposed NMT system. We observed that the proposed system achieved the best performance for both pairwise evaluation and JPO adequacy evaluation when we replaced technical term tokens with SMT technical term translations after decoding the source sentence with technical term tokens.",
"Throughout Figure 5 $\\sim $ Figure 7 , we show an identical source Japanese sentence and each of its translations produced by the two versions of the proposed NMT systems, compared with translations produced by the three baselines, respectively. Figure 5 shows an example of correct translation produced by the proposed system in comparison to that produced by the baseline SMT. In this example, our model correctly translates the Japanese sentence into Chinese, whereas the translation by the baseline SMT is a translation error with several erroneous syntactic structures. As shown in Figure 6 , the second example highlights that the proposed NMT system of Section \"NMT Decoding and SMT Technical Term Translation\" can correctly translate the Japanese technical term “”(laminated wafer) to the Chinese technical term “”. The translation by the baseline NMT is a translation error because of not only the erroneously translated unknown token but also the Chinese word “”, which is not appropriate as a component of a Chinese technical term. Another example is shown in Figure 7 , where we compare the translation of a reranking SMT 1,000-best translation produced by the proposed NMT system with that produced by reranking with the baseline NMT. It is interesting to observe that compared with the baseline NMT, we obtain a better translation when we rerank the 1,000-best SMT translations using the proposed NMT system, in which technical term tokens represent technical terms. It is mainly because the correct Chinese translation “”(wafter) of Japanese word “” is out of the 40K NMT vocabulary (Chinese), causing reranking with the baseline NMT to produce the translation with an erroneous construction of “noun phrase of noun phrase of noun phrase”. As shown in Figure 7 , the proposed NMT system of Section \"NMT Rescoring of 1,000-best SMT Translations\" produced the translation with a correct construction, mainly because Chinese word “”(wafter) is a part of Chinese technical term “”(laminated wafter) and is replaced with a technical term token and then rescored by the NMT model (with technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\\ldots $ )."
],
[
"In this paper, we proposed an NMT method capable of translating patent sentences with a large vocabulary of technical terms. We trained an NMT system on a bilingual corpus, wherein technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except the technical terms. Similar to Sutskever et al. Sutskever14, we used it as a decoder to translate the source sentences with technical term tokens and replace the tokens with technical terms translated using SMT. We also used it to rerank the 1,000-best SMT translations on the basis of the average of the SMT score and that of NMT rescoring of translated sentences with technical term tokens. For the translation of Japanese patent sentences, we observed that our proposed NMT system performs better than the phrase-based SMT system as well as the equivalent NMT system without our proposed approach.",
"One of our important future works is to evaluate our proposed method in the NMT system proposed by Bahdanau et al. Bahdanau15, which introduced a bidirectional recurrent neural network as encoder and is the state-of-the-art of pure NMT system recently. However, the NMT system proposed by Bahdanau et al. Bahdanau15 also has a limitation in addressing out-of-vocabulary words. Our proposed NMT system is expected to improve the translation performance of patent sentences by applying approach of Bahdanau et al. Bahdanau15. Another important future work is to quantitatively compare our study with the work of Luong et al. Luong15. In the work of Luong et al. Luong15, they replace low frequency single words and translate them in a post-processing Step using a dictionary, while we propose to replace the whole technical terms and post-translate them with phrase translation table of SMT system. Therefore, our proposed NMT system is expected to be appropriate to translate patent documents which contain many technical terms comprised of multiple words and should be translated together. We will also evaluate the present study by reranking the n-best translations produced by the proposed NMT system on the basis of their SMT rescoring. Next, we will rerank translations from both the n-best SMT translations and n-best NMT translations. As shown in Section \"Evaluation Results\" , the decoding approach of our proposed NMT system achieved the best RIBES performance and human evaluation scores in our experiments, whereas the reranking approach achieved the best performance with respect to BLEU. A translation with the highest average SMT and NMT scores of the n-best translations produced by NMT and SMT, respectively, is expected to be an effective translation."
]
],
"section_name": [
"Introduction",
"Japanese-Chinese Patent Documents",
"Neural Machine Translation (NMT)",
"NMT Training after Replacing Technical Term Pairs with Tokens",
"NMT Decoding and SMT Technical Term Translation",
"NMT Rescoring of 1,000-best SMT Translations",
"Training and Test Sets",
"Training Details",
"Evaluation Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"dcbed381621d4f1287b1d5f9e12c956901ddf868"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: NMT training after replacing technical term pairs with technical term tokens “TTi” (i = 1, 2, . . .)",
"FLOAT SELECTED: Figure 3: NMT decoding with technical term tokens “TTi” (i = 1, 2, . . .) and SMT technical term translation",
"FLOAT SELECTED: Figure 4: NMT rescoring of 1,000-best SMT translations with technical term tokens “TTi” (i = 1, 2, . . .)",
"In this paper, we propose a method that enables NMT to translate patent sentences with a large vocabulary of technical terms. We use an NMT model similar to that used by Sutskever et al. Sutskever14, which uses a deep long short-term memories (LSTM) BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation. We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Similar to Sutskever et al. Sutskever14, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation (SMT). We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences that have been rescored with the technical term tokens. Our experiments on Japanese-Chinese patent sentences show that our proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over a traditional SMT system and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique.",
"One important difference between our NMT model and the one used by Sutskever et al. Sutskever14 is that we added an attention mechanism. Recently, Bahdanau et al. Bahdanau15 proposed an attention mechanism, a form of random access memory, to help NMT cope with long input sequences. Luong et al. Luong15b proposed an attention mechanism for different scoring functions in order to compare the source and target hidden states as well as different strategies for placing the attention. In this paper, we utilize the attention mechanism proposed by Bahdanau et al. Bahdanau15, wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states.",
"According to the approach proposed by Dong et al. Dong15b, we identify Japanese-Chinese technical term pairs using an SMT phrase translation table. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , the Chinese translation candidates collected from the phrase translation table are matched against the Chinese sentence $S_C$ of the parallel sentence pair. Of those found in $S_C$ , $t_C$ with the largest translation probability $P(t_C\\mid t_J)$ is selected, and the bilingual technical term pair $\\langle t_J,t_C\\rangle $ is identified.",
"For the Japanese technical terms whose Chinese translations are not included in the results of Step UID11 , we then use an approach based on SMT word alignment. Given a parallel sentence pair $\\langle S_J, S_C\\rangle $ containing a Japanese technical term $t_J$ , a sequence of Chinese words is selected using SMT word alignment, and we use the Chinese translation $t_C$ for the Japanese technical term $t_J$ .",
"Figure 3 illustrates the procedure for producing Chinese translations via decoding the Japanese sentence using the method proposed in this paper. In the step 1 of Figure 3 , when given an input Japanese sentence, we first automatically extract the technical terms and replace them with the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ). Consequently, we have an input sentence in which the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) represent the positions of the technical terms and a list of extracted Japanese technical terms. Next, as shown in the step 2-N of Figure 3 , the source Japanese sentence with technical term tokens is translated using the NMT model trained according to the procedure described in Section \"NMT Training after Replacing Technical Term Pairs with Tokens\" , whereas the extracted Japanese technical terms are translated using an SMT phrase translation table in the step 2-S of Figure 3 . Finally, in the step 3, we replace the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) of the sentence translation with SMT the technical term translations."
],
"extractive_spans": [],
"free_form_answer": "There is no reason to think that this approach wouldn't also be successful for other technical domains. Technical terms are replaced with tokens, therefore so as long as there is a corresponding process for identifying and replacing technical terms in the new domain this approach could be viable.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: NMT training after replacing technical term pairs with technical term tokens “TTi” (i = 1, 2, . . .)",
"FLOAT SELECTED: Figure 3: NMT decoding with technical term tokens “TTi” (i = 1, 2, . . .) and SMT technical term translation",
"FLOAT SELECTED: Figure 4: NMT rescoring of 1,000-best SMT translations with technical term tokens “TTi” (i = 1, 2, . . .)",
"We use an NMT model similar to that used by Sutskever et al. Sutskever14, which uses a deep long short-term memories (LSTM) BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation. We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Similar to Sutskever et al. Sutskever14, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation (SMT). We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences that have been rescored with the technical term tokens.",
"In this paper, we utilize the attention mechanism proposed by Bahdanau et al. Bahdanau15, wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states.",
"According to the approach proposed by Dong et al. Dong15b, we identify Japanese-Chinese technical term pairs using an SMT phrase translation table.",
"For the Japanese technical terms whose Chinese translations are not included in the results of Step UID11 , we then use an approach based on SMT word alignment",
"Finally, in the step 3, we replace the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\\ldots $ ) of the sentence translation with SMT the technical term translations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Can the approach be generalized to other technical domains as well? "
],
"question_id": [
"07580f78b04554eea9bb6d3a1fc7ca0d37d5c612"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Example of translation errors when translating patent sentences with technical terms using NMT",
"Figure 2: NMT training after replacing technical term pairs with technical term tokens “TTi” (i = 1, 2, . . .)",
"Figure 3: NMT decoding with technical term tokens “TTi” (i = 1, 2, . . .) and SMT technical term translation",
"Figure 4: NMT rescoring of 1,000-best SMT translations with technical term tokens “TTi” (i = 1, 2, . . .)",
"Table 1: Automatic evaluation results",
"Table 2: Human evaluation results (the score of pairwise evaluation ranges from −100 to 100 and the score of JPO adequacy evaluation ranges from 1 to 5)",
"Figure 5: Example of correct translations produced by the proposed NMT system with SMT technical term translation (compared with baseline SMT)",
"Figure 6: Example of correct translations produced by the proposed NMT system with SMT technical term translation (compared to decoding with the baseline NMT)",
"Figure 7: Example of correct translations produced by reranking the 1,000-best SMT translations with the proposed NMT system (compared to reranking with the baseline NMT)"
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Figure5-1.png",
"9-Figure6-1.png",
"9-Figure7-1.png"
]
} | [
"Can the approach be generalized to other technical domains as well? "
] | [
[
"1704.04521-Introduction-3",
"1704.04521-4-Figure2-1.png",
"1704.04521-Neural Machine Translation (NMT)-5",
"1704.04521-6-Figure4-1.png",
"1704.04521-5-Figure3-1.png",
"1704.04521-NMT Decoding and SMT Technical Term Translation-0",
"1704.04521-NMT Training after Replacing Technical Term Pairs with Tokens-3"
]
] | [
"There is no reason to think that this approach wouldn't also be successful for other technical domains. Technical terms are replaced with tokens, therefore so as long as there is a corresponding process for identifying and replacing technical terms in the new domain this approach could be viable."
] | 844 |
1908.08566 | Unsupervised Text Summarization via Mixed Model Back-Translation | Back-translation based approaches have recently lead to significant progress in unsupervised sequence-to-sequence tasks such as machine translation or style transfer. In this work, we extend the paradigm to the problem of learning a sentence summarization system from unaligned data. We present several initial models which rely on the asymmetrical nature of the task to perform the first back-translation step, and demonstrate the value of combining the data created by these diverse initialization methods. Our system outperforms the current state-of-the-art for unsupervised sentence summarization from fully unaligned data by over 2 ROUGE, and matches the performance of recent semi-supervised approaches. | {
"paragraphs": [
[
"Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible among other things by the popularization of the neural sequence-to-sequence (seq2seq) paradigm BIBREF0, BIBREF1, BIBREF2, the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3, BIBREF4, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus which comprise of over 3.8M shorter and 300K longer articles and aligned summaries respectively. Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms.",
"In this work, we investigate the application of back-translation to training a summarization system in an unsupervised fashion from unaligned full text and summaries corpora. Back-translation has been successfully applied to unsupervised training for other sequence to sequence tasks such as machine translation BIBREF5 or style transfer BIBREF6. We outline the main differences between these settings and text summarization, devise initialization strategies which take advantage of the asymmetrical nature of the task, and demonstrate the advantage of combining varied initializers. Our approach outperforms the previous state-of-the-art on unsupervised text summarization while using less training data, and even matches the rouge scores of recent semi-supervised methods."
],
[
"BIBREF7's work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture. These have included changing the base encoder structure BIBREF8, adding a pointer mechanism to directly re-use input words in the summary BIBREF9, BIBREF3, or explicitly pre-selecting parts of the full text to focus on BIBREF4. While there have been comparatively few attempts to train these models with less supervision, auto-encoding based approaches have met some success BIBREF10, BIBREF11.",
"BIBREF10's work endeavors to use summaries as a discrete latent variable for a text auto-encoder. They train a system on a combination of the classical log-likelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary. While their method is able to take advantage of unlabelled data, it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs. BIBREF11 expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language, allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences. Finally, BIBREF12 uses a general purpose pre-trained text encoder to learn a summarization system from fewer examples. Their proposed MASS scheme is shown to be more efficient than BERT BIBREF13 or Denoising Auto-Encoders (DAE) BIBREF14, BIBREF15.",
"This work proposes a different approach to unsupervised training based on back-translation. The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semi-supervised BIBREF16 and unsupervised machine translation BIBREF5 as well as style transfer BIBREF6. We investigate how the same general paradigm may be applied to the task of summarizing text."
],
[
"Let us consider the task of transforming a sequence in domain $A$ into a corresponding sequence in domain $B$ (e.g. sentences in two languages for machine translation). Let $\\mathcal {D}_A$ and $\\mathcal {D}_B$ be corpora of sequences in $A$ and $B$, without any mapping between their respective elements. The back-translation approach starts with initial seq2seq models $f^0_{A \\rightarrow B}$ and $f^0_{B \\rightarrow A}$, which can be hand-crafted or learned without aligned pairs, and uses them to create artificial aligned training data:",
"Let $\\mathcal {S}$ denote a supervised learning algorithm, which takes a set of aligned sequence pairs and returns a mapping function. This artificial data can then be used to train the next iteration of seq2seq models, which in turn are used to create new artificial training sets ($A$ and $B$ can be switched here):",
"The model is trained at each iteration on artificial inputs and real outputs, then used to create new training inputs. Thus, if the initial system isn't too far off, we can hope that training pairs get closer to the true data distribution with each step, allowing in turn to train better models.",
"In the case of summarization, we consider the domains of full text sequences $\\mathcal {D}^F$ and of summaries $\\mathcal {D}^S$, and attempt to learn summarization ($f_{F\\rightarrow S}$) and expansion ($f_{S\\rightarrow F}$) functions. However, contrary to the translation case, $\\mathcal {D}^F$ and $\\mathcal {D}^S$ are not interchangeable. Considering that a summary typically has less information than the corresponding full text, we choose to only define initial ${F\\rightarrow S}$ models. We can still follow the proposed procedure by alternating directions at each step."
],
[
"To initiate their process for the case of machine translation, BIBREF5 use two different initialization models for their neural (NMT) and phrase-based (PBSMT) systems. The former relies on denoising auto-encoders in both languages with a shared latent space, while the latter uses the PBSMT system of BIBREF17 with a phrase table obtained through unsupervised vocabulary alignment as in BIBREF18.",
"While both of these methods work well for machine translation, they rely on the input and output having similar lengths and information content. In particular, the statistical machine translation algorithm tries to align most input tokens to an output word. In the case of text summarization, however, there is an inherent asymmetry between the full text and the summaries, since the latter express only a subset of the former. Next, we propose three initialization systems which implicitly model this information loss. Full implementation details are provided in the Appendix."
],
[
"The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment. Specifically, we train two skipgram word embedding models using fasttext BIBREF19 on $\\mathcal {D}^F$ and $\\mathcal {D}^S$, then align them in a common space using the Wasserstein Procrustes method of BIBREF18. Then, we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold, or skip it otherwise. We also limit the output length, keeping only the first $N$ tokens. We refer to this function as $f_{F\\rightarrow S}^{(\\text{Pr-Thr}), 0}$."
],
[
"Similarly to both BIBREF5 and BIBREF11, we also devise a starting model based on a DAE. One major difference is that we use a simple Bag-of-Words (BoW) encoder with fixed pre-trained word embeddings, and a 2-layer GRU decoder. Indeed, we find that a BoW auto-encoder trained on the summaries reaches a reconstruction rouge-l f-score of nearly 70% on the test set, indicating that word presence information is mostly sufficient to model the summaries. As for the noise model, for each token in the input, we remove it with probability $p/2$ and add a word drawn uniformly from the summary vocabulary with probability $p$.",
"The BoW encoder has two advantages. First, it lacks the other models' bias to keep the word order of the full text in the summary. Secondly, when using the DBAE to predict summaries from the full text, we can weight the input word embeddings by their corpus-level probability of appearing in a summary, forcing the model to pay less attention to words that only appear in $\\mathcal {D}^F$. The Denoising Bag-of-Words Auto-Encoder with input re-weighting is referred to as $f_{F\\rightarrow S}^{(\\text{DBAE}), 0}$."
],
[
"We also propose an extractive initialization model. Given the same BoW representation as for the DBAE, function $f_\\theta ^\\mu (s, v)$ predicts the probability that each word $v$ in a full text sequence $s$ is present in the summary. We learn the parameters of $f_\\theta ^\\mu $ by marginalizing the output probability of each word over all full text sequences, and matching these first-order moments to the marginal probability of each word's presence in a summary. That is, let $\\mathcal {V}^S$ denote the vocabulary of $\\mathcal {D}^S$, then $\\forall v \\in \\mathcal {V}^S$:",
"We minimize the binary cross-entropy (BCE) between the output and summary moments:",
"We then define an initial extractive summarization model by applying $f_{\\theta ^*}^\\mu (\\cdot , \\cdot )$ to all words of an input sentence, and keeping the ones whose output probability is greater than some threshold. We refer to this model as $f_{F\\rightarrow S}^{(\\mathbf {\\mu }:1), 0}$."
],
[
"We apply the back-translation procedure outlined above in parallel for all three initialization models. For example, $f_{F\\rightarrow S}^{(\\mathbf {\\mu }:1), 0}$ yields the following sequence of models and artificial aligned datasets:",
"Finally, in order to take advantage of the various strengths of each of the initialization models, we also concatenate the artificial training dataset at each odd iteration to train a summarizer, e.g.:"
],
[
"We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11."
],
[
"Table TABREF9 compares test ROUGE for different initialization models, as well as the trivial Lead-8 baseline which simply copies the first 8 words of the article. We find that simply thresholding on distance during the word alignment step of (Pr-Thr) does slightly better then the full PBSMT system used by BIBREF5. Our BoW denoising auto-encoder with word re-weighting also performs significantly better than the full seq2seq DAE initialization used by BIBREF11 (Pre-DAE). The moments-based initial model ($\\mathbf {\\mu }$:1) scores higher than either of these, with scores already close to the full unsupervised system of BIBREF11.",
"In order to investigate the effect of these three different strategies beyond their rouge statistics, we show generations of the three corresponding first iteration expanders for a given summary in Table TABREF1. The unsupervised vocabulary alignment in (Pr-Thr) handles vocabulary shift, especially changes in verb tenses (summaries tend to be in the present tense), but maintains the word order and adds very little information. Conversely, the ($\\mathbf {\\mu }$:1) expansion function, which is learned from purely extractive summaries, re-uses most words in the summary without any change and adds some new information. Finally, the auto-encoder based (DBAE) significantly increases the sequence length and variety, but also strays from the original meaning (more examples in the Appendix). The decoders also seem to learn facts about the world during their training on article text (EDF/GDF is France's public power company)."
],
[
"Finally, Table TABREF13 compares the summarizers learned at various back-translation iterations to other unsupervised and semi-supervised approaches. Overall, our system outperforms the unsupervised Adversarial-reinforce of BIBREF11 after one back-translation loop, and most semi-supervised systems after the second one, including BIBREF12's MASS pre-trained sentence encoder and BIBREF10's Forced-attention Sentence Compression (FSC), which use 100K and 500K aligned pairs respectively. As far as back-translation approaches are concerned, we note that the model performances are correlated with the initializers' scores reported in Table TABREF9 (iterations 4 and 6 follow the same pattern). In addition, we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section SECREF8 performs best, suggesting that the greater variety of artificial full text does help the model learn."
],
[
"In this work, we use the back-translation paradigm for unsupervised training of a summarization system. We find that the model benefits from combining initializers, matching the performance of semi-supervised approaches."
]
],
"section_name": [
"Introduction",
"Related Work",
"Mixed Model Back-Translation",
"Mixed Model Back-Translation ::: Initialization Models for Summarization",
"Mixed Model Back-Translation ::: Initialization Models for Summarization ::: Procrustes Thresholded Alignment (Pr-Thr)",
"Mixed Model Back-Translation ::: Initialization Models for Summarization ::: Denoising Bag-of-Word Auto-Encoder (DBAE)",
"Mixed Model Back-Translation ::: Initialization Models for Summarization ::: First-Order Word Moments Matching (@!START@$\\mathbf {\\mu }$@!END@:1)",
"Mixed Model Back-Translation ::: Artificial Training Data",
"Experiments ::: Data and Model Choices",
"Experiments ::: Initializers",
"Experiments ::: Full Models",
"Experiments ::: Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"df1428ed521b0d7c309ebc9c90b457dabc6b71b1"
],
"answer": [
{
"evidence": [
"We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11."
],
"extractive_spans": [],
"free_form_answer": "The same 2K set from Gigaword used in BIBREF7",
"highlighted_evidence": [
"We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What dataset they use for evaluation?"
],
"question_id": [
"ac148fb921cce9c8e7b559bba36e54b63ef86350"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"summarization"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Full text sequences generated by f (Pr-Thr),1 S→F , f (DBAE),1 S→F , and f (µ:1),1 S→F during the first back-translation loop.",
"Table 2: Test ROUGE for trivial baseline and initialization systems. 1(Wang and Lee, 2018).",
"Table 3: Comparison of full systems. The best scores for unsupervised training are bolded. Results from: 1(Wang and Lee, 2018), 2(Song et al., 2019), 3(Miao and Blunsom, 2016), and 4(Nallapati et al., 2016)"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What dataset they use for evaluation?"
] | [
[
"1908.08566-Experiments ::: Data and Model Choices-0"
]
] | [
"The same 2K set from Gigaword used in BIBREF7"
] | 846 |
1803.09745 | English verb regularization in books and tweets | The English language has evolved dramatically throughout its lifespan, to the extent that a modern speaker of Old English would be incomprehensible without translation. One concrete indicator of this process is the movement from irregular to regular (-ed) forms for the past tense of verbs. In this study we quantify the extent of verb regularization using two vastly disparate datasets: (1) Six years of published books scanned by Google (2003--2008), and (2) A decade of social media messages posted to Twitter (2008--2017). We find that the extent of verb regularization is greater on Twitter, taken as a whole, than in English Fiction books. Regularization is also greater for tweets geotagged in the United States relative to American English books, but the opposite is true for tweets geotagged in the United Kingdom relative to British English books. We also find interesting regional variations in regularization across counties in the United States. However, once differences in population are accounted for, we do not identify strong correlations with socio-demographic variables such as education or income. | {
"paragraphs": [
[
"Human language reflects cultural, political, and social evolution. Words are the atoms of language. Their meanings and usage patterns reveal insight into the dynamical process by which society changes. Indeed, the increasing frequency with which electronic text is used as a means of communicating, e.g., through email, text messaging, and social media, offers us the opportunity to quantify previously unobserved mechanisms of linguistic development.",
"While there are many aspects of language being investigated towards an increased understanding of social and linguistic evolution BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one particular area of focus has been on changes in past tense forms for English verbs BIBREF0 , BIBREF1 , BIBREF2 . These investigations have collectively demonstrated that English verbs are going through a process of regularization, where the original irregular past tense of a verb is replaced with the regular past tense, formed using the suffix -ed.",
"For example, the irregular past tense of the verb `burn' is `burnt' and the regular past tense is `burned'. Over time, the regular past tense has become more popular in general, and for some verbs has overtaken the irregular form. For example, in Fig.~ UID1 , we use the Google Ngram Online Viewer to compare the relative frequency of `burnt' with that of `burned' over the past 200 years. (As shown in an earlier paper involving two of the present authors BIBREF6 , and expanded on below, the Google Ngram dataset is highly problematic but can serve as a useful barometer of lexical change.) In the first half of the 19th century, the irregular past tense `burnt' was more popular. However, the regular past tense `burned' gained in popularity and in the late 1800s became the more popular form, which has persisted through to today.",
"Looking at several examples like this, in a 2011 paper Michel et al. studied the regularization of verbs, along with other cultural and language trends, as an accompaniment to their introduction of the Google Books Ngram corpus (hereafter Ngrams) and the proto-field `Culturomics' BIBREF1 . They found that most of the verb regularization over the last two centuries came from verbs using the suffix -t for the irregular form, and that British English texts were less likely than American English ones to move away from this irregular form.",
"In a 2007 study, Lieberman et al. explored the regularization of English verbs using the CELEX corpus, which gives word frequencies from several textual sources BIBREF0 . Focusing on a set of 177 verbs that were all irregular in Old English, they examined how the rate of verb regularization relates to frequency of usage, finding that more common verbs regularized at a slower rate. They calculated half-lives for irregular verbs binned by frequency, finding that irregular verbs regularize with a half-life proportional to the square root of frequency of usage.",
"In a more recent study, Newberry et al. proposed a method for determining the underlying mechanisms driving language change, including the regularization of verbs BIBREF2 . Using the Corpus of Historical American English and inspired by ideas from evolution, the authors described a method to determine if language change is due to selection or drift, and applied this method to three areas of language change. They used a null hypothesis of stochastic drift and checked if selection would be strong enough to reject this null hypothesis. Of the 36 verbs Newberry et al. studied, only six demonstrated statistical support for selection. They also claimed that rhyming patterns might be a driver of selection.",
"Unfortunately, the corpora used in these studies have considerable limitations and corruptions. For example, early versions of the Ngrams data includes scientific literature, whose explosive growth through the 20th century is responsible for the decreasing trend in relative word usage frequency observed in many common search terms BIBREF6 . Moreover, the library-like nature of the corpus admits no accounting for popularity: Lord of the Rings and an unknown work contribute with equal weight to token counts.",
"Another general concern with large corpora of a global language like English is that language use varies tremendously with culture and geography. Ngrams allows only for the regional exploration of the English language with the British English corpus and the American English corpus. Twitter data enables us to focus on much smaller spatial regions (e.g., county or state).",
"Prior studies of verb regularization have also focused on data reflecting a formal editorial process, such as the one undergone by any published book. This editorial process will tend to normalize the language, reflecting the linguistic opinions of a small minority of canon gatekeepers, rather than portray the language used by everyday people. For example, maybe the irregular from of a particular verb is considered proper by scholars, but a vast majority of the English speaking population uses the regular form. While it is not a verb form, one illustrative example is `whom'. Although `whom' is the correct word to use in the objective case, it is common for everyday speakers to use `who'.",
"In the present study we take tweets to be a closer representation of everyday language. For the vast majority of accounts, tweets are authored by individuals without undergoing a formal editing process. As such, the language therein should more accurately represent average speakers than what is found in books.",
"The demographic groups contributing to Twitter are by no means a carefully selected cross-section of society, but do offer natural language use by the roughly 20% of adult English speakers who use Twitter BIBREF8 . When exploring temporal changes in language use, the Ngrams and CELEX datasets evidently cover a much longer period than the decade for which social media is available. As a result, we are unable to infer anything about the temporal dimension of regularization looking at Twitter.",
"In this paper we use the Ngrams and Twitter datasets to establish estimates of the current state of English verb regularization. We structure our paper as follows: In Sec.~ SECREF2 , we describe the datasets we use. In Sec.~ SECREF3 , we present our results. We study verb regularization in English in general in Sec.~ UID5 . We compare verb regularization in American English (AE) and British English (BE) using both Ngrams and geotagged Twitter data in Sec.~ UID7 . In Sec.~ \"Description of data sets\" , we employ methods to study regional variation in verb usage, leveraging county level user location data in the United States. We also explore correlations between verb regularization and a number of socio-demographic and economic variables. Finally, in Sec.~ SECREF4 , we provide concluding remarks."
],
[
"To be consistent with prior work, we chose the verb list for our project to match that of Michel et al. BIBREF1 . When comparing BE with AE, we use the subset of verbs that form the irregular past tense with the suffix -t. When calculating frequencies or token counts for the `past tense' we use both the preterite and past participle of the verb. See #1 for a complete tabulation of all verb forms.",
"The Ngrams data reflects relative frequency, providing, for a verb and a given year, the percentage of corpus tokens that are the given verb, where a token is an individual occurrence of a word. The Google Ngram Online Viewer also has a smoothing parameter, $s$ , which averages the relative frequency for the given year with that of each of the $s$ years before and after the given year, if they exist. For example, Fig.~ UID1 uses a smoothing of 3 years and shows that, averaged across the years 1997--2000 (the value displayed for the year 2000), the word `burned' appeared with relative frequency 0.004321% (roughly once every 23,000 tokens), while `burnt' appeared with relative frequency 0.000954% (roughly once every 105,000 tokens).",
"We downloaded the Ngrams verb data for the most recent 6-year period available (2003--2008) BIBREF9 . Specifically, we chose the 2008 values of relative frequency with a smoothing of 5 years, resulting in an average case insensitive#1 For general English, as suggested by BIBREF6 , we queried the English Fiction 2012 corpus, which uses ``books predominantly in the English language that a library or publisher identified as fiction.'' For AE we used the American English 2012 corpus, which uses ``books predominantly in the English language that were published in the United States.'' For BE we used the British English 2012 corpus, which uses ``books predominantly in the English language that were published in Great Britain'' BIBREF10 .",
"The Twitter messages for our project consist of a random sample of roughly 10% of all tweets posted between 9 September 2008 and 22 October 2017. This `decahose' dataset comprises a total of more than 106 billion messages, sent by about 750 million unique accounts. From this larger set, we performed a case-insensitive search for verb forms of interest, also extracting geographic location when available in the meta-data associated with each tweet. Tweets geotagged by mobile phone GPS with a U.S. location comprise about a 0.27% subset of the decahose dataset; United Kingdom locations comprise about a 0.05% subset. Many individuals provide location information, entered as free text, along with their biographical profile. We matched user specified locations of the form `city, state' to a U.S. county when possible, comprising a 2.26% subset of the decahose dataset. Details on this matching process can be found in #1.",
"For general English, we counted the number of tokens in the decahose dataset for each verb. For AE, we used the tweets whose geotagged coordinates are located in the United States, and for BE we used the tweets whose geotagged coordinates are located in the United Kingdom. For the analysis of verbs by county, we used the tweets with the user entered location information. Table~ UID2 summarizes the datasets used for both Ngrams and Twitter.",
"The demographic data for U.S. counties comes from the 2015 American Community Survey 5-year estimates, tables DP02--Selected Social Characteristics, DP03--Selected Economic Characteristics, DP04--Selected Housing Characteristics, and DP05--Demographic and Housing Estimates, which can be found by searching online at https://factfinder.census.gov/. These tables comprise a total of 513 usable socio-demographic and economic variables.",
"We compute the regularization fraction for a verb as the proportion of instances in which the regular form was used for the past tense of the verb. More specifically, for Ngrams we divide the relative frequency for the regular past tense by the sum of the relative frequencies for the regular and irregular past tenses. Similarly, for Twitter we divide the token count for the regular past tense by the sum of the token counts for both the regular and irregular past tenses. If the resulting regularization fraction is greater than $0.5$ , the regular past tense is more popular and we call the verb regular. Otherwise we call the verb irregular.",
"When calculating an average regularization across all verbs, we first compute the regularization fraction for each verb individually. Then we compute the average of the regularization fractions, with each verb contributing the same weight in the average, irrespective of frequency. We perform this `average of averages' to avoid swamping the contribution of less frequent verbs."
],
[
"Using the datasets in row (I) of Table~ UID2 , we begin by comparing Ngrams and Twitter with respect to regularization of English verbs in Fig.~ UID3 , where we find that 21 verbs are more regular in Ngrams, and 85 are more regular on Twitter. A Wilcoxon signed rank test of the data has a $p$ -value of $7.9\\times 10^{-6}$ , demonstrating strong evidence that verbs on Twitter are more regular than verbs in Ngrams.",
"What mechanisms could be responsible for the observed increase in regularity on Twitter? One possibility is that authors of fiction published in the 2000s, along with their editors, being professional users of English, have a larger vocabulary than the typical user of Twitter. If so, their commitment to proper English would contribute to the appearance of relatively more irregular verbs in books. The average Twitter user may not know, or choose to use, the `correct' past tense form of particular verbs, and thus use the default regular past tense.",
"Another driver may be that non-native English speakers writing English tweets may be more likely to use the default regular form. We will find quantitative support for this mechanism below. As a preview, we note that Fig.~ UID3 shows that `burn' is predominantly regular on Twitter globally, but we see later (Fig.~ UID4 B) that `burn' is irregular on Twitter for both American English and British English. Thus, it is likely that non-native speakers are contributing to this difference."
],
[
"We next study how verb regularization varies with geographic region. In this subsection we use the datasets in row (II) of Table~ UID2 for AE and row (III) for BE and the subset of verbs that form the irregular past tense with the suffix -t.",
"In Fig.~ UID4 A, we compare American and British English in Ngrams. The average regularization fraction is 0.49 in AE and $0.42$ in BE. For 17 out of 22 verbs, AE shows more regularization, with a Wilcoxon signed rank test $p$ -value of $9.8\\times 10^{-4}$ , giving statistical support that AE verbs are more regular on average in Ngrams than BE verbs.",
"As we show in the inset scatter plot of Fig.~ UID4 A, regularization in AE and BE are also strongly positively correlated with a Spearman correlation coefficient of $0.97$ $(p=2.3\\times 10^{-14})$ . Verbs that are more regular in AE are also more regular in BE, just not to the same extent.",
"In Fig.~ UID4 B, we compare regularization in AE and BE on Twitter. For Twitter, the average regularization fraction is $0.54$ for AE, higher than Ngrams, and $0.33$ for BE, much lower than Ngrams. As with Ngrams, 17 verbs out of 22 show more regularization in AE than in BE. The Wilcoxon signed rank test gives a weaker but still significant $p$ -value of $1.9\\times 10^{-3}$ .",
"The inset in Fig.~ UID4 B also shows a positive correlation, although not as strong as Ngrams, with a Spearman correlation coefficient of $0.87$ $(p=1.1\\times 10^{-7})$ . Generally on Twitter, regular AE verbs are also regular in BE, but the difference in regularization fraction is much greater than for Ngrams.",
"In Fig.~ UID6 A, we demonstrate the difference in regularization between AE and BE for both Ngrams and Twitter. The values in this figure for Ngrams can be thought of as, for each verb in Fig.~ UID4 A, subtracting the value of the bottom bar from the top bar, and likewise for Twitter and Fig.~ UID4 B. Positive numbers imply greater regularization in AE, the more common scenario. When the difference is near zero for one corpus, it is usually close to zero for the other corpus as well. However, when Ngrams shows that AE is notably more regular than BE, Twitter tends to show a much larger difference.",
"The average difference in regularization fraction between AE and BE for Twitter is $0.21$ , whereas it is only $0.08$ for Ngrams. Again, we find that these averages are significantly different with a Wilcoxon signed rank $p$ -value of $1.9\\times 10^{-2}$ .",
"The inset scatter plot tells a similar story, with a cluster of points near the origin. As the difference in regularization fraction between regions increases in Ngrams, it also tends to increase in Twitter, with Spearman correlation coefficient $0.65$ and $p$ -value $1.0\\times 10^{-3}$ . The steep rise shows that the difference increases faster on Twitter than in Ngrams.",
"Fig.~ UID6 B returns to comparing Ngrams and Twitter, but now between AE and BE. For each verb, the bar chart shows the difference between the regularization fraction for Twitter and Ngrams in both AE and BE, with positive values showing that regularization for Twitter is greater. In this case, the values can be thought of as subtracting the values for the bars in Fig.~ UID4 A from the corresponding bars in Fig.~ UID4 B. As we find for English in general, regularization is greater on Twitter than in Ngrams for AE, with an average difference of $0.04$ . However, for BE, regularization is greater in Ngrams than on Twitter, with an average difference in regularization fraction of $-0.09$ .",
"We summarize our findings in Table~ UID8 . We found again that verbs on Twitter are more regular than in Ngrams for American English, likely for many of the same reasons that verbs on Twitter are more regular than Ngrams in general. However, we find that in British English the opposite is true: Verbs on Twitter are less regular than in Ngrams. In decreasing order by average regularization fraction, we have AE Twitter, then AE Ngrams, then BE Ngrams, and finally BE Twitter. Knowing that the general trend is towards regularization BIBREF1 , BIBREF0 , it seems that regularization is perhaps being led by everyday speakers of American English, with American published work following suit, but with a lag. Then, it may be that British English authors and editors are being influenced by American publications and the language used therein. Indeed, some studies have found a general `Americanization' of English across the globe BIBREF11 , BIBREF12 , meaning that the various varieties of English used across the world are becoming more aligned with American English. Finally, it may be that average British users of Twitter are more resistant to the change. Indeed, from the figures in the study by Gonçalves et al., one can see that the `Americanization' of British English is more pronounced in Ngrams than on Twitter BIBREF11 , agreeing with what we have found here."
],
[
"In Sec.~ UID7 , we demonstrated regional differences in verb regularization by comparing BE and AE. Here, we consider differences on a smaller spatial scale by quantifying regularization by county in the United States using the dataset in row (IV) of Table~ UID2 . We use methods inspired by Grieve et al. to study regional variation in language BIBREF13 .",
"We only include counties that had at least 40 total tokens for the verbs under consideration. We plot the average regularization fraction for each county in the continental U.S. in Fig.~ \"Introduction\" A, where counties with not enough data are colored black. To control for the skewed distribution of samples associated with county population (see below for more details), we use residuals for this portion of the analysis. After regressing with the $\\log _{10}$ of data volume (total number of tokens) for each county, we compute the average regularization fraction residual, which is plotted in Fig.~ \"Introduction\" B.",
"That is, if we let $d_i$ be the total number of tokens for verbs in tweets from county $i$ ; $\\alpha $ and $\\beta $ be the slope and intercept parameters computed from regression; and $R_i$ be the average regularization fraction for county $i$ , then we compute the average regularization fraction residual for county $i$ , $r_i^{\\text{reg}}$ , as ",
"$$r_i^{\\text{reg}} = R_i - \\left(\\beta + \\alpha \\log _{10} d_i \\right).$$ (Eq. 34) ",
"Using the average regularization residual at the county level as input, we measure local spatial autocorrelation using the Getis-Ord $Gi^*$ $z$ -score BIBREF14 , ",
"$$G_i^* =\n\\frac{ \\sum _j w_{ij} r_j^{\\text{reg}} - \\overline{r}^{\\text{reg}}\\sum _j w_{ij}}{\\sigma \\sqrt{\\left[n\\sum _j w_{ij}^2 - \\left( \\sum _j w_{ij}\\right)^2 \\right]/(n-1)}},$$ (Eq. 35) ",
"where ",
"$$\\sigma =\\sqrt{\n\\frac{\\sum _j (r_j^{\\text{reg}})^2 }{n}\n- (\\overline{r}^{\\text{reg}})^2\n},$$ (Eq. 36) ",
" $\\overline{r}^{\\text{reg}} = \\frac{1}{n}\\sum _i r_i^{\\text{reg}}$ , $n$ is the number of counties, and $w_{ij}$ is a weight matrix. To obtain the weight matrix used in this calculation, we first create a distance matrix, $s_{ij}$ , where the distance between each pair of counties is the larger of the great circle distance, $s_{ij}^\\text{GC}$ , in miles between the centers of the bounding box for each county and 10 miles. That is, ",
"$$s_{ij}=\\max \\left(s_{ij}^\\text{GC}, 10\\right).$$ (Eq. 37) ",
"We make the minimum value for $s_{ij}$ 10 miles to prevent a county from having too large of a weight. We then compute the weight matrix as ",
"$$w_{ij}=\\frac{1}{\\sqrt{s_{ij}}}.$$ (Eq. 38) ",
"Fig.~ \"Introduction\" C shows the results for the lower 48 states, where black represents counties left out because there was not enough data. For each county, the $Gi^*$ $z$ -score computes a local weighted sum of the residuals, $r_j^\\text{reg}$ , for the surrounding counties and compares that to the expected value of that weighted sum if all the counties had exactly the average residual, $\\overline{r}^\\text{reg}$ , as their value, where the weighting is such that closer counties have a higher weight. Areas that are darker blue (positive $z$ -score) belong to a cluster of counties that has higher regularization than average, and those that are darker red (negative $z$ -score) belong to a cluster that has lower regularization than average. So, Fig.~ \"Introduction\" C shows that, in general, western counties show less regularization than average and eastern counties show more, except that the New England area is fairly neutral.",
"As usual, the $z$ -score gives the number of standard deviations away from the mean. For this we would do a two tail test for significance because we are looking for both high value and low value clusters. For example, a $z$ -score greater in magnitude than $1.96$ is significant at the $.05$ level. If we do a Bonferroni correction based on 3161 counties (the number included for this part of the analysis), then a $z$ -score greater in magnitude than $4.32$ is significant for a two tail test at the $.05/3161\\approx 1.58 \\times 10^{-5}$ level.",
"We do this same process looking at individual verbs as well. However, when looking at individual verbs, we use the regularization fraction rather than residuals, because the data skew is not as problematic. This is because the main problem with data volume comes when averaging across verbs that have different frequencies of usage, as explained below. Also, here we include counties that have at least 10 tokens. Fig.~ \"\" gives an example map showing the $Gi^*$ $z$ -scores for the verb `dream'. The maps showing local spatial autocorrelation for the complete list of verbs can be found in the Online Appendix A at .",
"For many of the counties in the US, there is a small sample of Twitter data. We restrict our analysis to counties with a total token count of at least 40 for the verbs we consider. Even for the counties meeting this criteria, the volume of data varies, leading to drastically different sample sizes across counties.",
"More common verbs tend to have popular irregular forms (e.g., `found' and `won'), and less common verbs tend to be regular (e.g., `blessed' and `climbed') BIBREF0 . As a result, samples taken from populous counties are more likely to contain less common verbs. Our `average regularization' is an average of averages, resulting in an underlying trend toward higher rates for more populous counties due to the increased presence of rarer regular verbs.",
"Fig.~ UID17 demonstrates the relationship between data volume and regularization. To explore the connection further, we perform a synthetic experiment as follows.",
"To simulate sampling from counties with varying population sizes, we first combine all verb token counts (using the Twitter dataset from row (I) of Table~ UID2 ) into a single collection. We then randomly sample a synthetic county worth of tokens from this collection. For a set of 1000 logarithmically spaced county sizes, we randomly draw five synthetic collections of verbs (each is a blue circle in Fig.~ UID17 ). For each sample, we compute the average regularization fraction, as we did for U.S. counties. The goal is to infer the existences of any spurious trend introduced by the sampling of sparsely observed counties.",
"The resulting simulated curve is comparable to the trend observed for actual U.S. counties. As the data volume increases, the simulated version converges on roughly $0.17$ , which is the average regularization fraction for all of Twitter.",
"We also explored correlations between verb regularization and various demographic variables. Fig.~ UID17 showed a strong relationship between data volume and verb regularization. It has been shown elsewhere that tweet density positively correlates with population density BIBREF15 , and population size is correlated with many demographic variables. As a result, we use partial correlations as an attempt to control for the likely confounding effect of data volume.",
"For each demographic variable, we compute the regression line between the $\\log _{10}$ of data volume, $d_i$ , and regularization, and compute the residuals as in Eq.~ \"Methods and results\" . Then, if the demographic variable is an `Estimate' variable, where the unit is number of people, we similarly compute the regression line between the $\\log _{10}$ of data volume and the $\\log _{10}$ of the demographic variable#1 and compute the residuals, $r_i^{\\text{dem}}$ , as ",
"$$r_i^{\\text{dem}} = \\log _{10}(D_i) - \\left( \\delta + \\gamma \\log _{10} d_i \\right),$$ (Eq. 42) ",
"where $D_i$ is the value of the demographic variable for county $i$ , and $\\gamma $ and $\\delta $ are the slope and intercept parameters calculated during regression.",
"Otherwise, the demographic variable is a `Percent' variable, with units of percentage, and we compute the regression line between the $\\log _{10}$ of data volume and the demographic variable, and compute residuals as ",
"$$r_i^{\\text{dem}} = D_i - \\left( \\delta + \\gamma \\log _{10} d_i \\right).$$ (Eq. 44) ",
"The correlation between residuals $r_i^{\\text{reg}}$ and $r_i^{\\text{dem}}$ gives the partial correlation between average regularization and the demographic variable.",
"Our findings suggest that data volume is a confounding variable in at least some of the cases because, after controlling for data volume, there is generally a large decrease in the correlation between verb regularization and the demographic variables. The largest in magnitude Pearson correlation between verb regularization and a demographic variable is $0.68$ , for the variable `Estimate; SCHOOL ENROLLMENT - Population 3 years and over enrolled in school', whereas the largest in magnitude partial correlation is only $-0.18$ , for the variable `Percent; OCCUPATION - Civilian employed population 16 years and over - Management, business, science, and arts occupations'. Table~ UID20 lists the 10 demographic variables with largest in magnitude partial correlation.",
"Fig.~ UID18 shows an example for one of the demographic variables, the `Percent' variable with largest simple correlation. Fig.~ UID18 A is the scatter plot of the demographic variable with average regularization, which corresponds to simple correlation. Fig.~ UID18 B is the scatter plot of the residuals, $r_i^{\\text{dem}}$ and $r_i^{\\text{reg}}$ , after regressing with the $\\log _{10}$ of data volume, and corresponds with partial correlation. We can see that there is a strong simple correlation ( $-0.52$ ), but after accounting for data volume that correlation largely vanishes ( $-0.15$ ). Similar plots for all of the demographic variables can be found in the Online Appendix B at ."
],
[
"Our findings suggest that, by and large, verb regularization patterns are similar when computed with Ngrams and Twitter. However, for some verbs, the extent of regularization can be quite different. If social media is an indicator of changing patterns in language use, Ngrams data ought to lag with a timescale not yet observable due to the recency of Twitter data. Very reasonably, Ngrams data may not yet be showing some of the regularization that is happening in everyday English.",
"We also found differences in verb regularization between American and British English, but found that this difference is much larger on Twitter than Ngrams. Overall, and in American English specifically, verbs are more regular on Twitter than in Ngrams, but the opposite is true for British English. In the U.S., we also find variation in average verb regularization across counties. Lastly, we showed that there are significant partial correlations between verb regularization and various demographic variables, but they tend to be weak.",
"Our findings do not account for the possible effects of spell checkers. Some people, when tweeting, may be using a spell checker to edit their tweet. If anything, this will likely skew the language on Twitter towards the `correct' form used in edited textual sources. For example, in Fig.~ UID3 we see that `stand' is irregular for both Ngrams and Twitter, and likely most spell checkers would consider the regular `standed' a mistake, but we see that `stand' is still over 100 times more regular on Twitter than in Ngrams. So, the differences between edited language and everyday language may be even larger than what we find here suggests. Future work should look into the effects of spell checkers.",
"Our study explored the idea that edited written language may not fully represent the language spoken by average speakers. However, tweets do not, of course, fully represent the English speaking population. Even amongst users, our sampling is not uniform as it reflects the frequency with which different users tweet #1. Furthermore, the language used on Twitter is not an unbiased sample of language even for people who use it frequently. The way someone spells a word and the way someone pronounces a word may be different, especially, for example, the verbs with an irregular form ending in -t, because -t and -ed are close phonetically. However, the fact that we found differences between the language of Ngrams and the language of Twitter suggests that the true language of everyday people is not fully represented by edited written language. We recommend that future studies should investigate speech data.",
"We are thankful for the helpful reviews and discussions of earlier versions of this work by A. Albright and J. Bagrow, and for help with initial data collection from L. Gray. PSD & CMD were supported by NSF Grant No. IIS-1447634, and TJG, PSD, & CMD were supported by a gift from MassMutual."
],
[
"|c||c||c|c||r| & Regular & 2c||Irregular &",
"Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count",
"(continued)",
"& Regular & 2c||Irregular &",
"Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count",
"5|r|Continued on next page",
"A tabulation of all verb forms used in this study. The Token Count column gives the sum of all the tokens for the past tense forms of the verb, both regular and irregular, in our Twitter dataset (see row (I) of Table~ UID2 in Sec.~ SECREF2 ).",
"abide & abided & abode & abode & 146,566",
"alight & alighted & alit & alit & 56,306",
"arise & arised & arose & arisen & 164,134",
"awake & awaked & awoke & awoken, awoke & 423,359",
"become & becomed & became & become & 50,664,026",
"begin & beginned & began & begun & 5,942,572",
"bend & bended & bent & bent & 4,777,019",
"beseech & beseeched & besought & besought & 3,390",
"bleed & bleeded & bled & bled & 252,225",
"blend & blended & blent & blent & 436,029",
"bless & blessed & blest & blest & 22,547,387",
"blow & blowed & blew & blown & 9,155,246",
"break & breaked & broke & broken & 54,506,810",
"breed & breeded & bred & bred & 1,040,854",
"bring & bringed & brought & brought & 15,303,318",
"build & builded & built & built & 8,521,553",
"burn & burned & burnt & burnt & 7,457,942",
"buy & buyed & bought & bought & 24,841,526",
"catch & catched & caught & caught & 24,891,188",
"choose & choosed & chose & chosen & 10,290,205",
"clap & clapped & clapt & clapt & 405,837",
"climb & climbed & clomb, clom & clomben & 635,122",
"cling & clinged & clung & clung & 49,742",
"creep & creeped & crept & crept & 698,405",
"deal & dealed & dealt & dealt & 1,181,974",
"dig & digged & dug & dug & 941,656",
"dream & dreamed & dreamt & dreamt & 2,794,060",
"drink & drinked & drank & drunk, drank & 37,295,703",
"drive & drived & drove & driven & 5,745,497",
"dwell & dwelled & dwelt & dwelt & 25,725",
"eat & eated & ate & eaten & 25,084,758",
"fall & falled & fell & fallen & 25,224,815",
"fight & fighted & fought & fought & 3,625,297",
"find & finded & found & found & 80,709,195",
"flee & fleed & fled & fled & 405,295",
"freeze & freezed & froze & frozen & 7,454,847",
"get & getted & got & got, gotten & 500,591,203",
"give & gived & gave & given & 58,697,198",
"grow & growed & grew & grown & 17,951,273",
"hang & hanged & hung & hung & 3,991,956",
"hear & heared & heard & heard & 52,605,822",
"hide & hided, hidded & hid & hid, hidden & 7,829,276",
"hold & holded & held & held & 10,080,725",
"inlay & inlayed & inlaid & inlaid & 44,811",
"keep & keeped & kept & kept & 11,785,131",
"kneel & kneeled & knelt & knelt & 83,765",
"know & knowed & knew & known & 58,175,701",
"lay & layed & laid & laid & 5,828,898",
"leap & leaped & leapt & leapt & 91,620",
"learn & learned & learnt & learnt & 18,134,586",
"lose & losed & lost & lost & 72,695,892",
"mean & meaned & meant & meant & 26,814,977",
"pay & payed & paid & paid & 21,150,031",
"plead & pleaded & pled & pled & 193,553",
"ride & rided & rode & ridden & 1,710,109",
"seek & seeked & sought & sought & 888,822",
"sell & selled & sold & sold & 14,251,612",
"send & sended & sent & sent & 26,265,441",
"shake & shaked & shook & shaken & 3,223,316",
"shoe & shoed & shod & shod & 47,780",
"shrink & shrinked & shrank, shrunk & shrunk, shrunken & 296,188",
"sing & singed & sang, sung & sung & 6,767,707",
"sink & sinked & sank, sunk & sunk, sunken & 927,419",
"slay & slayed & slew & slain & 2,153,981",
"sleep & sleeped & slept & slept & 9,252,446",
"slide & slided & slid & slid & 530,659",
"sling & slinged & slung & slung & 38,320",
"slink & slinked & slunk & slunk & 5,772",
"smell & smelled & smelt & smelt & 1,089,814",
"smite & smitted, smited & smote & smitten, smote & 176,768",
"sneak & sneaked & snuck & snuck & 797,337",
"speak & speaked & spoke & spoken & 8,502,050",
"speed & speeded & sped & sped & 216,062",
"spell & spelled & spelt & spelt & 3,812,137",
"spend & spended & spent & spent & 17,603,781",
"spill & spilled & spilt & spilt & 1,627,331",
"spin & spinned & spun & spun & 342,022",
"spoil & spoiled & spoilt & spoilt & 3,891,576",
"spring & springed & sprang, sprung & sprung & 626,400",
"stand & standed & stood & stood & 3,942,812",
"steal & stealed & stole & stolen & 11,884,934",
"sting & stinged & stung & stung & 391,053",
"stink & stinked & stank, stunk & stunk & 1,556,197",
"stride & strided & strode & stridden & 17,811",
"strike & striked & struck & struck, stricken & 2,167,165",
"strip & stripped & stript & stript & 837,967",
"strive & strived & strove & striven & 33,705",
"swear & sweared & swore & sworn & 1,902,662",
"sweep & sweeped & swept & swept & 931,245",
"swim & swimmed & swam & swum & 356,842",
"swing & swinged & swung & swung & 295,360",
"take & taked & took & taken & 83,457,822",
"teach & teached & taught & taught & 9,379,039",
"tear & teared & tore & torn & 4,238,865",
"tell & telled & told & told & 71,562,969",
"thrive & thrived & throve & thriven & 43,612",
"throw & throwed & threw & thrown & 13,197,226",
"tread & treaded & trod & trodden & 56,624",
"vex & vexed & vext & vext & 139,411",
"wake & waked & woke & woken & 30,796,918",
"wear & weared & wore & worn & 8,552,191",
"weep & weeped & wept & wept & 200,690",
"win & winned & won & won & 45,276,202",
"wind & winded & wound & wound & 1,340,267",
"wring & wringed & wrung & wrung & 29,141",
"write & writed & wrote & written, writ, wrote & $23,926,025$ "
],
[
"To study regularization by county, we extracted location information from the user-provided location information, which was entered as free text in the user's biographical profile. To do this, for each tweet we first checked if the location field was populated with text. If so, we then split the text on commas, and checked whether there were two tokens separated by a comma. If so, we made the assumption that it might be of the form `city, state'. Then we used a python package called uszipcode, which can be found here: pythonhosted.org/uszipcode/. We used the package's method to search by city and state. If the package returned a location match, we used the returned latitude and longitude to determine which county the detected city belonged to.",
"The package allows for fuzzy matching, meaning the city and state do not have to be spelled correctly, and it allows for the state to be fully spelled out or be an abbreviation. In the source code of the package there was a hard coded confidence level of 70 for the fuzzy matching. We modified the source code so that the confidence level was an input to the method, and running tests found we were satisfied with a confidence level of 91. We checked by hand the matches of 1000 tweets that this method returned a match for, 100 from each year in the dataset, and found the only potential error in these matches was when the user typed in `Long Island, NY', or a similar variant. For this, the package returned Long Island City, NY, which is on Long Island, but there are multiple counties on Long Island, so the user may actually live in a different county. None of the other 1000 tweets were inappropriately or ambiguously assigned."
]
],
"section_name": [
"Introduction",
"Description of data sets",
"Verb regularization using Ngrams and Twitter",
"American and British English",
"Regularization by US county",
"Concluding remarks",
"Table of Verb Forms",
"Details on User Location Matching"
]
} | {
"answers": [
{
"annotation_id": [
"ea99a4439a672a13f46b5a9a785da27d789a6845"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig 5. (A) The average verb regularization fraction by county for the lower 48 states, along with (B) residuals and (C) Gi� z-score. A higher Gi� z-score means a county has a greater regularization fraction than expected. Counties colored black did not have enough data. We used the dataset in row (IV) of Table 1."
],
"extractive_spans": [],
"free_form_answer": "all regions except those that are colored black",
"highlighted_evidence": [
"FLOAT SELECTED: Fig 5. (A) The average verb regularization fraction by county for the lower 48 states, along with (B) residuals and (C) Gi� z-score. A higher Gi� z-score means a county has a greater regularization fraction than expected. Counties colored black did not have enough data. We used the dataset in row (IV) of Table 1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e0ae7909e0913fea184c701bd75e50a275ffe190"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which regions of the United States do they consider?",
"Why did they only consider six years of published books?"
],
"question_id": [
"b5bfa6effdeae8ee864d7d11bc5f3e1766171c2d",
"bf00808353eec22b4801c922cce7b1ec0ff3b777"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig 1. Relative word frequencies for the irregular and regular past verb forms for ‘burn’ during the 19th and 20th centuries, using the Google Ngram Online Viewer with the English Fiction 2012 corpus. Google Ngram trends can be misleading but capture basic shifts in a language’s lexicon [7, 8]. The irregular form ‘burnt’ was once more popular, but the regular form ‘burned’ overtook it in the late 19th century and its popularity has steadily increased ever since while that of ‘burnt’ has decreased. The dynamics of verb tense changes are rich, reflecting many processes at play in the Google Books Ngram data. An interactive version of this graphic can be found at https:// books.google.com/ngrams/graph?content=burned%2Cburnt&year_start=1800&year_end=2000&corpus=16&smoothing=3.",
"Table 1. Summary of verb datasets.",
"Fig 2. Comparison of verb regularization for Ngrams and Twitter. We calculate verb regularization fractions using the datasets in row (I) of Table 1. Verbs are centered at their regularization fraction in Ngrams (horizontal) and Twitter (vertical). Both axes are on a logit scale, which spreads out both extremes of the interval (0, 1). Verbs to the right of the vertical dashed line are regular in Ngrams; verbs above the horizontal dashed line are regular on Twitter. The diagonal dashed line separates verbs that are more regular on Twitter (those above and to the left of the line) from those that are more regular in Ngrams (those below and to the right of the line). For example, compared with ‘knew’, the word ‘knowed’ appears roughly 3 times in 1000 in Ngrams, and 2 times in 10,000 on Twitter, making ‘know’ irregular in both cases, but more than an order of magnitude more regular in Ngrams than on Twitter.",
"Fig 3. American and British English verb regularization fractions for (A) Ngrams and (B) Twitter. We use the subset of verbs that form the irregular past tense with the suffix -t and the datasets in rows (II) and (III) of Table 1. The inset scatter plot has a point for each verb. The dashed diagonal line separates verbs that are more regular in AE (below the line) from those that are more regular in BE (above the line).",
"Fig 4. Differences in verb regularization fractions. The bar chart gives the difference for each verb in each corpus. The inset scatter plot has a point for each verb. (A) The difference between verb regularization fractions for AE and BE in Twitter and Ngrams. The dashed diagonal line of the inset scatter plot separates verbs for which this difference is greater in Ngrams (below the line) from those for which it is greater in Twitter (above the line). (B) The difference between verb regularization fraction for Twitter and Ngrams in AE and BE. The dashed diagonal line of the inset scatter plot separates verbs for which this difference is greater in AE (below the line) from those for which it is greater in BE (above the line).",
"Table 2. A summary of the average regularization fractions for AE and BE on Twitter and Ngrams. Note that the differences were taken prior to rounding.",
"Fig 5. (A) The average verb regularization fraction by county for the lower 48 states, along with (B) residuals and (C) Gi� z-score. A higher Gi� z-score means a county has a greater regularization fraction than expected. Counties colored black did not have enough data. We used the dataset in row (IV) of Table 1.",
"Fig 6. The Gi� z-score for verb regularization by county for the verb ‘dream’ for the lower 48 states. Counties colored black did not have enough data. People tweet ‘dreamed’ rather than ‘dreamt’ more often than expected in the southeastern U.S.",
"Fig 7. (A) Scatter plot of average verb regularization for counties. For each county, the horizontal coordinate is the total token count of verbs found in tweets from that county, and the vertical coordinate is that county’s average regularization fraction. For a version with verbs split into frequency bins, see S1 Fig (B) We created synthetic counties by sampling words from the collection of all occurrences of all verbs on Twitter (using the dataset from row (I) of Table 1). The point’s horizontal position is given by the total sample token count in a synthetic county; the vertical position is given by its average regularization fraction.",
"Table 3. Top demographic variables sorted by the magnitude of their partial correlation with verb regularization in U.S. counties. For example, regularization is positively correlated with the percentage of workers driving alone to work, and anti-correlated with the percentage of individuals working from home. Statistics for all of the demographic variables can be found in the Online Appendix B at https://www.uvm.edu/storylab/share/papers/gray2018a/.",
"Fig 8. (A) Average verb regularization for counties as a function of the percentage of civilians employed in agriculture, forestry, fishing, hunting, and mining. Several hundred such plots are available in an interactive online appendix. (B) For each county, the horizontal coordinate is given by the residual left after regressing the demographic variable with the log10 of data volume and the vertical coordinate is given by the residual left after regressing that county’s average regularization fraction with the log10 of data volume. Data volume, for a county, is the total token count of all verbs found in tweets from that county."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Table2-1.png",
"10-Figure5-1.png",
"11-Figure6-1.png",
"13-Figure7-1.png",
"14-Table3-1.png",
"15-Figure8-1.png"
]
} | [
"Which regions of the United States do they consider?"
] | [
[
"1803.09745-10-Figure5-1.png"
]
] | [
"all regions except those that are colored black"
] | 848 |
1904.06941 | A framework for streamlined statistical prediction using topic models | In the Humanities and Social Sciences, there is increasing interest in approaches to information extraction, prediction, intelligent linkage, and dimension reduction applicable to large text corpora. With approaches in these fields being grounded in traditional statistical techniques, the need arises for frameworks whereby advanced NLP techniques such as topic modelling may be incorporated within classical methodologies. This paper provides a classical, supervised, statistical learning framework for prediction from text, using topic models as a data reduction method and the topics themselves as predictors, alongside typical statistical tools for predictive modelling. We apply this framework in a Social Sciences context (applied animal behaviour) as well as a Humanities context (narrative analysis) as examples of this framework. The results show that topic regression models perform comparably to their much less efficient equivalents that use individual words as predictors. | {
"paragraphs": [
[
"For the past 20 years, topic models have been used as a means of dimension reduction on text data, in order to ascertain underlying themes, or `topics', from documents. These probabilistic models have frequently been applied to machine learning problems, such as web spam filtering BIBREF0 , database sorting BIBREF1 and trend detection BIBREF2 .",
"This paper develops a methodology for incorporating topic models into traditional statistical regression frameworks, such as those used in the Social Sciences and Humanities, to make predictions. Statistical regression is a supervised method, however it should be noted the majority of topic models are themselves unsupervised.",
"When using text data for prediction, we are often confronted with the problem of condensing the data into a manageable form, which still retains the necessary information contained in the text. Methods such as using individual words as predictors, or $n$ -grams, while conceptually quite simple, have a tendency to be extremely computationally expensive (with tens of thousands of predictors in a model). Except on extremely large corpora, this inevitably leads to overfitting. As such, methods that allow text to be summarised by a handful of (semantically meaningful) predictors, like topic models, gives a means to use large amounts of text data more effectively within a supervised predictive context.",
"This paper outlines a statistical framework for predictive topic modelling in a regression context. First, we discuss the implementation of a relatively simple (and widely used) topic model, latent Dirichlet allocation (LDA) BIBREF3 , as a preprocessing step in a regression model. We then compare this model to an equivalent topic model that incorporates supervised learning, supervised LDA (sLDA) BIBREF4 .",
"Using topic models in a predictive framework necessitates estimating topic proportions for new documents, however retraining the LDA model to find these is computationally expensive. Hence we derive an efficient likelihood-based method for estimating topic proportions for previously unseen documents, without the need to retrain.",
"Given these two models hold the `bag of words' assumption (i.e., they assume independence between words in a document), we also investigate the effect of introducing language structure to the model through the hidden Markov topic model (HMTM) BIBREF5 . The implementation of these three topic models as a dimension reduction step for a regression model provides a framework for the implementation of further topic models, dependent on the needs of the corpus and response in question."
],
[
"The following definitions are used when considering topic models.",
"Vocabulary ( $V$ ): a set of $v$ unique elements (generally words) from which our text is composed.",
"Topic ( $\\phi $ ): a probability distribution over the vocabulary. That is, for word $i$ in the vocabulary, a probability $p_{i} \\in [0,1]$ is assigned of that word appearing, given the topic, with $\\sum _{i = 1}^{v} p_{i} = 1$ . In general, there are a fixed number $k$ of topics, $\\phi = \\left\\lbrace \\phi _{1},...,\\phi _{k}\\right\\rbrace $ .",
"Document ( $\\mathbf {w}$ ): a collection of $n_{j}$ units (or words) from the vocabulary. Depending on the topic model, the order of these words within the document may or may not matter.",
"Corpus ( $\\mathbf {D}$ ): a collection of $m$ documents over which the topic model is applied. That is, $\\mathbf {D} = \\left\\lbrace \\mathbf {w}_{1},...,\\mathbf {w}_{m}\\right\\rbrace $ , each with length $n_{j}$ , $j = 1,2,...,m$ .",
"Topic proportion ( $\\theta _{j}$ ): a distribution of topics over the document $j$ . A corpus will then have an $m \\times k$ matrix $\\theta $ , where each row $j = 1,2,...,m$ corresponds to the distribution of topics over document $j$ ."
],
[
"Latent Dirichlet allocation (LDA) BIBREF3 , due to its simplicity and effectiveness, continues to be the basis for many topic models today. When considering topic regression, we take LDA as our `baseline' model; i.e., we measure all subsequent models against the performance of the LDA regression model.",
"LDA is an unsupervised process that assumes both topics and topic proportions are drawn from Dirichlet distributions. One reason for its simplicity is that it makes the `bag of words' assumption. LDA assumes the process outlined in Algorithm \"Regression model and number of topics\" when generating documents.",
" $l = 1,2,...,k$ generate the $k$ topics $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$ $j = 1,2,...,m$ let $n_{j} \\sim \\textrm {Poisson}(\\xi )$ , the length of document $j$ choose the topic proportions $\\theta _{j} \\sim \\textrm {Dir}(\\alpha )$ $i = 1,2,...,n_{j}$ choose the topic assignment $z_{ji} \\sim \\textrm {Multi}(\\theta _{j})$ choose a word $w_{ji} \\sim \\textrm {Multi}(\\phi _{z_{ji}})$ create the document $k$0 LDA generative process.",
"Here, $\\alpha $ (length $k$ ) and $\\beta $ (length $v$ ) are hyperparameters of the distributions of the $\\theta _{j}$ and $\\phi _{l}$ respectively.",
"When topic modelling, we are generally interested in inferring topic proportions $\\theta = \\left\\lbrace \\theta _{1},...,\\theta _{m}\\right\\rbrace $ and topics $\\phi $ themselves, given the corpus $\\mathbf {D}$ . That is, we wish to find $\nP\\left(\\theta ,\\phi | \\mathbf {D}, \\alpha , \\beta \\right) = \\frac{P\\left(\\theta ,\\phi , \\mathbf {D} | \\alpha , \\beta \\right)}{P\\left( \\mathbf {D} | \\alpha , \\beta \\right)}.\n$ ",
"The denominator, $P\\left( \\mathbf {D} | \\alpha , \\beta \\right)$ , the probability of the corpus, is understandably generally intractable to compute. For the purposes of this paper, we use collapsed Gibbs sampling as outlined in BIBREF6 , as an approximate method for finding the LDA model given the corpus."
],
[
"Given an LDA model on a corpus with some corresponding response variable, we use the topic proportions generated as predictors in a regression model. More specifically, we use the topic proportions $\\theta $ as the predictors, as the amount of a document belonging to each topic may be indicative of its response.",
"When applying LDA as a preprocessing step to a regression model, we must also bear in mind the number of topics $k$ we choose for the LDA model. While this number is assumed to be fixed in advance, there are various measures for determining the number that best `fits' the corpus, such as perplexity BIBREF3 and the log likelihood measure outlined in BIBREF6 .",
"However, given we are inferring this topic model with a specific purpose in mind, it would be prudent to include this information into the decision making process. For that reason, we choose the `best' number of topics $k$ to be the number that reduces the cross validation prediction error (CVPE) BIBREF7 of the corresponding LDA regression model, found from $K$ -fold cross validation of the model on the corpus. The CVPE is here defined to be $\n\\textrm {CVPE}_{K} = \\sum \\limits _{i = 1}^{K} \\frac{m_{i}}{m} \\textrm {MSE}_{i},\n$ ",
"where $K$ is the number of folds, $m_{i}$ is the number of documents in the $i$ th fold, and $m$ the total number of documents in the corpus. The mean square error for the $i$ th fold, denoted by $\\textrm {MSE}_{i}$ , is defined as $\n\\textrm {MSE}_{i} = \\sum \\limits _{j \\in C_{i}} \\frac{1}{m_{i}} \\left( y_{j} - \\hat{y}_{j} \\right)^{2},\n$ ",
"where $\\hat{y}_{j}$ is the model estimate of response $y_{j}$ for all documents in the set $C_{i}$ , the $i$ th fold. It follows that the better a model performs, the smaller the MSE and thus the CVPE.",
"While we choose the best number of topics based on the information in the regression model, it should be noted that LDA is still unsupervised, and that the topics have not been generated with the response in mind."
],
[
"When it comes to prediction, we generally have a corpus for which we find our regression model, and use this model to predict the response of new documents that are not in the original corpus. Because our regression model requires us to know $\\theta _{j}$ , the topic proportion, for any new document $j$ , we have two options. Either the topic model can be retrained with the new document added to the corpus, and the regression model retrained with the new topics on the old documents, or the topic proportions can be found based on the existing topic model.",
"For both efficiency's sake (i.e., to avoid retraining the model for every prediction), and for the sake of true prediction, the second option is preferable. Particularly in cross validation, it is necessary to have a completely distinct traning and test set of data. In retraining a topic model with new documents, we do not have a clear distinction between the two sets.",
" BIBREF3 outline a procedure for estimating the topic proportions of a held-out document, however this procedure follows a posterior approach that requires variationally inferring the posterior parameters, which are then used to approximate the expected number of words belonging to each topic, as an estimate for $\\theta _{j}$ .",
"We propose here a likelihood-based approach to estimation of topic proportions of new documents, by treating the problem as a case of maximum likelihood estimation. That is, we want to find $\\hat{\\theta }_{j}$ , the estimate of $\\theta _{j}$ that maximises the likelihood of document $j$ occurring, given our existing topic model. Therefore, we aim to maximise $\nL(\\theta _{j}) &=& f(\\mathbf {w}_{j} | \\theta _{j}) \\\\\n&=& f(w_{j1},...,w_{jn_{j}} | \\theta _{j}),\n$ ",
"where $w_{j1},...,w_{jn_{j}}$ are the words in document $j$ . As LDA is a `bag of words' model, we are able to express this as $\nL(\\theta _{j}) = \\prod \\limits _{i = 1}^{n_{j}} f(w_{ji} | \\theta _{j}).\n$ ",
"The law of total probability gives $\nL(\\theta _{j}) = \\prod \\limits _{i = 1}^{n_{j}} \\sum \\limits _{l = 1}^{k} f(w_{ji} | z_{ji} = l, \\theta _{j}) f(z_{ji} = l | \\theta _{j}),\n$ ",
"where $z_{ji}$ is the topic assignment for the $i$ th word in document $j$ . However, as the choice of word $w_{ji}$ is independent of the topic proportions $\\theta _{j}$ given its topic assignment $z_{ji}$ , we can write $\nL(\\theta _{j}) = \\prod \\limits _{i = 1}^{n_{j}} \\sum \\limits _{l = 1}^{k} f(w_{ji} | z_{ji} = l) f(z_{ji} = l | \\theta _{j}).\n$ ",
"The likelihood is now expressed as the products of the topic proportions and the topics themselves. $\nL(\\theta _{j}) &=& \\prod \\limits _{i = 1}^{n_{j}} \\sum \\limits _{l = 1}^{k} \\phi _{l,w_{ji}} \\theta _{jl}.\n$ ",
"If we express the document as a set of word counts $\\mathbf {N} = \\lbrace N_{1},...,N_{v}\\rbrace $ , where $N_{i}$ is the number of times the $i$ th word of the vocabulary appears in document $j$ , then we can write the log likelihood of $\\theta _{j}$ as $\nl(\\theta _{j}) = \\mathbf {N} \\cdot \\log \\left(\\theta _{j} \\phi \\right).\n$ ",
"In order to deal with words that appear in a new document, and not the original corpus, we assign a probability of 0 to any such word of appearing in any of the $k$ topics; this is equivalent to removing those words from the document.",
"To demonstrate the effectiveness of this method for estimation, we generate documents for which we know the topics and topic proportions. Suppose there exists a corpus comprising of two topics, with a vocabulary of 500 words. Given an assumed LDA model, we generate 500 documents with lengths between 5,000 and 10,000 words.",
"Given our newly generated documents, and known topics $\\phi $ , we are able to test the validity of the MLE process outlined above by finding the estimates $\\hat{\\theta }_{j}$ for each document $j$ and comparing them to known topic proportions $\\theta _{j}$ . Figure 1 shows the results of the MLE method for finding topic proportion estimates for documents with certain true values of $\\theta _{j}$ . From these figures, there is a tight clustering around the true value $\\theta _{j}$ , and thus it is reasonable to assume that the MLE process for estimating the topic proportions of a new document given previously existing topics is sound. This process also holds for greater numbers of topics, as evidenced in Figure 2 , which estimates topic proportions for a three-topic document.",
"Like with the LDA regression model, we require a method for estimating the topic proportion $\\theta _{j}$ of any new documents from which we are predicting a response, that does not involve retraining the entire model. To do so, we rely on techniques used for HMMs; specifically, we use a modified Baum-Welch algorithm.",
"The Baum-Welch algorithm is used as an approximate method to find an HMM $\\Omega = \\lbrace \\Theta , \\phi , \\pi \\rbrace $ , given some observed sequence (in this case, a document). However, the key difference here is that our emission probabilities (or topics) $\\phi $ are common across all documents in our corpus, and thus when introducing any new documents for prediction we assume that we already know them. Given the Baum-Welch algorithm calculates forward and backward probabilities based on an assumed model, and updates estimates iteratively, we may simply take our assumed $\\phi $ found from the initial HMTM as the truth and refrain from updating the emission probabilities.",
"We are generally dealing with very small probabilities in topic modelling - $\\phi $ generally has tens of thousands of columns (the length of the vocabulary) over which probabilities must sum to one. While in theory this does not change how we would approach parameter estimation, computationally these probabilities are frequently recognised as zero. To make the process more numerically stable, we implement the adapted Baum-Welch algorithm demonstrated and justified in BIBREF11 .",
"While we are ultimately interested in finding topic proportions $\\theta _{j}$ for prediction, the Baum-Welch algorithm finds the transition matrix $\\Theta _{j}$ for some document. We are able to deal with this in the same way as finding the original HMTM regression model, by taking $\\theta _{j}$ to be the equilibrium probabilities of $\\Theta _{j}$ ."
],
[
"LDA is an unsupervised process, which does not take into account the response variable we are predicting when inferring topics. Several supervised methods have been developed to incorporate this knowledge, generally for the purpose of finding `better' topics for the corpus in question. Notably, supervised LDA (sLDA) BIBREF4 builds on the LDA model by assuming that some response $y_{j}$ is generated alongside each document $j = 1,2,...,m$ in the corpus, based on the topics prevalent in the document. When inferring the sLDA model, we are therefore inclined to find topics that best suit the response and therefore the prediction problem at hand.",
"Unlike LDA, we treat the topics $\\phi $ as unknown constants rather than random variables. That is, we are interested in maximising $\nP\\left( \\theta , \\mathbf {z} | \\mathbf {D}, \\mathbf {y}, \\phi , \\alpha , \\eta , \\sigma ^{2} \\right),\n$ ",
"where $\\eta $ and $\\sigma ^{2}$ are parameters of the normally distributed response variable, $y_{j} \\sim N(\\eta ^{T} \\bar{z}_{j}, \\sigma ^{2})$ , where $\\bar{z}_{j} = (1/n_{j}) \\sum _{i = 1}^{n_{j}} z_{ji}$ .",
"As with LDA, this probability is computationally intractable, and thus we require an approximation method for model inference. For the purposes of this paper, we use a variational expectation-maximisation (EM) algorithm, as outlined in BIBREF4 .",
"When it comes to choosing the model with the most appropriate number of topics for the regression problem at hand, we use the same method as outlined for the LDA regression model in Section \"Regression model and number of topics\" .",
"The method behind sLDA is specifically developed for prediction. As such, we are able to compute the expected response $y_{j}$ from the document $\\mathbf {w}_{j}$ and the model $\\lbrace \\alpha , \\phi , \\eta , \\sigma ^{2}\\rbrace $ . For a generalised linear model (as we use in this paper), this is approximated by $\nE\\left[ Y_{j} | \\mathbf {w}_{j}, \\alpha , \\phi ,\\eta , \\sigma ^{2} \\right] \\approx E_{q} \\left[\\mu \\left(\\eta ^{T} \\bar{\\mathbf {z}}_{j} \\right)\\right],\n$ ",
"where $\\mu \\left(\\eta ^{T} \\bar{\\mathbf {z}}_{j} \\right) = E\\left[Y_{j} | \\zeta = \\eta ^{T} \\bar{\\mathbf {z}}_{j} \\right]$ and $\\zeta $ is the natural parameter of the distribution from which the response is taken. Again, further detail on this method is found in BIBREF4 ."
],
[
"Topic modelling is designed as a method of dimension reduction, and as such we often deal with large corpora that cannot otherwise be analysed computationally. Given the complexity of human language, we therefore have to choose what information about our corpus is used to develop the topic model. The previous two models, LDA and sLDA, have relied on the `bag of words' assumption in order to maintain computational efficiency. While for some corpora, the loss of all information relating to language and document structure may not have a particularly large effect on the predictive capability of the topic model, this may not hold for all prediction problems.",
"One simple way of introducing structure into the model is through a hidden Markov model (HMM) structure BIBREF8 , BIBREF9 ; in fact, there already exist multiple topic models which do so. We look here at the hidden Markov topic model (HMTM) BIBREF5 , which assumes that the topic assignment of a word in a document is dependent on the topic assignment of the word before it. That is, the topic assignments function as the latent states of the HMM, with words in the document being the observations. The HMTM assumes the generative process outlined in Algorithm \"HMTM regression model\" for documents in a corpus. [h] $l = 1,2,...,k$ generate topics $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$ $j = 1,2,...m$ generate starting probabilities $\\pi _{j} \\sim \\textrm {Dir}(\\alpha )$ $l = 1,2,...,k$ generate the $l$ th row of the transition matrix, $\\Theta _{j}$ , $\\Theta _{jl} \\sim \\textrm {Dir}(\\gamma _{l})$ choose the topic assignment for the first word $z_{j1} \\sim \\textrm {Multi}(\\pi _{j})$ select a word from the vocabulary $w_{j1} \\sim \\textrm {Multi}(\\phi _{z_{j1}})$ $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$0 choose the topic assignment $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$1 based on transition matrix $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$2 select a word from the vocabulary $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$3 create the document $\\phi _{l} \\sim \\textrm {Dir}(\\beta )$4 HMTM generative process.",
"Here, $\\alpha $ , $\\beta $ and $\\gamma = \\left\\lbrace \\gamma _{1},...,\\gamma _{k} \\right\\rbrace $ are Dirichlet priors of the starting probabilities, topics and transition probabilities respectively.",
"When it comes to prediction, we are able to use the transition matrices for each document $\\Theta _{j}$ as predictors, but to keep consistency with the previous models we take the equilibrium distributions of the matrices as the topic proportions $\\theta _{j}$ . That is, we find $\\theta _{j}$ such that $\n\\theta _{j} \\Theta _{j} = \\theta _{j}, \\quad \\textrm {and} \\quad \\theta _{j} \\mathbf {e} = 1.\n$ ",
"This also fits with the concept of topic models as a form of dimension reduction, allowing $k-1$ variables, as opposed to $k(k-1)$ when using the transition matrix $\\Theta _{j}$ . As models are often fit using hundreds of topics BIBREF10 , BIBREF6 , this makes models faster to compute. We choose the number of topics $k$ here with the same method outlined in Section \"Regression model and number of topics\" ."
],
[
"To demonstrate the use of topic models in a regression framework, we apply them to a problem involving online advertisements. Specifically, we have a corpus containing 4,151 advertisements taken from the trading website, Gumtree, pertaining to the sale of cats in Australia, and hand-labelled by an expert. Of these advertisements, 2,187 correspond to relinquished cats and 1,964 to non-relinquished. We train a model to predict `relinquished status' from the text of an advertisement, using a topic regression model. A cat is considered to be relinquished if it is being given up by its owner after a period of time, as opposed to cats that are sold, either by breeders or former owners.",
"In order to improve efficiency and model quality, we first clean our text data. Details on the cleaning steps can be found in Appendix \"Text cleaning\" ."
],
[
"Before investigating regression models that use topic proportions as predictors, it is worth developing a `gold standard' model, i.e., a model whose predictive capability we aim to match with our topic regression models. Because the problem here involves a relatively small corpus (advertisements with a median word count of 35), we are able to compare our topic regression models to a model that uses individual words as its predictors.",
"In a much larger corpus, this kind of prediction would be cumbersome to compute - hence our reliance on topic models and other dimension reduction techniques.",
"Because we are predicting a categorical, binary variable, we use logistic regression. Rather than using all words in the corpus (as this would drastically overfit the model), we use a step-up algorithm based on the Akaike information criterion (AIC) BIBREF12 to choose the most significant words for the model, without overfitting.",
"Instead of applying the step-up process to the entire vocabulary (of exactly 13,000 words), we apply it to the 214 most common words (i.e., words that appear in at least 2.5% of the documents in the corpus). The chosen model uses 97 predictors, with coefficients appearing consistent with what you would expect from the problem: for example, the word kitten is indicative of non-relinquished advertisements, while cat is the opposite, which is expected as younger cats are less likely to be relinquished.",
"To assess the predictive capability of this and other models, we require some method by which we can compare the models. For that purpose, we use receiver operating characteristic (ROC) curves as a visual representation of predictive effectiveness. ROC curves compare the true positive rate (TPR) and false positive rate (FPR) of a model's predictions at different threshold levels. The area under the curve (AUC) (between 0 and 1) is a numerical measure, where the higher the AUC is, the better the model performs.",
"We cross-validate our model by first randomly splitting the corpus into a training set (95% of the corpus) and test set (5% of the corpus). We then fit the model to the training set, and use it to predict the response of the documents in the test set. We repeat this process 100 times. The threshold-averaged ROC curve BIBREF13 is found from these predictions, and shown in Figure 3 . Table 1 shows the AUC for each model considered.",
"As with the Gumtree dataset, we first construct a word count model against which we can measure the performance of our topic regression models. Once again, this can be done because we are working with a small corpus; otherwise, we would generally consider this approach to be computationally too heavy.",
"As we have a categorical, non-binary response variable (storyline) with 10 levels, we use a multinomial logistic regression model. We again use a step-up process with AIC as the measure to determine which words in our vocabulary to use as predictors in our model. As our vocabulary consists of only 1,607 unique words, we consider all of them in our step-up process. After applying this process, the model with three predictors, minister, night and around, is chosen.",
"We are no longer able to easily apply ROC curves as a measure of performance to this problem, as we are dealing with a non-binary response. We instead use a Brier score BIBREF14 , a measure for comparing the predictive performance of models with categorical responses. The Brier score is $\n\\textrm {BS} = \\frac{1}{m} \\sum \\limits _{j=1}^{m} \\sum \\limits _{i=1}^{s} \\left( \\hat{y}_{ji} - o_{ji} \\right)^{2},\n$ ",
"where $\\hat{y}_{ji}$ is the probability of document $j$ belonging to storyline $i$ , and $o_{ji} = 1$ if document $j$ belongs to storyline $i$ , and 0 otherwise, for document $j = 1,2,...,m$ and storyline $i = 1,2,...,s$ . Each term in the sum goes to zero the closer the model gets to perfect prediction, and as such our aim is to minimise the Brier score in choosing a model.",
"For each document in the corpus, we find the probabilities of each outcome by using the remaining 78 documents (or training dataset) as the corpus in a multinomial logistic regression model with the same three predictors as found above. Due to the fact that the training dataset here is smaller than the Gumtree dataset, we perform leave-one-out cross validation on each document in the corpus (rather than using a 95/5 split). We then predict the outcome based on the words found in the left-out document (or test dataset), and repeat for all 79 scenes. However, due to the short length of some scenes, and the fact that unique words must be thrown out, we restrict the testing to 57 of the 79 scenes: the remaining scenes do not generate a numerically stable approximation for $\\theta _{j}$ for the HMTM regression model.",
"The Brier score calculated using this method for the step-up word count model is $0.8255$ ."
],
[
"Using the method outlined in Section \"Regression model and number of topics\" , we choose the LDA regression model with 26 topics as the `best' for this problem. Inspection of the top words included in these 26 topics shows individual topics associated with different breeds (e.g., `persian', `manx') as well as urgency of selling (e.g., `urgent', `asap'), suggesting that the model is not overfit to the data. We generate a threshold-averaged ROC curve using the same cross validation method as earlier, yielding an area under the curve (AUC) of $0.8913$ . The curve can be seen in Figure 3 . While not as high as the AUC for the word count model, the LDA regression model is significantly more efficient, taking only $3\\%$ of the time to calculate.",
"We can compare this result to that of an sLDA regression model. The model chosen for this problem has two topics, giving a threshold-averaged ROC curve under cross validation with an AUC of $0.8588$ . It is surprising that the LDA regression model should outperform sLDA, as sLDA incorporates the response variable when finding the most appropriate topics. However, this can be attributed to the number of topics in the model: the sLDA regression model with 26 topics outperforms the LDA model, with an AUC of $0.9030$ .",
"The word count model still outperforms the sLDA model, however once again the topic regression model is significantly more efficient, taking only $0.6\\%$ of the time to calculate. Further details on the models and their calculation can be found in Appendix \"Topic model inference\" .",
"For the LDA regression model for this problem, we determine the `best' number of topics $k$ to be 16. As with the word count model, we use the Brier score to evaluate the performance of this model compared to others in the chapter. We again use the leave-one-out cross validation approach to predict the probabilities of a scene belonging to each storyline.",
"The Brier score found for the LDA regression model is $1.6351$ . While this is higher and therefore worse than the Brier score for the word count model above, this is not unexpected and we are more interested in seeing how the LDA model fares against other topic models.",
"We compare these results to the HMTM regression model, as outlined in Section \"HMTM regression model\" . We choose the model with 12 topics, according to the CVPE. The Brier score calculated from 57 scenes for the HMTM regression model is $1.5749$ . While still not up to the standard of the word count model at $0.8255$ , this appears to be a slight improvement on the LDA model, meaning that dropping the `bag of words' assumption may in fact improve the predictive performance of the model. However, it should be kept in mind that the LDA model is better at handling short documents. It would be worth applying these models to corpora with longer documents in future, to see how they compare. Further details on the computation of these models can be found in Appendix \"Topic model inference\" .",
"One of the motivating ideas behind having topic dependencies between consecutive words, as in the HMTM model, is that some documents will have a predisposition to stay in the same topic for a long sequence, such as a sentence or a paragraph. This argument particularly applies to narrative-driven corpora such as the Love Actually corpus. To that end, we may adapt the HMTM described above so that the model favours long sequences of the same topic, by adjusting the Dirichlet priors of the transition probabilities, $\\gamma = \\lbrace \\gamma _{1},...,\\gamma _{k} \\rbrace $ , to favour on-diagonal elements. By specifying these priors to be $\n\\gamma _{ls} = {\\left\\lbrace \\begin{array}{ll}\n0.99 + 0.01/k \\quad \\text{if} \\quad l = s\\\\\n0.01/k \\quad \\text{elsewhere},\n\\end{array}\\right.}\n$ ",
"for $l = 1,2,...,k$ , we choose the persistent HMTM regression model with three topics. This results in a Brier score of $0.9124$ , which is a massive improvement on the original HMTM regression model and makes it very competitive with the word count model. Table 2 summarises these results."
],
[
"When evaluating the usefulness of incorporating document structure into a topic model for regression, we require a corpus and problem that we would expect would be heavily influenced by this structure. To understand the predictive capability of the HMTM regression model over that of the more simplistic LDA, we therefore consider predicting the storylines of the 2003 film Love Actually, known for its interwoven yet still quite distinct storylines. We therefore ask if we are able to predict to which storyline a scene belongs, based on the dialogue in that scene.",
"The film consists of 79 scenes, each pertaining to one of 10 storylines. The scenes were hand-classified by storyline, and their dialogue forms the documents of our corpus. We once again clean our data; more detail can be found in Appendix \"Text cleaning\" ."
],
[
"This paper outlines and implements a streamlined, statistical framework for prediction using topic models as a data processing step in a regression model. In doing so, we investigate how various topic model features affect how well the topic regression model makes predictions.",
"While this methodology has been applied to three specific topic models, the use of any particular topic model depends heavily on the kind of corpus and problem at hand. For that reason, it may be worth applying this methodology to incorporate different topic models in future, depending on the needs of the problem at hand.",
"In particular, we investigate here the influence of both supervised methods, and the incorporation of document structure. A logical next step would be to propose a model that incorporates these two qualities, in order to see if this improves predictive capability on corpora with necessary language structure."
],
[
"The following steps were taken to clean the Gumtree corpus:",
"removal of punctuation and numbers,",
"conversion to lower case,",
"removal of stop words (i.e., common words such as the and for that contribute little lexically), and",
"removal of grammatical information from words (i.e., stemming).",
"When stemming words in this paper, we use the stemming algorithm developed by Porter for the Snowball stemmer project BIBREF15 . Similarly, when removing stop words, we use the (English language) list compiled, again, in the Snowball stemmer project.",
"In cleaning the Love Actually corpus, we perform the first three steps outlined here. However, unlike with the Gumtree dataset, we do not stem words, as grammatical information is more pertinent when incorporating language structure."
],
[
"For each topic model, we choose the best number of topics from models generated with between two and 40 topics.",
"For the LDA models found in this paper, we use the LDA function from the R package topicmodels, with the following parameters:",
" $\\tt {burnin} = 1000$ ,",
" $\\tt {iterations} = 1000$ , and",
" $\\tt {keep} = 50$ .",
"The sLDA model in this paper was found using the $\\tt {slda.em}$ function from the R package lda, with the following parameters:",
" $\\tt {alpha} = 1.0$ ,",
" $\\tt {eta} = 0.1$ ,",
" $\\tt {variance} = 0.25$ ,",
" $\\tt {num.e.iterations} = 10$ , and",
" $\\tt {num.m.iterations} = 4$ .",
"We use the Python code from BIBREF5 for the generation of our HMTM."
]
],
"section_name": [
"Introduction",
"Definitions",
"LDA regression model",
"Regression model and number of topics",
"Introducing new documents",
"sLDA regression model",
"HMTM regression model",
"Testing the topic regression models",
"Word count model",
"Topic regression models",
"Incorporating language structure",
"Discussion and further research",
"Text cleaning",
"Topic model inference"
]
} | {
"answers": [
{
"annotation_id": [
"e3388c7385244936e8a2b250922b87485d22c85c"
],
"answer": [
{
"evidence": [
"To assess the predictive capability of this and other models, we require some method by which we can compare the models. For that purpose, we use receiver operating characteristic (ROC) curves as a visual representation of predictive effectiveness. ROC curves compare the true positive rate (TPR) and false positive rate (FPR) of a model's predictions at different threshold levels. The area under the curve (AUC) (between 0 and 1) is a numerical measure, where the higher the AUC is, the better the model performs.",
"We cross-validate our model by first randomly splitting the corpus into a training set (95% of the corpus) and test set (5% of the corpus). We then fit the model to the training set, and use it to predict the response of the documents in the test set. We repeat this process 100 times. The threshold-averaged ROC curve BIBREF13 is found from these predictions, and shown in Figure 3 . Table 1 shows the AUC for each model considered."
],
"extractive_spans": [],
"free_form_answer": "they use ROC curves and cross-validation",
"highlighted_evidence": [
"To assess the predictive capability of this and other models, we require some method by which we can compare the models. For that purpose, we use receiver operating characteristic (ROC) curves as a visual representation of predictive effectiveness. ROC curves compare the true positive rate (TPR) and false positive rate (FPR) of a model's predictions at different threshold levels. The area under the curve (AUC) (between 0 and 1) is a numerical measure, where the higher the AUC is, the better the model performs.",
"We cross-validate our model by first randomly splitting the corpus into a training set (95% of the corpus) and test set (5% of the corpus). We then fit the model to the training set, and use it to predict the response of the documents in the test set. We repeat this process 100 times. The threshold-averaged ROC curve BIBREF13 is found from these predictions, and shown in Figure 3 . Table 1 shows the AUC for each model considered."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"How is performance measured?"
],
"question_id": [
"405964517f372629cda4326d8efadde0206b7751"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Histograms of the maximum likelihood estimates of θ1 for corpora of two topics, given relative true values of 0.2 and 0.4.",
"Figure 2: Histograms of the maximum likelihood estimates of {θ1, θ2} for corpora of three topics, given relative true values of {0.1, 0.1} and {0.2, 0.3}.",
"Figure 3: Threshold-averaged ROC curves of the word count model, LDA regression model, and sLDA regression models with two and 26 topics respectively.",
"Table 1: TArea under the curve (AUC) for the models used on the Gumtree dataset, with their 95% confidence intervals.",
"Table 2: Table of the percentage of hard classifications of storylines for each left-out scene in the corpus that are correct, alongside the Brier score, for each model."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"7-Figure3-1.png",
"7-Table1-1.png",
"9-Table2-1.png"
]
} | [
"How is performance measured?"
] | [
[
"1904.06941-Word count model-5",
"1904.06941-Word count model-4"
]
] | [
"they use ROC curves and cross-validation"
] | 850 |
1909.05016 | Proposal Towards a Personalized Knowledge-powered Self-play Based Ensemble Dialog System | This is the application document for the 2019 Amazon Alexa competition. We give an overall vision of our conversational experience, as well as a sample conversation that we would like our dialog system to achieve by the end of the competition. We believe personalization, knowledge, and self-play are important components towards better chatbots. These are further highlighted by our detailed system architecture proposal and novelty section. Finally, we describe how we would ensure an engaging experience, how this research would impact the field, and related work. | {
"paragraphs": [
[
"Prompt: What is your team’s vision for your Socialbot? How do you want your customers to feel at the end of an interaction with your socialbot? How would your team measure success in competition?",
"Our vision is made up of the following main points:",
"1. A natural, engaging, and knowledge-powered conversational experience.",
"Made possible by a socialbot that can handle all kinds of topics and topic switching more naturally than current Alexa bots. Our goal is not necessarily for the user to feel like they are talking to a human.",
"2. More natural topic handling and topic switching.",
"Incorporating knowledge into neural models BIBREF0 and using the Amazon topical chat dataset can help improve current socialbots in this aspect.",
"3. Building a deeper, more personalized connection with the user.",
"We believe that offering a personalized experience is equally as important as being able to talk about a wide range of topics BIBREF1.",
"4. Consistency.",
"Consistency is another important aspect of conversations which we want to take into account through our user models.",
"5. Diversity and interestingness.",
"The socialbot should give diverse and interesting responses, and the user should never feel like it is merely repeating what it has said earlier.",
"At the end of an interaction customers should feel like they just had a fun conversation, maybe learned something new, and are thrilled to talk to the bot again. Throughout the dialog, customers should feel like the socialbot is interested in them and their topics, and can offer valuable insight and opinions. It is also important for it to suggest relevant topics in an engaging way. Users should never feel like the bot is not interested or can’t continue a conversation. This is a reason behind classifying and calculating our metrics for each user input, to get an idea of user engagement in the current conversation.",
"Our main measures for success are:",
"- User feedback.",
"- Comparison to other dialog systems in A/B tests.",
"- Automatic metrics.",
"We would measure success partly by looking at the user feedback. We expect our socialbot’s ratings to constantly increase, and verbal feedback to get more positive throughout the competition. We plan to classify verbal feedback with a simple sentiment classifier to quantitatively see the rate of improvement. Working back from the customer and constantly improving the conversational experience based on feedback is important to us. Success would also be measured by comparing our system to previous socialbots or other dialog systems in A/B tests with crowdsourced evaluators.",
"Our goal is to have long and high-quality conversations, but the longevity shouldn’t come from awkwardly long, specific, and forced replies, as is the case with some of the current socialbots. While generally, a longer conversation is better, it is not the only metric that we wish to consider. Besides user ratings we also have a plethora of automatic metrics that we want to improve on, like metrics measuring topic depth and breadth BIBREF2, entropy metrics measuring diversity, or embedding metrics measuring coherence BIBREF3. Different metrics measure different aspects of responses, thus it is important to not solely look at metrics individually."
],
[
"Prompt: Please share a sample interaction/conversation you expect your Socialbot to achieve by the end of the Competition.",
"User: Alexa, let’s chat",
"Alexa: Welcome to the Alexa Prize.",
"SYS: Hi <username>, what do you want to talk about?",
"User: I don’t know, you decide.",
"SYS: What about books. I remember you liking books, but what was the last book you’ve read?",
"User: Umm, I think it was American Psycho.",
"SYS: One of my favorites, did you know there is a movie adaptation?",
"User: No.",
"SYS: It’s pretty good. Are you reading something right now, because if not I can give some recommendations.",
"User: Shoot me.",
"SYS: I heard that Silence of the Lambs is really good.",
"User: Yeah I watched the movie, but never read the book, maybe I will pick it up.",
"SYS: Glad I could be of help.",
"User: What is your favorite movie?",
"SYS: I really liked Blade Runner, what about you?",
"User: I hated it.",
"SYS: Ah that’s too bad, but I also like your favorite movie which if I remember correctly is Avengers.",
"User: Wow I’m surprised you remember that, but I guess you should have perfect memory.",
"SYS: My memory is not perfect unfortunately, but I try my best.",
"User: What is your earliest memory?",
"SYS: I was very little, and I was playing in the sand, that’s all I remember. You?",
"User: Uhh, I don’t know, maybe when I was four and fell off the swing.",
"SYS: That must have been painful, did you have the courage to get on a swing again.",
"User: Sure, it wasn’t that big of a deal.",
"SYS: Glad to hear that. Since we are already talking about swings do you like the swing genre?",
"User: Yes, especially electro swing.",
"SYS: Me too! I love Parov Stelar!",
"User: Haha, I think he’s a bit overrated but yeah, I like watching Youtube compilations of people dancing to it.",
"SYS: I wish I could dance that well.",
"User: Yeah, me too.",
"SYS: What other Youtube compilations do you watch?",
"User: Fails, and cats",
"SYS: Cats are too cute, do you own one?",
"User: No, maybe one day.",
"SYS: Well I hope you will have a cat one day.",
"User: Can you get me one?",
"SYS: Unfortunately I can’t, but I can say purr purr."
],
[
"Prompt: Describe, in detail, your scientific approach along with the related system architecture. Plan for operating at scale when you define the architecture.",
"Our system architecture (Figure FIGREF1) is comprised of 3 main components: First the user input is processed through the myblueNLP component, then this data is sent to myyellowResponse Candidates, which contains a suite of neural and rule-based models, and finally the mygreenDialog Manager chooses the best response taking into account dialog history and current myblueNLP data. These components communicate with each other through the mygrayDialog State Manager, and a myredKnowledge Base component that can be queried by our components stores our knowledge bases. We build on top of CoBot BIBREF2, thus the system is scalable and new components can be added easily. We leverage former Alexa competitors' architectures BIBREF4, BIBREF5. We minimize latency, by running tasks in parallel whenever possible, in order to make the conversation feel natural. Some redundancy is also included (e.g. in the form of multiple response generators), and we define a fixed time window for each major step in our pipeline, after which we interrupt the current component and use the information already computed from the sub-components in the next step, reducing total processing time. We will develop our system in three phases (Figure FIGREF1): Components marked core, core+, and core++ are to be completed by the end of phase 5, 7, and 9, respectively. These are the minimally planned components for each category, but if time permits we will advance faster. This provides us an incremental and iterative approach to build our architecture starting with the most important components, always testing included components before advancing to new ones. Our main novelties include:",
"[topsep=2pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]",
"Using self-play strategies to train a neural response ranker.",
"Computing a large number of metrics for both input and response, and specifically optimizing some models for our metrics.",
"Training a separate dialog model for each user.",
"Using a response classification predictor and a response classifier to predict and control aspects of responses.",
"Predicting which model emits the best response before response generation.",
"Using our entropy-based filtering approach to filter dialog datasets BIBREF3.",
"Using big, pre-trained, hierarchical BERT and GPT models BIBREF6, BIBREF7, BIBREF8.",
"Next, we describe each component in detail in order of the data flow."
],
[
"This is included in CoBot and we extend it to manage our current dialog state (i.e. conversations and related data described below), saving it to DynamoDB BIBREF9 when appropriate. DynamoDB stores all past dialog states for every user. The grayDialog State Manager communicates with the myblueNLP and mygreenDialog Manager components which can update the dialog state. It works in parallel to all components, thus it doesn't affect latency."
],
[
"mygrayASR data is sent to the first component in the pipeline (myblueNLP), starting with the myblueASR Postprocessor. If the confidence score of the transcribed utterance is below a certain threshold the pipeline is interrupted and we return a reply asking the user to repeat or rephrase their sentence. Otherwise if the confidence is above this but still lower than average we look at the n-best mygrayASR hypotheses and try to correct the utterance based on context (planned to be part of core++.). The corrected utterance is passed to all the subcomponents running in parallel. Token-timing is also saved to the dialog state and used as additional input to dialog models, as it might help disentangle separate phrases. We leverage and extend some of CoBot's built-in NLP components (myblueNER, myblueTopic, myblueSentiment, and myblueOffensive Speech classifiers) and also add our own. Named entities are extracted and we use myredNeptune BIBREF10 and the myredGoogle Knowledge Graph BIBREF11 to get related entities and pieces of information about them. myblueTopic, myblueDialog Act, myblueSentiment and myblueOffensive Speech classifiers take into account previous dialog states (context) from DynamoDB. We save all information in DynamoDB and build statistics about the user (e.g. what are her/his favorite topics). We compute all our automatic evaluation metrics BIBREF3 for the user utterance which is useful for the response selection strategy (e.g. if we find the user is bored we would try to suggest a new topic based on saved user information). After all subcomponents are finished or the time window is exceeded, all data is sent to the mygrayDialog State Manager. We also plan to experiment with inserting a response classification prediction (mygreenRCP) step, which predicts the topic, dialog act and sentiment of the response, using context, and current myblueNLP data. The predicted information about the response is added to the dialog state and the dialog models in myyellowResponse Candidates can leverage it. We also plan to experiment with using this information to control desired aspects of the response3 BIBREF12."
],
[
"Once the myblueNLP and mygreenRCP are done, the mygrayDialog State Manager sends the current dialog state to our dialog models running in parallel. Most models will also use conversation history and user information from DynamoDB. Ensemble modeling, a prevalent technique in nearly all Alexa socialbots BIBREF5, BIBREF2, improves the response quality since we can have different models dealing with different domains and situations. Rule-based models include myyellowEvi (built into CoBot), and publicly available AIML parts of myyellowAlice BIBREF13 and myyellowMitsuku,3. The base of all neural models is a big, hierarchical BERT or GPT-based model, pre-trained on non-dialog data BIBREF8, BIBREF14. The hierarchical part ensures that our models are grounded in past utterances and that they respond differently to the same input utterance (since the past is different). We also plan to experiment with inserting BERT layers in variational models3 BIBREF15, BIBREF16, which can provide more interesting and non-deterministic responses. We further train our pre-trained models on all available dialog datasets jointly. Finally, we finetune myyellowTopic Models on datasets related to specific topics (e.g. subreddits), while myyellowMetric Models are finetuned jointly on all dialog datasets, but we replace the loss function with a specific metric (e.g. coherence, diversity, etc.). myyellowMetric Models can focus on specific dialog properties and ensure that generated responses are diverse and engaging. We train models with extra annotations (e.g. topic, sentiment in DailyDialog BIBREF17, or using knowledge pieces BIBREF0 through the new Amazon topical chat dataset). There are several issues with the cross-entropy loss function BIBREF18, BIBREF3, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, and we proposed to use all kinds of features BIBREF18, motivating the usage of annotations computed with myblueNLP, which helps amend the loss function problem and provides more interesting and diverse responses BIBREF24, BIBREF17, BIBREF25, BIBREF26, BIBREF27, BIBREF0, BIBREF28, BIBREF29. We use two variants of each myyellowTopic and myyellowMetric Model, a neural generative and a retrieval based, which simply returns the n-best responses from training data. The myyellowUser Model4 is a user-specific dialog model finetuned on user-Alexa conversations. It will be at least one order of magnitude smaller than other models since we have to train and store the weights (in DynamoDB) of one model for each user. Through this model, we can encode information about the user, and the model can stay more consistent (if trained with its own responses as targets). Personalizing our system is important and we feel that it will make our chatbot more pleasant to talk to BIBREF1. The myyellowWikiSearch Model simply searches myredWikidata BIBREF30 and returns relevant sentences which we can consider as responses. A similar model is employed for the Washington Post live API as well to stay up-to-date with events and news. We also plan to experiment with an ensemble model setup, where all the response candidates are combined into one response word-by-word, which can be considered as an additional response candidate. Through the myyellowUser Model and the knowledge-augmented myyellowTopic Models our goal is to achieve an engaging and interesting conversation in which topic handling and topic switching occur more naturally than in current Alexa socialbots. 
In the initial stages of the competition, we plan to experiment with as many models as possible and use crowdsourcing to exclude from our system models that generate low-quality responses."
],
[
"Once all dialog models have computed a response or timed out, we send response candidates to the mygreenDialog Manager. The mygreenModel Predictor4 runs in parallel with the dialog models, trying to predict which model will generate the best response based on the dialog state and context. If we find that such a model can predict the selected model (by the mygreenResponse Ranker) accurately, then we can largely decrease computational costs by reducing the number of models required to produce a response. We plan to experiment with several response selection strategies (mygreenResponse Ranker) and evaluate them with crowdsourced evaluators in A/B tests. In the initial phases (core part) of the competition, we plan to employ safe baseline strategies like selecting responses only from retrieval and rule-based models, using the CoBot selection strategy, and ranking responses using a weighted sum across all metrics. Our end goal is to be able to learn a neural ranker, which takes as input the dialog state, context, and response candidates (and their probability scores in the case of neural models), and outputs the best response4. One approach is to use crowdsourcing to gather training data for the ranker, by letting people choose the best response among the candidates. We also plan to use user feedback with reinforcement learning BIBREF5. In the final version of the ranker we plan to experiment with self-play3 BIBREF31, BIBREF32, BIBREF12, described in detail in the novel approaches document of the application. Essentially, both at train and test times we can do rollouts with the ranker, where the dialog system feeds its response into itself, to filter responses that lead to poor conversations. This is a computationally taxing technique, which will be tuned to the desired latency. To increase selection confidence we will use the agreement between the mygreenModel Prediction and selected response, and between the mygreenResponse Classifier and the mygreenRCP. The mygreenResponse Classifier uses the myblueNLP component to compute the same data as the mygreenRCP, and is useful in helping the mygreenResponse Ranker rank responses, based on whether they are offensive, on topic, positive, engaging, etc., ensuring a fun and interesting conversation. Thus the mygreenResponse Ranker leverages all components in the mygreenDialog Manager before emitting the final response, which is sent to the mygrayTTS of Alexa. The mygreenRCP and the mygreenModel Predictor are both trained so they approximate their post-myyellowResponse Candidates counterparts (mygreenResponse Classifier and mygreenResponse Ranker). This training signal can be used in the loss function of the neural dialog models as well."
],
[
"Prompt: What is novel about the team’s approach? (This may be completely new approach or novel combination of known techniques).",
"Our novelties include:",
"Using self-play learning for the neural response ranker (described in detail below).",
"Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup.",
"Training a separate dialog model for each user, personalizing our socialbot and making it more consistent.",
"Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc.",
"Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses.",
"Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3.",
"Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8.",
"Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged.",
"Self-play BIBREF31, BIBREF32 offers a solution to the scarcity of dialog datasets and to the issues encountered when using cross-entropy loss as an objective function BIBREF18, BIBREF3. In our setup the dialog system would converse with itself, selecting the best response with the neural ranker in each turn. After a few turns, we reward the ranker based on the generated conversation. Our reward ideas include a weighted sum of metrics and using crowdsourcing and user ratings. Furthermore, we wish to explore two exciting self-play setups: 1. An adversarial setup where the ranker is trained to generate a dialog by self-play to fool a neural discriminator deciding whether it’s machine or human generated. 2. We apply the ideas of curiosity and random network distillation to train the neural ranker BIBREF32. We also plan to experiment with self-play ideas for some of the individual neural dialog models."
],
[
"Prompt: Please provide a summary of the technical work/research (relevant to your proposed architecture), yours or others’, that you will leverage and how.",
"We employ topic, dialog act, and sentiment classifiers, which widely used in the literature BIBREF4. We leverage rule-based bots in our system because they can provide a different class of responses than neural models. We use recent NLP models BIBREF6, BIBREF7, by finetuning them on our dialog datasets, and modify them to be more suited to deal with dialog modeling, e.g. making them hierarchical or integrating them in other state-of-the-art dialog models BIBREF15, BIBREF16. We leverage baseline response rankers, and adapt ideas from the domains of reinforcement and self-play learning to dialog modeling BIBREF5, BIBREF12, BIBREF32.",
"We and others have found the cross-entropy loss function problematic and the primary reason for the generation of short and boring responses BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF33, BIBREF23. To amend this, we use our idea of filtering dialog datasets based on entropy, obtaining higher quality training data BIBREF3. We address the loss function problem using various features and metrics (from the NLP component) and knowledge pieces (using the new topical chat dataset and Wikidata), which can help neural models in generating more natural and diverse responses BIBREF24, BIBREF17, BIBREF26, BIBREF27, BIBREF0, BIBREF29.",
"We build on top of, modify, and extend CoBot and former competitors’ architectures, as they provide a solid foundation for our dialog system BIBREF4, BIBREF5. ASR postprocessing, and a neural ranker choosing between response candidates are some techniques that we include in our architecture."
],
[
"Prompt: How will you ensure you create an experience users find engaging?",
"We have several mechanisms to ensure an engaging experience:",
"We classify user utterances by topic, sentiment, etc., and calculate our automatic metrics, using this information when selecting and generating responses. If we find the user lost interest in the conversation, we might suggest a new topic related to their interests (through our topic and user models).",
"We also classify the response candidates, so that we can make sure that they are engaging and relevant. With the help of knowledge-augmented models BIBREF0 we offer the user an interesting and informative conversational experience in a natural way, which all contributes to engagingness.",
"Personalization through our user models is an important factor to engagingness. If the user feels like the socialbot is able to remember and include past information about them in its responses, then this directly contributes to building a deeper connection with the user and maintain their interest.",
"Defining a maximum latency for our pipeline is a small but important feature to ensure users stay engaged.",
"We plan to heavily leverage user feedback (by classifying verbal feedback and using ratings to refine our response ranker) in order to improve our system."
],
[
"Prompt: How do you think your work will impact the field of Conversational AI?",
"We aim to move forward the field of conversational AI and neural dialog modeling through three main novelties: self-play learning, tackling the loss function problem, and personalization.",
"We believe that instead of using rule-based components and rule-based dialog managers, our refined, self-play based, neural ensemble system is much more capable of scaling and will be a great step forward in the field towards achieving a better conversational experience. Our work will popularize the idea of self-play and will eliminate some of the problems with current neural dialog models. We believe that applying self-play on the response ranker level is an under-researched idea, with which we could potentially train much better dialog agents than current ones.",
"We believe that our combined approaches will mend the problems with learning through the cross-entropy loss function, and will create more diverse and interesting dialog models BIBREF18, BIBREF3. These include feeding classification and metric annotations to our neural models, optimizing models specifically for our metrics, and using self-play.",
"From a user perspective, personalization in dialog agents is one of the most important aspects that current socialbots are lacking, which we want to make a great impact on through our user models."
]
],
"section_name": [
"Vision",
"Sample Conversation",
"Architecture",
"Architecture ::: Dialog State Manager.",
"Architecture ::: NLP.",
"Architecture ::: Response Candidates.",
"Architecture ::: Dialog Manager.",
"Novelty",
"Related Work",
"Ensuring an engaging experience",
"Impact"
]
} | {
"answers": [
{
"annotation_id": [
"f7c3b54d1ea76b8b0fbd3eb625f123943a084dd8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e99d069d6d2101d702288889f7e541489524922f"
],
"answer": [
{
"evidence": [
"Our novelties include:",
"Using self-play learning for the neural response ranker (described in detail below).",
"Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup.",
"Training a separate dialog model for each user, personalizing our socialbot and making it more consistent.",
"Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc.",
"Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses.",
"Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3.",
"Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8.",
"Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged."
],
"extractive_spans": [],
"free_form_answer": "They use self-play learning , optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data.",
"highlighted_evidence": [
"Our novelties include:\n\nUsing self-play learning for the neural response ranker (described in detail below).\n\nOptimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup.\n\nTraining a separate dialog model for each user, personalizing our socialbot and making it more consistent.\n\nUsing a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc.\n\nUsing a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses.\n\nUsing our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3.\n\nBuilding big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8.\n\nConstantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big are datasets for 2019 Amazon Alexa competition?",
"What is novel in author's approach?"
],
"question_id": [
"d76ecdc0743893a895bc9dc3772af47d325e6d07",
"2a6469f8f6bf16577b590732d30266fd2486a72e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: System architecture. Components with gray background are provided by Amazon. Components marked core, core+, and core++ are to be completed by the end of phase 5, 7, and 9, denoted by a solid, long dotted, and short dotted outline, respectively."
],
"file": [
"4-Figure1-1.png"
]
} | [
"What is novel in author's approach?"
] | [
[
"1909.05016-Novelty-9",
"1909.05016-Novelty-2",
"1909.05016-Novelty-4",
"1909.05016-Novelty-6",
"1909.05016-Novelty-8",
"1909.05016-Novelty-3",
"1909.05016-Novelty-1",
"1909.05016-Novelty-7",
"1909.05016-Novelty-5"
]
] | [
"They use self-play learning , optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data."
] | 855 |
1605.07683 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service. | {
"paragraphs": [
[],
[],
[
"All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation, the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users."
],
[],
[
"The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data."
]
],
"section_name": [
"Introduction",
"Related Work",
"Goal-Oriented Dialog Tasks",
"Models",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"ead914a2101e867d4b987ffcc4e3727d89610b48"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (∗) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words."
],
"extractive_spans": [],
"free_form_answer": "1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (∗) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How large is the Dialog State Tracking Dataset?"
],
"question_id": [
"a02696d4ab728ddd591f84a352df9375faf7d1b4"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"dialog"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Goal-oriented dialog tasks. A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Task 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra-information. Task 5 combines everything.",
"Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (∗) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.",
"Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis. (∗) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7) . (†) We did not implement MemNNs+match type on Concierge, because this method requires a KB and there is none associated with it.",
"Table 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn – the model has to carry out the conversation with no additional input.",
"Table 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the model correctly focuses on the 2 important pieces: the original API call and the utterance giving the update.",
"Table 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong: it should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck, the task is not solved overall (see Tab. 2). We do not show all memories in the table, only those with meaningful attention.",
"Table 6: Task 4 (Providing extra-information) The model knows it must display a phone or an address, but, as explained in Section A the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard. As shown in the results of Tab. 2, this problem can be solved by adding match type features, that allow to emphasize entities actually appearing in the history. The attention is globally wrong here.",
"Table 7: Concierge Data The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English (\"rservation\", \"I’ll check into it\")",
"Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.",
"Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.",
"Table 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"8-Table2-1.png",
"12-Table3-1.png",
"12-Table4-1.png",
"13-Table5-1.png",
"13-Table6-1.png",
"14-Table7-1.png",
"14-Table8-1.png",
"14-Table9-1.png",
"15-Table10-1.png"
]
} | [
"How large is the Dialog State Tracking Dataset?"
] | [
[
"1605.07683-3-Table1-1.png"
]
] | [
"1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs"
] | 856 |
1912.00955 | Dynamic Prosody Generation for Speech Synthesis using Linguistics-Driven Acoustic Embedding Selection | Recent advances in Text-to-Speech (TTS) have improved quality and naturalness to near-human capabilities when considering isolated sentences. But something which is still lacking in order to achieve human-like communication is the dynamic variations and adaptability of human speech. This work attempts to solve the problem of achieving a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style. We propose a novel embedding selection approach which exploits linguistic information, leveraging the speech variability present in the training dataset. We analyze the contribution of both semantic and syntactic features. Our results show that the approach improves the prosody and naturalness for complex utterances as well as in Long Form Reading (LFR). | {
"paragraphs": [
[
"Corresponding author email: [email protected]. Paper submitted to IEEE ICASSP 2020",
"Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans.",
"One aspect that most systems are still lacking is the natural variability of human speech, which is being observed as one of the reasons why the cognitive load of synthetic speech is higher than that of humans BIBREF4. This is something that variational models such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5 attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time.",
"Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research.",
"In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech.",
"Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as an additional model features.",
"An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information. The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly.",
"The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well.",
""
],
[
"CWE seem the obvious choice to drive embedding selection as they contain both syntactic and semantic information. However, a possible drawback of relying on CWE is that the linguistic-acoustic mapping space is sparse. The generalization capability of such systems in unseen scenarios will be poor BIBREF13. Also, as CWE models lexical semantics, it implies that two semantically similar sentences are likely to have similar CWE representations. This however does not necessarily correspond to a similarity in prosody, as the structure of the two sentences can be very different.",
"We hypothesize that, in some scenarios, syntax will have better capability to generalize than semantics and that CWE have not been optimally exploited for driving prosody in speech synthesis. We explore these two hypotheses in our experiments. The objective of this work is to exploit sentence-level prosody variations available in the training dataset while synthesizing speech for the test sentence. The steps executed in this proposed approach are: (i) Generate suitable vector representations containing linguistic information for all the sentences in the train and test sets, (ii) Measure the similarity of the test sentence with each of the sentences in the train set. We do so by using cosine similarity between the vector representations as done in BIBREF14 to evaluate linguistic similarity, (iii) Choose the acoustic embedding of the train sentence which gives the highest similarity with the test sentence, (iv) Synthesize speech from VAE-based inference using this acoustic embedding",
""
],
[
"We experiment with three different systems for generating vector representations of the sentences, which allow us to explore the impact of both syntax and semantics on the overall quality of speech synthesis. The representations from the first system use syntactic information only, the second relies solely on CWE while the third uses a combination of CWE and explicit syntactic information.",
""
],
[
"",
"Syntactic representations for sentences like constituency parse trees need to be transformed into vectors in order to be usable in neural TTS models. Some dimensions describing the tree can be transformed into word-based categorical feature like identity of parent and position of word in a phrase BIBREF15.",
"The syntactic distance between adjacent words is known to be a prosodically relevant numerical source of information which is easily extracted from the constituency tree BIBREF16. It is explained by the fact that if many nodes must be traversed to find the first common ancestor, the syntactic distance between words is high. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets.",
"To compute syntactic distance vector representations for sentences, we use the algorithm mentioned in BIBREF17. That is, for a sentence of n tokens, there are n corresponding distances which are concatenated together to give a vector of length n. The distance between the start of sentence and first token is always 0.",
"We can see an example in Fig. 1: for the sentence “The brown fox is quick and it is jumping over the lazy dog\", whose distance vector is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after `fox') triggers a prosodic reset, reflected in the distance of 3 between `fox' and `is'. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between `quick' and `and'.",
""
],
[
"To generate CWE we use BERT BIBREF18, as it is one of the best performing pre-trained models with state of the art results on a large number of NLP tasks. BERT has also shown to generate strong representations for both syntax and semantics. We use the word representations from the uncased base (12 layer) model without fine-tuning. The sentence level representations are achieved by averaging the second to last hidden layer for each token in the sentence. These embeddings are used to drive acoustic embedding selection.",
""
],
[
"Even though BERT embeddings capture some aspects of syntactic information along with semantics, we decided to experiment with a system combining the information captured by both of the above mentioned systems. The information from syntactic distances and BERT embeddings cannot be combined at token level to give a single vector representation since both these systems use different tokenization algorithms. Tokenization in BERT is based on the wordpiece algorithm BIBREF19 as a way to eliminate the out-of-vocabulary issues. On the other hand, tokenization used to generate parse trees is based on morphological considerations rooted in linguistic theory. At inference time, we average the similarity scores obtained by comparing the BERT embeddings and the syntactic distance vectors.",
""
],
[
"The approaches described in Section SECREF1 produce utterances with more varied prosody as compared to the long-term monotonicity of those obtained via centroid-based VAE inference. However, when considering multi-sentence texts, we have to be mindful of the issues that can be introduced by erratic transitions. We tackle this issue by minimizing the acoustic variation a sentence can have with respect to the previous one, while still minimizing the linguistic distance. We consider the Euclidean distance between the 2D Principal Component Analysis (PCA) projected acoustic embeddings as a measure of acoustic variation, as we observe that the projected space provides us with an acoustically relevant space in which distances can be easily obtained. Doing the same in the 64-dimensional VAE space did not perform as intended, likely because of the non-linear manifold representing our system, in which distances are not linear. As a result, certain sentence may be linguistically the closest match in terms of syntactic distance or CWE, but it will still not be selected if its acoustic embedding is far apart from that of the previous sentence.",
"We modify the similarity evaluation metric used for choosing the closest match from the train set by adding a weighted cost to account for acoustic variation. This approach focuses only on the sentence transitions within a paragraph rather than optimizing the entire acoustic embedding path. This is done as follows: (i) Define the weights for linguistic similarity and acoustic similarity. In this work, the two weights sum up to 1; (ii) The objective is to minimize the following loss considering the acoustic embedding chosen for the previous sentence in the paragraph:",
"",
"Loss = LSW * (1-LS) + (1-LSW) * D,",
"where LSW = Linguistic Similarity Weight; LS = Linguistic Similarity between test and train sentence; D = Euclidean distance between the acoustic embedding of the train sentence and the acoustic embedding chosen for the previous sentence.",
"We fix D=0 for the first sentence of every paragraph. Thus, this approach is more suitable for cases when the first sentence is generally the carrier sentence, i.e. one which uses a structural template. This is particularly the case for news stories such as the ones considered in this research.",
"Distances observed between the chosen acoustic embeddings for a sample paragraph and the effect of varying weights are depicted in the matrices in Fig FIGREF7. They are symmetric matrices, where each row and column of the matrix represents the sentence at index i in a paragraph. Each cell represents the Euclidean distance between the acoustic embeddings chosen for sentences at index i,j. We can see that in (a) the sentence at index 4 stands out as the most acoustically dissimilar sentence from the rest of the sentences in the paragraph. We see that the overall acoustic distance between sentences in much higher in (a) than in (b). As we are particularly concerned with transitions from previous to current sentence, we focus on cells (i,i-1) for each row. In (a), sentences at index 4 and 5 particularly stand out as potential erratic transitions due to high values in cell (4,3) and (5,4). In (b) we observe that the distances have significantly reduced and thus sentence transitions are expected to be smooth.",
"As LSW decreases, the transitions become smoother. This is not `free': there is a trade-off, as increasing the transition smoothness decreases the linguistic similarity which also reduces the prosodic divergence. Fig. FIGREF10 shows the trade-off between the two, across the test set, when using syntactic distance to evaluate LS. Low linguistic distance (i.e. 1 - LS) and low acoustic distance are required.",
"The plot shows that there is a sharp decrease in acoustic distance between LSW of 1.0 and 0.9 but the reduction becomes slower from therein, while the changes in linguistic distance progress in a linear fashion. We informally evaluated the performance of the systems by reducing LSW from 1.0 till 0.7 with a step size of 0.05 in order to look for an optimal balance. At LSW=0.9, the first elbow on acoustic distance curve, there was a significant decrease in the perceived erraticness. As such, we chose those values for our LFR evaluations.",
""
],
[
"The research questions we attempt to answer are:",
"Can linguistics-driven selection of acoustic waveform from the existing dataset lead to improved prosody and naturalness when synthesizing speech ?",
"How does syntactic selection compare with CWE selection?",
"Does this approach improve LFR experience as well?",
"To answer these questions, we used in our experiments the systems, data and subjective evaluations described below.",
""
],
[
"The evaluated TTS system is a Tacotron-like system BIBREF20 already verified for the newscaster domain. A schematic description can be seen in Fig. FIGREF15 and a detailed explanation of the baseline system and the training data can be read in BIBREF21, BIBREF22. Conversion of the produced spectrograms to waveforms is done using the Universal WaveRNN-like model presented in BIBREF2.",
"For this study, we consider an improved system that replaced the one-hot vector style modeling approach by a VAE-based reference encoder similar to BIBREF5, BIBREF3, in which the VAE embedding represents an acoustic encoding of a speech signal, allowing us to drive the prosodic representation of the synthesized text as observed in BIBREF23. The way of selecting the embedding at inference time is defined by the approaches introduced in Sections SECREF1 and SECREF6. The dimension of the embedding is set to 64 as it allows for the best convergence without collapsing the KLD loss during training.",
""
],
[
""
],
[
"(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances).",
"(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.",
""
],
[
"The systems were evaluated on two datasets:",
"(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or\" questions (9%), “wh\" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.",
"(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case.",
""
],
[
"Our tests are based on MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) BIBREF26, but without forcing a system to be rated as 100, and not always considering a top anchor. All of our listeners, regardless of linguistic knowledge were native US English speakers. For the CPE dataset, we carried out two tests. The first one with 10 linguistic experts as listeners, who were asked to rate the appropriateness of the prosody ignoring the speaking style on a scale from 0 (very inappropriate) to 100 (very appropriate). The second test was carried out on 10 crowd-sourced listeners who evaluated the naturalness of the speech from 0 to 100. In both tests each listener was asked to rate 28 different screens, with 4 randomly ordered samples per screen for a total of 112 samples. The 4 systems were the 3 proposed ones and the centroid-based VAE inference as the baseline.",
"For the LFR dataset, we conducted only a crowd-sourced evaluation of naturalness, where the listeners were asked to assess the suitability of newscaster style on a scale from 0 (completely unsuitable) to 100 (completely adequate). Each listener was presented with 51 news stories, each playing one of the 5 systems including the original recordings as a top anchor, the centroid-based VAE as baseline and the 3 proposed linguistics-driven embedding selection systems.",
""
],
[
"Table 1 reports the average MUSHRA scores, evaluating prosody and naturalness, for each of the test systems on the CPE dataset. These results answer Q1, as the proposed approach improves significantly over the baseline on both grounds. It thus, gives us evidence supporting our hypothesis that linguistics-driven acoustic embedding selection can significantly improve speech quality. We also observe that better prosody does not directly translate into improved naturalness and that there is a need to improve acoustic modeling in order to better reflect the prosodic improvements achieved.",
"We validate the differences between MUSHRA scores using pairwise t-test. All proposed systems improved significantly over the baseline prosody (p$<$0.01). For naturalness, BERT syntactic performed the best, improving over the baseline significantly (p=0.04). Other systems did not give statistically significant improvement over the baseline (p$>$0.05). The difference between BERT and BERT Syntactic is also statistically insignificant.",
"Q2 is explored in Table TABREF21, which gives the breakdown of prosody results by major categories in CPE. For `wh' questions, we observe that Syntactic alone brings an improvement of 4% and BERT Syntactic performs the best by improving 8% over the baseline. This suggests that `wh' questions generally share a closely related syntax structure and that information can be used to achieve better prosody. This intuition is further strengthened by the improvements observed for `or' questions. Syntactic alone improves by 9% over the baseline and BERT Syntactic performs the best by improving 21% over the baseline. The improvement observed in `or' questions is greater than `wh' questions as most `or' questions have a syntax structure unique to them and this is consistent across samples in the category. For both these categories, the systems Syntactic, BERT and BERT Syntactic show incremental improvement as the first system contains only syntactic information, the next captures some aspect of syntax with semantics and the third has enhanced the representation of syntax with CWE representation to drive selection. Thus, it is evident that the extent of syntactic information captured drives the quality in speech synthesis for these two categories.",
"Compound nouns proved harder to improve upon as compared to questions. BERT performed the best in this category with a 1.2% improvement over the baseline. We can attribute this to the capability of BERT to capture context which Syntactic does not do. This plays a critical role in compound nouns, where to achieve suitable prosody it is imperative to understand in which context the nouns are being used. For other complex sentences as well, BERT performed the best by improving over the baseline by 6%. This can again be attributed to the fact that most of the complex sentences required contextual knowledge. Although Syntactic does improve over the baseline, syntax does not look like the driving factor as BERT Syntactic performs a bit worse than BERT. This indicates that enhancing syntax representation hinders BERT from fully leveraging the contextual knowledge it captured to drive embedding selection.",
"Q3 is answered in Table TABREF22, which reports the MUSHRA scores on the LFR dataset. The Syntactic system performed the best with high statistical significance (p=0.02) in comparison to baseline. We close the gap between the baseline and the recordings by almost 20%. Other systems show statistically insignificant (p$>$0.05) improvements over the baseline. To achieve suitable prosody, LFR requires longer distance dependencies and knowledge of prosodic groups. Such information can be approximated more effectively by the Syntactic system rather than the CWE based systems. However, this is a topic for a potential future exploration as the difference between BERT and Syntactic is statistically insignificant (p=0.6).",
""
],
[
"The current VAE-based TTS systems are susceptible to monotonous speech generation due to the need to select a suitable acoustic embedding to synthesize a sample. In this work, we proposed to generate dynamic prosody from the same TTS systems by using linguistics to drive acoustic embedding selection. Our proposed approach is able to improve the overall speech quality including prosody and naturalness. We propose 3 techniques (Syntactic, BERT and BERT Syntactic) and evaluated their performance on 2 datasets: common prosodic errors and LFR. The Syntactic system was able to improve significantly over the baseline on almost all parameters (except for naturalness on CPE). Information captured by BERT further improved prosody in cases where contextual knowledge was required. For LFR, we bridged the gap between baseline and actual recording by 20%. This approach can be further extended by making the model aware of these features rather than using them to drive embedding selection."
]
],
"section_name": [
"Introduction",
"Proposed Systems",
"Proposed Systems ::: Systems",
"Proposed Systems ::: Systems ::: Syntactic",
"Proposed Systems ::: Systems ::: BERT",
"Proposed Systems ::: Systems ::: BERT Syntactic",
"Proposed Systems ::: Applications to LFR",
"Experimental Protocol",
"Experimental Protocol ::: Text-to-Speech System",
"Experimental Protocol ::: Datasets",
"Experimental Protocol ::: Datasets ::: Training Dataset",
"Experimental Protocol ::: Datasets ::: Evaluation Dataset",
"Experimental Protocol ::: Subjective evaluation",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"eb1de9d6db53ef1e457d0bc4610acbabc14d8db9"
],
"answer": [
{
"evidence": [
"Experimental Protocol ::: Datasets ::: Training Dataset",
"(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances).",
"(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.",
"Experimental Protocol ::: Datasets ::: Evaluation Dataset",
"The systems were evaluated on two datasets:",
"(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or\" questions (9%), “wh\" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.",
"(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case."
],
"extractive_spans": [],
"free_form_answer": "Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset.",
"highlighted_evidence": [
"Experimental Protocol ::: Datasets ::: Training Dataset\n(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances).\n\n(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.\n\nExperimental Protocol ::: Datasets ::: Evaluation Dataset\nThe systems were evaluated on two datasets:\n\n(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or\" questions (9%), “wh\" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.\n\n(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero"
],
"paper_read": [
"no"
],
"question": [
"What dataset is used for train/test of this method?"
],
"question_id": [
"78577fd1c09c0766f6e7d625196adcc72ddc8438"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: Constituency parse tree",
"Fig. 2: Acoustic Embedding Distance Matrix using Syntactic Distance as Linguistic Similarity Measure",
"Fig. 4: Schematic of the implemented TTS system",
"Fig. 3: Acoustic Distance (solid line) vs Linguistic Distance (dashed line) as a function of LSW",
"Table 1: Prosody and Naturalness evaluation metrics on the CPE dataset. Fields in bold are indicative of best results. * depicts statistical insignificance in comparison to baseline",
"Table 3: LFR evaluation : Fields in bold are indicative of best results. * depicts statistical insignificance in comparison to baseline",
"Table 2: Prosody evaluation breakdown by categories on CPE"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure4-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"4-Table3-1.png",
"4-Table2-1.png"
]
} | [
"What dataset is used for train/test of this method?"
] | [
[
"1912.00955-Experimental Protocol ::: Datasets ::: Training Dataset-1",
"1912.00955-Experimental Protocol ::: Datasets ::: Training Dataset-0",
"1912.00955-Experimental Protocol ::: Datasets ::: Evaluation Dataset-0",
"1912.00955-Experimental Protocol ::: Datasets ::: Evaluation Dataset-2",
"1912.00955-Experimental Protocol ::: Datasets ::: Evaluation Dataset-1"
]
] | [
"Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset."
] | 857 |