id | x | y |
---|---|---|
f58555aa8fc78903df83af8309a3d7_0 | Our representation model is built on the functionalities of Annotation Graph <cite>[7]</cite> and the underlying storage scheme is conceptually similar to Standoff XML format [9] , but we opted for a relational database structure built with an object-oriented design for efficiency, reusability and versatility. | uses differences |
f58555aa8fc78903df83af8309a3d7_1 | The tagger is built on a generic, multifunctional relational database similar to the annotation graph model <cite>[7]</cite> that has been demonstrated to be capable of representing virtually all sorts of common linguistic annotations. | similarities |
f59a8c650583343fe372db42fc109a_0 | We trained Transformer models <cite>(Vaswani et al., 2017)</cite> using Sockeye 1 (Hieber et al., 2017) . | uses |
f59a8c650583343fe372db42fc109a_1 | Figure 1(a) shows the structure of the standard Transformer translation model <cite>(Vaswani et al., 2017)</cite>, and we removed the encoder and the attention layer in the decoder from the Transformer translation model to create our Transformer language model, as shown in Figure 1(b). | extends |
f59a8c650583343fe372db42fc109a_2 | Then we randomly selected 10M sentences which contain difficult words for back-translation. The models used for back-translating monolingual data are baseline Transformers <cite>(Vaswani et al., 2017)</cite> trained on the bilingual data after data selection as described before. | uses |
f59a8c650583343fe372db42fc109a_3 | The settings of Transformer-base are the same as the baseline Transformer in<cite> Vaswani et al. (2017)</cite> 's work. | similarities uses |
f60796ff05156e81c4b183cdcb05ae_0 | Similar experiments on using shared feature extraction layers for slot-filling across several domains have demonstrated significant performance improvements relative to single-domain baselines, especially in low data regimes<cite> [9]</cite> . | background |
f60796ff05156e81c4b183cdcb05ae_1 | All digits were replaced with special "#" tokens following<cite> [9]</cite> . | uses |
f60796ff05156e81c4b183cdcb05ae_3 | Both LSTM layers are shared across all domains, followed by domain-specific softmax layers, following<cite> [9]</cite> . | uses |
f6a35ed1ec0c01d3e9faa1ec3d8478_0 | We build on previous work in our lab on disagreement detection, classifying stance, identifying high quality arguments, measuring the properties and the persuasive effects of factual vs. emotional arguments, and clustering arguments into their facets or frames related to a particular topic [9, 1, 13, 18, <cite>16,</cite> 12, 15] . | uses |
f6a35ed1ec0c01d3e9faa1ec3d8478_1 | Swanson et al. (2015) created a large corpus consisting of 109,074 posts on the topics gay marriage (GM, 22425 posts), gun control (GC, 38102 posts), death penalty (DP, 5283 posts) by combining the Internet Argument Corpus (IAC) [17] with dialogues from online debate forums 1<cite> [16]</cite> . | background |
f6a35ed1ec0c01d3e9faa1ec3d8478_2 | We started with the Argument Quality (AQ) regressor from<cite> [16]</cite> , which predicts a quality score for each sentence. | uses |
f6a35ed1ec0c01d3e9faa1ec3d8478_3 | had improved upon the AQ predictor from<cite> [16]</cite> , giving a much larger and more diverse corpus [12] . | extends |
f7255360eacc4e2a4e8bea2f6ab1b0_0 | <cite>Hearst (1997)</cite> and Nomoto and Nitta (1994) detect this coherence through patterns of lexical cooccurrence. | background |
f7255360eacc4e2a4e8bea2f6ab1b0_1 | A first qualitative evaluation of the method has been done with about 20 texts but without a formal protocol as in<cite> (Hearst, 1997)</cite> . | differences |
f7255360eacc4e2a4e8bea2f6ab1b0_2 | As in<cite> (Hearst, 1997)</cite> , boundaries found by the method are weighted and sorted in decreasing order. | uses |
f7255360eacc4e2a4e8bea2f6ab1b0_3 | Each boundary, which is a minimum of the cohesion graph, was weighted by the sum of the differences between its value and the values of the two maxima around it, as in<cite> (Hearst, 1997)</cite> . | uses |
f797e7439bd78af2ef86271214f991_0 | In <cite>Martschat and Strube (2014)</cite> , we propose a framework for error analysis for coreference resolution. | motivation |
f797e7439bd78af2ef86271214f991_1 | The idea underlying the analysis framework of <cite>Martschat and Strube (2014)</cite> is to employ spanning trees in a graph-based entity representation. | background |
f797e7439bd78af2ef86271214f991_2 | In <cite>Martschat and Strube (2014)</cite> , we propose an algorithm based on Ariel's accessibility theory (Ariel, 1990) for reference entities. | similarities uses |
f797e7439bd78af2ef86271214f991_3 | First, it includes multigraph, which is a deterministic approach using a few strong features<cite> (Martschat and Strube, 2014)</cite> . | background |
f797e7439bd78af2ef86271214f991_4 | We already implemented the algorithms discussed in <cite>Martschat and Strube (2014)</cite> . | similarities uses |
f797e7439bd78af2ef86271214f991_5 | Our system also supports other use cases, such as the cross-system analysis described in <cite>Martschat and Strube (2014)</cite> . | similarities uses |
f797e7439bd78af2ef86271214f991_6 | Hence, following <cite>Martschat and Strube (2014)</cite> , we categorize all errors by coarse mention type of anaphor and antecedent (proper name, noun, pronoun, demonstrative pronoun or verb) 4 . | similarities uses |
f797e7439bd78af2ef86271214f991_7 | Compared to our original implementation of the error analysis framework<cite> (Martschat and Strube, 2014)</cite> , we made the analysis interface more user-friendly and provide more analysis functionality. | differences |
f861e6590ff57225395e7d480c66e8_0 | We start from a baseline joint model that performs the tasks of named entity recognition (NER) and relation extraction at once. Previously proposed models (summarized in Section 2) exhibit several issues that the neural network-based baseline approach (detailed in Section 3.1) overcomes: (i) our model uses automatically extracted features without the need for external parsers or manually extracted features (see <cite>Gupta et al. (2016)</cite> ; Miwa and Bansal (2016) ; Li et al. (2017) ), (ii) all entities and the corresponding relations within the sentence are extracted at once, instead of examining one pair of entities at a time (see Adel and Schütze (2017) ), and (iii) we model relation extraction in a multi-label setting, allowing multiple relations per entity (see Katiyar and Cardie (2017) ; Bekoulis et al. (2018a) ). | motivation background |
f861e6590ff57225395e7d480c66e8_1 | Joint entity and relation extraction: Joint models (Li and Ji, 2014; Miwa and Sasaki, 2014) that are based on manually extracted features have been proposed for performing both the named entity recognition (NER) and relation extraction subtasks at once. These methods rely on the availability of NLP tools (e.g., POS taggers) or manually designed features leading to additional complexity. <cite>Gupta et al. (2016)</cite> propose the use of various manually extracted features along with RNNs. | motivation background |
f861e6590ff57225395e7d480c66e8_2 | For the CoNLL04 (Roth and Yih, 2004 ) EC task (assuming boundaries are given), we use the same splits as in <cite>Gupta et al. (2016)</cite> ; Adel and Schütze (2017) . | uses |
f861e6590ff57225395e7d480c66e8_3 | For the CoNLL04 dataset, we use two evaluation settings. We use the relaxed evaluation similar to <cite>Gupta et al. (2016)</cite> ; Adel and Schütze (2017) on the EC task. The baseline model outperforms the state-of-the-art models that do not rely on manually extracted features (>4% improvement for both tasks), since we directly model the whole sentence, instead of just considering pairs of entities. | uses differences |
f881f6c65301fdfe2fffe7a18e05c4_0 | Words and phrases that may directly mark the structure of a discourse have been termed CUE PHRASES, CLUE WORDS, DISCOURSE MARKERS, and DISCOURSE PARTICLES [3,<cite> 4,</cite> 14, 17, 19] . | background |
f881f6c65301fdfe2fffe7a18e05c4_1 | For example, by indicating the presence of a structural boundary or a relationship between parts of a discourse, cue phrases can assist in the resolution of anaphora [5,<cite> 4,</cite> 17] and in the identification of rhetorical relations [10, 12, 17] . | background |
f881f6c65301fdfe2fffe7a18e05c4_2 | Grosz and Sidner<cite> [4]</cite> classify cue phrases based on changes to the attentional stack and intentional structure found in their theory of discourse. | background |
f881f6c65301fdfe2fffe7a18e05c4_3 | Once a cue phrase has been identified, however, it is not always clear whether to interpret it as a discourse marker or not [6,<cite> 4,</cite> 8, 18] . | background |
f881f6c65301fdfe2fffe7a18e05c4_4 | Our set of cue phrases was derived from extensional definitions provided by ourselves and others [3,<cite> 4,</cite> 17, 18, 21] . | extends differences |
f8fc3634684ff37ab3d29cee910443_0 | For example, systems have learned to commentate simulated robot soccer games by learning from sample sportscasts (Chen and Mooney, 2008; Liang et al., 2009; <cite>Börschinger et al., 2011)</cite> , or understand navigation instructions by learning from action traces produced when following the directions (Chen and Mooney, 2011; Tellex et al., 2011) . | background |
f8fc3634684ff37ab3d29cee910443_1 | Our approach extends that of<cite> Börschinger et al. (2011)</cite> , which in turn was inspired by a series of previous techniques (Lu et al., 2008; Liang et al., 2009), following the idea of constructing correspondences between NL and MR in a single probabilistic generative framework. | extends |
f8fc3634684ff37ab3d29cee910443_2 | Like<cite> Börschinger et al. (2011)</cite> , our approach learns a semantic parser directly from ambiguous supervision, specifically NL instructions paired with their complete landmarks plans as context. | similarities uses |
f8fc3634684ff37ab3d29cee910443_3 | We basically follow the scheme of<cite> Börschinger et al. (2011)</cite> , but instead of generating NL words from each atomic MR, words are generated from each lexeme MR. (Figure 6: Summary of the rule generation process.) | extends differences |
f8fc3634684ff37ab3d29cee910443_4 | Lexeme rules come from the schemata of<cite> Börschinger et al. (2011)</cite> , and allow every lexeme MR to generate one or more NL words. | uses |
f8fc3634684ff37ab3d29cee910443_5 | We used the implementation available at http://web.science.mq.edu.au/~mjohnson/Software.htm which was also used by<cite> Börschinger et al. (2011)</cite> . | uses similarities |
f8fc3634684ff37ab3d29cee910443_6 | Our approach improves on<cite> Börschinger et al. (2011)</cite> 's method in the following ways: | extends |
f8fc3634684ff37ab3d29cee910443_7 | A number of approaches (Kate and Mooney, 2007; Chen and Mooney, 2008; Chen et al., 2010; <cite>Börschinger et al., 2011)</cite> assume training data consisting of a set of sentences each associated with a small set of MRs, one of which is usually the correct meaning of the sentence. | background |
f8fc3634684ff37ab3d29cee910443_8 | As previously discussed,<cite> Börschinger et al. (2011)</cite> use a PCFG generative model and also train it on ambiguous data using EM. | background |
f8fc3634684ff37ab3d29cee910443_9 | Our model enhances<cite> Börschinger et al. (2011)</cite> 's approach to reducing the problem of grounded learning of semantic parsers to PCFG induction. | extends |
fa00b8bac394b48bf950f154c65216_0 | Large pre-trained language models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019;<cite> Yang et al., 2019</cite>) improved the state-of-the-art of various natural language understanding (NLU) tasks such as question answering (e.g., SQuAD; Rajpurkar et al. 2016) , natural language inference (e.g., MNLI; Bowman et al. 2015) as well as text classification (Zhang et al., 2015) . | background |
fa00b8bac394b48bf950f154c65216_2 | As mentioned earlier, we can take advantage of recent pre-trained Transformer encoders for the document encoding part as in<cite> Liu and Lapata (2019)</cite> . | uses similarities |
fa00b8bac394b48bf950f154c65216_3 | As mentioned earlier, we can take advantage of recent pre-trained Transformer encoders for the document encoding part as in<cite> Liu and Lapata (2019)</cite> . However,<cite> Liu and Lapata (2019)</cite> leave the decoder randomly initialized. | differences |
fa00b8bac394b48bf950f154c65216_4 | Very recently, the feature learning part was replaced again with pretrained transformers (Zhang et al., 2019;<cite> Liu and Lapata, 2019</cite> ) that led to another huge improvement of summarization performance. | background |
fa00b8bac394b48bf950f154c65216_5 | Following previous work (See et al., 2017; Zhang et al., 2019;<cite> Liu and Lapata, 2019)</cite> , we use the non-anonymized version of CNNDM. | uses |
fa00b8bac394b48bf950f154c65216_6 | We closely follow the preprocessing procedures described in Durrett et al. (2016) and<cite> Liu and Lapata (2019)</cite> . | uses |
fa00b8bac394b48bf950f154c65216_9 | BERTExt<cite> (Liu and Lapata, 2019</cite> ) is an extractive model fine-tuned on BERT (Devlin et al., 2019) that outperforms other extractive systems. | background |
fa00b8bac394b48bf950f154c65216_10 | BERTAbs<cite> (Liu and Lapata, 2019)</cite> and UniLM (Dong et al., 2019) are both pre-training based SEQ2SEQ summarization models. | background |
fa3d20d5975ec59454abfca68f8935_0 | Even with recent progress (Gehrmann et al., 2018;<cite> Chen and Bansal, 2018)</cite> , there is still some work to be done in the field. | motivation |
fa3d20d5975ec59454abfca68f8935_1 | Nonetheless, remarkable progress has been achieved with the use of seq2seq models (Gehrmann et al., 2018; See et al., 2017; Chopra et al., 2016; Rush et al., 2015) and a reward instead of a loss function via deep reinforcement learning<cite> (Chen and Bansal, 2018</cite>; Paulus et al., 2017; Ranzato et al., 2015) . | background |
fa3d20d5975ec59454abfca68f8935_2 | We see abstractive summarization in the same light as several other authors<cite> (Chen and Bansal, 2018</cite>; Hsu et al., 2018; Liu et al., 2018 ) -extract salient sentences and then abstract; thus sharing similar advantages with the popular divide-and-conquer algorithm. | uses |
fa3d20d5975ec59454abfca68f8935_3 | Hence it is customary to create one from the abstractive ground-truth summaries<cite> (Chen and Bansal, 2018</cite>; Nallapati et al., 2017) . | uses |
fa3d20d5975ec59454abfca68f8935_4 | Different from Nallapati et al. (2017) 's approach to greedily add sentences to the summary that maximizes the ROUGE score, our approach is more similar to <cite>Chen and Bansal (2018)</cite>'s model that calculates the individual reference sentence-level score as per its similarity with each sentence in the corresponding document. | similarities |
fa3d20d5975ec59454abfca68f8935_5 | for each t-th sentence in the reference summary R_j, per i-th sentence in document D_j, in contrast to <cite>Chen and Bansal (2018)</cite> 's that uses ROUGE-L recall score. | differences |
fa3d20d5975ec59454abfca68f8935_6 | <cite>Chen and Bansal (2018)</cite> introduced a stop criterion in their reinforcement learning process. | background |
fa3d20d5975ec59454abfca68f8935_9 | Following previous works (See et al., 2017; Nallapati et al., 2017;<cite> Chen and Bansal, 2018)</cite> , we evaluate both datasets on standard ROUGE-1, ROUGE-2 and ROUGE-L (Lin, 2004) . | uses |
fa3d20d5975ec59454abfca68f8935_10 | Some authors have employed integer linear programming (Martins and Smith, 2009; Gillick and Favre, 2009; Boudin et al., 2015) , graph concepts (Erkan and Radev, 2004) , ranking with reinforcement learning (Narayan et al., 2018) and, mostly related to our work, binary classification (Shen et al., 2007; Nallapati et al., 2017;<cite> Chen and Bansal, 2018)</cite> . Our binary classification architecture differs significantly from existing models because it uses a transformer as the building block instead of a bidirectional GRU-RNN (Nallapati et al., 2017) , or bidirectional LSTM-RNN<cite> (Chen and Bansal, 2018)</cite> . | differences |
fa3d20d5975ec59454abfca68f8935_11 | Similar to Rush et al. (2015) ; <cite>Chen and Bansal (2018)</cite> , we abstract by simplifying our extracted sentences. | similarities |
fa7475b6025d010dd6814dfb3905ef_0 | Recent studies on KD<cite> [33,</cite> 15] even leverage more sophisticated model-specific distillation loss functions for better performance. | background |
fa7475b6025d010dd6814dfb3905ef_1 | Recent studies on KD<cite> [33,</cite> 15] even leverage more sophisticated model-specific distillation loss functions for better performance. Different from previous KD studies which explicitly exploit a distillation loss to minimize the distance between the teacher model and the student model, we propose a new genre of model compression. | differences |
fa7475b6025d010dd6814dfb3905ef_2 | Also, selecting various loss functions and balancing the weights of each loss for different tasks and datasets are always laborious<cite> [33,</cite> 28] . | background |
fa7475b6025d010dd6814dfb3905ef_3 | The use of only one loss function throughout the whole compression process allows us to unify the different phases and keep the compression in a total end-to-end fashion. Also, selecting various loss functions and balancing the weights of each loss for different tasks and datasets are always laborious<cite> [33,</cite> 28] . | differences |
fa7475b6025d010dd6814dfb3905ef_4 | Patient Knowledge Distillation (PKD)<cite> [33]</cite> designs multiple distillation losses between the module hidden states of the teacher and student models. | background |
fa7475b6025d010dd6814dfb3905ef_5 | However, the performance greatly relies on the design of the loss function [14,<cite> 33,</cite> 15] . | background |
fa7475b6025d010dd6814dfb3905ef_6 | This loss function needs to be combined with taskspecific loss<cite> [33,</cite> 17] . | background |
fa7475b6025d010dd6814dfb3905ef_7 | We test our approach under a task-specific compression setting<cite> [33,</cite> 37] instead of a pretraining compression setting [28, 34] . | uses |
fa7475b6025d010dd6814dfb3905ef_8 | Formally, we define the task of compression as trying to retain as much performance as possible when compressing the officially released BERT-base (uncased) 5 to a 6-layer compact model with the same hidden size, following the settings in [28,<cite> 33,</cite> 37] . | uses |
fa7475b6025d010dd6814dfb3905ef_9 | As a result, we are able to obtain a predecessor model with comparable performance with that reported in previous studies [28,<cite> 33,</cite> 15] . | similarities |
fa7475b6025d010dd6814dfb3905ef_10 | Afterward, for training successor models, following [28, <cite>33]</cite> , we use the first 6 layers of BERT-base to initialize the successor model since the over-parameterized nature of Transformer [38] could cause the model unable to converge while training on small datasets. | uses |
fa7475b6025d010dd6814dfb3905ef_11 | We set up a baseline of vanilla Knowledge Distillation [14] as in<cite> [33]</cite> . | uses |
fa7475b6025d010dd6814dfb3905ef_12 | Under the setting of compressing 12-layer BERT-base to a 6-layer compact model, we choose BERT-PKD<cite> [33]</cite> , PD-BERT [37] , and DistillBERT [28] as strong baselines. | uses |
fa7475b6025d010dd6814dfb3905ef_13 | Also, our model obviously outperforms the vanilla KD [14] and Patient Knowledge Distillation (PKD)<cite> [33]</cite> , showing its supremacy over the KD-based compression approaches. | differences |
fb75198b7c9e569932dfd486ba6c0a_0 | Current research on applying WSD to specific domains has been evaluated on three available lexical-sample datasets (Ng and Lee, 1996; Weeber et al., 2001;<cite> Koeling et al., 2005)</cite> . | background |
fb75198b7c9e569932dfd486ba6c0a_1 | <cite>Koeling et al. (2005)</cite> present a corpus where the examples are drawn from the balanced BNC corpus (Leech, 1992) and the SPORTS and FINANCES sections of the newswire Reuters corpus (Rose et al., 2002) , comprising around 300 examples (roughly 100 from each of those corpora) for each of the 41 nouns. | background |
fb75198b7c9e569932dfd486ba6c0a_2 | In (Agirre and Lopez de Lacalle, 2008) , the authors also show that state-of-the-art WSD systems are not able to adapt to the domains in the context of the <cite>Koeling et al. (2005)</cite> dataset. | background |
fb75198b7c9e569932dfd486ba6c0a_3 | In contrast, ) reimplemented this method and showed that the improvement on WSD in the<cite> (Koeling et al., 2005)</cite> data was marginal. | background |
fb75198b7c9e569932dfd486ba6c0a_4 | In ) the authors report successful adaptation on the<cite> (Koeling et al., 2005</cite> ) dataset on supervised setting. | background |
fb75198b7c9e569932dfd486ba6c0a_5 | The predominant sense acquisition method was successfully applied to specific domains in<cite> (Koeling et al., 2005)</cite> . | uses |
fb75198b7c9e569932dfd486ba6c0a_6 | When a general corpus is used, the most predominant sense in general is obtained, and when a domain-specific corpus is used, the most predominant sense for that corpus is obtained<cite> (Koeling et al., 2005)</cite> . | background |
fb75198b7c9e569932dfd486ba6c0a_7 | When a general corpus is used, the most predominant sense in general is obtained, and when a domain-specific corpus is used, the most predominant sense for that corpus is obtained<cite> (Koeling et al., 2005)</cite> . The main motivation of the authors is that the most frequent sense is a very powerful baseline, but it is one which requires hand-tagging text, while their method yields similar information automatically. | similarities |
fb87be2081ce1515dd8dbda46b4f3f_0 | Lexical Simplification (LS) aims at replacing complex words with simpler alternatives, which can help various groups of people, including children [De Belder and Moens, 2010] , non-native speakers<cite> [Paetzold and Specia, 2016]</cite> , people with cognitive disabilities [Feng, 2009; Saggion, 2017] , to understand text better. | background |
fb87be2081ce1515dd8dbda46b4f3f_1 | To avoid the need for resources such as databases or parallel corpora, recent work utilizes word embedding models to extract simplification candidates for complex words [Glavaš and Štajner, 2015;<cite> Paetzold and Specia, 2016</cite>; <cite>Paetzold and Specia, 2017a]</cite> . | background |
fb87be2081ce1515dd8dbda46b4f3f_2 | Given one sentence "John composed these verses." and complex words 'composed' and 'verses', the top three simplification candidates for each complex word are generated by our method BERT-LS and the two state-of-the-art baselines based on word embeddings (Glavaš [Glavaš and Štajner, 2015] and Paetzold-NE<cite> [Paetzold and Specia, 2017a]</cite> ). | uses |
fb87be2081ce1515dd8dbda46b4f3f_3 | original word embeddings trained on text, and<cite> Paetzold et al. (2017)</cite> used a retrofitted context-aware word embedding model trained on text with the POS tag. | background |
fb87be2081ce1515dd8dbda46b4f3f_4 | For complex words 'composed' and 'verses' in the sentence "John composed these verses.", the top three substitution candidates of the two complex words generated by the LS systems based on word embeddings [Glavaš and Štajner, 2015; <cite>Paetzold and Specia, 2017a]</cite> are only related to the complex words themselves, without paying attention to the original sentence. | background |
fb87be2081ce1515dd8dbda46b4f3f_5 | For complex words 'composed' and 'verses' in the sentence "John composed these verses.", the top three substitution candidates of the two complex words generated by the LS systems based on word embeddings [Glavaš and Štajner, 2015; <cite>Paetzold and Specia, 2017a]</cite> are only related to the complex words themselves, without paying attention to the original sentence. The top three substitution candidates generated by BERT-LS are not only related to the complex words, but also fit the original sentence very well. | differences background |
fb87be2081ce1515dd8dbda46b4f3f_6 | Lexical simplification (LS) consists of identifying complex words and finding the best candidate substitution for these complex words [Shardlow, 2014;<cite> Paetzold and Specia, 2017b]</cite> . | background |
fb87be2081ce1515dd8dbda46b4f3f_7 | The popular lexical simplification (LS) approaches are rule-based, in which each rule contains a complex word and its simple synonyms<cite> [Lesk, 1986</cite>; Pavlick and Callison-Burch, 2016;<cite> Maddela and Xu, 2018]</cite> . | background |
fb87be2081ce1515dd8dbda46b4f3f_8 | Afterward, they further extracted candidates for complex words by combining word embeddings with WordNet and parallel corpora<cite> [Paetzold and Specia, 2017a]</cite> . | background |
fb87be2081ce1515dd8dbda46b4f3f_9 | Afterward, they further extracted candidates for complex words by combining word embeddings with WordNet and parallel corpora<cite> [Paetzold and Specia, 2017a]</cite> . After examining existing LS methods ranging from rule-based to embedding-based, we find that the major challenge is that they generate simplification candidates for the complex word regardless of its context, which inevitably produces a large number of spurious candidates that can confuse the systems employed in the subsequent steps. | motivation background |
fb87be2081ce1515dd8dbda46b4f3f_10 | Afterward, they further extracted candidates for complex words by combining word embeddings with WordNet and parallel corpora<cite> [Paetzold and Specia, 2017a]</cite> . After examining existing LS methods ranging from rule-based to embedding-based, we find that the major challenge is that they generate simplification candidates for the complex word regardless of its context, which inevitably produces a large number of spurious candidates that can confuse the systems employed in the subsequent steps. In this paper, we will first present a BERT-based LS approach that requires only a sufficiently large corpus of regular text, without any manual effort. | motivation background differences |
fb87be2081ce1515dd8dbda46b4f3f_11 | The substitution ranking of the lexical simplification pipeline is to decide which of the candidate substitutions that fit the context of a complex word is the simplest<cite> [Paetzold and Specia, 2017b]</cite>. | background |
fb87be2081ce1515dd8dbda46b4f3f_12 | In this paper, we are not focused on identifying complex words<cite> [Paetzold and Specia, 2017b]</cite> , which is a separate task. | differences |
fb87be2081ce1515dd8dbda46b4f3f_14 | We use three widely used lexical simplification datasets to do experiments. (2) BenchLS 6<cite> [Paetzold and Specia, 2016]</cite> . | uses |
fb87be2081ce1515dd8dbda46b4f3f_15 | We choose the following eight baselines for evaluation: Devlin [Devlin and Tait, 1998], Biran [Biran et al., 2011], Yamamoto [Kajiwara et al., 2013], Horn [Horn et al., 2014] , Glavaš [Glavaš and Štajner, 2015] , SimplePPDB [Pavlick and Callison-Burch, 2016] , Paetzold-CA<cite> [Paetzold and Specia, 2016]</cite> , and Paetzold-NE<cite> [Paetzold and Specia, 2017a]</cite> . | uses |
fb87be2081ce1515dd8dbda46b4f3f_16 | The following three widely used metrics are used for evaluation<cite> [Paetzold and Specia, 2015</cite>;<cite> Paetzold and Specia, 2016</cite>;<cite> Paetzold and Specia, 2017b]</cite> . | uses |