id (stringlengths 32-33) | x (stringlengths 41-1.75k) | y (stringlengths 4-39) |
---|---|---|
021c423c731ecbe3e26b3ce234b390_0 | Automatic detection of fake from legitimate news in different formats such as headlines, tweets and full news articles has been approached in recent Natural Language Processing literature (Vlachos and Riedel, 2014; Vosoughi, 2015; Jin et al., 2016;<cite> Rashkin et al., 2017</cite>; Wang, 2017; Pomerleau and Rao, 2017; Thorne et al., 2018) . | background |
021c423c731ecbe3e26b3ce234b390_1 | Most previous systems built to identify fake news articles rely on training data labeled with respect to the general reputation of the sources, i.e., domains/user accounts (Fogg et al., 2001; Lazer et al., 2017;<cite> Rashkin et al., 2017)</cite> . Even though some of these studies try to identify fake news based on linguistic cues, the question is whether they learn publishers' general writing style (e.g., common writing features of a few clickbaity websites) or deceptive style (similarities among news articles that contain misinformation). | motivation |
021c423c731ecbe3e26b3ce234b390_2 | A few recent studies have examined full articles (i.e., actual 'fake news') to extract discriminative linguistic features of misinformation<cite> Rashkin et al., 2017</cite>; Horne and Adali, 2017) . | background |
021e5dbe22bf0f4ebda4d37040d0a6_0 | In the cross-lingual study of<cite> McDonald et al. (2011)</cite> , where delexicalized parsing models from a number of source languages were evaluated on a set of target languages, it was observed that the best target language was frequently not the closest typologically to the source. In one stunning example, Danish was the worst source language when parsing Swedish, solely due to greatly divergent annotation schemes. | motivation background |
021e5dbe22bf0f4ebda4d37040d0a6_1 | We aim to do the same for syntactic dependencies and present cross-lingual parsing experiments to highlight some of the benefits of cross-lingually consistent annotation. First, results largely conform to our expectations of which target languages should be useful for which source languages, unlike in the study of<cite> McDonald et al. (2011)</cite> . | differences background |
021e5dbe22bf0f4ebda4d37040d0a6_2 | The selected sentences were pre-processed using cross-lingual taggers (Das and Petrov, 2011) and parsers <cite>(McDonald et al., 2011)</cite> . | uses |
021e5dbe22bf0f4ebda4d37040d0a6_3 | One of the motivating factors in creating such a data set was improved cross-lingual transfer evaluation. To test this, we use a cross-lingual transfer parser similar to that of<cite> McDonald et al. (2011)</cite> . In particular, it is a perceptron-trained shift-reduce parser with a beam of size 8. | extends background |
021e5dbe22bf0f4ebda4d37040d0a6_4 | We can make several interesting observations. Most notably, for the Germanic and Romance target languages, the best source language is from the same language group. This is in stark contrast to the results of<cite> McDonald et al. (2011)</cite> , who observe that this is rarely the case with the heterogenous CoNLL treebanks. | background differences |
021e5dbe22bf0f4ebda4d37040d0a6_5 | With respect to evaluation, it is interesting to compare the absolute numbers to those reported in<cite> McDonald et al. (2011)</cite> | background |
022049c0e75a490978b2c49da41deb_0 | Recent NLP work on semantic idiomaticity has focused on the task of "compositionality prediction", in the form of a regression task whereby a given MWE is mapped onto a continuous-valued compositionality score, either for the MWE as a whole or for each of its component words (Reddy et al., 2011; Schulte im Walde et al., 2013;<cite> Salehi et al., 2014b)</cite> . | background |
022049c0e75a490978b2c49da41deb_1 | There has however been recent interest in approaches to MWEs that are more broadly applicable to a wider range of languages and MWE types (Brooke et al., 2014;<cite> Salehi et al., 2014b</cite>; Schneider et al., 2014) . | background |
022049c0e75a490978b2c49da41deb_2 | Our first method for building vectors is that of<cite> Salehi et al. (2014b)</cite> : the top 50 most-frequent words in the training corpus are considered to be stopwords and discarded, and words with frequency rank 51-1051 are considered to be the content-bearing words, which form the dimensions for our vectors, in the manner of Schütze (1997) . | uses |
022049c0e75a490978b2c49da41deb_3 | The state-of-the-art method for this dataset <cite>(Salehi et al., 2014b</cite> ) is a supervised support vector regression model, trained over the distributional method from Section 3.1 as applied to both English and 51 target languages (under word and MWE translation). | background |
022049c0e75a490978b2c49da41deb_4 | The state-of-the-art method for this dataset <cite>(Salehi et al., 2014b</cite> ) is a linear combination of: (1) the distributional method from Section 3.1; (2) the same method applied to 10 target languages (under word and MWE translation, selecting the languages using supervised learning); and (3) the string similarity method of Salehi and Cook (2013) . | background |
022049c0e75a490978b2c49da41deb_5 | Note that for EVPC, we don't use the vector for the particle, in keeping with<cite> Salehi et al. (2014b)</cite> ; as such, there are no results for comp 2 . | similarities |
022049c0e75a490978b2c49da41deb_6 | For comp 1 , α is set to 1.0 for EVPC, and 0.7 for both ENC and GNC, also based on the findings of<cite> Salehi et al. (2014b)</cite> . | uses |
022049c0e75a490978b2c49da41deb_7 | In future work we intend to explore the contribution of information from word embeddings of a target expression and its component words under translation into many languages, along the lines of<cite> Salehi et al. (2014b)</cite> . | similarities future_work |
023a954d97b5d761b01f09bb242d19_0 | Abstract Meaning Representation (AMR) forms a rooted acyclic directed graph that represents the content of a sentence. All nodes and edges of the AMR graph are labeled according to the sense of the words in a sentence. AMR parsing is the task of converting a given sentence to a corresponding graph. AMRs have been applied to several applications such as event extraction [13, 7] , text summarization [6, 11] and text generation [15, 14] . However, AMR annotation which requires a lot of human effort limits the outcome of data-driven approaches, one of which being neural network based methods<cite> [10,</cite> 3] . Therefore, a highly accurate parser is necessary in order to intensify other applications which are based on AMR. | motivation background |
023a954d97b5d761b01f09bb242d19_1 | NeuralAMR <cite>[10]</cite> has succeeded at both AMR parsing and sentence generation as the result of a bootstrapping training strategy on a 20-million-sentence unsupervised dataset. | background |
023a954d97b5d761b01f09bb242d19_2 | Although recent studies have utilized Long Short-Term Memory (LSTM) in AMR parsing<cite> [10,</cite> 1] , there are several disadvantages of employing LSTM compared to CNN. First, LSTM models long dependency, which might be noise to generate a linearized graph, whereas CNN provides a shorter dependency which is advantageous to generate graph traversal. Secondly, LSTM requires a chronologically computing process that restrains the ability of parallelization; on the contrary, CNN enables simultaneous parsing. | motivation background |
023a954d97b5d761b01f09bb242d19_3 | Unlike the prior work <cite>[10]</cite> , in our model, the graphs pass through a much simpler pre-processing series which consists of variable removal, graph linearization, and infrequent word replacement. For stripping the AMR text, we modified the depth-first-search traversal from the work of Kontas et al <cite>[10]</cite> in the way of marking the end of a path. The left parentheses are ignored and the right parentheses are replaced by doubling the concept of the terminal node. The process of recovering the stripped text from the graph is called de-linearization. The graph which contains multiple nodes of a single concept might not be perfectly reversed because those nodes have been collapsed into one. | extends |
033ce75c882764e08fb3871656a8d1_0 | In contrast, <cite>Zilio et al. (2011)</cite> make a study involving training a model but use it only on English and use extra lexical resources to complement the machine learning method, so their study does not focus just on classifier evaluation. | background |
033ce75c882764e08fb3871656a8d1_1 | In contrast, <cite>Zilio et al. (2011)</cite> make a study involving training a model but use it only on English and use extra lexical resources to complement the machine learning method, so their study does not focus just on classifier evaluation. This paper presents the first evaluation of mwetoolkit on French together with two resources very commonly used by the French NLP community: the tagger TreeTagger (Schmid, 1994) and the dictionary Dela. | motivation |
033ce75c882764e08fb3871656a8d1_2 | Ramisch et al. (2010b) provide experiments on Portuguese, English and Greek. <cite>Zilio et al. (2011)</cite> provide experiments with this tool as well. | background |
033ce75c882764e08fb3871656a8d1_3 | That is the reason why we will run three experiments close to the one of <cite>Zilio et al. (2011)</cite> but were the only changing parameter is the pattern that we train our classifiers on. | uses motivation |
033ce75c882764e08fb3871656a8d1_4 | In contrast to <cite>Zilio et al. (2011)</cite> we run our experiment on French. | differences |
033ce75c882764e08fb3871656a8d1_5 | For preprocessing we used the same processes as described in <cite>Zilio et al. (2011)</cite> . | uses |
033ce75c882764e08fb3871656a8d1_6 | We tested several algorithms offered by Weka as well as the training options suggested by <cite>Zilio et al. (2011)</cite> . | uses |
03b7c2e050957dcff336183823e6f1_0 | We use a recently proposed dependency parser <cite>(Titov and Henderson, 2007b )</cite> 1 which has demonstrated state-of-theart performance on a selection of languages from the CoNLL-X shared task (Buchholz and Marsi, 2006) . | uses |
03b7c2e050957dcff336183823e6f1_1 | When conditioning on words, we treated each word feature individually, as this proved to be useful in <cite>(Titov and Henderson, 2007b)</cite> . | motivation |
03b7c2e050957dcff336183823e6f1_2 | In our experiments we use the same definition of structural locality as was proposed for the ISBN dependency parser in <cite>(Titov and Henderson, 2007b)</cite> . | similarities uses |
03b7c2e050957dcff336183823e6f1_3 | Unlike <cite>(Titov and Henderson, 2007b )</cite>, in the shared task we used only the simplest feed-forward approximation, which replicates the computation of a neural network of the type proposed in (Henderson, 2003) . | differences |
03b7c2e050957dcff336183823e6f1_4 | To search for the most probable parse, we use the heuristic search algorithm described in <cite>(Titov and Henderson, 2007b)</cite> , which is a form of beam search. | uses |
03b7c2e050957dcff336183823e6f1_5 | As was demonstrated in <cite>(Titov and Henderson, 2007b)</cite> , even a minimal set of local explicit features achieves results which are non-significantly different from a carefully chosen set of explicit features, given the language independent definition of locality described in section 2. | similarities |
0410820bea04fb68908f4885089081_0 | tagger (e.g. the <cite>Brill tagger</cite> <cite>[2]</cite> ) for the English documents. | uses |
04b525b91b48e31258287a015d0401_0 | <cite>(Gliozzo et al., 2005)</cite> succeeded eliminating this requirement by using the category name alone as the initial keyword, yet obtaining superior performance within the keywordbased approach. | background |
04b525b91b48e31258287a015d0401_1 | <cite>(Gliozzo et al., 2005)</cite> succeeded eliminating this requirement by using the category name alone as the initial keyword, yet obtaining superior performance within the keywordbased approach. The goal of our research is to further improve the scheme of text categorization from category name, which was hardly explored in prior work. | motivation |
04b525b91b48e31258287a015d0401_2 | When analyzing the behavior of the LSA representation of <cite>(Gliozzo et al., 2005)</cite> we noticed that <cite>it captures</cite> two types of similarities between the category name and document terms. <cite>One type</cite> regards words which refer specifically to the category name's meaning, such as pitcher for the category Baseball. | background |
04b525b91b48e31258287a015d0401_3 | When analyzing the behavior of the LSA representation of <cite>(Gliozzo et al., 2005)</cite> we noticed that <cite>it captures</cite> two types of similarities between the category name and document terms. <cite>One type</cite> regards words which refer specifically to the category name's meaning, such as pitcher for the category Baseball. However, typical context words for the category which do not necessarily imply its specific meaning, like stadium, also come up as similar to baseball in LSA space. This limits <cite>the method's precision</cite>, due to false-positive classifications of contextually-related documents that do not discuss the specific category topic (such as other sports documents wrongly classified to Baseball). <cite>This behavior</cite> is quite typical for query expansion methods, which expand a query with contextually correlated terms. We propose a novel scheme that models separately these two types of similarity. | motivation |
04b525b91b48e31258287a015d0401_4 | As described in Section 1, the keyword list in <cite>(Gliozzo et al., 2005)</cite> consisted of the category name alone. <cite>This was accompanied</cite> by representing the category names and documents (step 2) in LSA space, obtained through cooccurrence-based dimensionality reduction. In <cite>this space</cite>, words that tend to cooccur together, or occur in similar contexts, are represented by similar vectors. | background |
04b525b91b48e31258287a015d0401_5 | We thus extend the scheme in Figure 1 by creating two vectors per category (in steps 1 and 2): a reference vector c ref in term space, consisting of referring terms for the category name; and a context vector c con , representing the category name in LSA space, as in <cite>(Gliozzo et al., 2005)</cite> . | uses |
04b525b91b48e31258287a015d0401_6 | We therefore measure the contextual similarity between a category c and a document d utilizing LSA space, replicating the method in <cite>(Gliozzo et al., 2005)</cite> : c con and d LSA are taken as the LSA vectors of the category name and the document, respectively, yielding Sim con (c, d) = cos( c con , d LSA )). | uses |
04b525b91b48e31258287a015d0401_7 | We tested our method on the two corpora used in <cite>(Gliozzo et al., 2005)</cite> : 20-NewsGroups, classified by a single-class scheme (single category per document), and Reuters-10 3 , of a multi-class scheme. As in <cite>their work</cite>, non-standard category names were adjusted, such as Foreign exchange for Money-fx. | uses |
04b525b91b48e31258287a015d0401_8 | As we hypothesized, the Reference model achieves much better precision than the Context model from <cite>(Gliozzo et al., 2005)</cite> resources, yielding a lower F1. | differences |
04f6b9d4296dee4bbf965f9911bf98_0 | <cite>Transformer</cite> is a powerful architecture that achieves superior performance on various sequence learning tasks, including neural machine translation, language understanding, and sequence prediction. | background |
04f6b9d4296dee4bbf965f9911bf98_1 | At the core of the <cite>Transformer</cite> is the attention mechanism, which concurrently processes all inputs in the streams. | background |
04f6b9d4296dee4bbf965f9911bf98_2 | This new formulation gives us a better way to understand individual components of the <cite>Transformer's</cite> attention, such as the better way to integrate the positional embedding. | motivation |
04f6b9d4296dee4bbf965f9911bf98_3 | Another important advantage of our kernel-based formulation is that it paves the way to a larger space of composing <cite>Transformer</cite>'s attention. | motivation |
04f6b9d4296dee4bbf965f9911bf98_4 | As an example, we propose a new variant of <cite>Transformer's</cite> attention which models the input as a product of symmetric kernels. | extends |
04f6b9d4296dee4bbf965f9911bf98_5 | <cite>Transformer</cite> <cite>(Vaswani et al., 2017 )</cite> is a relative new architecture which outperforms traditional deep learning models such as Recurrent Neural Networks (RNNs) (Sutskever et al., 2014) and Temporal Convolutional Networks (TCNs) (Bai et al., 2018) for sequence modeling tasks across neural machine translations <cite>(Vaswani et al., 2017)</cite> , language understanding (Devlin et al., 2018) , sequence prediction (Dai et al., 2019) , image generation (Child et al., 2019) , video activity classification (Wang et al., 2018) , music generation (Huang et al., 2018a) , and multimodal sentiment analysis (Tsai et al., 2019a) . | background |
04f6b9d4296dee4bbf965f9911bf98_6 | Instead of performing recurrence (e.g., RNN) or convolution (e.g., TCN) over the sequences, <cite>Transformer</cite> is a feed-forward model that concurrently processes the entire sequence. | background |
04f6b9d4296dee4bbf965f9911bf98_7 | At the core of the <cite>Transformer</cite> is its attention mechanism, which is proposed to integrate the dependencies between the inputs. | background |
04f6b9d4296dee4bbf965f9911bf98_8 | There are up to three types of attention within the full <cite>Transformer</cite> model as exemplified with neural machine translation application <cite>(Vaswani et al., 2017)</cite> : 1) Encoder self-attention considers the source sentence as input, generating a sequence of encoded representations, where each encoded token has a global dependency with other tokens in the input sequence. | background |
04f6b9d4296dee4bbf965f9911bf98_9 | In all cases, the <cite>Transformer's</cite> attentions follow the same general mechanism. | background |
04f6b9d4296dee4bbf965f9911bf98_10 | We note that this operation is orderagnostic to the permutation in the input se-quence (order is encoded with extra positional embedding <cite>(Vaswani et al., 2017</cite>; Shaw et al., 2018; Dai et al., 2019) ). | background |
04f6b9d4296dee4bbf965f9911bf98_11 | The above observation inspires us to connect <cite>Transformer's</cite> attention to kernel learning (Scholkopf and Smola, 2001) : they both concurrently and order-agnostically process all inputs by calculating the similarity between the inputs. | motivation |
04f6b9d4296dee4bbf965f9911bf98_12 | Therefore, in the paper, we present a new formulation for <cite>Transformer's</cite> attention via the lens of kernel. | motivation |
04f6b9d4296dee4bbf965f9911bf98_13 | Furthermore, our proposed formulation highlights naturally the main components of <cite>Transformer's</cite> attention, enabling a better understanding of this mechanism: recent variants of <cite>Transformers</cite> (Shaw et al., 2018; Huang et al., 2018b; Dai et al., 2019; Child et al., 2019; Wang et al., 2018; Tsai et al., 2019a) can be expressed through these individual components. | background |
04f6b9d4296dee4bbf965f9911bf98_15 | Next, we show that this new formulation allows us to explore new family of attention while at the same time offering a framework to categorize previous attention variants <cite>(Vaswani et al., 2017</cite>; Shaw et al., 2018; Huang et al., 2018b; Dai et al., 2019; Child et al., 2019; Wang et al., 2018; Tsai et al., 2019a) . | background |
04f6b9d4296dee4bbf965f9911bf98_17 | Unlike recurrent computation (Sutskever et al., 2014 ) (i.e., RNNs) and temporal convolutional computation (Bai et al., 2018 ) (i.e., TCNs), <cite>Transformer's</cite> attention is an order-agnostic operation given the order in the inputs <cite>(Vaswani et al., 2017)</cite> . | background |
04f6b9d4296dee4bbf965f9911bf98_18 | As a result, <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> introduced positional embedding to indicate the positional relation for the inputs. | background |
04f6b9d4296dee4bbf965f9911bf98_19 | Note that f i can be the word representation (in neural machine translation <cite>(Vaswani et al., 2017)</cite> ), a pixel in a frame (in video activity recognition (Wang et al., 2018) ), or a music unit (in music generation (Huang et al., 2018b) ). | background |
04f6b9d4296dee4bbf965f9911bf98_20 | t i can be a mixture of sine and cosine functions <cite>(Vaswani et al., 2017)</cite> or parameters that can be learned during back-propagation (Dai et al., 2019; Ott et al., 2019) . | background |
04f6b9d4296dee4bbf965f9911bf98_21 | Followed the definition by <cite>Vaswani et al. (2017)</cite> , we use queries(q)/keys(k)/values(v) to represent the inputs for the attention. | uses |
04f6b9d4296dee4bbf965f9911bf98_22 | Given the introduced notation, the attention mechanism in original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> can be presented as: | background |
04f6b9d4296dee4bbf965f9911bf98_23 | Recent work (Shaw et al., 2018; Dai et al., 2019; Huang et al., 2018b; Child et al., 2019; Parmar et al., 2018; Tsai et al., 2019a) proposed modifications to the <cite>Transformer</cite> for the purpose of better modeling inputs positional relation (Shaw et al., 2018; Huang et al., 2018b; Dai et al., 2019) , appending additional keys in S x k (Dai et al., 2019) , modifying the mask applied to Eq. (1) (Child et al., 2019) , or applying to distinct feature types Parmar et al., 2018; Tsai et al., 2019a) . | background |
04f6b9d4296dee4bbf965f9911bf98_24 | The filtering function M (⋅, ⋅) plays as the role of the mask in decoder self-attention <cite>(Vaswani et al., 2017)</cite> . | background |
04f6b9d4296dee4bbf965f9911bf98_26 | Note that the kernel form k(x q , x k ) in the original <cite>Transformer</cite> <cite>(Vaswani et al., 2017 )</cite> is a asymmetric exponential kernel with additional mapping W q and W k (Wilson et al., 2016; Li et al., 2017) 2 . | background |
04f6b9d4296dee4bbf965f9911bf98_27 | In addition to modeling sequences like word sentences <cite>(Vaswani et al., 2017)</cite> or music signals (Huang et al., 2018b) , the <cite>Transformer</cite> can also be applied to images (Parmar et al., 2018) , sets , and multimodal sequences (Tsai et al., 2019a) . | background |
04f6b9d4296dee4bbf965f9911bf98_28 | Due to distinct data types, these applications admit various kernel feature space: <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019) : | background |
04f6b9d4296dee4bbf965f9911bf98_33 | Positional Embedding k(⋅, ⋅) The kernel construction on X = (F × T ) has distinct design in variants of <cite>Transformers</cite> <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019; Huang et al., 2018b; Shaw et al., 2018; Child et al., 2019) . | background |
04f6b9d4296dee4bbf965f9911bf98_34 | (i) Absolute Positional Embedding <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019; Ott et al., 2019) : For the original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> , each t i is represented by a vector with each dimension being sine or cosine functions. | background |
04f6b9d4296dee4bbf965f9911bf98_35 | (ii) Relative Positional Embedding in <cite>Transformer</cite>-XL (Dai et al., 2019) : t represents the indicator of the position in the sequence, and the kernel is chosen to be asymmetric of mixing sine and cosine functions: | background |
04f6b9d4296dee4bbf965f9911bf98_36 | with k fq t q , t k being an asymmetric kernel with coefficients inferred by f q : log k fq t q , t k = ∑ (iii) Relative Positional Embedding of Shaw et al. (2018) and Music <cite>Transformer</cite> (Huang et al., 2018b) : t ⋅ represents the indicator of the position in the sequence, and the kernel is modified to be indexed by a look-up table: | background |
04f6b9d4296dee4bbf965f9911bf98_37 | The current <cite>Transformers</cite> consider two different value function construction: <cite>(Vaswani et al., 2017)</cite> and Sparse <cite>Transformer</cite> (Child et al., 2019) : | background |
04f6b9d4296dee4bbf965f9911bf98_39 | In the following, we itemize the corresponding designs for the variants in <cite>Transformers</cite>: (i) Encoder Self-Attention in original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> : For each query x q in the encoded sequence, M (x q , S x k ) = S x k contains the keys being all the tokens in the encoded sequence. | background |
04f6b9d4296dee4bbf965f9911bf98_40 | (ii) Encoder-Decoder Attention in original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> : For each query x q in decoded sequence, M (x q , S x k ) = S x k contains the keys being all the tokens in the encoded sequence. | background |
04f6b9d4296dee4bbf965f9911bf98_41 | (iii) Decoder Self-Attention in original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> : For each query x q in the decoded sequence, M (x q , S x k ) returns a subset of S x k (M (x q , S x k ) ⊂ S x k ). | background |
04f6b9d4296dee4bbf965f9911bf98_42 | Since the decoded sequence is the output for previous timestep, the query at position i can only observe the keys being the tokens that are decoded with position < i. For convenience, let us define S 1 as the set returned by original <cite>Transformer</cite> <cite>(Vaswani et al., 2017 )</cite> from M (x q , S x k ), which we will use it later. | background |
04f6b9d4296dee4bbf965f9911bf98_43 | (iv) Decoder Self-Attention in <cite>Transformer</cite>-XL (Dai et al., 2019) : For each query x q in the decoded sequence, M (x q , S x k ) returns a set containing S 1 and additional memories (M (x q , S x k ) = S 1 + S mem , M (x q , S x k ) ⊃ S 1 ). | background |
04f6b9d4296dee4bbf965f9911bf98_44 | (v) Decoder Self-Attention in Sparse <cite>Transformer</cite> (Child et al., 2019) : For each query x q in the decoded sentence, M (x q , S x k ) returns a subset of S 1 (M (x q , S x k ) ⊂ S 1 ). | background |
04f6b9d4296dee4bbf965f9911bf98_45 | For performance-wise comparisons, <cite>Transformer</cite>-XL (Dai et al., 2019) showed that, the additional memories in M (x q , S x k ) are able to capture longer-term dependency than the original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> and hence results in better performance. | background |
04f6b9d4296dee4bbf965f9911bf98_46 | Sparse <cite>Transformer</cite> (Child et al., 2019) showed that although having much fewer elements in M (x q , S x k ), if the elements are carefully chosen, the attention can still reach the same performance as <cite>Transformer</cite>-XL (Dai et al., 2019) . | background |
04f6b9d4296dee4bbf965f9911bf98_48 | Note that t i here is chosen as the mixture of sine and cosine functions as in the prior work <cite>(Vaswani et al., 2017</cite>; Ott et al., 2019) . | uses |
04f6b9d4296dee4bbf965f9911bf98_50 | We conduct experiments on neural machine translation (NMT) and sequence prediction (SP) tasks since these two tasks are commonly chosen for studying <cite>Transformers</cite> <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019) . | background |
04f6b9d4296dee4bbf965f9911bf98_52 | Similar to prior work <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019) , we report BLEU score for NMT and perplexity for SP. | similarities |
04f6b9d4296dee4bbf965f9911bf98_53 | Other than manipulating the kernel choice of the non-positional features, we fix the configuration by <cite>Vaswani et al. (2017)</cite> for NMT and the configuration by Dai et al. (2019) for SP. | uses differences |
04f6b9d4296dee4bbf965f9911bf98_54 | Note that, for fairness, other than manipulating the kernel choice of the non-positional features, we fix the configuration by <cite>Vaswani et al</cite>. <cite>(Vaswani et al., 2017)</cite> for NMT and the configuration by Dai et al. (Dai et al., 2019) for SP. | uses differences |
04f6b9d4296dee4bbf965f9911bf98_56 | The need of the positional embedding (PE) in the attention mechanism is based on the argument that the attention mechanism is an order-agnostic (or, permutation equivariant) operation <cite>(Vaswani et al., 2017</cite>; Shaw et al., 2018; Huang et al., 2018b; Dai et al., 2019; Child et al., 2019) . | background |
04f6b9d4296dee4bbf965f9911bf98_57 | For clarification, we are not attacking the claim made by the prior work <cite>(Vaswani et al.,</cite> 2017; Shaw et al., 2018; Huang et al., 2018b; Dai et al., 2019; Child et al., 2019 ), but we aim at providing a new look at the order-invariance problem when considering the attention mechanism with masks (masks refer to the set filtering function in our kernel formulation). | motivation |
04f6b9d4296dee4bbf965f9911bf98_58 | Denote Πas the set of all permutations over [n] = {1, ⋯, n}. A function f unc ∶ X n → Y n is permutation equivariant iff for any permutation π ∈ Π, f unc(πx) = πf unc(x). showed that the standard attention (encoder self-attention <cite>(Vaswani et al., 2017</cite>; Dai et al., 2019) ) is permutation equivariant. | background |
04f6b9d4296dee4bbf965f9911bf98_60 | Nonetheless, the performance is slightly better than considering PE from the original <cite>Transformer</cite> <cite>(Vaswani et al., 2017)</cite> . | differences |
04f6b9d4296dee4bbf965f9911bf98_61 | Other than relating <cite>Transformer's</cite> attention mechanism with kernel methods, the prior work (Wang et al., 2018; Shaw et al., 2018; Tsai et al., 2019b ) related the attention mechanism with graph-structured learning. | background |
04f6b9d4296dee4bbf965f9911bf98_62 | In addition to the fundamental difference between graph-structured learning and kernel learning, the prior work (Wang et al., 2018; Shaw et al., 2018; Tsai et al., 2019b) focused on presenting <cite>Transformer</cite> for its particular application (e.g., video classification (Wang et al., 2018) and neural machine translation (Shaw et al., 2018) ). | background |
04f6b9d4296dee4bbf965f9911bf98_63 | Alternatively, our work focuses on presenting a new formulation of <cite>Transformer's</cite> attention mechanism that gains us the possibility for understanding the attention mechanism better. | motivation |
04f6b9d4296dee4bbf965f9911bf98_64 | In this paper, we presented a kernel formulation for the attention mechanism in <cite>Transformer</cite>, which allows us to define a larger space for designing attention. | differences |
0526911ab71c85bfa4a20b630f34ae_0 | Morphologically rich languages like Arabic <cite>(Beesley, K. 1996</cite> ) present significant challenges to many natural language processing applications as the one described above because a word often conveys complex meanings decomposable into several morphemes (i.e. prefix, stem, suffix) . | background |
0526911ab71c85bfa4a20b630f34ae_1 | Morphologically rich languages like Arabic <cite>(Beesley, K. 1996</cite> ) present significant challenges to many natural language processing applications as the one described above because a word often conveys complex meanings decomposable into several morphemes (i.e. prefix, stem, suffix) . By segmenting words into morphemes, we can improve the performance of natural language systems including machine translation (Brown et al. 1993 ) and information retrieval (Franz, M. and McCarley, S. 2002) . | extends |
05b53f9e0a347c4f47d0fd066538c7_0 | In order for the event mentions to be useful (i.e., for knowledge extraction tasks), it is important to determine their factual certainty so the actual event mentions can be retrieved (i.e., the event factuality prediction problem (EFP)). In this work, we focus on the recent regression formulation of EFP that aims to predict a real score in the range of [-3,+3 ] to quantify the occurrence possibility of a given event mention (Stanovsky et al., 2017;<cite> Rudinger et al., 2018)</cite> . | background |
05b53f9e0a347c4f47d0fd066538c7_1 | EFP is a challenging problem as different context words might jointly participate to reveal the factuality of the event mentions (i.e., the cue words), possibly located at different parts of the sentences and scattered far away from the anchor words of the events. There are two major mechanisms that can help the models to identify the cue words and link them to the anchor words, i.e., the syntactic trees (i.e., the dependency trees) and the semantic information<cite> (Rudinger et al., 2018)</cite> . | background |
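The preview above shows three string columns: `id` (a sample identifier), `x` (a citation context containing `<cite>` markers around the cited work), and `y` (one or more space-separated citation-intent labels such as `background`, `uses`, `motivation`, `extends`, `similarities`, or `differences`). Below is a minimal sketch of how a dataset with this schema could be loaded and the label column split into a list; the repository path and split name are placeholder assumptions, not the actual identifiers.

```python
# Minimal sketch: load a dataset with the id / x / y schema shown above and
# split the space-separated labels in `y` into a list.
# The dataset path and split name are placeholder assumptions.
from datasets import load_dataset

ds = load_dataset("<username>/<dataset-name>", split="train")  # placeholder path

def split_labels(example):
    # `y` holds one or more space-separated intent labels, e.g. "motivation background".
    example["labels"] = example["y"].split()
    return example

ds = ds.map(split_labels)

# Inspect one row: `x` is the citation context, `labels` the parsed intents.
print(ds[0]["id"], ds[0]["labels"])
print(ds[0]["x"][:120])
```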