id | title | abstract | full_text | qas | figures_and_tables | question | retrieval_gt | answer_gt | __index_level_0__
---|---|---|---|---|---|---|---|---|---|
2004.04060 | Self-Attention Gazetteer Embeddings for Named-Entity Recognition | Recent attempts to ingest external knowledge into neural models for named-entity recognition (NER) have exhibited mixed results. In this work, we present GazSelfAttn, a novel gazetteer embedding approach that uses self-attention and match span encoding to build enhanced gazetteer embeddings. In addition, we demonstrate how to build gazetteer resources from the open source Wikidata knowledge base. Evaluations on the CoNLL-03 and Ontonotes 5 datasets show F1 improvements over the baseline model from 92.34 to 92.86 and from 89.11 to 89.32 respectively, achieving performance comparable to large state-of-the-art models. | {
"paragraphs": [
[
"Named-entity recognition (NER) is the task of tagging relevant entities such as person, location and organization in unstructured text. Modern NER has been dominated by neural models BIBREF0, BIBREF1 combined with contextual embeddings from language models (LMs) BIBREF2, BIBREF3, BIBREF4. The LMs are pre-trained on large amounts of unlabeled text which allows the NER model to use the syntactic and semantic information captured by the LM embeddings. On the popular benchmark datasets CoNLL-03 BIBREF5 and Ontonotes 5 BIBREF6, neural models with LMs achieved state-of-the-art results without gazetteers features, unlike earlier approaches that heavily relied on them BIBREF7. Gazetteers are lists that contain entities such as cities, countries, and person names. The gazetteers are matched against unstructured text to provide additional features to the model. Data for building gazetteers is available for multiple language from structured data resources such as Wikipedia, DBpedia BIBREF8 and Wikidata BIBREF9.",
"In this paper, we propose GazSelfAttn, a novel gazetteer embedding approach that uses self-attention and match span encoding to build enhanced gazetteer representation. GazSelfAttn embeddings are concatenated with the input to a LSTM BIBREF10 or CNN BIBREF11 sequence layer and are trained end-to-end with the model. In addition, we show how to extract general gazetteers from the Wikidata, a structured knowledge-base which is part of the Wikipedia project.",
"Our contributions are the following:",
"[topsep=1pt, leftmargin=15pt, itemsep=-1pt]",
"We propose novel gazetteer embeddings that use self-attention combined with match span encoding.",
"We enhance gazetteer matching with multi-token and single-token matches in the same representation.",
"We demonstrate how to use Wikidata with entity popularity filtering as a resource for building gazetteers.",
"GazSelfAttn evaluations on CoNLL-03 and Ontonotes 5 datasets show F$_1$ score improvement over baseline model from 92.34 to 92.86 and from 89.11 to 89.32 respectively. Moreover, we perform ablation experiments to study the contribution of the different model components."
],
[
"Recently, researchers added gazetteers to neural sequence models. BIBREF12 demonstrated small improvements on large datasets and bigger improvements on small datasets. BIBREF13 proposed to train a gazetteer attentive network to learn name regularities and spans of NER entities. BIBREF14 demonstrated that trained gazetteers scoring models combined with hybrid semi-Markov conditional random field (HSCRF) layer improve overall performance. The HSCRF layer predicts a set of candidate spans that are rescored using a gazetteer classifier model. The HSCRF approach differs from the common approach of including gazetteers as an embedding in the model. Unlike the work of BIBREF14, our GazSelfAttn does not require training a separate gazetteer classifier and the HSCRF layer, thus our approach works with any standard output layer such as conditional random field (CRF) BIBREF15.",
"BIBREF16 proposed an auto-encoding loss with hand-crafted features, including gazetteers, to improve accuracy. However, they did not find that gazetteer features significantly improve accuracy.",
"Extracting gazetteers from structure knowledge sources was investigated by BIBREF17 and BIBREF18. They used Wikipedia's instance of relationship as a resource for building gazetteers with classical machine learning models. Compared to Wikidata, the data extracted from Wikipedia is smaller and noisier.",
"Similar to this paper, BIBREF19 used Wikidata as a gazetteer resource. However, they did not use entity popularity to filter ambiguous entities and their gazetteer model features use simple one-hot encoding."
],
[
"We add GazSelfAttn embeddings to the popular Neural CRF model architecture with ELMo LM embeddings from BIBREF2. Figure FIGREF5 depicts the model, which consists of Glove word embeddings BIBREF20, Char-CNN BIBREF21, BIBREF1, ELMo embeddings, Bi-LSTM, and output CRF layer with BILOU (Beginning Inside Last Outside Unit) labels encoding BIBREF22. Note that, we concatenate the gazetteer embeddings to the Bi-LSTM input."
],
[
"In this section, we address the issue of building a high-quality gazetteer dictionary $M$ that maps entities to types, e.g., $M$[Andy Murray] $\\rightarrow $ Person. In this work, we use Wikidata, an open source structured knowledge-base, as the source of gazetteers. Although, Wikidata and DBpedia are similar knowledge bases, we choose Wikidata because, as of 2019, it provides data on around 45 million entities compared to around 5 million in DBpedia.",
"Wikidata is organized as entities and properties. Entities can be concrete (Boston, NATO, Michael Jordan) and abstract (City, Organization, Person). Properties describe an entity relations. For example, Boston instance_of City and Boston part_of Massachusetts; both instance_of and part_of are properties. Also, each entity is associated with sitelink count which tacks mentions of the entity on Wikimedia website and can be used as proxy for popularity.",
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data.",
"The Wikidata types that we obtain after processing the Wikidata dumps are fine-grained. However, certain NER tasks require coarse-grained types. For instance, CoNLL-03 task has a single Location label that consists of cities, states, countries, and other geographic location. To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure induced by the subclass_of property. Examples of subclass_of hierarchies in Wikidata are: City $\\rightarrow $ Human Settlement $\\rightarrow $ Geographic Location, and Artist $\\rightarrow $ Creator $\\rightarrow $ Person. We change the types granularity depending on the NER task by traversing up, from fine-grained types to the target coarse-grained types. For instance, we merge the Artist and Painter types to Person, and the River and Mountain types to Location."
],
[
"Gazetteer matching is the process of assigning gazetteer features to sentence tokens. Formally, given a gazetteer dictionary $M$ that maps entities to types, and a sentence $S = (t_1, t_2, ..., t_n)$ with tokens $t_i$, we have to find the $m$ gazetteer types $\\lbrace g^1_i, g^2_i,..,g^m_i\\rbrace $ and spans $\\lbrace s^1_i, s^2_i,..,s^m_i\\rbrace $ for every token $t_i$. The set notation $\\lbrace $} indicates that multiple $m$ matches are allowed per token. The match span $\\lbrace s^j_i\\rbrace $ represents positional information which encodes multi-token matches. The match spans are encoded using a BILU (Beginning Inside Last Unit) tags, similar to the BILOU tags that we use to encode the NER labels.",
"In general, there are two methods for gazetteer matching: multi-token and single-token. Multi-token matching is searching for the longest segments of the sentence that are in $M$. For instance, given $M$[New York] $\\rightarrow $ State, $M$[New York City] $\\rightarrow $ City and the sentence “Yesterday in New York City”, the multi-token matcher assigns the City gazetteer type to the longest segment “New York City”. Single-token matching is searching to match any vocabulary word from a gazetteer type. In the earlier example, each word from the sentence is individually matched to the tokens in $M$, thus “New” and “York” are individually matched to both City and State, and “City” is matched only to City.",
"BIBREF12 research shows that both multi-token and single-token matching perform better on certain datasets. We propose to combine both methods: we tag the multi-token matches with BILU tags, and the single-token matches with a Single (S) tag. The single-token matches are used only if multi-token matches are not present. We consider that the single-token matches are high-recall low-precision, and multi-token matches are low-recall and high-precision. Thus, a combination of both works better than individually. Example sentences are: “Yesterday in New(City-B) York(City-I) City(City-L)”, and “Yesterday in York(City-S) City(City-S)” York City is marked with singles tag since $M$ does not have entities for “York City”, “York”, and “City”.",
"Note that gazetteer matching is unsupervised, i.e., we do not have a ground truth of correctly matched sentences for $M$. Furthermore, it is a hard task because of the many variations in writing and ambiguity of entities."
],
[
"px",
"Equations DISPLAY_FORM11- shows the gazetteer embedding $\\mathbf {g}_i$ computation for a token $t_i$. To compute $\\mathbf {g}_i$, given a set of $m$ gazetteer types $\\lbrace g^m_i\\rbrace $ and spans $\\lbrace s^m_i\\rbrace $, we execute the following procedure:",
"[topsep=1pt, leftmargin=15pt, itemsep=-1pt]",
"Equation DISPLAY_FORM11. We embed the sets $\\lbrace g^m_i\\rbrace $ and $\\lbrace s^m_i\\rbrace $ using the embedding matrices $\\mathbf {G}$ and $\\mathbf {S}$. Then, we do an element-wise addition, denoted $\\oplus $, of the corresponding types and spans embeddings to get a matrix $\\mathbf {E}_i$.",
"Equation . We compute $\\mathbf {A}_i$ using scaled dot-product self-attention BIBREF23, where $d$ is the dimensionality of the gazetteer embeddings. The attention contextualizes the embeddings with multiple gazetteer matches per token $t_i$.",
"Equation . To add model flexibility, we compute $\\mathbf {H}_i$ with a position-wise feed-forward layer and GELU activation BIBREF24.",
"Equation . Finally, we perform max pooling across the embeddings $\\mathbf {H}_i$ to obtain the final gazetteer embedding $\\mathbf {g}_i$."
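A PyTorch-style sketch of the four steps in the list above. The module name `GazetteerEmbedding` and the initialization details are hypothetical, and the self-attention is the plain scaled dot-product form (no separate query/key/value projections), which is one reasonable reading of the elided equations.

```python
# Sketch of the gazetteer embedding computation: embed types and spans,
# add element-wise, apply scaled dot-product self-attention and a
# position-wise feed-forward with GELU, then max-pool over the m matches.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazetteerEmbedding(nn.Module):
    def __init__(self, n_types, n_span_tags=5, d=128):   # B, I, L, U, S spans
        super().__init__()
        self.G = nn.Embedding(n_types, d)      # gazetteer type embeddings
        self.S = nn.Embedding(n_span_tags, d)  # match span embeddings
        self.W = nn.Linear(d, d)               # position-wise feed-forward
        self.d = d

    def forward(self, type_ids, span_ids):
        # type_ids, span_ids: (m,) indices of the m matches for one token t_i
        E = self.G(type_ids) + self.S(span_ids)                    # (m, d)
        A = torch.softmax(E @ E.t() / math.sqrt(self.d), -1) @ E   # self-attention
        H = F.gelu(self.W(A))                                      # FF + GELU
        return H.max(dim=0).values                                 # max-pool -> (d,)

emb = GazetteerEmbedding(n_types=20)
g_i = emb(torch.tensor([3, 7]), torch.tensor([0, 2]))  # two matches for t_i
print(g_i.shape)  # torch.Size([128])
```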
],
[
"To prevent the neural NER model from overfitting on the gazetteers, we use gazetteers dropout BIBREF25. We randomly set to zero gazetteer embeddings $\\mathbf {g}_i$, so the gazetteer vectors that are input to the LSTM become zero. We tune the gazetteer dropout hyperparameter on the validation set."
],
[
"Datasets. We evaluate on the English language versions of CoNLL-03 dataset BIBREF5 and the human annotated portion of the Ontonotes 5 BIBREF6 dataset. CoNLL-03 labels cover 4 entity types: person, location, organization, and miscellaneous. The Onotonotes 5 dataset is larger and its labels cover 18 types: person, NORP, facility, organization, GPE, location, product, event, work of art, law, language, date, time, percent, money, quantity, ordinal, cardinal.",
"px",
"Gazetteers. We use the Wikidata gazetteers with types merged to the granularity of the CoNLL-03 and Ononotes 5 datasets. We filter non-relevant types (e.g., genome names, disease) and get a total of one million records. For CoNLL-03 and Ontonotes 5, the percentage of entities covered by gazetteers are 96% and 78% respectively, and percentage of gazetteers wrongly assigned to non-entity tokens are 41% and 41.5% respectively.",
"Evaluation. We use the standard CoNLL evaluation script which reports entity F1 scores. The F1 scores are averages over 5 runs.",
"Configuration. We use the Bi-LSTM-CNN-CRF model architecture with ELMo language model embeddings from BIBREF2, which consist of 50 dim pre-trained Glove word embeddings BIBREF20, 128 dim Char-CNN BIBREF21, BIBREF1 embeddings with filter size of 3 and randomly initialized 16 dim char embeddings, 1024 pre-trained ELMo pre-trained embeddings, two layer 200 dim Bi-LSTM, and output CRF layer with BILOU (Beginning Inside Last Outside Unit) spans BIBREF22.",
"For the gazetteer embeddings, we use 128 dim for the embedding matrices $\\mathbf {G}$ and $\\mathbf {S}$, 128 dim output for $\\mathbf {W}$, which yields a gazetteer embedding $\\mathbf {g}_i$ with 128 dim. The parameters are randomly initialized and trained. We apply gazetteer dropout of 0.1 which we tuned on the development set; we tried values form 0.05 to 0.6.",
"All parameters except the ELMo embeddings are trained. We train using the Adam BIBREF26 optimizer with learning rate of 0.001 for 100 epochs. We use early stopping with patience 25 on the development set. Batch size of 64, dropout rate of 0.5 and L2 regularization of 0.1."
],
[
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate."
],
[
"Table TABREF22 shows ablation experiments. We remove components of the gazetteer embedding model from the Neural CRF model. In each experiment, we removed only the specified component. Ablations show decreased F$_1$ score on the development and test set if any of the components is removed. The highest degradation is when single matches are removed which underscores the importance of the combining the gazetteer matching techniques for NER. We observe that match span encoding is more important for the CoNLL-02 compared to Ononotes 5 because the former has more entities with multiple tokens. Removing the self-attention shows that self-attention is effective at combining information form multiple gazetteers.",
"In addition, we tried moving the gazetteer embeddings to the CRF layer and using the LSTM token embeddings as attention keys but the F$_1$ degraded significantly. We experimented with adding auto-encoding loss similar to BIBREF16 and multi-head self-attention. However, we did not observe F$_1$ score improvements and sometimes small degradations."
],
[
"We presented GazSelfAttn, a novel approach for gazetteer embeddings that uses self-attention and match span positions. Evaluation results of GazSelfAttn show improvement compared to competitive baselines and state-of-the-art models on multiple datasets.",
"For future work we would like to evaluate GazSelfAttn on non-English language datasets and improve the multi-token gazetteer matching with fuzzy string matching. Also, we would like to explore transfer learning of gazetteer embeddings from high-resource to low-resource setting."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach ::: Model Architecture",
"Approach ::: Gazetteers",
"Approach ::: Gazetteer Matching",
"Approach ::: Gazetteer Embeddings",
"Approach ::: Gazetteer Dropout",
"Experiments ::: Setup",
"Experiments ::: Results",
"Experiments ::: Ablation study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1ea576aad91b5430103ecab12ed5aa29ab5203e0",
"490677d61ee975ea37840e57b257ff8a4e9d569d",
"ec718f58eb9713f35d2ec3933c4eccbd5ea54244"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5.",
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate."
],
"extractive_spans": [],
"free_form_answer": "Average 92.87 for CoNLL-01 and Average 8922 for Ontonotes 5",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5.",
"The top part of the table shows recently published results. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate."
],
"extractive_spans": [],
"free_form_answer": "Akbik et al. (2019) - 89.3 on Ontonotes 5\nBaevski et al. (2019) 93.5 on CoNLL-03",
"highlighted_evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate.",
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5."
],
"extractive_spans": [],
"free_form_answer": "93.5",
"highlighted_evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining.",
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"6be704c4e6911d751181a05d2da7b0ff2435abb9",
"8eb59241621542c33f0f7b1f56a599d02fb294eb",
"8f5ffc990e9d323886e7fad2ab9b161c7db8b075"
],
"answer": [
{
"evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate."
],
"extractive_spans": [
"Neural CRF model with and without ELMo embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
" We experiment with the Neural CRF model with and without ELMo embeddings. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate.",
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5."
],
"extractive_spans": [
"Neural CRF model with and without ELMo embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"The experimental results for NER are summarized in Table TABREF20. ",
"The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. ",
"FLOAT SELECTED: Table 2: Results on CoNLL-03 and OntoNotes 5."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experimental results for NER are summarized in Table TABREF20. The top part of the table shows recently published results. BIBREF14's work is using gazetteers with HSCRF and BIBREF4's work is using the Flair language model which is much larger than ELMo. BIBREF27 is the current state-of-the-art language model that uses cloze-driven pretraining. The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings. Including ELMo embeddings the CoNLL-03 and Ontonotes 5, F$_1$ score improves from 92.34 to 92.86 and 89.11 to 89.32 respectively. Without ELMo embeddings the F$_1$ score improves from 90.42 to 91.12 and 86.63 to 87 respectively. We observe that GazSelfAttn relative improvements are similar with and without ELMo embeddings. We obtain slightly better CoNLL-03 F$_1$ score compared to BIBREF14 work that uses the HSCRF model, and we match the Ononotes 5 F$_1$ scores of BIBREF4 that uses a much bigger model. BIBREF14 Ononotes 5 results use subset of the dataset labels and are not comparable. Note that because of computation constrains, we did not perform extensive hyperparameter tuning except for the gazetteer dropout rate."
],
"extractive_spans": [
"Neural CRF model with and without ELMo embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"The bottom part of the table is shows our baseline models and results with included gazetteers. We experiment with the Neural CRF model with and without ELMo embeddings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3d35c72cf37eeca0abddd993b50a24a800a246b0",
"87781635c52f98c73167bfc7564a2e61de029018",
"926bb1186dbb80795a693315df800ec1de863499"
],
"answer": [
{
"evidence": [
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data.",
"The Wikidata types that we obtain after processing the Wikidata dumps are fine-grained. However, certain NER tasks require coarse-grained types. For instance, CoNLL-03 task has a single Location label that consists of cities, states, countries, and other geographic location. To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure induced by the subclass_of property. Examples of subclass_of hierarchies in Wikidata are: City $\\rightarrow $ Human Settlement $\\rightarrow $ Geographic Location, and Artist $\\rightarrow $ Creator $\\rightarrow $ Person. We change the types granularity depending on the NER task by traversing up, from fine-grained types to the target coarse-grained types. For instance, we merge the Artist and Painter types to Person, and the River and Mountain types to Location."
],
"extractive_spans": [
"process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet",
"Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long",
"we use the sitelink count to keep the six most popular types",
"To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure"
],
"free_form_answer": "",
"highlighted_evidence": [
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long.",
" If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data.",
"To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure induced by the subclass_of property."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data.",
"The Wikidata types that we obtain after processing the Wikidata dumps are fine-grained. However, certain NER tasks require coarse-grained types. For instance, CoNLL-03 task has a single Location label that consists of cities, states, countries, and other geographic location. To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure induced by the subclass_of property. Examples of subclass_of hierarchies in Wikidata are: City $\\rightarrow $ Human Settlement $\\rightarrow $ Geographic Location, and Artist $\\rightarrow $ Creator $\\rightarrow $ Person. We change the types granularity depending on the NER task by traversing up, from fine-grained types to the target coarse-grained types. For instance, we merge the Artist and Painter types to Person, and the River and Mountain types to Location."
],
"extractive_spans": [],
"free_form_answer": "Extract entity type tuples at appropriate level of granularity depending on the NER task.",
"highlighted_evidence": [
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data.\n\nThe Wikidata types that we obtain after processing the Wikidata dumps are fine-grained. However, certain NER tasks require coarse-grained types. For instance, CoNLL-03 task has a single Location label that consists of cities, states, countries, and other geographic location. To move from fine-grained to coarse-grained types, we use the Wikidata hierarchical structure induced by the subclass_of property. Examples of subclass_of hierarchies in Wikidata are: City $\\rightarrow $ Human Settlement $\\rightarrow $ Geographic Location, and Artist $\\rightarrow $ Creator $\\rightarrow $ Person. We change the types granularity depending on the NER task by traversing up, from fine-grained types to the target coarse-grained types. For instance, we merge the Artist and Painter types to Person, and the River and Mountain types to Location."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we address the issue of building a high-quality gazetteer dictionary $M$ that maps entities to types, e.g., $M$[Andy Murray] $\\rightarrow $ Person. In this work, we use Wikidata, an open source structured knowledge-base, as the source of gazetteers. Although, Wikidata and DBpedia are similar knowledge bases, we choose Wikidata because, as of 2019, it provides data on around 45 million entities compared to around 5 million in DBpedia.",
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data."
],
"extractive_spans": [
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State."
],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we use Wikidata, an open source structured knowledge-base, as the source of gazetteers. Although, Wikidata and DBpedia are similar knowledge bases, we choose Wikidata because, as of 2019, it provides data on around 45 million entities compared to around 5 million in DBpedia.",
"To extract gazetteers from Wikidata, we process the official dumps into tuples of entity and type based only on the left and right part of the instance_of triplet, example resulting tuples are Boston $\\rightarrow $ City and Massachusetts $\\rightarrow $ State. Each entity is associated with a set of aliases, we keep only the aliases that are less than seven tokens long. Example aliases for Boston are “Beantown” and “The Cradle of Liberty”. If there are multiple types per alias, we use the sitelink count to keep the six most popular types. The sitelink filtering is important to reduce the infrequent meanings of an entity in the gazetteer data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the performance of large state-of-the-art models on these datasets?",
"What is used as a baseline model?",
"How do they build gazetter resources from Wikipedia knowlege base?"
],
"question_id": [
"2fec84a62b4028bbe6500754d9c058eefbc24d9a",
"2803709fba74e6098aae145abcbf0e9a3f4c35e5",
"ec39120fb879ae10452d3f244e1e32237047005a"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Model architecture with gazetteer embeddings.",
"Table 1: Dataset sizes in number of sentences.",
"Table 2: Results on CoNLL-03 and OntoNotes 5.",
"Table 3: Ablation study results on CoNLL-03 and OntoNotes 5. “- span encoding” removes the BILU match span encoding leaving only the gazetteer types. “- self attention” removes the self-attention. “- uncased matches” removes the uncased matches."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"What is the performance of large state-of-the-art models on these datasets?",
"How do they build gazetter resources from Wikipedia knowlege base?"
] | [
[
"2004.04060-4-Table2-1.png",
"2004.04060-Experiments ::: Results-0"
],
[
"2004.04060-Approach ::: Gazetteers-3",
"2004.04060-Approach ::: Gazetteers-0",
"2004.04060-Approach ::: Gazetteers-2"
]
] | [
"93.5",
"Extract entity type tuples at appropriate level of granularity depending on the NER task."
] | 86 |
 1807.08089 | Phonetic-and-Semantic Embedding of Spoken Words with Applications in Spoken Content Retrieval | Word embedding or Word2Vec has been successful in offering semantics for text words learned from the context of words. Audio Word2Vec was shown to offer phonetic structures for spoken words (signal segments for words) learned from signals within spoken words. This paper proposes a two-stage framework to perform phonetic-and-semantic embedding on spoken words considering the context of the spoken words. Stage 1 performs phonetic embedding with speaker characteristics disentangled. Stage 2 then performs semantic embedding in addition. We further propose to evaluate the phonetic-and-semantic nature of the audio embeddings obtained in Stage 2 by parallelizing with text embeddings. In general, phonetic structure and semantics inevitably disturb each other. For example, the words "brother" and "sister" are close in semantics but very different in phonetic structure, while the words "brother" and "bother" are the other way around. But phonetic-and-semantic embedding is attractive, as shown in the initial experiments on spoken document retrieval. Not only can spoken documents including the spoken query be retrieved based on the phonetic structures, but spoken documents semantically related to the query but not including the query can also be retrieved based on the semantics. | {
"paragraphs": [
[
"Word embedding or Word2Vec BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 has been widely used in the area of natural language processing BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , in which text words are transformed into vector representations of fixed dimensionality BIBREF11 , BIBREF12 , BIBREF13 . This is because these vector representations carry plenty of semantic information learned from the context of the considered words in the text training corpus. Similarly, audio Word2Vec has also been proposed in the area of speech signal processing, in which spoken words (signal segments for words without knowing the underlying word it represents) are transformed into vector representations of fixed dimensionality BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . These vector representations carry the phonetic structures of the spoken words learned from the signals within the spoken words, and have been shown to be useful in spoken term detection, in which the spoken terms are detected simply based on the phonetic structures. Such Audio Word2Vec representations do not carry semantics, because they are learned from individual spoken words only without considering the context.",
"Audio Word2Vec was recently extended to Segmental Audio Word2Vec BIBREF25 , in which an utterance can be automatically segmented into a sequence of spoken words BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 and then transformed into a sequence of vectors of fixed dimensionality by Audio Word2Vec, and the spoken word segmentation and Audio Word2Vec can be jointly trained from an audio corpus. In this way the Audio Word2Vec was upgraded from word-level to utterance-level. This offers the opportunity for Audio Word2Vec to include semantic information in addition to phonetic structures, since the context among spoken words in utterances bring semantic information. This is the goal of this work, and this paper reports the first set of results towards such a goal.",
"In principle, the semantics and phonetic structures in words inevitably disturb each other. For example, the words “brother\" and “sister\" are close in semantics but very different in phonetic structure, while the words “brother\" and “bother\" are close in phonetic structure but very different in semantics. This implies the goal of embedding both phonetic structures and semantics for spoken words is naturally very challenging. Text words can be trained and embedded as vectors carrying plenty of semantics because the phonetic structures are not considered at all. On the other hand, because spoken words are just a different version of representations for text words, it is also natural to believe they do carry some semantic information, except disturbed by phonetic structures plus some other acoustic factors such as speaker characteristics and background noise BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . So the goal of embedding spoken words to carry both phonetic structures and semantics is possible, although definitely hard.",
"But a nice feature of such embeddings is that they may include both phonetic structures and semantics BIBREF36 , BIBREF37 . A direct application for such phonetic-and-semantic embedding of spoken words is spoken document retrieval BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 . This task is slightly different from spoken term detection, in the latter case spoken terms are simply detected based on the phonetic structures. Here the goal of the task is to retrieve all spoken documents (sets of consecutive utterances) relevant to the spoken query, which may or may not include the query. For example, for the spoken query of “President Donald Trump\", not only those documents including the spoken query should be retrieved based on the phonetic structures, but those documents including semantically related words such as “White House\" and “trade policy\", but not necessarily “President Donald Trump\", should also be retrieved. This is usually referred to as “semantic retrieval\", which can be achieved by the phonetic-and-semantic embedding discussed here.",
"This paper proposes a two-stage framework of phonetic-and-semantic embedding for spoken words. Stage 1 performs phonetic embedding but with speaker characteristics disentangled using separate phonetic and speaker encoders and a speaker discriminator. Stage 2 then performs semantic embedding in addition. We further propose to evaluate the phonetic-and-semantic nature of the audio embeddings obtained in Stage 2 by parallelizing with text embeddings BIBREF43 , BIBREF44 . Very encouraging results including those for an application task of spoken document retrieval were obtained in the initial experiments."
],
[
"The proposed framework of phonetic-and-semantic embedding of spoken words consists of two stages:",
"Stage 1 - Phonetic embedding with speaker characteristics disentangled.",
"Stage 2 - Semantic embedding over phonetic embeddings obtained in Stage 1.",
"In addition, we propose an approach for parallelizing the audio and text embeddings to be used for evaluating the phonetic and semantic information carried by the audio embeddings. These are described in Subsections SECREF2 , SECREF11 and SECREF14 respectively."
],
[
"A text word with a given phonetic structure corresponds to infinite number of audio signals with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. All the latter acoustic factors are jointly referred to as speaker characteristics here for simplicity, which obviously disturbs the goal of phonetic-and-semantic embedding. So Stage 1 is to obtain phonetic embeddings only with speaker characteristics disentangled.",
"Also, because the training of phonetic-and-semantic embedding is challenging, in the initial effort we slightly simplify the task by assuming all training utterances have been properly segmented into spoken words. Because there exist many approaches for segmenting utterances automatically BIBREF25 , and automatic segmentation plus phonetic embedding of spoken words has been successfully trained and reported before BIBREF25 , such an assumption is reasonable here.",
"We denote the audio corpus as INLINEFORM0 , which consists of INLINEFORM1 spoken words, each represented as INLINEFORM2 , where INLINEFORM3 is the acoustic feature vector for the tth frame and INLINEFORM4 is the total number of frames in the spoken word. The goal of Stage 1 is to disentangle the phonetic structure and speaker characteristics in acoustic features, and extract a vector representation for the phonetic structure only.",
"As shown in the middle of Figure FIGREF3 , a sequence of acoustic features INLINEFORM0 is entered to a phonetic encoder INLINEFORM1 and a speaker encoder INLINEFORM2 to obtain a phonetic vector INLINEFORM3 in orange and a speaker vector INLINEFORM4 in green. Then the phonetic and speaker vectors INLINEFORM5 , INLINEFORM6 are used by the decoder INLINEFORM7 to reconstruct the acoustic features INLINEFORM8 . This phonetic vector INLINEFORM9 will be used in the next stage as the phonetic embedding. The two encoders INLINEFORM10 , INLINEFORM11 and the decoder INLINEFORM12 are jointly learned by minimizing the reconstruction loss below: DISPLAYFORM0 ",
"It will be clear below how to make INLINEFORM0 and INLINEFORM1 separately encode the phonetic structure and speaker characteristics.",
"The speaker encoder training requires speaker information for the spoken words. Assume the spoken word INLINEFORM0 is uttered by speaker INLINEFORM1 . When the speaker information is not available, we can simply assume that the spoken words in the same utterance are produced by the same speaker. As shown in the lower part of Figure FIGREF3 , INLINEFORM2 is learned to minimize the following loss: DISPLAYFORM0 ",
"In other words, if INLINEFORM0 and INLINEFORM1 are uttered by the same speaker ( INLINEFORM2 ), we want their speaker embeddings INLINEFORM3 and INLINEFORM4 to be as close as possible. But if INLINEFORM5 , we want the distance between INLINEFORM6 and INLINEFORM7 larger than a threshold INLINEFORM8 .",
"As shown in the upper right corner of Figure FIGREF3 , a speaker discriminator INLINEFORM0 takes two phonetic vectors INLINEFORM1 and INLINEFORM2 as input and tries to tell if the two vectors come from the same speaker. The learning target of the phonetic encoder INLINEFORM3 is to \"fool\" this speaker discriminator INLINEFORM4 , keeping it from discriminating the speaker identity correctly. In this way, only the phonetic structure information is learned in the phonetic vector INLINEFORM5 , while only the speaker characteristics is encoded in the speaker vector INLINEFORM6 . The speaker discriminator INLINEFORM7 learns to maximize INLINEFORM8 in ( EQREF9 ), while the phonetic encoder INLINEFORM9 learns to minimize INLINEFORM10 , DISPLAYFORM0 ",
"where INLINEFORM0 is a real number.",
"The optimization procedure of Stage 1 consists of four parts: (1) training INLINEFORM0 , INLINEFORM1 and INLINEFORM2 by minimizing INLINEFORM3 , (2) training INLINEFORM4 by minimizing INLINEFORM5 , (3) training INLINEFORM6 by minimizing INLINEFORM7 , and (4) training INLINEFORM8 by maximizing INLINEFORM9 . Parts (1)(2)(3) are jointly trained together, while iteratively trained with part (4) BIBREF45 ."
],
[
"As shown in Figure FIGREF12 , similar to the Word2Vec skip-gram model BIBREF0 , we use two encoders: semantic encoder INLINEFORM0 and context encoder INLINEFORM1 to embed the semantics over phonetic embeddings INLINEFORM2 obtained in Stage 1. On the one hand, given a spoken word INLINEFORM3 , we feed its phonetic vector INLINEFORM4 obtained from Stage 1 into INLINEFORM5 as in the middle of Figure FIGREF12 , producing the semantic embedding (in yellow) of the spoken word INLINEFORM6 . On the other hand, given the context window size INLINEFORM7 , which is a hyperparameter, if a spoken word INLINEFORM8 is in the context window of INLINEFORM9 , then its phonetic vector INLINEFORM10 is a context vector of INLINEFORM11 . For each context vector INLINEFORM12 of INLINEFORM13 , we feed it into the context encoder INLINEFORM14 in the upper part of Figure FIGREF12 , and the output is the context embedding INLINEFORM15 .",
"Given a pair of phonetic vectors INLINEFORM0 , the training criteria for INLINEFORM1 and INLINEFORM2 is to maximize the similarity between INLINEFORM3 and INLINEFORM4 if INLINEFORM5 and INLINEFORM6 are contextual, while minimizing the similarity otherwise. The basic idea is parallel to that of text Word2Vec. Two different spoken words having similar context should have similar semantics. Thus if two different phonetic embeddings corresponding to two different spoken words have very similar context, they should be close to each other after projected by the semantic encoder INLINEFORM7 . The semantic and context encoders INLINEFORM8 and INLINEFORM9 learn to minimize the semantic loss INLINEFORM10 as follows: DISPLAYFORM0 ",
"The sigmoid of dot product of INLINEFORM0 and INLINEFORM1 is used to evaluate the similarity. With ( EQREF13 ), if INLINEFORM2 and INLINEFORM3 are in the same context window, we want INLINEFORM4 and INLINEFORM5 to be as similar as possible. We also use the negative sampling technique, in which only some pairs INLINEFORM6 are randomly sampled as negative examples instead of enumerating all possible negative pairs."
],
[
"In this paper we further propose an approach of parallelizing a set of audio embeddings (for spoken words) with a set of text embeddings (for text words) which will be useful in evaluating the phonetic and semantic information carried by these embeddings.",
"Assume we have the audio embeddings for a set of spoken words INLINEFORM0 INLINEFORM1 , where INLINEFORM2 is the embedding obtained for a spoken word INLINEFORM3 and INLINEFORM4 is the total number of distinct spoken words in the audio corpus. On the other hand, assume we have the text embeddings INLINEFORM5 INLINEFORM6 , where INLINEFORM7 is the embedding of the INLINEFORM8 -th text word for the INLINEFORM9 distinct text words. Although the distributions of INLINEFORM10 and INLINEFORM11 in their respective spaces are not parallel, that is, a specific dimension in the space for INLINEFORM12 does not necessarily correspond to a specific dimension in the space for INLINEFORM13 , there should exist some consistent relationship between the two distributions. For example, the relationships among the words {France, Paris, Germany} learned from context should be consistent in some way, regardless of whether they are in text or spoken form. So we try to learn a mapping relation between the two spaces. It will be clear below such a mapping relation can be used to evaluate the phonetic and semantic information carried by the audio embeddings.",
"Mini-Batch Cycle Iterative Closest Point (MBC-ICP) BIBREF44 previously proposed as described below is used here. Given two sets of embeddings as mentioned above, INLINEFORM0 and INLINEFORM1 , they are first projected to their respective top INLINEFORM2 principal components by PCA. Let the projected sets of vectors of INLINEFORM3 and INLINEFORM4 be INLINEFORM5 and INLINEFORM6 respectively. If INLINEFORM7 can be mapped to the space of INLINEFORM8 by an affine transformation, the distributions of INLINEFORM9 and INLINEFORM10 would be similar after PCA BIBREF44 .",
"Then a pair of transformation matrices, INLINEFORM0 and INLINEFORM1 , is learned, where INLINEFORM2 transforms a vector INLINEFORM3 in INLINEFORM4 to the space of INLINEFORM5 , that is, INLINEFORM6 , while INLINEFORM7 maps a vector INLINEFORM8 in INLINEFORM9 to the space of INLINEFORM10 . INLINEFORM11 and INLINEFORM12 are learned iteratively by the algorithm proposed previously BIBREF44 .",
"In our evaluation as mentioned below, labeled pairs of the audio and text embeddings of each word is available, that is, we know INLINEFORM0 and INLINEFORM1 for each word INLINEFORM2 . So we can train the transformation matrices INLINEFORM3 and INLINEFORM4 using the gradient descent method to minimize the following objective function: DISPLAYFORM0 ",
"where the last two terms in ( EQREF15 ) are cycle-constraints to ensure that both INLINEFORM0 and INLINEFORM1 are almost unchanged after transformed to the other space and back. In this way we say the two sets of embeddings are parallelized."
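A sketch of the parallelizing objective with cycle constraints; the exact elided equation may differ in detail, and the weighting `lam` on the cycle terms and the full-batch gradient step shown here are simplifications (the experimental setup mentions mini-batches of 200 and a weight of 0.5, and the dimensionality 100 after PCA).

```python
# Sketch: learn T_ab (audio-PCA -> text-PCA) and T_ba (text-PCA -> audio-PCA)
# with cycle constraints keeping both round trips close to the identity.
import torch

def mbc_icp_loss(A, B, T_ab, T_ba, lam=0.5):
    """A, B: (n, k) paired audio / text embeddings after PCA."""
    fwd = ((A @ T_ab.t() - B) ** 2).sum(-1)
    bwd = ((B @ T_ba.t() - A) ** 2).sum(-1)
    cyc = (((A @ T_ab.t()) @ T_ba.t() - A) ** 2).sum(-1) \
        + (((B @ T_ba.t()) @ T_ab.t() - B) ** 2).sum(-1)
    return (fwd + bwd + lam * cyc).mean()

k, n = 100, 1000
A, B = torch.randn(n, k), torch.randn(n, k)
T_ab = torch.eye(k, requires_grad=True)
T_ba = torch.eye(k, requires_grad=True)
opt = torch.optim.SGD([T_ab, T_ba], lr=1e-3)
loss = mbc_icp_loss(A, B, T_ab, T_ba)
loss.backward(); opt.step()
print(loss.item())
```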
],
[
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
[
"In Stage 1, The phonetic encoder INLINEFORM0 , speaker encoder INLINEFORM1 and decoder INLINEFORM2 were all 2-layer GRUs with hidden layer size 128, 128 and 256, respectively. The speaker discriminator INLINEFORM3 is a fully-connected feedforward network with 2 hidden layers with size 128. The value of INLINEFORM4 we used in INLINEFORM5 in ( EQREF7 ) was set to 0.01.",
"In Stage 2, the two encoders INLINEFORM0 and INLINEFORM1 were both 2-hidden-layer fully-connected feedforward networks with size 256. The size of embedding vectors was set to be 128. The context window size was 5, and the negative sampling number was 5.",
"For parallelizing the text and audio embeddings in Subsection SECREF14 , we projected the embeddings to the top 100 principle components, so the affine transformation matrices were INLINEFORM0 . The mini-batch size was 200, and INLINEFORM1 in ( EQREF15 ) was set to 0.5."
],
[
"Each text word corresponds to many audio realizations in spoken form. So we first took the average of the audio embeddings for all those realizations to be the audio embedding for the spoken word considered. In this way, each word has a unique representation in either audio or text form.",
"We applied three different versions of audio embedding (AUD) on the top 1000, 3000 and 5000 words with the highest frequencies in LibriSpeech: (i) phonetic embedding only obtained in Stage 1 in Subsection SECREF2 (AUD-ph); (ii) phonetic-and-semantic embedding obtained by Stages 1 and 2 in Subsections SECREF2 , SECREF11 , except the speaker characteristics not disentangled (AUD-(ph-+se)), or INLINEFORM0 , INLINEFORM1 in ( EQREF7 ), ( EQREF9 ) not considered; (iii) complete phonetic-and-semantic embedding as proposed in this paper including Stages 1 and 2 (AUD-(ph+se)). So this is for ablation study.",
"On the other hand, we also obtained three different types of text embedding (TXT) on the same set of top 1000, 3000 and 5000 words. Type (a) Phonetic Text embedding (TXT-ph) considered precise phonetic structure but not context or semantics at all. This was achieved by a well-trained sequence-to-sequence autoencoder encoding the precise phoneme sequence of a word into a latent embedding. Type (b) Semantic Text embedding considered only context or semantics but not phonetic structure at all, and was obtained by a standard skip-gram model using one-hot representations as the input (TXT-(se,1h)). Type (c) Semantic and Phonetic Text embedding (TXT-(se,ph)) considered context or semantics as well as the precise phonetic structure, obtained by a standard skip-gram model but using the Type (a) Phonetic Text embedding (TXT-ph) as the input. So these three types of text embeddings provided the reference embeddings obtained from text and/or phoneme sequences, not disturbed by audio signals at all.",
"Now we can perform the transformation from the above three versions of audio embeddings (AUD-ph, AUD-(ph-+se), AUD-(ph+se)) to the above three types of text embeddings (TXT-ph, TXT-(se,1h), TXT-(se,ph)) by parallelizing the embeddings as described in Subsection SECREF14 . The evaluation metric used for this parallelizing test is the top-k nearest accuracy. If the audio embedding representation INLINEFORM0 of a word INLINEFORM1 is transformed to the text embedding INLINEFORM2 by INLINEFORM3 , and INLINEFORM4 is among the top-k nearest neighbors of the text embedding representation INLINEFORM5 of the same word, this transformation for word INLINEFORM6 is top-k-accurate. The top-k nearest accuracy is then the percentage of the words considered which are top-k-accurate.",
"The results of top-k nearest accuracies for k=1 and 10 are respectively listed in Tables TABREF18 and TABREF19 , each for 1000, 3000 and 5000 pairs of spoken and text words.",
"First look at the top part of Table TABREF18 for top-1 nearest accuracies for 1000 pairs of audio and text embeddings. Since column (a) (TXT-ph) considered precise phonetic structures but not semantics at all, the relatively high accuracies in column (a) for all three versions of audio embedding (i)(ii)(iii) implied the three versions of audio embedding were all rich of phonetic information. But when the semantics were embedded in (ii)(iii) (AUD-(ph-+se), AUD-(ph+se)), the phonetic structures were inevitably disturbed (0.519, 0.598 vs 0.637). On the other hand, column (b) (TXT-(se,1h)) considered only semantics but not phonetic structure at all, the relatively lower accuracies implied the three versions of audio embedding did bring some good extent of semantics, except (i) AUD-ph, but obviously weaker than the phonetic information in column (a). Also, the Stage 2 training in rows (ii)(iii) (AUD-(ph-+se), AUD-(ph+se)) gave higher accuracies than row (i) (AUD-ph) (0.339, 0.332 vs 0.124 in column (b)), which implied the Stage 2 training was successful. However, column (c) (TXT-(se,ph)) is for the text embedding considering both the semantic and phonetic information, so the two versions of phonetic-and-semantic audio embedding for rows (ii)(iii) had very close distributions (0.750, 0.800 in column (c)), or carried good extent of both semantics and phonetic structure. The above are made clearer by the numbers in bold which are the highest for each row, and the numbers in red which are the highest for each column. It is also clear that the speaker characteristics disentanglement is helpful, since row (iii) for AUD-(ph+se) was always better than row (ii) for AUD-(ph-+se).",
"Similar trends can be observed in the other parts of Table TABREF18 for 3000 and 5000 pairs, except the accuracies were lower, probably because for more pairs the parallelizing transformation became more difficult and less accurate. The only difference is that in these parts column (a) for TXT-ph had the highest accuracies, probably because the goal of semantic embedding for rows (ii)(iii) (AUD-(ph-+se), AUD-(ph+se)) was really difficult, and disturbed or even dominated by phonetic structures. Similar trends can be observed in Table TABREF19 for top-10 accuracies, obviously with higher numbers for top-10 as compared to those for top-1 in Table TABREF18 .",
"In Table TABREF20 , we list some examples of top-10 nearest neighbors in AUD-(ph+se) (proposed), AUD-ph (with phonetic structure) and TXT-(se,1h) (with semantics). The words in red are the common words for AUD-(ph+se) and AUD-ph, and the words in bold are the common words of AUD-(ph+se) and TXT-(se,1h). For example, the word “owned\" has two common semantically related words “learned\" and “known\" in the top-10 nearest neighbors of AUD-(ph+se) and TXT-(se,1h). The word “owned\" also has three common phonetically similar words “armed\", “own\" and “only\" in the top-10 nearest neighbors of AUD-(ph+se) and AUD-ph. This is even clearer for the function word “didn't\". These clearly illustrate the phonetic-and-semantic nature of AUD-(ph+se)."
],
[
"The goal here is to retrieve not only those spoken documents including the spoken query (e.g. “President Donald Trump\") based on the phonetic structures, but those including words semantically related to the query word (e.g. “White House\"). Below we show the effectiveness of the phonetic-and-semantc embedding proposed here in this application.",
"We used the 960 hours of “clean\" and “other\" parts of LibriSpeech dataset as the target archive for retrieval, which consisted of 1478 audio books with 5466 chapters. Each chapter included 1 to 204 utterances or 5 to 6529 spoken words. In our experiments, the queries were the keywords in the book titles, and the spoken documents were the chapters. We chose 100 queries out of 100 randomly selected book titles, and our goal was to retrieve query-relevant documents. For each query INLINEFORM0 , we defined two sets of query-relevant documents: The first set INLINEFORM1 consisted of chapters which included the query INLINEFORM2 . The second set INLINEFORM3 consisted of chapters whose content didn't contain INLINEFORM4 , but these chapters belonged to books whose titles contain INLINEFORM5 (so we assume these chapters are semantically related to INLINEFORM6 ). Obviously INLINEFORM7 and INLINEFORM8 were mutually exclusive, and INLINEFORM9 were the target for semantic retrieval, but couldn't be retrieved based on the phonetic structures only.",
"For each query INLINEFORM0 and each document INLINEFORM1 , the relevance score of INLINEFORM2 with respect to INLINEFORM3 , INLINEFORM4 , is defined as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the audio embedding of a word INLINEFORM1 in INLINEFORM2 . So ( EQREF25 ) indicates the documents INLINEFORM3 were ranked by the minimum distance between a word INLINEFORM4 in INLINEFORM5 and the query INLINEFORM6 . We used mean average precision (MAP) as the evaluation metric for the spoken document retrieval test.",
"We compared the retrieval results with two versions of audio embedding: AUD-(ph+se) and AUD-ph. The results are listed in Table TABREF21 for two definitions of groundtruth for the query-relevant documents: the union of INLINEFORM0 and INLINEFORM1 and INLINEFORM2 alone. As can be found from this table, AUD-(ph+se) offered better retrieval performance than AUD-ph in both rows. Note that those chapters in INLINEFORM3 in the second row of the table did not include the query INLINEFORM4 , so couldn't be well retrieved using phonetic embedding alone. That is why the phonetic-and-semantic embedding proposed here can help.",
"In Table TABREF22 , we list some chapters in INLINEFORM0 retrieved using AUD-(ph+se) embeddings to illustrate the advantage of the phonetic-and-semantic embeddings. In this table, column (a) is the query INLINEFORM1 , column (b) is the title of a book INLINEFORM2 which had chapters in INLINEFORM3 , column (c) is a certain chapter INLINEFORM4 in INLINEFORM5 , column (d) is the rank of INLINEFORM6 out of all chapters whose content didn't contain INLINEFORM7 , and column (e) is a part of the content in INLINEFORM8 where the word in red is the word in INLINEFORM9 with the highest similarity to INLINEFORM10 . For example, in the first row for the query “nations\", the chapter “Prometheus the Friend of Man\" of the book titled “Myths and Legends of All Nations\" is in INLINEFORM11 . The word “nations\" is not in the content of this chapter. However, because the word “king\" semantically related to “nations\" is in the content, this chapter was ranked the 13th among all chapters whose content didn't contain the word “nations\". This clearly verified why the semantics in the phonetic-and-semantic embeddings can remarkably improve the performance of spoken content retrieval."
],
[
"In this paper we propose a framework to embed spoken words into vector representations carrying both the phonetic structure and semantics of the word. This is intrinsically challenging because the phonetic structure and the semantics of spoken words inevitably disturbs each other. But this phonetic-and-semantic embedding nature is desired and attractive, for example in the application task of spoken document retrieval. A parallelizing transformation between the audio and text embeddings is also proposed to evaluate whether such a goal is achieved."
]
],
"section_name": [
"Introduction",
"Proposed Approach",
"Stage 1 - Phonetic Embedding with Speaker Characteristics Disentangled",
"Stage 2 - Semantic Embedding over Phonetic Embeddings Obtained in Stage 1",
"Parallelizing Audio and Text Embeddings for Evaluation Purposes",
"Dataset",
"Model Implementation",
"Evaluation by Parallelizing Audio and Text Embeddings",
"Results of Spoken Document Retrieval",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"e17831b46d7f8c925fd3add460383c597453d15a",
"f4c50da528085095b9bf946f337fba60a58aa09d",
"fdced2315a003e9d2e7888f7a27244d89a7de336"
],
"answer": [
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"extractive_spans": [
" LibriSpeech BIBREF46"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"extractive_spans": [
"LibriSpeech"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features.",
"We applied three different versions of audio embedding (AUD) on the top 1000, 3000 and 5000 words with the highest frequencies in LibriSpeech: (i) phonetic embedding only obtained in Stage 1 in Subsection SECREF2 (AUD-ph); (ii) phonetic-and-semantic embedding obtained by Stages 1 and 2 in Subsections SECREF2 , SECREF11 , except the speaker characteristics not disentangled (AUD-(ph-+se)), or INLINEFORM0 , INLINEFORM1 in ( EQREF7 ), ( EQREF9 ) not considered; (iii) complete phonetic-and-semantic embedding as proposed in this paper including Stages 1 and 2 (AUD-(ph+se)). So this is for ablation study."
],
"extractive_spans": [
"LibriSpeech"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. ",
"We applied three different versions of audio embedding (AUD) on the top 1000, 3000 and 5000 words with the highest frequencies in LibriSpeech: (i) phonetic embedding only obtained in Stage 1 in Subsection SECREF2 (AUD-ph); (ii) phonetic-and-semantic embedding obtained by Stages 1 and 2 in Subsections SECREF2 , SECREF11 , except the speaker characteristics not disentangled (AUD-(ph-+se)), or INLINEFORM0 , INLINEFORM1 in ( EQREF7 ), ( EQREF9 ) not considered; (iii) complete phonetic-and-semantic embedding as proposed in this paper including Stages 1 and 2 (AUD-(ph+se))."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4e0027112e3b4d3bacbd2f22a3fe84c629a7d7bc",
"d3cf58a413b5975a89ef88e46b3485f8cbc4b564",
"de45335c120bc352ac3fa6b559d79ebfef0a41b6"
],
"answer": [
{
"evidence": [
"A text word with a given phonetic structure corresponds to infinite number of audio signals with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. All the latter acoustic factors are jointly referred to as speaker characteristics here for simplicity, which obviously disturbs the goal of phonetic-and-semantic embedding. So Stage 1 is to obtain phonetic embeddings only with speaker characteristics disentangled."
],
"extractive_spans": [
"speaker characteristics",
"microphone characteristics",
"background noise"
],
"free_form_answer": "",
"highlighted_evidence": [
"A text word with a given phonetic structure corresponds to infinite number of audio signals with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. All the latter acoustic factors are jointly referred to as speaker characteristics here for simplicity, which obviously disturbs the goal of phonetic-and-semantic embedding. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"A text word with a given phonetic structure corresponds to infinite number of audio signals with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. All the latter acoustic factors are jointly referred to as speaker characteristics here for simplicity, which obviously disturbs the goal of phonetic-and-semantic embedding. So Stage 1 is to obtain phonetic embeddings only with speaker characteristics disentangled."
],
"extractive_spans": [],
"free_form_answer": "Acoustic factors such as speaker characteristics, microphone characteristics, background noise.",
"highlighted_evidence": [
"A text word with a given phonetic structure corresponds to infinite number of audio signals with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. All the latter acoustic factors are jointly referred to as speaker characteristics here for simplicity, which obviously disturbs the goal of phonetic-and-semantic embedding. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"836d3cb74314ebcf8f30a3806df39db25b56a585",
"9971d5fec71a89f984ebabb7e30058b17f2f1a9f",
"e65b7c94811b76350e806a0cee4fd946acc7a69e"
],
"answer": [
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. This corpus contains 1000 hours of speech sampled at 16 kHz uttered by 2484 speakers. We used the “clean\" and “others\" sets with a total of 960 hours, and extracted 39-dim MFCCs as the acoustic features."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used LibriSpeech BIBREF46 as the audio corpus in the experiments, which is a corpus of read speech in English derived from audiobooks. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1ecc6babb918f000969640187062be4e06d9b2d4",
"70aae683445bb2665583e91a9e6d5ae4746d43a1",
"735ae34e216ab6bbf047a17a0da629aec93b2c66"
],
"answer": [
{
"evidence": [
"This paper proposes a two-stage framework of phonetic-and-semantic embedding for spoken words. Stage 1 performs phonetic embedding but with speaker characteristics disentangled using separate phonetic and speaker encoders and a speaker discriminator. Stage 2 then performs semantic embedding in addition. We further propose to evaluate the phonetic-and-semantic nature of the audio embeddings obtained in Stage 2 by parallelizing with text embeddings BIBREF43 , BIBREF44 . Very encouraging results including those for an application task of spoken document retrieval were obtained in the initial experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Very encouraging results including those for an application task of spoken document retrieval were obtained in the initial experiments."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The goal here is to retrieve not only those spoken documents including the spoken query (e.g. “President Donald Trump\") based on the phonetic structures, but those including words semantically related to the query word (e.g. “White House\"). Below we show the effectiveness of the phonetic-and-semantc embedding proposed here in this application."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The goal here is to retrieve not only those spoken documents including the spoken query (e.g. “President Donald Trump\") based on the phonetic structures, but those including words semantically related to the query word (e.g. “White House\"). Below we show the effectiveness of the phonetic-and-semantc embedding proposed here in this application."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"This paper proposes a two-stage framework of phonetic-and-semantic embedding for spoken words. Stage 1 performs phonetic embedding but with speaker characteristics disentangled using separate phonetic and speaker encoders and a speaker discriminator. Stage 2 then performs semantic embedding in addition. We further propose to evaluate the phonetic-and-semantic nature of the audio embeddings obtained in Stage 2 by parallelizing with text embeddings BIBREF43 , BIBREF44 . Very encouraging results including those for an application task of spoken document retrieval were obtained in the initial experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Very encouraging results including those for an application task of spoken document retrieval were obtained in the initial experiments."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the dataset that is used to train the embeddings?",
"What speaker characteristics are used?",
"What language is used for the experiments?",
"Is the embedding model test in any downstream task?"
],
"question_id": [
"ac87dd34d28c3edd9419fa0145f3d38c87d696aa",
"e66a88eecf8d5d093caec1f487603534f88dd7e7",
"fef5b65263c81299acc350a101dabaf5a8cb9c6e",
"f40e23adc8245562c8677f0f86fa5175179b5422"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Phonetic embedding with speaker characteristics disentangled.",
"Fig. 2. Semantic embedding over phonetic embeddings obtained in Stage 1.",
"Table 1. Top-1 nearest accuracies when parallelizing the different versions of audio and text embeddings for different numbers of pairs of spoken and text words.",
"Table 2. Top-10 nearest accuracies when parallelizing the different versions of audio and text embeddings for different numbers of pairs of spoken and text words.",
"Table 3. Some examples of top-10 nearest neighbors in AUD-(ph+se) (proposed), AUD-ph (with phonetic structure) and TXT(se,1h) (with semantics). The words in red are the common words of AUD-(ph+se) and AUD-ph, and the words in bold are the common words of AUD-(ph+se) and TXT-(se,1h).",
"Table 5. Some retrieval examples of chapters in D2 using AUD-(ph+se) show the advantage of semantics information in phonetic-and-semantic embeddings. The word in red in each row indicates the word with the highest similarity to the query in the chapter."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table5-1.png"
]
} | [
"What speaker characteristics are used?"
] | [
[
"1807.08089-Stage 1 - Phonetic Embedding with Speaker Characteristics Disentangled-0"
]
] | [
"Acoustic factors such as speaker characteristics, microphone characteristics, background noise."
] | 87 |
1703.05320 | Legal Question Answering using Ranking SVM and Deep Convolutional Neural Network | This paper presents a study of employing Ranking SVM and Convolutional Neural Network for two missions: legal information retrieval and question answering in the Competition on Legal Information Extraction/Entailment. For the first task, our proposed model used a triple of features (LSI, Manhattan, Jaccard), and is based on paragraph level instead of article level as in previous studies. In fact, each single-paragraph article corresponds to a particular paragraph in a huge multiple-paragraph article. For the legal question answering task, additional statistical features from information retrieval task integrated into Convolutional Neural Network contribute to higher accuracy. | {
"paragraphs": [
[
"Legal text, along with other natural language text data, e.g. scientific literature, news articles or social media, has seen an exponential growth on the Internet and in specialized systems. Unlike other textual data, legal texts contain strict logical connections of law-specific words, phrases, issues, concepts and factors between sentences or various articles. Those are for helping people to make a correct argumentation and avoid ambiguity when using them in a particular case. Unfortunately, this also makes information retrieval and question answering on legal domain become more complicated than others.",
"There are two primary approaches to information retrieval (IR) in the legal domain BIBREF0 : manual knowledge engineering (KE) and natural language processing (NLP). In the KE approach, an effort is put into translating the way legal experts remember and classify cases into data structures and algorithms, which will be used for information retrieval. Although this approach often yields a good result, it is hard to be applied in practice because of time and financial cost when building the knowledge base. In contrast, NLP-based IR systems are more practical as they are designed to quickly process terabytes of data by utilizing NLP techniques. However, several challenges are presented when designing such system. For example, factors and concepts in legal language are applied in a different way from common usage BIBREF1 . Hence, in order to effectively answer a legal question, it must compare the semantic connections between the question and sentences in relevant articles found in advance BIBREF2 .",
"Given a legal question, retrieving relevant legal articles and deciding whether the content of a relevant article can be used to answer the question are two vital steps in building a legal question answering system. Kim et al. BIBREF2 exploited Ranking SVM with a set of features for legal IR and Convolutional Neural Network (CNN) BIBREF3 combining with linguistic features for question answering (QA) task. However, generating linguistic features is a non-trivial task in the legal domain. Carvalho et al. BIBREF1 utilized n-gram features to rank articles by using an extension of TF-IDF. For QA task, the authors adopted AdaBoost BIBREF4 with a set of similarity features between a query and an article pair BIBREF5 to classify a query-article pair into “YES\" or “NO\". However, overfitting in training may be a limitation of this method. Sushimita et al. BIBREF6 used the voting of Hiemstra, BM25 and PL2F for IR task. Meanwhile, Tran et al. BIBREF7 used Hidden Markov model (HMM) as a generative query model for legal IR task. Kano BIBREF8 addressed legal IR task by using a keyword-based method in which the score of each keyword was computed from a query and its relevant articles using inverse frequency. After calculating, relevant articles were retrieved based on three ranked scores. These methods, however, lack the analysis of feature contribution, which can reveal the relation between legal and NLP domain. This paper makes the following contributions:",
"In the following sections, we first show our idea along with data analysis in the context of COLIEE. Next, we describe our method for legal IR and legal QA tasks. After building a legal QA system, we show experimental results along with discussion and analysis. We finish by drawing some important conclusions."
],
[
"In the context of COLIEE 2016, our approach is to build a pipeline framework which addresses two important tasks: IR and QA. In Figure 1 , in training phase, a legal text corpus was built based on all articles. Each training query-article pair for LIR task and LQA task was represented as a feature vector. Those feature vectors were utilized to train a learning-to-rank (L2R) model (Ranking SVM) for IR and a classifier (CNN) for QA. The red arrows mean that those steps were prepared in advance. In the testing phase, given a query $q$ , the system extracts its features and computes the relevance score corresponding to each article by using the L2R model. Higher score yielded by SVM-Rank means the article is more relevant. As shown in Figure 1 , the article ranked first with the highest score, i.e. 2.6, followed by other lower score articles. After retrieving a set of relevant articles, CNN model was employed to determine the “YES\" or “NO\" answer of the query based on these relevant articles."
],
[
"The published training dataset in COLIEE 2016 consists of a text file containing Japanese Civil Code and eight XML files. Each XML file contains multiple pairs of queries and their relevant articles, and each pair has a label “YES\" or “NO\", which confirms the query corresponding to the relevant articles. There is a total of 412 pairs in eight XML files and 1,105 articles in the Japanese Civil Code file, and each query can have more than one relevant articles.",
"After analyzing the dataset in the Civil Code file, we observed that the content of a query is often more or less related to only a paragraph of an article instead of the entire content. Based on that, each article was treated as one of two types: single-paragraph or multiple-paragraph, in which a multiple-paragraph article is an article which consists of more than one paragraphs. There are 7 empty articles, 682 single-paragraph articles and the rest are multiple-paragraph.",
"Based on our findings, we proposed to split each multiple-paragraph article into several independent articles according to their paragraphs. For instance, in Table 1 , the Article 233 consisting of two paragraphs was split into two single-paragraph articles 233(1) and 233(2). After splitting, there are in total 1,663 single-paragraph articles.",
"Stopwords were also removed before building the corpus. Text was processed in the following order: tokenization, POS tagging, lemmatization, and stopword removal. In BIBREF1 , the stopword removal stage was done before the lemmatization stage, but we found that after lemmatizing, some words might become stopwords, for instance, “done\" becomes “do\". Therefore, the extracted features based on words are more prone to be distorted, leading to lower ranking performance if stopword removal is carried out before lemmatization step. Terms were tokenized and lemmatized using NLTK, and POS tagged by Stanford Tagger."
],
[
"In order to build a legal IR, traditional models such as TF-IDF, BM25 or PL2F can be used to generate basic features for matching documents with a query. Nevertheless, to improve not only the accuracy but also the robustness of ranking function, it is essential to take into account a combination of fundamental features and other potential features. Hence, the idea is to build a L2R model, which incorporates various features to generate an optimal ranking function.",
"Among different L2R methods, Ranking SVM (SVM-Rank) BIBREF9 , a state-of-the-art pairwise ranking method and also a strong method for IR BIBREF10 , BIBREF11 , was used. Our model is an extended version of Kim's model BIBREF2 with two new aspects. Firstly, there is a big distinction between our features and Kim's features. While Kim used three types of features: lexical words, dependency pairs, and TF-IDF score; we conducted a series of experiments to discover a set of best features among six features as shown in Table 2 . Secondly, our model is applied to individual paragraphs as described in section \"Data Observation\" instead of the whole articles as in Kim's work.",
"Given n training queries $\\lbrace q_i\\rbrace _{i=1}^{n}$ , their associated document pairs $(x_u^{(i)},x_v^{(i)})$ and the corresponding ground truth label $y_{u,v}^{(i)}$ , SVM Rank optimizes the objective function shown in Equation ( 13 ) subject to constraints ( 14 ), and ( 15 ): ",
"$$min \\quad \\frac{1}{2}\\Vert w\\Vert ^2 + \\lambda \\sum _{i=1}^{n}\\sum _{u,v:y_{u,v}^{(i)}} \\xi _{u,v}^{(i)}$$ (Eq. 13) ",
"$$s.t. \\quad w^T(x_u^{(i)} - x_v^{(i)}) \\ge 1 - \\xi _{u,v}^{(i)} \\quad \\text{if} \\quad y_{u,v}^{(i)}=1$$ (Eq. 14) ",
"where: $f(x)=w^Tx$ is a linear scoring function, $(x_u,x_v)$ is a pairwise and $\\xi _{u,v}^{(i)}$ is the loss. The document pairwise in our model is a pair of a query and an article.",
"Based on the corpus constructed from all of the single-paragraph articles (see Section \"Data Observation\" ), three basic models were built: TF-IDF, LSI and Latent Dirichlet Allocation (LDA) BIBREF12 . Note that, LSI and LDA model transform articles and queries from their TF-IDF-weighted space into a latent space of a lower dimension. For COLIEE 2016 corpora, the dimension of both LSI and LDA is 300 instead of over 2,100 of TF-IDF model. Those features were extracted by using gensim library BIBREF13 . Additionally, to capture the similarity between a query and an article, we investigated other potential features described in Table 2 . Normally, the Jaccard coefficient measures similarity between two finite sets based on the ratio between the size of the intersection and the size of the union of those sets. However, in this paper, we calculated Generalized Jaccard similarity as: ",
"$$ J(q,A) = J(X,Y) = \\frac{\\sum _{i}^{} min(x_i,y_i)}{\\sum _{i}^{} max(x_i,y_i)}$$ (Eq. 16) ",
"and Jaccard distance as: ",
"$$ D(q,A) = 1 - J(q,A)$$ (Eq. 17) ",
"where $X = \\lbrace x_1,x_2,..,x_n\\rbrace $ and $Y=\\lbrace y_1,y_2,...,y_n\\rbrace $ are two TF-IDF vectors of a query $q$ and an article $A$ respectively.",
"The observation in Section \"Data Observation\" also indicates that one of the important properties of legal documents is the reference or citation among articles. In other words, an article could refer to the whole other articles or to their paragraphs. In BIBREF1 , if an article has a reference to other articles, the authors expanded it with words of referential ones. In our experiment, however, we found that this approach makes the system confused to rank articles and leads to worse performance. Because of that, we ignored the reference and only took into account individual articles themselves. The results of splitting and non-splitting are shown in Table 5 ."
],
[
"Legal Question Answering is a form of textual entailment problem BIBREF14 , which can be viewed as a binary classification task. To capture the relation between a question and an article, a set of features can be used. In the COLLIE 2015, Kim BIBREF3 efficiently applied Convolution Neural Network (CNN) for the legal QA task. However, the small dataset is a limit of deep learning models. Therefores, we provided additional features to the CNN model.",
"The idea behind the QA is that we use CNN BIBREF2 with additional features. This is because: (i) CNN is capable to capture local relationship between neighboring words, which helps CNN to achieve excellent performance in NLP problems BIBREF15 , BIBREF2 , BIBREF16 , BIBREF17 and (ii) we can integrate our knowledge in legal domain in the form of statistical features, e.g. TF-IDF and LSI.",
"In Figure 2 , the input features $v_1,v_2,...,v_{400}$ are constructed and fed to the network as follows :",
" $v_1,v_3,v_5,...,v_{399}$ : a word embedding vector of the question sentence",
" $v_2,v_4,...,v_{400}$ : a word embedding vector of the most relevant article sentence",
"A sentence represented by a set of words was converted to a word embedding vector $v_1^{200}$ by using bag-of-words model (BOW) BIBREF18 . BOW model generates a vector representation for a sentence by taking a summation over embedding of words in the sentence. The vector is then normalized by the length of the sentence: ",
"$$s= \\frac{1}{n}\\sum _{i= 1}^{n}s_{i}$$ (Eq. 22) ",
"where: $s$ is a $d$ -dimensional vector of a sentence, $s_{i}$ is a $d$ -dimensional vector of $i^{th}$ word in the sentence, $n$ is the length of sentence. A word embedding model ( $d=200$ ) was trained by using Word2Vec BIBREF19 on the data of Japanese law corpus BIBREF1 . The corpus contains all Civil law articles of Japan's constitution with 13.5 million words from 642 cleaned and tokenized articles.",
"A filter was denoted as a weight vector $w$ with length $h$ ; $w$ will have $h$ parameters to be estimated. For each input vector $S \\in \\mathbb {R}^{d} $ , the feature map vector $O \\in \\mathbb {R}^{d-h+1}$ of the convolution operator with a filter $w$ was obtained by applying repeatedly $w$ to sub-vectors of $S$ : ",
"$$o_{i}=w\\cdot S[i:i+h-1]$$ (Eq. 24) ",
"where: $i=0,1,2,...,d-h+1$ and ( $\\cdot $ ) is dot product operation.",
"Each feature map was fed to a pooling layer to generate potential features by using the average mechanism BIBREF20 . These features were concatenated to a single vector for classification by using Multi-Layer Perceptron with sigmoid activation. During training process, parameters of filters and perceptrons are learned to optimize the objective function.",
"In our model, 10 convolution filters (length = 2) were applied to two adjacent input nodes because these nodes are the same feature type. An average pooling layer (length = 100) is then utilized to synthesize important features. To enhance the performance of CNN, two additional statistic features: TF-IDF and LSI were concatenated with the result of the pooling layer, then fed them into a 2-layer Perceptron model to predict the answer.",
"In Legal QA task, the proposed model was compared to the original CNN model and separate TF-IDF, LSI features. For evaluation, we took out 10% samples from training set for validation, and carried out experiments on dataset with balanced label distribution for training set, validation set and testing set.",
"In CNN models, we found that these models are sensitive to the initial value of parameters. Different values lead to large difference in results ( $\\pm $ 5%). Therefore, each model was run $n$ times (n=10) and we chose the best-optimized parameters against the validation set. Table 7 shows that CNN with additional features performs better. Also, CNN with LSI produces a better result as opposed to CNN with TF-IDF. We suspect that this is because TF-IDF vector is large but quite sparse (most values are zero), therefore it increases the number of parameters in CNN and consequently makes the model to be overfitted easily.",
"To achieve the best configuration of CNN architecture, the original CNN model was run with different settings of number filter and hidden layer dimension. According to Table 8 , the change of hyperparameter does not significantly affect to the performance of CNN. We, therefore, chose the configuration with the best performance and least number of parameters: 10 filters and 200 hidden layer size."
],
[
"For information retrieval task, 20% of query-article pairs are used for evaluating our model while the rest is for training. As we only consider single-paragraph articles in the training phase, if a multiple-paragraph article is relevant, all of its generated single-paragraph articles will be marked as relevant. In addition, the label for each query-article pair is set either 1 (relevant) or 0 (irrelevant). In our experiment, instead of selecting top $k$ retrieved articles as relevant articles, we consider a retrieved article $A_i$ as a relevant article if its score $S_i$ satisfies Equation ( 26 ): ",
"$$\\frac{S_i}{S_0} \\ge 0.85$$ (Eq. 26) ",
"where: $S_0$ is the highest relevant score. In other words, the score ratio of a relevant article and the most relevant article should not be lower than 85% (choosing the value 0.85 for this threshold is simply heuristic based). This is to prevent a relevant article to have a very low score as opposed to the most relevant article.",
"We ran SVM-Rank with different combinations of features listed in Table 2 , but due to limited space, we only report the result of those combinations which achieved highest F1-score. We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. Results from Table 3 indicate that (LSI, Manhattan, Jaccard) is the triple of features which achieves the best result and the most stability.",
"The contribution of each feature was investigated by using leave-one-out test. Table 4 shows that when all six features are utilized, the F1-score is approximately 0.55. However when excluding Jaccard, F1-score drops to around 0.5. In contrast, when other features are excluded individually from the feature set, the result remains stable or goes up slightly. From this result, we conclude that Jaccard feature significantly contributes to SVM-Rank performance.",
"We also analyzed the contribution of feature groups to the performance of SVM-Rank. When removing different triples of features from the feature set, it can be seen that (TF-IDF, Manhattan, Jaccard) combination witnesses the highest loss. Nevertheless, as shown in Table 3 , the result of (LSI, Manhattan, Jaccard) combination is more stable and better.",
"As mentioned, we proposed to split a multiple-paragraph article into several single-paragraph articles. Table 5 shows that after splitting, the F1-score performance increases by 0.05 and 0.04 with references and without references respectively. In both cases (with and without the reference), using single-paragraph articles always results a higher performance.",
"Results from Table 5 also indicate that expanding the reference of an article negatively affects the performance of our model, reducing the F1-score by more than 0.02. This is because if we only expand the content of an article with the content of referential one, it is more likely to be noisy and distorted, leading to lower performance. Therefore, we conclude that a simple expansion of articles via their references does not always positively contribute to the performance of the model.",
"Since linear kernel was used to train the SVM-Rank model, the role of trade-off training parameter was analyzed by tuning $C$ value from 100 to 2000 with step size 100. Empirically, F1-score peaks at 0.6087 with $C$ = 600 when it comes to COLIEE 2016 training dataset. We, therefore, use this value for training the L2R model."
],
[
"In COLIEE 2016 competition, Table 6 shows the top three systems and the baseline for the formal run in phase 1 BIBREF21 . Among 7 submissions, iLis7 BIBREF22 was ranked first with outstanding performance (0.6261) by exploiting ensemble methods for legal IR. Several features such as syntactic similarity, lexical similarity, semantic similarity, were used as features for two ensemble methods Least Square Method (LSM) and Linear Discriminant Analysis (LDA).",
"HUKB-2 BIBREF23 used a fundamental feature BM25 and applied mutatis mutandis for articles. If both an article and a query have conditional parts, they are divided into two parts like conditional parts and the rest part before measuring their similarity. This investigation in conditional parts is valuable since it is a common structure in laws. Their F1-score in formal rune is the second highest (0.5532), which is slightly higher than our system (0.5478) using SVM-Rank and a set of features LSI, Manhattan, Jaccard. This shows that for phase 1, our model with a set of defined features is relatively competitive."
],
[
"In this stage, we illustrate our framework on COLIEE 2016 data. The framework was trained on XML files, from H18 to H23 and tested on XML file H24. Given a legal question, the framework first retrieves top five relevant articles and then transfers the question and relevant articles to CNN classifier. The running of framework was evaluated with 3 scenarios:",
"No voting: taking only a top relevant article to use for predicting an answer for that question.",
"Voting without ratio: each of results, which is generated by applying our Textual entailment model to each article, gives one vote to the answer which it belongs to. The final result is the answer with more votes.",
"Voting with ratio: similar to Voting without ratio. However, each of results gives one vote corresponding to article's relevant score. The final result is the answer with higher voting score.",
"Table 9 shows results with different scenarios. The result of No voting approach is influenced by IR task's performance, so the accuracy is not as high as using voting. The relevant score disparity between the first and second relevant article is large, which causes a worse result of Voting with ratio compared to Voting without ratio."
],
[
"Table 10 lists the state-of-the art methods for the formal run 2016 in phase 2 and 3. In phase 2, two best systems are iList7 and KIS-1. iList7 applies major voting of decision tree, SVM and CNN with various features; KIS-1 just uses simple rules of subjective cases and an end-of-sentence expression. In phase 3, UofA achives the best score. It extracts the article segment which related to the query. This system also performs paraphrasing and detects condition-conclusion-exceptions for the query/article. From the experimental results, deep learning models do not show their advantages in case of a small dataset. On the other hand, providing handcraft features and rules are shown to be useful in this case."
],
[
"In this section, we show an example in which our proposed model using single-paragraph articles gives a correct answer in contrast with utilizing non-splitting one. Given a query with id H20-26-3: “A mandate contract is gratuitous contract in principle, but if there is a special provision, the mandatary may demand renumeration from the mandator.”, which refers to Article 648:",
"Apparently, three paragraphs and the query share several words namely mandatary, remuneration, etc. In this case, however, the correct answer is only located in paragraph 1, which is ranked first in the single-paragraph model in contrast to two remaining paragraphs with lower ranks, 5th and 29th as shown in Table 12 .",
"Interestingly, Article 653 has the highest relevant score in non-splitting method and rank 2nd in splitting approach. The reason for this is that Article 653 shares other words like mandatary, mandator as well. Therefore, it makes retrieval system confuse and yield incorrect order rank. By using single-paragraph, the system can find more accurately which part of the multiple-paragraph article is associated with the query's content."
],
[
"This work investigates Ranking SVM model and CNN for building a legal question answering system for Japan Civil Code. Experimental results show that feature selection affects significantly to the performance of SVM-Rank, in which a set of features consisting of (LSI, Manhattan, Jaccard) gives promising results for information retrieval task. For question answering task, the CNN model is sensitive to initial values of parameters and exerts higher accuracy when adding auxiliary features.",
"In our current work, we have not yet fully explored the characteristics of legal texts in order to utilize these features for building legal QA system. Properties such as references between articles or structured relations in legal sentences should be investigated more deeply. In addition, there should be more evaluation of SVM-Rank and other L2R methods to observe how they perform on this legal data using the same feature set. These are left as our future work."
],
[
"This work was supported by JSPS KAKENHI Grant number 15K16048, JSPS KAKENHI Grant Number JP15K12094, and CREST, JST."
]
],
"section_name": [
"Introduction",
"Basic Idea",
"Data Observation",
"Legal Information Retrieval",
"Legal Question Answering",
"Information Retrieval",
"Formal run phase 1 - COLIEE 2016",
"Legal Question Answering System",
"Formal run phase 2 & 3 - COLIEE 2016",
"Splitting and non-splitting error analysis",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"5e9536a3634813567e8f7cf547f6902935de4fff",
"81464803827c8aca76e4b7b95a5d60ed4e546ca6",
"d5cd025fb24958874181d9f2cb32dcebabb59849",
"d15033d6716431e36b83bf22bf3a7520e36a45cc"
],
"answer": [
{
"evidence": [
"We ran SVM-Rank with different combinations of features listed in Table 2 , but due to limited space, we only report the result of those combinations which achieved highest F1-score. We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. Results from Table 3 indicate that (LSI, Manhattan, Jaccard) is the triple of features which achieves the best result and the most stability."
],
"extractive_spans": [
"two baseline models TF-IDF and LSI which only use Cosine similarity"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We ran SVM-Rank with different combinations of features listed in Table 2 , but due to limited space, we only report the result of those combinations which achieved highest F1-score. We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. Results from Table 3 indicate that (LSI, Manhattan, Jaccard) is the triple of features which achieves the best result and the most stability."
],
"extractive_spans": [
"two baseline models TF-IDF and LSI"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We ran SVM-Rank with different combinations of features listed in Table 2 , but due to limited space, we only report the result of those combinations which achieved highest F1-score. We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. Results from Table 3 indicate that (LSI, Manhattan, Jaccard) is the triple of features which achieves the best result and the most stability."
],
"extractive_spans": [],
"free_form_answer": "The baseline models used for this paper are based on the TF-IDF and LSI features and cosine similarity as a retrieval method.",
"highlighted_evidence": [
"We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We ran SVM-Rank with different combinations of features listed in Table 2 , but due to limited space, we only report the result of those combinations which achieved highest F1-score. We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles. Results from Table 3 indicate that (LSI, Manhattan, Jaccard) is the triple of features which achieves the best result and the most stability.",
"In Legal QA task, the proposed model was compared to the original CNN model and separate TF-IDF, LSI features. For evaluation, we took out 10% samples from training set for validation, and carried out experiments on dataset with balanced label distribution for training set, validation set and testing set."
],
"extractive_spans": [],
"free_form_answer": "For the first task they have two baseline models, TF-IDF and LSI which both use cosine similarity. For the QA task, they baseline models were the original CNN and CNN with separate TF-IDF, LSI features.",
"highlighted_evidence": [
"tf-idf ",
"We compared our method to two baseline models TF-IDF and LSI which only use Cosine similarity to retrieve the relevant articles.",
"In Legal QA task, the proposed model was compared to the original CNN model and separate TF-IDF, LSI features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"45b212ff3348e2473d3e5504ca1200bcf85fcbf5",
"291c6b2df1bac379d47f5557f9e564a1f6618bf7",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"1f50d1243eb81ab85e1e26f7654180c4f506b3fc",
"5aa87e0395f8a38b4f950ea69efaea6df93e2474"
],
"answer": [
{
"evidence": [
"In order to build a legal IR, traditional models such as TF-IDF, BM25 or PL2F can be used to generate basic features for matching documents with a query. Nevertheless, to improve not only the accuracy but also the robustness of ranking function, it is essential to take into account a combination of fundamental features and other potential features. Hence, the idea is to build a L2R model, which incorporates various features to generate an optimal ranking function."
],
"extractive_spans": [],
"free_form_answer": "Adding more features to the traditional sets such as TF-IDF, BM25 and PL2F as well as using voting in a ranking system help to improve accuracy on a legal question answering task",
"highlighted_evidence": [
"In order to build a legal IR, traditional models such as TF-IDF, BM25 or PL2F can be used to generate basic features for matching documents with a query. Nevertheless, to improve not only the accuracy but also the robustness of ranking function, it is essential to take into account a combination of fundamental features and other potential features. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The idea behind the QA is that we use CNN BIBREF2 with additional features. This is because: (i) CNN is capable to capture local relationship between neighboring words, which helps CNN to achieve excellent performance in NLP problems BIBREF15 , BIBREF2 , BIBREF16 , BIBREF17 and (ii) we can integrate our knowledge in legal domain in the form of statistical features, e.g. TF-IDF and LSI.",
"In our model, 10 convolution filters (length = 2) were applied to two adjacent input nodes because these nodes are the same feature type. An average pooling layer (length = 100) is then utilized to synthesize important features. To enhance the performance of CNN, two additional statistic features: TF-IDF and LSI were concatenated with the result of the pooling layer, then fed them into a 2-layer Perceptron model to predict the answer."
],
"extractive_spans": [
"two additional statistic features: TF-IDF and LSI"
],
"free_form_answer": "",
"highlighted_evidence": [
"The idea behind the QA is that we use CNN BIBREF2 with additional features. This is because: (i) CNN is capable to capture local relationship between neighboring words, which helps CNN to achieve excellent performance in NLP problems BIBREF15 , BIBREF2 , BIBREF16 , BIBREF17 and (ii) we can integrate our knowledge in legal domain in the form of statistical features, e.g. TF-IDF and LSI.",
"To enhance the performance of CNN, two additional statistic features: TF-IDF and LSI were concatenated with the result of the pooling layer, then fed them into a 2-layer Perceptron model to predict the answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"291c6b2df1bac379d47f5557f9e564a1f6618bf7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"what is the baseline model",
"What contribute to improve the accuracy on legal question answering task?"
],
"question_id": [
"50bcbb730aa74637503c227f022a10f57d43f1f7",
"fac273ecb3e72f2dc94cdbc797582d7225a8e070"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question",
"Question Answering"
],
"topic_background": [
"research",
"research"
]
} | {
"caption": [
"Fig. 1. The proposed model overview",
"Table 1. Splitting a multiple-paragraph article into some single-paragraph articles",
"Table 2. Similarity features for Ranking SVM",
"Fig. 2. The illustration of CNN model with additional features: LSI and TF-IDF. Given an input vector, CNN applies 10 filers (length = 2) to generate 10 feature maps (length = 399). Afterward, an average pooling filter (length = 100) is employed to produce average values from 4 feature maps. Finally, the average values with LSI and TF-IDF are used as input of two hidden neural network layers for QA",
"Table 3. F1-score with different feature groups with the best parameters of SVM-Rank",
"Table 4. F1-score when excluding some features from all feature set",
"Table 5. IR results with various methods in COLIEE 2016.",
"Table 6. Formal run in phase 1, COLIEE 2016.",
"Table 7. Phase 2 results with different models",
"Table 8. Results of the original CNN with different settings",
"Table 9. Task 3 results with various scenarios",
"Table 11. An example of retrieval articles between two methods: Splitting and Nonsplitting",
"Table 10. Formal run in phase 2 & 3, COLIEE 2016."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png",
"10-Table8-1.png",
"11-Table9-1.png",
"12-Table11-1.png",
"12-Table10-1.png"
]
} | [
"what is the baseline model",
"What contribute to improve the accuracy on legal question answering task?"
] | [
[
"1703.05320-Legal Question Answering-13",
"1703.05320-Information Retrieval-3"
],
[
"1703.05320-Legal Information Retrieval-0",
"1703.05320-Legal Question Answering-12",
"1703.05320-Legal Question Answering-1"
]
] | [
"For the first task they have two baseline models, TF-IDF and LSI which both use cosine similarity. For the QA task, they baseline models were the original CNN and CNN with separate TF-IDF, LSI features.",
"Adding more features to the traditional sets such as TF-IDF, BM25 and PL2F as well as using voting in a ranking system help to improve accuracy on a legal question answering task"
] | 88 |
1910.10762 | Analyzing ASR pretraining for low-resource speech-to-text translation | Previous work has shown that for low-resource source languages, automatic speech-to-text translation (AST) can be improved by pretraining an end-to-end model on automatic speech recognition (ASR) data from a high-resource language. However, it is not clear what factors --e.g., language relatedness or size of the pretraining data-- yield the biggest improvements, or whether pretraining can be effectively combined with other methods such as data augmentation. Here, we experiment with pretraining on datasets of varying sizes, including languages related and unrelated to the AST source language. We find that the best predictor of final AST performance is the word error rate of the pretrained ASR model, and that differences in ASR/AST performance correlate with how phonetic information is encoded in the later RNN layers of our model. We also show that pretraining and data augmentation yield complementary benefits for AST. | {
"paragraphs": [
[
"Low-resource automatic speech-to-text translation (AST) has recently gained traction as a way to bring NLP tools to under-represented languages. An end-to-end approach BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 is particularly appealing for source languages with no written form, or for endangered languages where translations into a high-resource language may be easier to collect than transcriptions BIBREF7. However, building high-quality end-to-end AST with little parallel data is challenging, and has led researchers to explore how other sources of data could be used to help.",
"A number of methods have been investigated. Several of these use transcribed source language audio and/or translated source language text in a multitask learning scenario BIBREF8, BIBREF3, BIBREF5 or to pre-train parts of the model before fine-tuning on the end-to-end AST task BIBREF3. Others assume, as we do here, that no additional source language resources are available, in which case transfer learning using data from language(s) other than the source language is a good option. In particular, several researchers have shown that low-resource AST can be improved by pretraining on an ASR task in some other language, then transferring the encoder parameters to initialize the AST model. For example, Bansal et al. BIBREF4 showed that pre-training on either English or French ASR improved their Spanish-English AST system (trained on 20 hours of parallel data) and Tian BIBREF9 got improvements on an 8-hour Swahili-English AST dataset using English ASR pretraining.",
"Overall these results show that pretraining helps, but leave open the question of what factors affect the degree of improvement. For example, does language relatedness play a role, or simply the amount of pretraining data? Bansal et al. showed bigger AST gains as the amount of English pretraining data increased from 20 to 300 hours, and also found a slightly larger improvement when pretraining on 20 hours of English versus 20 hours of French, but they pointed out that the Spanish data contains many English code-switched words, which could explain the latter result. In related work on multilingual pretraining for low-resource ASR, Adams et al. BIBREF10 showed that pre-training on more languages helps, but it is not clear whether the improvement is due to including more languages, or just more data.",
"To begin to tease apart these issues, we focus here on monolingual pretraining for low-resource AST, and investigate two questions. First, can we predict what sort of pretraining data is best for a particular AST task? Does it matter if the pretraining language is related to the AST source language (defined here as part of the same language family, since phonetic similarity is difficult to measure), or is the amount of pretraining data (or some other factor) more important? Second, can pretraining be effectively combined with other methods, such as data augmentation, in order to further improve AST results?",
"To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. BIBREF4, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data. We find that pretraining on a larger amount of data from an unrelated language is much better than pretraining on a smaller amount of data from a related language. Moreover, even when controlling for the amount of data, the WER of the ASR model from pretraining seems to be a better predictor of final AST performance than does language relatedness. Indeed, we show that there is a very strong correlation between the WER of the pretraining model and BLEU score of the final AST model—i.e., the best pretraining strategy may simply be to use datasets and methods that will yield the lowest ASR WER during pretraining. However, we also found that AST results can be improved further by augmenting the AST data using standard speed perturbation techniques BIBREF11. Our best results using non-English pretraining data improve the test set BLEU scores of an AST system trained on 20 hours of parallel data from 10.2 to 14.3, increasing to 15.8 with data augmentation.",
"Finally, we analyze the representations learned by the models and show that better performance seems to correlate with the extent to which phonetic information is encoded in a linearly separable way in the later RNN layers."
],
[
"For both ASR and AST tasks we use the same end-to-end system architecture shown in Figure FIGREF1: the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2. Details of the architecture and training parameters are described in Section SECREF9.",
"After pretraining an ASR model, we transfer only its encoder parameters to the AST task. Previous experiments BIBREF4 showed that the encoder accounts for most of the benefits of transferring the parameters. Transferring also the decoder and attention mechanism does bring some improvements, but is only feasible when the ASR pretraining language is the same as the AST target language, which is not true in most of our experiments.",
"In addition to pretraining, we experimented with data augmentation. Specifically, we augmented the AST data using Kaldi's BIBREF12 3-way speed perturbation, adding versions of the AST data where the audio is sped down and up by a factor of 0.9 and 1.1, respectively.",
"To evaluate ASR performance we compute the word error rate (WER). To evaluate AST performance we calculate the 4-gram BLEU score BIBREF13 on four reference translations."
],
[
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text. To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data. Each of the dev and test sets comprise 4.5 hours of speech."
],
[
"Since we focus on investigating factors that might affect the AST improvements over the baseline when pretraining, we have chosen ASR datasets for pretraining that contrast in the number of hours and/or in the language similarity with Spanish. Statistics for each dataset are in the left half of Table TABREF7, with further details below.",
"To look at a range of languages with similar amounts of data, we used GlobalPhone corpora from seven languages BIBREF15, each with around 20 hours of speech: Mandarin Chinese (zh), Croatian (hr), Czech (cs), French (fr), Polish (pl), Portuguese (pt), and Swedish (sv). French and Portuguese, like the source language (Spanish), belong to the Romance family of languages, while the other languages are less related—especially Chinese, which is not an Indo-European language. GlobalPhone consists of read speech recorded using similar conditions across languages, and the transcriptions for Chinese are Romanized, with annotated word boundaries.",
"To explore the effects of using a large amount of pretraining data from an unrelated language, we used the AISHELL-1 corpus of Mandarin Chinese BIBREF16, which contains 150 hours of read speech. Transcriptions with annotated word boundaries are available in both Hanzi (Chinese characters) and Romanized versions, and we built models with each. To compare to the GlobalPhone data, we also created a 20-hour subset of the Romanized AISHELL (zh-ai-small) by randomly selecting utterances from a subset of the speakers (81, roughly the number present in most of the GlobalPhone datasets).",
"Finally, to reproduce one of the experiments from BIBREF4, we pre-trained one model using 300 hours of Switchboard English BIBREF17. This data is the most similar to the AST speech data in terms of style and channel (both are conversational telephone speech). However, as noted by BIBREF4, the Fisher Spanish speech contains many words that are actually in English (code-switching), so pretraining on English may provide an unfair advantage relative to other languages."
],
[
"We compute 13-dim MFCCs and cepstral mean and variance normalization along speakers using Kaldi BIBREF12 on our ASR and AST audio. To shorten the training time, we trimmed utterances from the AST data to 16 seconds (or 12 seconds for the 160h augmented dataset).",
"To account for unseen words in the test data, we model the ASR and AST text outputs via sub-word units using byte-pair encoding (BPE) BIBREF18. We do this separately for each dataset as BPE works best as a language-specific tool (i.e. it depends on the frequency of different subword units, which varies with the language). We use 1k merge operations in all cases except Hanzi, where there are around 3000 symbols initially (vs around 60 in the other datasets). For Hanzi we ran experiments with both 1k and 15k merge operations. For Chinese Romanized transcriptions we removed tone diacritics."
],
[
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step.",
"We use code and hyperparameter settings from BIBREF4: the Adam optimizer BIBREF24 with an initial learning rate of 0.001 and decay it by a factor of 0.5 based on the dev set BLEU score. When training AST models, we regularize using dropout BIBREF25 with a ratio of $0.3$ over the embedding and LSTM layers BIBREF26; weight decay with a rate of $0.0001$; and, after the first 20 epochs, 30% of the time we replace the predicted output word by a random word from the target vocabulary. At test time we use beam decoding with a beam size of 5 and length normalization BIBREF27 with a weight of 0.6."
],
[
"Our baseline 20-hour AST system obtains a BLEU score of 10.3 (Table TABREF7, first row), 0.5 BLEU point lower than that reported by BIBREF4. This discrepancy might be due to differences in subsampling from the 160-hour AST dataset to create the 20-hour subset, or from Kaldi parameters when computing the MFCCs.",
"WERs for our pre-trained models (Table TABREF7) vary from 22.5 for the large AISHELL dataset with Romanized transcript to 80.5 for Portuguese GlobalPhone. These are considerably worse than state-of-the-art ASR systems (e.g., Kaldi recipes can achieve WER of 7.5 on AISHELL and 26.5 on Portuguese GlobalPhone), but we did not optimize our architecture or hyperparameters for the ASR task since our main goal is to analyze the relationship between pretraining and AST performance (and in order to use pretraining, we must use a seq2seq model with the architecture as for AST)."
],
[
"AST results for our pre-trained models are given in Table TABREF7. Pretraining improves AST performance in every case, with improvements ranging from 0.2 (pt-gp) to 4.3 (zh-ai-large). These results make it clear that language relatedness does not play a strong role in predicting AST improvements, since on the similar-sized GlobalPhone datasets, the two languages most related to Spanish (French and Portuguese) yield the highest and lowest improvements, respectively. Moreover, pretraining on the large Chinese dataset yields a bigger improvement than either of these—4.3 BLEU points. This is nearly as much as the 6 point improvement reported by BIBREF4 when pretraining on 100 hours of English data, which is especially surprising given not only that Chinese is very different from Spanish, but also that the Spanish data contains some English words.",
"This finding seems to suggest that data size is more important than language relatedness for predicting the effects of pretraining. However, there are big differences even amongst the languages with similar amounts of pretraining data. Analyzing our results further, we found a striking correlation between the WER of the initial ASR model and the BLEU score of the AST system pretrained using that model, as shown in Figure FIGREF11. Therefore, although pretraining data size clearly influences AST performance, this appears to be mainly due to its effect on WER of the ASR model. We therefore hypothesize that WER is a better direct predictor of AST performance than either data size or language relatedness."
],
[
"Although our main focus is monolingual pretraining, we also looked briefly at multilingual pretraining, inspired by recent work on multilingual ASR BIBREF28, BIBREF29 and evidence that multilingual pretraining followed by fine-tuning on a distinct target language can improve ASR on the target language BIBREF10, BIBREF30, BIBREF31. These experiments did not directly compare pretraining using a similar amount of monolingual data, but such a comparison was done by BIBREF32, BIBREF33 in their work on learning feature representations for a target language with no transcribed data. They found a benefit for multilingual vs monolingual pretraining given the same amount of data.",
"Following up on this work, we tried pretraining using 124 hours of multilingual data (all GlobalPhone languages except Chinese), roughly the amount of data in our large Chinese models. We combined all the data together and trained an ASR model using a common target BPE with 6k merge operations, then transferred only the encoder to the AST model. However, we did not see a benefit to the multilingual training (Table TABREF7, final row); in fact the resulting AST model was slightly worse than the zh-ai-large model (BLEU of 13.3 vs 14.6). Other configurations of multilingual training might still outperform their monolingual counterparts, but we leave this investigation as future work."
],
[
"Table TABREF16 (top) shows how data augmentation affects the results of the baseline 20h AST system, as well as three of the best-performing pretrained models from Table TABREF7. For these experiments only, we changed the learning rates of the augmented-data systems so that all models took about the same amount of time to train (see Figure FIGREF17). Despite a more aggressive learning schedule, the performance of the augmented-data systems surpasses that of the baseline and pretrained models, even those trained on the largest ASR sets (150-hr Chinese and 300-hr English).",
"For comparison to other work, Table TABREF16 (bottom) gives results for AST models trained on the full 160 hours of parallel data, including models with both pretraining and data augmentation. For the latter, we used the original learning schedule, but had to stop training early due to time constraints (after 15 days, compared to 8 days for complete training of the non-augmented 160h models). We find that both pretraining and augmentation still help, providing a combined gain of 3.8 (3.2) BLEU points over the baseline on the dev (test) set."
],
[
"Finally, we hope to gain some understanding into why pretraining on ASR helps with AST, and specifically how the neural network representations change during pretraining and fine-tuning. We follow BIBREF34 and BIBREF9, who built diagnostic classifiers BIBREF35 to examine the representation of phonetic information in end-to-end ASR and AST systems, respectively. Unlike BIBREF34, BIBREF9, who used non-linear classifiers, we use a linear classifier to predict phone labels from the internal representations of the trained ASR or AST model.",
"Using a linear classifier allows us to make more precise claims: if the classifier performs better using the representation from a particular layer, we can say that layer represents the phonetic information in a more linearly separable way. Using a nonlinear classifier raises questions about how to choose the complexity of the classifier itself, and therefore makes any results difficult to interpret.",
"We hypothesized that pretraining allows the models to abstract away from nonlinguistic acoustic differences, and to better represent phonetic information: crucially, both in the trained language and in other languages. To test this hypothesis, we used two phone-labelled datasets distinct from all our ASR and AST datasets: the English TIMIT corpus (a language different to all of our trained models, with hand-labeled phones) and the Spanish GlobalPhone corpus (the same language as our AST source language, with phonetic forced-alignments produced using Kaldi). We randomly sampled utterances from these and passed them through the trained encoders, giving us a total of about 600k encoded frames. We used 400k of these to train logistic regression models to predict the phone labels, and tested on the remaining 200k frames.",
"Separate logistic regression models were trained on the representations from each layer of the encoder. Since convolutional layers have a stride of 2, the number of frames decreases at each convolutional layer. To label the frames after a convolutional layer we eliminated every other label (and corresponding frame) from the original label sequence. For example, given label sequence S$_{\\text{1}}$ = aaaaaaann at input layer, we get sequence S$_{\\text{2}}$ = aaaan at the first convolutional layer and sequence S$_{\\text{3}}$ = aan at the second convolutional layer and at the following recurrent layers.",
"Results for the two classification data sets (Figure FIGREF18) show very similar patterns. In both the ASR and the AST models, the pretraining data seems to make little difference to phonetic encoding at the early layers, and classification accuracy peaks at the second CNN layer. However, the RNN layers show a clear trend where phone classification accuracy drops off more slowly for models with better ASR/AST performance (i.e., zh $>$ fr $>$ pt). That is, the later RNN layers more transparently encode language-universal phonetic information.",
"Phone classification accuracy in the RNN layers drops for both English and Spanish after fine-tuning on the AST data. This is slightly surprising for Spanish, since the fine-tuning data (unlike the pretraining data) is actually Spanish speech. However, we hypothesize that for AST, higher layers of the encoder may be recruited more to encode semantic information needed for the translation task, and therefore lose some of the linear separability in the phonetic information. Nevertheless, we still see the same pattern where better end-to-end models have higher classification accuracy in the later layers."
],
[
"This paper explored what factors help pretraining for low-resource AST. We performed careful comparisons to tease apart the effects of language relatedness and data size, ultimately finding that rather than either of these, the WER of the pre-trained ASR model is likely the best direct predictor of AST performance. Given equivalent amounts of data, we did not find multilingual pretraining to help more than monolingual pretraining, but we did find an added benefit from using speed perturbation to augment the AST data. Finally, analysis of the pretrained models suggests that those models with better WER are transparently encoding more language-universal phonetic information in the later RNN layers, and this appears to help with AST."
]
],
"section_name": [
"Introduction",
"Methodology",
"Experimental Setup ::: Parallel data",
"Experimental Setup ::: Pretraining data",
"Experimental Setup ::: Preprocessing",
"Experimental Setup ::: Model architecture and training",
"Results and Discussion ::: Baseline and ASR results",
"Results and Discussion ::: Pretraining the AST task on ASR models",
"Results and Discussion ::: Multilingual pretraining",
"Results and Discussion ::: Augmenting the parallel data",
"Analyzing the models' representations",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"4cfade606e42458338d501e03636d252f59c300d",
"5f402653b17b91ce202203fd7d25cc9d634bc4dc",
"ec5876b260c00dcd31bed179d862915d3f6c4562"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics (left); dev set results from ASR pretraining and from the final AST system (right). AST results in all rows except the first are from pretraining using the dataset listed in that row, followed by fine-tuning using ast-20h. Numbers in brackets are the improvement over the baseline."
],
"extractive_spans": [],
"free_form_answer": "ast-20h: 20 hours,\nzh-ai-small: 20 hours,\nzh-ai-large: 150 hours,\nzh-ai-hanzi: 150 hours,\nhr-gp: 12 hours,\nsv-gp: 18 hours,\npl-gp: 19 hours,\npt-gp: 23 hours,\nfr-gp: 25 hours,\nzh-gp: 26 hours,\ncs-gp: 27 hours,\nmultilin6: 124 hours",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics (left); dev set results from ASR pretraining and from the final AST system (right). AST results in all rows except the first are from pretraining using the dataset listed in that row, followed by fine-tuning using ast-20h. Numbers in brackets are the improvement over the baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. BIBREF4, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data. We find that pretraining on a larger amount of data from an unrelated language is much better than pretraining on a smaller amount of data from a related language. Moreover, even when controlling for the amount of data, the WER of the ASR model from pretraining seems to be a better predictor of final AST performance than does language relatedness. Indeed, we show that there is a very strong correlation between the WER of the pretraining model and BLEU score of the final AST model—i.e., the best pretraining strategy may simply be to use datasets and methods that will yield the lowest ASR WER during pretraining. However, we also found that AST results can be improved further by augmenting the AST data using standard speed perturbation techniques BIBREF11. Our best results using non-English pretraining data improve the test set BLEU scores of an AST system trained on 20 hours of parallel data from 10.2 to 14.3, increasing to 15.8 with data augmentation."
],
"extractive_spans": [
"150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data"
],
"free_form_answer": "",
"highlighted_evidence": [
"To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. BIBREF4, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text. To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data. Each of the dev and test sets comprise 4.5 hours of speech."
],
"extractive_spans": [
"20 hours of training data",
"dev and test sets comprise 4.5 hours of speech"
],
"free_form_answer": "",
"highlighted_evidence": [
"To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data. Each of the dev and test sets comprise 4.5 hours of speech."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1f58902ac98ceae20e7f93ad31802387558c2a6e",
"81ccf9e2344c96a32ffb445f4ebd1a8d100db2d9",
"ee3476e52881e6363739bcceffbef3a938bcf6aa"
],
"answer": [
{
"evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step.",
"FLOAT SELECTED: Fig. 1: Encoder-decoder architecture used for both ASR and AST."
],
"extractive_spans": [],
"free_form_answer": "10 ",
"highlighted_evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. ",
"The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. ",
"For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step.",
"FLOAT SELECTED: Fig. 1: Encoder-decoder architecture used for both ASR and AST."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step."
],
"extractive_spans": [
"two "
],
"free_form_answer": "",
"highlighted_evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step."
],
"extractive_spans": [
"two CNN layers",
"three-layer bi-directional long short-term memory network (LSTM)",
"followed by a three-layer LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers.",
"The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions.",
"For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"607a31a0753099103267c84745de1d2663c00552",
"e1a025df9266557b76e145d7cfbc0dd9008bfeec",
"f87bb1b4e14568f1976ab39d46b221de154ef9bb"
],
"answer": [
{
"evidence": [
"For both ASR and AST tasks we use the same end-to-end system architecture shown in Figure FIGREF1: the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2. Details of the architecture and training parameters are described in Section SECREF9."
],
"extractive_spans": [
" the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2"
],
"free_form_answer": "",
"highlighted_evidence": [
"For both ASR and AST tasks we use the same end-to-end system architecture shown in Figure FIGREF1: the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2. Details of the architecture and training parameters are described in Section SECREF9."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For both ASR and AST tasks we use the same end-to-end system architecture shown in Figure FIGREF1: the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2. Details of the architecture and training parameters are described in Section SECREF9."
],
"extractive_spans": [
"encoder-decoder model",
"end-to-end system architecture"
],
"free_form_answer": "",
"highlighted_evidence": [
"or both ASR and AST tasks we use the same end-to-end system architecture shown in Figure FIGREF1: the encoder-decoder model from BIBREF4, which itself is adapted from BIBREF1, BIBREF3 and BIBREF2. Details of the architecture and training parameters are described in Section SECREF9."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step."
],
"extractive_spans": [
"two CNN layers",
"three-layer bi-directional long short-term memory network (LSTM)",
" followed by a three-layer LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following the architecture and training procedure described in BIBREF4, input speech features are fed into a stack of two CNN layers. In each CNN layer we stride the input with a factor of 2 along time, apply ReLU activation BIBREF19 followed by batch normalization BIBREF20. The CNN output is fed into a three-layer bi-directional long short-term memory network (LSTM) BIBREF21, with 512 hidden layer dimensions. For decoding, we use the predicted token 20% of the time and the training token 80% of the time BIBREF22 as input to a 128-dimensional embedding layer followed by a three-layer LSTM, with 256 hidden layer dimensions, and combine this with the output from the attention mechanism BIBREF23 to predict the word at the current time step."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a5584aaeaeb4020efcfb459e553659095a29b36b",
"aedece3b852ed74a3201c68030256eae9fe80e3e",
"d71020166099fdc23bf483604142c167b7c740b0"
],
"answer": [
{
"evidence": [
"To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. BIBREF4, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data. We find that pretraining on a larger amount of data from an unrelated language is much better than pretraining on a smaller amount of data from a related language. Moreover, even when controlling for the amount of data, the WER of the ASR model from pretraining seems to be a better predictor of final AST performance than does language relatedness. Indeed, we show that there is a very strong correlation between the WER of the pretraining model and BLEU score of the final AST model—i.e., the best pretraining strategy may simply be to use datasets and methods that will yield the lowest ASR WER during pretraining. However, we also found that AST results can be improved further by augmenting the AST data using standard speed perturbation techniques BIBREF11. Our best results using non-English pretraining data improve the test set BLEU scores of an AST system trained on 20 hours of parallel data from 10.2 to 14.3, increasing to 15.8 with data augmentation.",
"To look at a range of languages with similar amounts of data, we used GlobalPhone corpora from seven languages BIBREF15, each with around 20 hours of speech: Mandarin Chinese (zh), Croatian (hr), Czech (cs), French (fr), Polish (pl), Portuguese (pt), and Swedish (sv). French and Portuguese, like the source language (Spanish), belong to the Romance family of languages, while the other languages are less related—especially Chinese, which is not an Indo-European language. GlobalPhone consists of read speech recorded using similar conditions across languages, and the transcriptions for Chinese are Romanized, with annotated word boundaries."
],
"extractive_spans": [
"Spanish",
"English ",
"Chinese ",
"Mandarin Chinese ",
"Croatian ",
"Czech ",
"French ",
"Polish ",
"Portuguese ",
"Swedish "
],
"free_form_answer": "",
"highlighted_evidence": [
"To answer these questions, we use the same AST architecture and Spanish-English parallel data as Bansal et al. BIBREF4, but pretrain the encoder using a number of different ASR datasets: the 150-hour AISHELL corpus of Chinese as well as seven GlobalPhone languages, each with about 20 hours of data.",
"To look at a range of languages with similar amounts of data, we used GlobalPhone corpora from seven languages BIBREF15, each with around 20 hours of speech: Mandarin Chinese (zh), Croatian (hr), Czech (cs), French (fr), Polish (pl), Portuguese (pt), and Swedish (sv). French and Portuguese, like the source language (Spanish), belong to the Romance family of languages, while the other languages are less related—especially Chinese, which is not an Indo-European language. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text. To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data. Each of the dev and test sets comprise 4.5 hours of speech.",
"To look at a range of languages with similar amounts of data, we used GlobalPhone corpora from seven languages BIBREF15, each with around 20 hours of speech: Mandarin Chinese (zh), Croatian (hr), Czech (cs), French (fr), Polish (pl), Portuguese (pt), and Swedish (sv). French and Portuguese, like the source language (Spanish), belong to the Romance family of languages, while the other languages are less related—especially Chinese, which is not an Indo-European language. GlobalPhone consists of read speech recorded using similar conditions across languages, and the transcriptions for Chinese are Romanized, with annotated word boundaries."
],
"extractive_spans": [
"Spanish",
"English",
"Mandarin Chinese",
"Croatian",
"Czech",
"French",
"Polish",
"Portuguese",
"Swedish"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text.",
"To look at a range of languages with similar amounts of data, we used GlobalPhone corpora from seven languages BIBREF15, each with around 20 hours of speech: Mandarin Chinese (zh), Croatian (hr), Czech (cs), French (fr), Polish (pl), Portuguese (pt), and Swedish (sv)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text. To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data. Each of the dev and test sets comprise 4.5 hours of speech."
],
"extractive_spans": [
"Spanish-English"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the AST models, we use Spanish-English parallel data from Fisher corpus BIBREF14, containing 160 hours of Spanish telephone speech translated into English text. To simulate low-resource settings, we randomly downsample the original corpus to 20 hours of training data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What sizes were their datasets?",
"How many layers does their model have?",
"What is their model's architecture?",
"What languages did they use?"
],
"question_id": [
"7c561db6847fb0416bca8a6cb5eebf689a4b1438",
"13eb64957478ade79a1e81d32e36ee319209c19a",
"3cfe464052f0a248b6e22c9351279403dfe34f3c",
"119c404da6e42d4879eee10edeab4b2851162659"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: Encoder-decoder architecture used for both ASR and AST.",
"Table 1: Dataset statistics (left); dev set results from ASR pretraining and from the final AST system (right). AST results in all rows except the first are from pretraining using the dataset listed in that row, followed by fine-tuning using ast-20h. Numbers in brackets are the improvement over the baseline.",
"Fig. 2: WER of each ASR model vs BLEU score of the corresponding pre-trained AST model, computed in both cases on dev sets. Diamond markers are AISHELL data sets; circles are from GlobalPhone. The points in the circled group come from different runs on the same dataset but with different BPE or learning rate schedules. The Spearman rank correlation of these points is -0.97; the correlation is -0.92 when using test sets to compute both ASR and BLEU.",
"Fig. 4: Phonetic classification accuracy at different layers of our ASR (left) and AST (right) models. Different color bars indicate representations extracted from models (pre)trained on different datasets (pt-gp, fr-gp, or zh-ai-large). Results from the baseline AST model (without pretraining) are shown in both panels for comparison. The bars with black edges are results on TIMIT (majority baseline: 12.9%); the taller bars are for Spanish GlobalPhone (majority baseline: 15.2%).",
"Table 2: BLEU scores on dev and test sets for models trained with and without data augmentation. We used either 20h of AST training data (top block) or 160h (bottom block), with various pretraining.",
"Fig. 3: The AST performance over time (without beam-search) of baseline, pretrained, and pretrained+augmented models."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Figure4-1.png",
"4-Table2-1.png",
"4-Figure3-1.png"
]
} | [
"What sizes were their datasets?",
"How many layers does their model have?"
] | [
[
"1910.10762-2-Table1-1.png",
"1910.10762-Experimental Setup ::: Parallel data-0",
"1910.10762-Introduction-4"
],
[
"1910.10762-2-Figure1-1.png",
"1910.10762-Experimental Setup ::: Model architecture and training-0"
]
] | [
"ast-20h: 20 hours,\nzh-ai-small: 20 hours,\nzh-ai-large: 150 hours,\nzh-ai-hanzi: 150 hours,\nhr-gp: 12 hours,\nsv-gp: 18 hours,\npl-gp: 19 hours,\npt-gp: 23 hours,\nfr-gp: 25 hours,\nzh-gp: 26 hours,\ncs-gp: 27 hours,\nmultilin6: 124 hours",
"10 "
] | 89 |
1711.01567 | Robust Speech Recognition Using Generative Adversarial Networks | This paper describes a general, scalable, end-to-end framework that uses the generative adversarial network (GAN) objective to enable robust speech recognition. Encoders trained with the proposed approach enjoy improved invariance by learning to map noisy audio to the same embedding space as that of clean audio. Unlike previous methods, the new framework does not rely on domain expertise or simplifying assumptions as are often needed in signal processing, and directly encourages robustness in a data-driven way. We show the new approach improves simulated far-field speech recognition of vanilla sequence-to-sequence models without specialized front-ends or preprocessing. | {
"paragraphs": [
[
"Automatic speech recognition (ASR) is becoming increasingly more integral in our day-to-day lives enabling virtual assistants and smart speakers like Siri, Google Now, Cortana, Amazon Echo, Google Home, Apple HomePod, Microsoft Invoke, Baidu Duer and many more. While recent breakthroughs have tremendously improved ASR performance BIBREF0 , BIBREF1 these models still suffer considerable degradation from reasonable variations in reverberations, ambient noise, accents and Lombard reflexes that humans have little or no issue recognizing.",
"Most of these problems can be mitigated by training the models on a large volume of data that exemplify these effects. However, in the case of non-stationary processes, such as accents, accurate data augmentation is most likely infeasible, and in general, collecting high quality datasets can be expensive and time-consuming. Past robust ASR literature has considered hand-engineered front-ends and data-driven approaches in an attempt to increase the value of relatively parsimonious data with desired effects BIBREF2 , BIBREF3 . While these techniques are quite effective in their respective operating regimes, they do not generalize well to other modalities in practice due to the aforementioned reasons. Namely, it is difficult to model anything beyond reverberation and background noise from the first principles. Existing techniques do not directly induce invariance for ASR or are not scalable. And, due to the sequential nature of speech, alignments are needed to compare two different utterances of the same text.",
"In this work, we employ the generative adversarial network (GAN) framework BIBREF4 to increase the robustness of seq-to-seq models BIBREF5 in a scalable, end-to-end fashion. The encoder component is treated as the generator of GAN and is trained to produce indistinguishable embeddings between noisy and clean audio samples. Because no restricting assumptions are made, this new robust training approach can in theory learn to induce robustness without alignment or complicated inference pipeline and even where augmentation is not possible. We also experiment with encoder distance objective to explicitly restrict the embedding space and demonstrate that achieving invariance at the hidden representation level is a promising direction for robust ASR.",
"The rest of the paper is organized as follows. Related work is documented in Section \"RELATED WORK\" . Section \"ROBUST ASR\" defines our notations and details the robust ASR GAN. Section \"EXPERIMENTAL SETUP\" explains the experimental setup. Section \"RESULTS\" shows results on the Wall Street Journal (WSJ) dataset with simulated far-field effects. Finishing thoughts are found in Section \"CONCLUSION\" ."
],
[
"A vast majority of work in robust ASR deals with reverberations and ambient noise; BIBREF2 provides an extensive survey in this effort. One of the most effective approaches in this variability is to devise a strong front-end such as the weighted prediction error (WPE) speech dereverberation BIBREF6 , BIBREF7 and train the resulting neural network with realistic augmented data BIBREF8 , BIBREF9 .",
"A shift from more traditional signal processing techniques to more modern, data-driven methods was seen when the denoising autoencoder BIBREF10 was employed to induce invariance to reverberations BIBREF11 . This is novel in that the autoencoder is explicitly trained to predict the original audio features from a perturbed version convolved with an impulse response. While denoising autoencoder models for enhancing speech have been shown to improve perceptual quality of the produced speech, they have not demonstrated significant improvement for the task of speech recognition. This is because autoencoders are trained to reconstruct all aspects of the original audio, including many features that are not important for speech recognition, such as the voice and accent of the speaker, background noises etc. In fact, ASR systems learn to remove such artifacts of the input audio as they can hinder speech recognition performance. BIBREF12 proposed multiple rounds of joint denoising and ASR training for each audio sample, but this approach is not scalable for large datasets.",
"A similar approach in spirit is to minimize the distance in the embedding space between clean and noisy audio. The intuition here is that the embedding distance is a measure of semantic similarity BIBREF13 . However, the perturbed speech may have a different time duration than the reference audio; dynamic time warping BIBREF14 can be used to approximate the alignment and compare sequences of varying lengths, but there is an increased computational overhead.",
" BIBREF15 uses the generative adversarial networks (GAN) for domain adaptation to make the simulated images look more realistic to improve the task of robotic hand grasping. GAN BIBREF4 is an unsupervised learning framework, where the generator network learns to produce increasingly more realistic data in attempt to fool a competing discriminator. Because equilibrium is reached at a saddle point, it is notoriously hard to train. There have been many improvements to this technique. For example, Wasserstein GAN BIBREF16 uses the Earth-Mover distance to mitigate optimization issues. It is also less susceptible to architectural choices.",
"For speech, BIBREF17 proposes a GAN based speech enhancement method called SEGAN but without the end goal of speech recognition. SEGAN operates on raw speech samples and hence it is computationally impractical for large scale experiments."
],
[
"As explained in Section \"RELATED WORK\" , denoising reconstruction and perceptual enhancement do not significantly improve ASR. A better approach would be to reconstruct only those aspects of the audio which are important for predicting the text spoken and ignore everything else. We hypothesize that the encoders of well trained ASR systems would learn to retain only this information from the input audio. Based on this idea, we propose a new sequence-to-sequence architecture for robust speech recognition that tries to match the output of the encoder for clean audio and noisy audio.",
"The system works as follows: the same encoder, $g$ , is applied to the clean audio $x$ and the corresponding noisy audio $\\widetilde{x}$ to produce hidden states $z=g(x)$ and $\\widetilde{z}=g(\\widetilde{x})$ . The decoder, $h$ , models the conditional probability $p(y|x) = p(y|z)$ and is used to predict the output text sequence one character at a time. This architecture is described in Figure 1 . The entire system is trained end-to-end using a multi-task objective that tries to minimize the cross-entropy loss of predicting $y$ from $\\widetilde{x}$ and the normalized $L^1-$ distance between $x$0 and $x$1 : ",
"$$ \n\\mathbb {E}_{(x,y) \\sim \\mathcal {D}} \\left[\nH(h(\\widetilde{z}), y) + \\lambda \\frac{\\Vert z - \\widetilde{z} \\Vert _{1}}{\\Vert z \\Vert _{1} + \\Vert \\widetilde{z} \\Vert _{1} + \\epsilon }\n\\right].$$ (Eq. 2) "
],
[
"[htb!] $n_\\text{critic}$ , the number of critic per robust ASR updates. $c$ , the clipping parameter. $m$ , the batch size. $\\theta $ has not converged $t=1,\\dots ,n_\\text{critic}$ Sample $\\lbrace (x^{(i)}, y^{(i)}) \\sim \\mathcal {D}\\rbrace _{i=1}^m$ a batch of labeled speech data. Sample $\\lbrace \\widetilde{x}^{(i)}\\rbrace _{i=1}^m$ by augmentation or from a different distribution. Sample $\\lbrace \\varepsilon ^{(i)}\\rbrace _{i=1}^m$ a batch of prior noise. $g_\\theta \\leftarrow \\nabla _\\theta \\left[\n\\frac{1}{m}\\sum _{i=1}^m H(h_\\theta (g_\\theta (x^{(i)})), y^{(i)})\n\\right]$ $\\theta \\leftarrow \\theta - \\text{Optimizer}(\\theta , g_\\theta )$ $c$0 $c$1 $c$2 Sample $c$3 a batch of labeled speech data. Sample $c$4 by augmentation or from a different distribution. Sample $c$5 a batch of prior noise. $c$6 $c$7 WGAN enhancer training. The seq-to-seq model was trained using the Adam optimizer in our experiments. If $c$8 can be generated from $c$9 , data augmentation can also be used to update the seq-to-seq model.",
"In our experiments, we found the encoder distance penalty to yield excellent results but it has the disadvantage that the encoder content between clean and noisy audio has to match frame for frame. Instead, employing the GAN framework, we can have a discriminator output a scalar likelihood of the entire speech being clean, and train the encoder to generate embeddings that are indistinguishable by the discriminator.",
"In this paper, Wasserstein GAN (WGAN) BIBREF16 is used. Following the notations of WGAN, we parametrize the seq-to-seq and discriminator models with $\\theta $ and $w$ respectively. The overall architecture depicted in Figure 1 remains the same, but the encoder distance in ( 2 ) is now replaced with the dual of Earth-Mover (EM) distance ",
"$$\\max _{w\\in \\mathcal {W}}\n\\left\\lbrace \n\\mathbb {E}_{x}\n\\left[f_w(g_\\theta (x))\\right] -\n\\mathbb {E}_{\\widetilde{x},\\varepsilon }\n\\left[f_w(g_\\theta (\\widetilde{x} + \\varepsilon )\\right]\n\\right\\rbrace .$$ (Eq. 5) ",
"We treat the embedding of the clean input $x$ as real data and the embedding of $\\widetilde{x}$ , which can either be augmented from $x$ or drawn from a different modality, as being fake. And so, as GAN training progresses, the encoder $g_\\theta $ should learn to remove extraneous information to ASR to be able to fool the discriminator. In practice, we found that including a random Gaussian noise $\\varepsilon $ to the input prior of the generator helps improve training. Also, weights in the parameter set $\\mathcal {W}$ should be clipped to ensure the duality of ( 5 ) holds up to a constant multiple BIBREF16 . The adapted WGAN training procedure is detailed in Algorithm \"EXPERIMENTAL SETUP\" ."
],
[
"We evaluated the enhancer framework on the Wall Street Journal (WSJ) corpus with simulated far-field effects. The dev93 and eval92 sets were used for hyperparameter selection and evaluation respectively. The reverberant speech is generated with room impulse response (RIR) augmentation as in BIBREF18 , where each audio is convolved with a randomly chosen RIR signal. The clean and far-field audio durations are kept the same with valid convolution so that the encoder distance enhancer can be applied. We collected 1088 impulse responses, using a linear array of 8 microphones, 120 and 192 of which were held out for development and evaluation. The speaker was placed in a variety of configurations, ranging from 1 to 3 meters distance and 60 to 120 degrees inclination with respect to the array, for 20 different rooms. Mel spectrograms of 20 ms samples with 10 ms stride and 40 bins were used as input features to all of our baseline and enhancer models."
],
[
"For the acoustic model, we used the sequence-to-sequence framework with soft attention based on BIBREF5 . The architecture of the encoder is described in Table 1 . The decoder consisted of a single 256 dimensional GRU layer with a hybrid attention mechanism similar to the models described in BIBREF19 .",
"The discriminator network of the WGAN enhancer is described in Table 2 . All convolutional layers use leaky ReLU activation BIBREF20 with 0.2 slope for the leak, and batch normalization BIBREF21 ."
],
[
"To establish a baseline, in the first experiment, we trained a simple attention based seq-to-seq model. All the seq-to-seq networks in our experiments were trained using the Adam optimizer. We evaluate all models on both clean and far-field test sets.",
"To study the effects of data augmentation, we train a new seq-to-seq model with the same architecture and training procedure as the baseline. However this time, in each epoch, we randomly select 40% of the training utterances and apply the train RIRs to them (in our previous experiments we had observed that 40% augmentation results in the best validation performance).",
"For the enhancer models, $\\lambda $ in Equation 2 was tuned over the dev set by doing a logarithmic sweep in [0.01, 10]. $\\lambda = 1$ gave the best performance.",
"We use Algorithm \"EXPERIMENTAL SETUP\" to train the WGAN enhancer. The clipping parameter was 0.05 and $\\varepsilon $ was random normal with 0.001 standard deviation. We found that having a schedule for $n_\\text{critic}$ was crucial. Namely, we do not update the encoder parameters with WGAN gradients for the first 3000 steps. Then, we use the normal $n_\\text{critic}=5$ . We hypothesize that the initial encoder embedding is of poor quality and encouraging invariance at this stage through the critic gradients significantly hinders seq-to-seq training."
],
[
"We present results in Table 3 . All of the evaluations were performed using greedy decoding and no language models. To provide context, our near-field result is comparable to the 18.6% WER of BIBREF5 obtained with language model beam decoding with 200 beam size. We can see that a seq-to-seq model trained only on near-field audio data performs extremely poorly on far-field audio. This suggests that it is non-trivial for an ASR model to generalize from homogeneous near-field audio to far-field audio.",
"To overcome this, we train a stronger baseline with simulated far-field audio examples. This model had the same architecture but 40% of the examples that the model was trained on were convolved with a randomly chosen room impulse response during training. We can see from Table 3 that simple data augmentation can significantly improve performance on far-field audio without compromising the performance on near-field audio, implying that seq-to-seq models have a strong ability to learn from far-field examples.",
"Even with data augmentation, however, there is still a large gap between the WERs on near-field and far-field test sets. The bottom two rows of Table 3 show the performance of the methods introduced in this paper on the same test sets. An $L^1$ -distance penalty can lower the test set WER by 1.32% absolute. Using a GAN enhancer can reduce the WER by an additional 1.07%. Overall, the gap between near-field and far-field performance decreases by almost 27% compared to the model that only uses data augmentation.",
"An additional benefit of our methods is that the $L^1$ -distance penalty and GAN loss function act as regularizers which reduce generalization error on near field data. The enhancer models have lower WERs even on near-field data compared to the baseline models."
],
[
"We introduced a GAN-based framework to train robust ASR models in a scalable, data-driven way, and showed that inducing invariance at the encoder embedding level considerably improves the recognition of simulated far-field speech by vanilla seq-to-seq models. This method has effectively imbued the seq-to-seq encoder with a far-field front-end. We anticipate that coupling the new framework with specialized trainable front-ends, such as WPE, would enhance robustness even more significantly."
]
],
"section_name": [
"Introduction",
"RELATED WORK",
"Encoder distance enhancer",
"GAN enhancer",
"Corpora and Tasks",
"Network Architecture",
"Training",
"RESULTS",
"CONCLUSION"
]
} | {
"answers": [
{
"annotation_id": [
"205d1f3bea2df7e9362a67f6e4687daaf8f3d174",
"809fc706f338b28fbd1640e84ca573172342dac7",
"f1daed9b43d8a719215fdc9c091cdd7685fc78b8"
],
"answer": [
{
"evidence": [
"The rest of the paper is organized as follows. Related work is documented in Section \"RELATED WORK\" . Section \"ROBUST ASR\" defines our notations and details the robust ASR GAN. Section \"EXPERIMENTAL SETUP\" explains the experimental setup. Section \"RESULTS\" shows results on the Wall Street Journal (WSJ) dataset with simulated far-field effects. Finishing thoughts are found in Section \"CONCLUSION\" ."
],
"extractive_spans": [],
"free_form_answer": "Yes. They show results on the Wall Street Journal Corpus, which consists of recordings of real speech.",
"highlighted_evidence": [
"Section \"RESULTS\" shows results on the Wall Street Journal (WSJ) dataset with simulated far-field effects."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluated the enhancer framework on the Wall Street Journal (WSJ) corpus with simulated far-field effects. The dev93 and eval92 sets were used for hyperparameter selection and evaluation respectively. The reverberant speech is generated with room impulse response (RIR) augmentation as in BIBREF18 , where each audio is convolved with a randomly chosen RIR signal. The clean and far-field audio durations are kept the same with valid convolution so that the encoder distance enhancer can be applied. We collected 1088 impulse responses, using a linear array of 8 microphones, 120 and 192 of which were held out for development and evaluation. The speaker was placed in a variety of configurations, ranging from 1 to 3 meters distance and 60 to 120 degrees inclination with respect to the array, for 20 different rooms. Mel spectrograms of 20 ms samples with 10 ms stride and 40 bins were used as input features to all of our baseline and enhancer models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated the enhancer framework on the Wall Street Journal (WSJ) corpus with simulated far-field effects. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We evaluated the enhancer framework on the Wall Street Journal (WSJ) corpus with simulated far-field effects. The dev93 and eval92 sets were used for hyperparameter selection and evaluation respectively. The reverberant speech is generated with room impulse response (RIR) augmentation as in BIBREF18 , where each audio is convolved with a randomly chosen RIR signal. The clean and far-field audio durations are kept the same with valid convolution so that the encoder distance enhancer can be applied. We collected 1088 impulse responses, using a linear array of 8 microphones, 120 and 192 of which were held out for development and evaluation. The speaker was placed in a variety of configurations, ranging from 1 to 3 meters distance and 60 to 120 degrees inclination with respect to the array, for 20 different rooms. Mel spectrograms of 20 ms samples with 10 ms stride and 40 bins were used as input features to all of our baseline and enhancer models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated the enhancer framework on the Wall Street Journal (WSJ) corpus with simulated far-field effects."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Are there experiments with real data?"
],
"question_id": [
"32f2aa2df0152050cbcd27dd2f408b2fa5894031"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Architecture of the enhancer models introduced in this paper. The discriminator loss can be L1-distance or WGAN loss. The entire model is trained end-to-end using both the discriminator loss and the cross-entropy loss. We use RIR convolution to simulate far-field audio. It’s also possible to train this model with the same speech recorded in different conditions.",
"Table 1. Architecture of the encoder.",
"Table 2. Architecture of the critic. (feature)×(time).",
"Table 3. Speech recognition performance on the Wall Street Journal Corpus"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png"
]
} | [
"Are there experiments with real data?"
] | [
[
"1711.01567-Corpora and Tasks-0",
"1711.01567-Introduction-3"
]
] | [
"Yes. They show results on the Wall Street Journal Corpus, which consists of recordings of real speech."
] | 90 |
1706.04206 | Identifying Condition-Action Statements in Medical Guidelines Using Domain-Independent Features | This paper advances the state of the art in text understanding of medical guidelines by releasing two new annotated clinical guidelines datasets, and establishing baselines for using machine learning to extract condition-action pairs. In contrast to prior work that relies on manually created rules, we report experiment with several supervised machine learning techniques to classify sentences as to whether they express conditions and actions. We show the limitations and possible extensions of this work on text mining of medical guidelines. | {
"paragraphs": [
[
"Clinical decision-support system (CDSS) is any computer system intended to provide decision support for healthcare professionals, and using clinical data or knowledge BIBREF0 . The classic problem of diagnosis is only one of the clinical decision problems. Deciding which questions to ask, tests to order, procedures to perform, treatment to indicate, or which alternative medical care to try, are other examples of clinical decisions. CDSSs generally fall into two categories BIBREF0 ",
"Most of the questions physicians need to consult about with CDSSs are from the latter category. Medical guidelines (also known as clinical guidelines, clinical protocols or clinical practice guidelines) are most useful at the point of care and answering to \"what to do\" questions.",
"Medical guidelines are systematically developed statements to assist with practitioners' and patients' decisions. They establish criteria regarding diagnosis, management, and treatment in specific areas of healthcare. For example, a sentence such as \"if the A1C is 7.0% and a repeat result is 6.8%, the diagnosis of diabetes is confirmed\" in medical guidelines determines what is true about a patient. Sentences such as \"Topical and oral decongestants and antihistamines should be avoided in patients with ABRS\" guide what to do or not to do with a patient. These examples illustrate conditions, criteria applicable to patients, and consequences of the conditions. The consequences may refer to activities, effects, intentions, or events. If a guideline-based CDSS needs to answer \"what to do\" questions, it has to have access to condition-action statements describing under what circumstances an action can be performed.",
"Medical guidelines contain many condition-action statements. Condition-action statements provide information about expected process flow. If a guideline-based CDSS could extract and formalize these statements, it could help practitioners in the decision-making process. For example, it could help automatically asses the relationship between therapies, guidelines and outcomes, and in particular could help the impact of changing guidelines.",
"However, completely automated extraction of condition-action statements does not seem possible. This is due among other things to the variety of linguistic expressions used in condition-action sentences. For example, they are not always in form of \"{if} condition {then} action”. In the sentence \"Conditions that affect erythrocyte turnover and hemoglobin variants must be considered, particularly when the A1C result does not correlate with the patient's clinical situation”, we have a condition-action sentence without an \"{if}\" term.",
"We propose a supervised machine learning model classifying sentences as to whether they express a condition or not. After we determine a sentence contain a condition, we use natural language processing and information extraction tools to extract conditions and resulting activities.",
"With the help of a domain expert, we annotated three sets of guidelines to create gold standards to measure the performance of our condition-action extracting models. The sets of guidelines are: hypertension BIBREF1 , chapter4 of asthma BIBREF2 , and rhinosinusitis BIBREF3 . Chapter 4 of asthma guidelines was selected for comparison with prior work of Wenzina and Kaiser BIBREF4 . We have annotated the guidelines for the conditions, consequences, modifiers of conditions, and type of consequences. These annotate sets of guidelines are available for experiments https://www.dropbox.com/."
],
[
"We will briefly discuss the modeling and annotation of condition-action for medical usage in this section. Our corpus and method of identifying conditions in clinical guidelines is explained in section 3.",
"Research on CIGs started about 20 years ago and became more popular in the late-1990s and early 2000s. Different approaches have been developed to represent and execute clinical guidelines over patient-specific clinical data. They include document-centric models, decision trees and probabilistic models, and \"Task-Network Models\"(TNMs) BIBREF5 , which represent guideline knowledge in hierarchical structures containing networks of clinical actions and decisions that unfold over time. Serban et. al BIBREF6 developed a methodology for extracting and using linguistic patterns in guideline formalization, to aid the human modellers in guideline formalization and reduce the human modelling effort. Kaiser et. al BIBREF7 developed a method to identify activities to be performed during a treatment which are described in a guideline document. They used relations of the UMLS Semantic Network BIBREF8 to identify these activities in a guideline document. Wenzina and Kaiser BIBREF4 developed a rule-based method to automatically identifying conditional activities in guideline documents.They achieved a recall of 75% and a precision of 88% on chapter 4 of asthma guidelines which was mentioned before."
],
[
"Medical guidelines’ condition-action statements provide information to determine \"what to do\" with a patient. Other types of consequences of a condition in a sentence may help practitioner to find what is true about a patient. In this paper, we propose an automated process to find and extract condition-action statements from medical guidelines. We employed NLP tools and concepts in the process to achieve more general models.",
"We define the task as classification task. Given an input statement, classify it to one of the three categories: NC (no condition) if the statement doesn’t have a condition; CA if the statement is a condition-action sentence; and CC (condition-consequence) if the statement has a condition which has a non-action consequence. For a CDSS, to determine both \"what is true\" about a patient and \"what to do\" with a patient, CC and CA statements can be merged to one category.",
"There are limitations in this specification of classification categories. For example, guidelines may contain statements with a condition referring to a consequence in another statement. Or, we can see condition and effect in two different sentences: \"However, there are some cases for which the results for black persons were different from the results for the general population (question 3, evidence statements 2, 10, and 17). In those cases, separate evidence statements were developed.\"",
"In this work we focus only on statements that follow the above sentence categorization rules. This allows us to make clear comparison to prior work e.g. by Wenzina and Kaiser BIBREF4 . They annotated chapter 4 of asthma and other guidelines. They used information extraction rules and semantic pattern rules to extract conditional activities, condition-action statements. We use POS tags as features in the classification models. In our opinion, using POS tags instead of semantic pattern rules makes our model more domain-independent, and therefore more suitable for establishing baselines, not only for text mining of medical guidelines but also in other domains, such as text mining of business rules. But we also expect to improve the performance of our extraction programs by adding semantic and discourse information (this work is ongoing)."
],
[
"Most of the condition-action sentences have a modifier in the sentences. For example, in \"In the population aged 18 years or older with CKD and hypertension, initial (or add-on) antihypertensive treatment should include an ACEI or ARB to improve kidney outcomes\", we have \"the population aged 18 years or older with CKD and hypertension\" as a condition and \"{in}\" is the modifier. \"If\", \"in\", \"for\", \"to\", \"which\", and \"when\" are the most frequent modifiers in our guidelines.",
"We used CoreNLP BIBREF9 Shift-Reduce Constituency Parser to parse sentences in guidelines. As we mentioned, \"if\", \"in\", \"for\", \"to\", \"which\", and \"when\" are the most frequent modifiers in our guidelines. \"If\", \"in\", and \"for\" are tagged as \"IN\" which represents preposition or subordinating conjunction. \"To\" is tagged as \"TO\" and \"when\" and \"which\" are tagged as \"WHADV\". We used regular expressions to find those parses which are promising candidates for extraction of condition-action pairs; for example, we selected sentences which include these tags: IN, TO and WHADVP.",
"We extracted part of speech (POS) tags as our features for our model. Each candidate sentence has at least one candidate condition part. We extract these parts by regular expressions. Each part of sentence which starts with below patterns is a candidate condition part:",
"\"\\((SBAR|PP) \\(IN\"",
"\"\\(SBAR \\(WHADVP\"",
"\"\\(PP \\(TO\"",
"For example, \"(ROOT (S (PP (IN In) (NP (NP (NNS adults)) (PP (IN with) (NP (NN hypertension))))) (, ,) (VP (VBZ does) (S (VP (VBG initiating) (S (NP (NP (JJ antihypertensive) (JJ pharmacologic) (NN therapy)) (PP (IN at) (NP (JJ specific) (NN BP) (NNS thresholds)))) (VP (VBP improve) (NP (NN health) (NNS outcomes))))))) (. ?)))\" is the constituent parsed tree of \"In adults with hypertension, does initiating antihypertensive pharmacologic therapy at specific BP thresholds improve health outcomes?\". \"(PP (IN In) (NP (NP (NNS adults)) (PP (IN with) (NP (NN hypertension)))))\" and \"(PP (IN at) (NP (JJ specific) (NN BP) (NNS thresholds)))\" are two candidate condition parts in this example.",
"We created features for our model based on POS tags and their combinations. The sets of features and the combinations are learned automatically from annotated examples. We used these novel features to make our model more domain-independent.",
"For each sentence, we extracted POS tags, sequences of 3 POS tags, and combination of all POS tags of candidate conditions as features. For example, \"PP IN NP NP NNS PP IN NP NN PPINNP INNPNP NPNPNNS NPNNSPP NNSPPIN PPINNP INNPNN PPINNPNPNNSPPINNPNN PP IN NP NN PPINNP INNPNN PPINNPNN PP IN NP JJ NN NNS PPINNP INNPJJ NPJJNN JJNNNNS PPINNPJJNNNNS\" represents \"In adults with hypertension, does initiating antihypertensive pharmacologic therapy at specific BP thresholds improve health outcomes?\" in our model. Note that the glued together part of speech tags are not a formatting error but features automatically derived by our model (from consecutive part of speech tags)."
],
[
"We use three medical guidelines documents to create gold standard datasets. They provide statements, tables, and figures about hypertension, rhinosinusitis, and asthma. The creation of the gold standard datasets is described below in detail.",
"Our data preparation process proceeded as follows: We started by converting the guidelines from PDF or html to text format, editing sentences only to manage conversion errors, the majority of which were bullet points. Tables and some figures pose a problem, and we are simply treating them as unstructured text. We are not dealing at this time with the ambiguities introduced by this approach; we do have plans to address it in future work.",
"Using regular expressions, as described above, we selected candidate sentences from text files. Note that candidate sentences do not always include a modifier such as \"if\" or \"in\". For example, in \"Patients on long-term steroid tablets (e.g. longer than three months) or requiring frequent courses of steroid tablets (e.g. three to four per year) will be at risk of systemic side-effects\", there is no modifier in the sentence.",
"The annotation of the guidelines text (the next step), focused on determining whether there were condition statements in the candidate sentences or not. The instruction to the annotators were to try to paraphrase candidate sentences as sentences with \"if condition, then consequence\". If the transformed/paraphrased sentence conveyed the same meaning as the original, we considered to be a condition-consequence sentence. Then we we could annotate condition and consequence parts. For example, we paraphrased \"Beta-blockers, including eye drops, are contraindicated in patients with asthma\" to \"If patients have asthma, then beta-blockers, including eye drops, are contraindicated\". The paraphrased sentence conveys same meaning. So it became a condition-consequence sentence in our dataset. On the other hand, for example, we cannot paraphrase \"Further, the diagnostic criteria for CKD do not consider age-related decline in kidney function as reflected in estimated GFR\" to an if-then sentence.",
"We also annotated the type of sentences based on their semantics: We classified them into three classes: condition-action, condition-consequence(effect, intention, and event) and action. Examples are shown in table 1.",
"Each sentence was annotated by one domain expert and us (and the disagreements where less than 10 percent). Table 2 shows the statistics of the annotated sentences for 3 different medical guidelines."
],
[
"Hypertension, asthma, and rhinosinusitis guidelines and gold standard datasets were applied to evaluate our model. Since two of these annotated corpora are new, our model is establishing a baseline. The asthma corpus was investigated previously by BIBREF4 .",
"We extracted candidate statements by applying aforementioned regex on POS tags. Hypertension, asthma, and rhinosinusitis guidelines had 278, 172, and 761 candidate statements respectively. By applying this filtering subtask, we get rid of 38, 116, and 5 no condition statement respectively from guidelines. We used Weka BIBREF10 classifiers to create our models. ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. Table 3 , 4 , and 5 show the results of classifiers for each guidelines.The results are based on 10-fold cross-validation on respective datasets.",
"The results show that generally random forest classifier seems to work best in extracting Condition-Action statements.",
"Notice that these results are lower than previously reported by BIBREF4 . The difference is due to our using of completely automated feature selection when training on an annotated corpus, and not relying on manually created extraction rules. In addition, their results demonstrate recalls on activities with specific patterns. If we consider all activities in their annotated corpus, their recall will be 56%. And if we apply their approach on our annotated corpus, the recall will be 39%. In ongoing work we hope to reduce or close this gap by adding semantic and discourse information to our feature sets."
],
[
"We investigated the problem of automated extraction of condition-action from clinical guidelines based on an annotated corpus. We proposed a simple supervised model which classifies statements based on combinations of part of speech tags used as features. We showed results of classifiers using this model on three different annotated datasets which we created. We release these dataset for others to use.",
"Obviously, this is very preliminary work. Our work established baselines for automated extraction of condition-action rules from medical guidelines, but its performance is still inferior to a collection of manually created extraction rules. To close this gap we are currently augmenting our model with semantic information along the lines of BIBREF7 and BIBREF4 . In addition, we are beginning to experiment with some discourse relations – these are important, for example, in understanding of lists and tables. We also plan to make our annotated datasets more convenient to use by re-annotating them with standard annotation tools e.g. BRAT BIBREF11 ."
]
],
"section_name": [
"Introduction",
"Related Work",
"Condition-Action Extraction",
"Classification",
"Gold Standard Datasets",
"Model Performance",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"958899f7e6fdad3ad284b632b3d969b78d4d7e27",
"aee4b888ce9ce26714d2f93c22670fa0abcdfdc2",
"d8982cae28e0b4f60add0e03330d7169d0d835e7"
],
"answer": [
{
"evidence": [
"We extracted candidate statements by applying aforementioned regex on POS tags. Hypertension, asthma, and rhinosinusitis guidelines had 278, 172, and 761 candidate statements respectively. By applying this filtering subtask, we get rid of 38, 116, and 5 no condition statement respectively from guidelines. We used Weka BIBREF10 classifiers to create our models. ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. Table 3 , 4 , and 5 show the results of classifiers for each guidelines.The results are based on 10-fold cross-validation on respective datasets."
],
"extractive_spans": [
"ZeroR, Naïve Bayes, J48, and random forest classifiers"
],
"free_form_answer": "",
"highlighted_evidence": [
"ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We extracted candidate statements by applying aforementioned regex on POS tags. Hypertension, asthma, and rhinosinusitis guidelines had 278, 172, and 761 candidate statements respectively. By applying this filtering subtask, we get rid of 38, 116, and 5 no condition statement respectively from guidelines. We used Weka BIBREF10 classifiers to create our models. ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. Table 3 , 4 , and 5 show the results of classifiers for each guidelines.The results are based on 10-fold cross-validation on respective datasets."
],
"extractive_spans": [
"ZeroR, Naïve Bayes, J48, and random forest "
],
"free_form_answer": "",
"highlighted_evidence": [
"ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We extracted candidate statements by applying aforementioned regex on POS tags. Hypertension, asthma, and rhinosinusitis guidelines had 278, 172, and 761 candidate statements respectively. By applying this filtering subtask, we get rid of 38, 116, and 5 no condition statement respectively from guidelines. We used Weka BIBREF10 classifiers to create our models. ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. Table 3 , 4 , and 5 show the results of classifiers for each guidelines.The results are based on 10-fold cross-validation on respective datasets."
],
"extractive_spans": [],
"free_form_answer": "They use four classifiers: ZeroR, Naive Bayes, J48, and random forest.",
"highlighted_evidence": [
"ZeroR, Naïve Bayes, J48, and random forest classifiers were applied in our project. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"291c6b2df1bac379d47f5557f9e564a1f6618bf7"
]
},
{
"annotation_id": [
"219f0bea1996aef80477cee6cb2dd370bd6c2d82",
"33927d890582aa8c7dc39134ebfcd4fe2ee90318",
"7bc8088368a4e392aa7f3f41ec0d9d84003f678b"
],
"answer": [
{
"evidence": [
"Research on CIGs started about 20 years ago and became more popular in the late-1990s and early 2000s. Different approaches have been developed to represent and execute clinical guidelines over patient-specific clinical data. They include document-centric models, decision trees and probabilistic models, and \"Task-Network Models\"(TNMs) BIBREF5 , which represent guideline knowledge in hierarchical structures containing networks of clinical actions and decisions that unfold over time. Serban et. al BIBREF6 developed a methodology for extracting and using linguistic patterns in guideline formalization, to aid the human modellers in guideline formalization and reduce the human modelling effort. Kaiser et. al BIBREF7 developed a method to identify activities to be performed during a treatment which are described in a guideline document. They used relations of the UMLS Semantic Network BIBREF8 to identify these activities in a guideline document. Wenzina and Kaiser BIBREF4 developed a rule-based method to automatically identifying conditional activities in guideline documents.They achieved a recall of 75% and a precision of 88% on chapter 4 of asthma guidelines which was mentioned before.",
"Notice that these results are lower than previously reported by BIBREF4 . The difference is due to our using of completely automated feature selection when training on an annotated corpus, and not relying on manually created extraction rules. In addition, their results demonstrate recalls on activities with specific patterns. If we consider all activities in their annotated corpus, their recall will be 56%. And if we apply their approach on our annotated corpus, the recall will be 39%. In ongoing work we hope to reduce or close this gap by adding semantic and discourse information to our feature sets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Wenzina and Kaiser BIBREF4 developed a rule-based method to automatically identifying conditional activities in guideline documents.They achieved a recall of 75% and a precision of 88% on chapter 4 of asthma guidelines which was mentioned before.",
"Notice that these results are lower than previously reported by BIBREF4 ."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Notice that these results are lower than previously reported by BIBREF4 . The difference is due to our using of completely automated feature selection when training on an annotated corpus, and not relying on manually created extraction rules. In addition, their results demonstrate recalls on activities with specific patterns. If we consider all activities in their annotated corpus, their recall will be 56%. And if we apply their approach on our annotated corpus, the recall will be 39%. In ongoing work we hope to reduce or close this gap by adding semantic and discourse information to our feature sets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Notice that these results are lower than previously reported by BIBREF4 . The difference is due to our using of completely automated feature selection when training on an annotated corpus, and not relying on manually created extraction rules."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Notice that these results are lower than previously reported by BIBREF4 . The difference is due to our using of completely automated feature selection when training on an annotated corpus, and not relying on manually created extraction rules. In addition, their results demonstrate recalls on activities with specific patterns. If we consider all activities in their annotated corpus, their recall will be 56%. And if we apply their approach on our annotated corpus, the recall will be 39%. In ongoing work we hope to reduce or close this gap by adding semantic and discourse information to our feature sets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Notice that these results are lower than previously reported by BIBREF4 ."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"291c6b2df1bac379d47f5557f9e564a1f6618bf7"
]
},
{
"annotation_id": [
"2c4a37edc5aa09c96a3bcbd06fc3cdecb9f2ee83",
"3943c618e2d72fad49aecf07bf33f4ba63c6349a",
"caaed42eaa66c7f1e41d9e95fdc54d68ceeb10b1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Statistical information about annotated guidelines",
"Each sentence was annotated by one domain expert and us (and the disagreements where less than 10 percent). Table 2 shows the statistics of the annotated sentences for 3 different medical guidelines."
],
"extractive_spans": [],
"free_form_answer": "1470 sentences",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Statistical information about annotated guidelines",
"Table 2 shows the statistics of the annotated sentences for 3 different medical guidelines."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Statistical information about annotated guidelines",
"Hypertension, asthma, and rhinosinusitis guidelines and gold standard datasets were applied to evaluate our model. Since two of these annotated corpora are new, our model is establishing a baseline. The asthma corpus was investigated previously by BIBREF4 ."
],
"extractive_spans": [],
"free_form_answer": "316 sentences in Hypertension corpus, 877 sentences in Rhinosinusitis corpus",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Statistical information about annotated guidelines",
"Since two of these annotated corpora are new, our model is establishing a baseline. The asthma corpus was investigated previously by BIBREF4 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"291c6b2df1bac379d47f5557f9e564a1f6618bf7"
]
},
{
"annotation_id": [
"28992160e4a5f6f38a0f0201d73c48907802bbcd",
"bb73b95d36809f9abb18ad8ba08e4f30b900e4ac",
"e990c283845d7d6a57447aa98cac15b205473238"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"However, completely automated extraction of condition-action statements does not seem possible. This is due among other things to the variety of linguistic expressions used in condition-action sentences. For example, they are not always in form of \"{if} condition {then} action”. In the sentence \"Conditions that affect erythrocyte turnover and hemoglobin variants must be considered, particularly when the A1C result does not correlate with the patient's clinical situation”, we have a condition-action sentence without an \"{if}\" term."
],
"extractive_spans": [
"Conditions that affect erythrocyte turnover and hemoglobin variants must be considered, particularly when the A1C result does not correlate with the patient's clinical situation"
],
"free_form_answer": "",
"highlighted_evidence": [
"\"Conditions that affect erythrocyte turnover and hemoglobin variants must be considered, particularly when the A1C result does not correlate with the patient's clinical situation”"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The annotation of the guidelines text (the next step), focused on determining whether there were condition statements in the candidate sentences or not. The instruction to the annotators were to try to paraphrase candidate sentences as sentences with \"if condition, then consequence\". If the transformed/paraphrased sentence conveyed the same meaning as the original, we considered to be a condition-consequence sentence. Then we we could annotate condition and consequence parts. For example, we paraphrased \"Beta-blockers, including eye drops, are contraindicated in patients with asthma\" to \"If patients have asthma, then beta-blockers, including eye drops, are contraindicated\". The paraphrased sentence conveys same meaning. So it became a condition-consequence sentence in our dataset. On the other hand, for example, we cannot paraphrase \"Further, the diagnostic criteria for CKD do not consider age-related decline in kidney function as reflected in estimated GFR\" to an if-then sentence.",
"FLOAT SELECTED: Table 1: Examples of classified sentence classes"
],
"extractive_spans": [
"If patients have asthma, then beta-blockers, including eye drops, are contraindicated"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, we paraphrased \"Beta-blockers, including eye drops, are contraindicated in patients with asthma\" to \"If patients have asthma, then beta-blockers, including eye drops, are contraindicated\". The paraphrased sentence conveys same meaning. So it became a condition-consequence sentence in our dataset.",
"FLOAT SELECTED: Table 1: Examples of classified sentence classes"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"291c6b2df1bac379d47f5557f9e564a1f6618bf7",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What supervised machine learning models do they use?",
"Does the supervised machine learning approach outperform previous work?",
"How large is the released data set?",
"What is an example of a condition-action pair?"
],
"question_id": [
"065623cc1d5f5b19ec1f84d286522fc2f805c6ce",
"5c17559749810c67c50a7dbe34580d5e3b4f9acb",
"1c0a575e289eb486d3e6375d6f783cc2bf18adf9",
"4efe0d62bba618803ec12b63f32debb8b757dd68"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Examples of classified sentence classes",
"Table 2: Statistical information about annotated guidelines",
"Table 3: Classification results on asthma guidelines. (The ZeroR gives 0 precision and recall, because the majority of the guidelines sentences do not contain conditions and actions).",
"Table 4: Classification results on rhinosinusitis guidelines",
"Table 5: Classification results on hypertension guidelines"
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png"
]
} | [
"What supervised machine learning models do they use?",
"How large is the released data set?"
] | [
[
"1706.04206-Model Performance-1"
],
[
"1706.04206-3-Table2-1.png",
"1706.04206-Gold Standard Datasets-5",
"1706.04206-Model Performance-0"
]
] | [
"They use four classifiers: ZeroR, Naive Bayes, J48, and random forest.",
"316 sentences in Hypertension corpus, 877 sentences in Rhinosinusitis corpus"
] | 91 |
1708.07690 | Revisiting the Centroid-based Method: A Strong Baseline for Multi-Document Summarization | The centroid-based model for extractive document summarization is a simple and fast baseline that ranks sentences based on their similarity to a centroid vector. In this paper, we apply this ranking to possible summaries instead of sentences and use a simple greedy algorithm to find the best summary. Furthermore, we show possi- bilities to scale up to larger input docu- ment collections by selecting a small num- ber of sentences from each document prior to constructing the summary. Experiments were done on the DUC2004 dataset for multi-document summarization. We ob- serve a higher performance over the orig- inal model, on par with more complex state-of-the-art methods. | {
"paragraphs": [
[
"Extractive multi-document summarization (MDS) aims to summarize a collection of documents by selecting a small number of sentences that represent the original content appropriately. Typical objectives for assembling a summary include information coverage and non-redundancy. A wide variety of methods have been introduced to approach MDS.",
"Many approaches are based on sentence ranking, i.e. assigning each sentence a score that indicates how well the sentence summarizes the input BIBREF0 , BIBREF1 , BIBREF2 . A summary is created by selecting the top entries of the ranked list of sentences. Since the sentences are often treated separately, these models might allow redundancy in the summary. Therefore, they are often extended by an anti-redundancy filter while de-queuing ranked sentence lists.",
"Other approaches work at summary-level rather than sentence-level and aim to optimize functions of sets of sentences to find good summaries, such as KL-divergence between probability distributions BIBREF3 or submodular functions that represent coverage, diversity, etc. BIBREF4 ",
"The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection BIBREF5 . The sentences are ranked by their cosine similarity to the centroid vector. This method is often found as a baseline in evaluations where it usually is outperformed BIBREF0 , BIBREF6 .",
"This baseline can easily be adapted to work at the summary-level instead the sentence level. This is done by representing a summary as the centroid of its sentence vectors and maximizing the similarity between the summary centroid and the centroid of the document collection. A simple greedy algorithm is used to find the best summary under a length constraint.",
"In order to keep the method efficient, we outline different methods to select a small number of candidate sentences from each document in the input collection before constructing the summary.",
"We test these modifications on the DUC2004 dataset for multi-document summarization. The results show an improvement of Rouge scores over the original centroid method. The performance is on par with state-of-the-art methods which shows that the similarity between a summary centroid and the input centroid is a well-suited function for global summary optimization.",
"The summarization approach presented in this paper is fast, unsupervised and simple to implement. Nevertheless, it performs as well as more complex state-of-the-art approaches in terms of Rouge scores on the DUC2004 dataset. It can be used as a strong baseline for future research or as a fast and easy-to-deploy summarization tool."
],
[
"The original centroid-based model is described by BIBREF5 . It represents sentences as BOW vectors with TF-IDF weighting. The centroid vector is the sum of all sentence vectors and each sentence is scored by the cosine similarity between its vector representation and the centroid vector. Cosine similarity measures how close two vectors INLINEFORM0 and INLINEFORM1 are based on their angle and is defined as follows: DISPLAYFORM0 ",
"A summary is selected by de-queuing the ranked list of sentences in decreasing order until the desired summary length is reached.",
" BIBREF7 implement this original model with the following modifications:",
"In order to avoid redundant sentences in the summary, a new sentence is only included if it does not exceed a certain maximum similarity to any of the already included sentences.",
"To focus on only the most important terms of the input documents, the values in the centroid vector which fall below a tuned threshold are set to zero.",
"This model, which includes the anti-redundancy filter and the selection of top-ranking features, is treated as the \"original\" centroid-based model in this paper.",
"We implement the selection of top-ranking features for both the original and modified models slightly differently to BIBREF7 : all words in the vocabulary are ranked by their value in the centroid vector. On a development dataset, a parameter is tuned that defines the proportion of the ranked vocabulary that is represented in the centroid vector and the rest is set to zero. This variant resulted in more stable behavior for different amounts of input documents."
],
[
"The similarity to the centroid vector can also be used to score a summary instead of a sentence. By representing a summary as the sum of its sentence vectors, it can be compared to the centroid, which is different from adding centroid-similarity scores of individual sentences.",
"With this modification, the summarization task is explicitly modelled as finding a combination of sentences that summarize the input well together instead of finding sentences that summarize the input well independently. This strategy should also be less dependent on anti-redundancy filtering since a combination of redundant sentences is probably less similar to the centroid than a more diverse selection that covers different prevalent topics.",
"In the experiments, we will therefore call this modification the \"global\" variant of the centroid model. The same principle is used by the KLSum model BIBREF3 in which the optimal summary minimizes the KL-divergence of the probability distribution of words in the input from the distribution in the summary. KLSum uses a greedy algorithm to find the best summary. Starting with an empty summary, the algorithm includes at each iteration the sentence that maximizes the similarity to the centroid when added to the already selected sentences. We also use this algorithm for sentence selection. The procedure is depicted in Algorithm SECREF5 below. [H] [1] Input: INLINEFORM0 Output: INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 Greedy Sentence Selection"
],
[
"The modified sentence selection method is less efficient than the orginal method since at each iteration the score of a possible summary has to be computed for all remaining candidate sentences. It may not be noticeable for a small number of input sentences. However, it would have an impact if the amount of input documents was larger, e.g. for the summarization of top-100 search results in document retrieval.",
"Therefore, we explore different methods for reducing the number of input sentences before applying the greedy sentence selection algorithm to make the model more suited for larger inputs. It is also important to examine how this affects Rouge scores.",
"We test the following methods of selecting INLINEFORM0 sentences from each document as candidates for the greedy sentence selection algorithm:",
"The first INLINEFORM0 sentences of the document are selected. This results in a mixture of a lead- INLINEFORM1 baseline and the centroid-based method.",
"The sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The INLINEFORM0 best sentences of each document are selected as candidates.",
"Each sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document.",
"Note that in each of these candidate selection methods, the centroid vector is always computed as the sum of all sentence vectors, including the ones of the ignored sentences."
],
[
"For testing, we use the DUC2004 Task 2 dataset from the Document Understanding Conference (DUC). The dataset consists of 50 document clusters containing 10 documents each. For tuning hyperparameters, we use the CNN/Daily Mail dataset BIBREF8 which provides summary bulletpoints for individual news articles. In order to adapt the dataset for MDS, 50 CNN articles were randomly selected as documents to initialize 50 clusters. For each of these seed articles, 9 articles with the highest word-overlap in the first 3 sentences were added to that cluster. This resulted in 50 documents clusters, each containing 10 topically related articles. The reference summaries for each cluster were created by interleaving the sentences of the article summaries until a length contraint (100 words) was reached."
],
[
" BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods . We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit BIBREF9 . In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction."
],
[
"In the summarization methods proposed in this paper, the preprocessing includes sentence segmentation, lowercasing and stopword removal."
],
[
"The similarity threshold for avoiding redundancy ( INLINEFORM0 ) and the vocabulary-included-in-centroid ratio ( INLINEFORM1 ) are tuned with the original centroid model on our development set. Values from 0 to 1 with step size INLINEFORM2 were tested using a grid search. The optimal values for INLINEFORM3 and INLINEFORM4 were INLINEFORM5 and INLINEFORM6 , respectively. These values were used for all tested variants of the centroid model. For the different methods of choosing INLINEFORM7 sentences of each document before summarization, we tuned INLINEFORM8 separately for each, with values from 1 to 10, using the global model. The best INLINEFORM9 found for INLINEFORM10 -first, INLINEFORM11 -best, new-tfidf were 7, 2 and 3 respectively."
],
[
"Table TABREF9 shows the Rouge scores measured in our experiments.",
"The first two sections show results for baseline and SOTA summaries from SumRepo. The third section shows the summarization variants presented in this paper. \"G\" indicates that the global greedy algorithm was used instead of sentence-level ranking. In the last section, \"- R\" indicates that the method was tested without the anti-redundancy filter.",
"Both the global optimization and the sentence preselection have a positive impact on the performance.",
"The global + new-TF-IDF variant outperforms all but the DPP model in Rouge-1 recall. The global + N-first variant outperforms all other models in Rouge-2 recall. However, the Rouge scores of the SOTA methods and the introduced centroid variants are in a very similar range.",
"Interestingly, the original centroid-based model, without any of the new modifications introduced in this paper, already shows quite high Rouge scores in comparison to the other baseline methods. This is due to the anti-redundancy filter and the selection of top-ranking features.",
"In order to see whether the global sentence selection alleviates the need for an anti-redundancy filter, the original method and the global method (without INLINEFORM0 sentences per document selection) were tested without it (section 4 in Table TABREF9 ). In terms of Rouge-1 recall, the original model is clearly very dependent on checking for redundancy when including sentences, while the global variant does not change its performance much without the anti-redundancy filter. This matches the expectation that the globally motivated method handles redundancy implicitly."
],
[
"Table TABREF10 shows generated example summaries using the global centroid method with the three sentence preselection methods. For readability, truncated sentences (due to the 100-word limit) at the end of the summaries are excluded. The original positions of the summary sentences, i.e. the indices of the document and the sentence inside the document are given. As can be seen in the examples, the N-first method is restricted to sentences appearing early in documents. In the new-TF-IDF example, the second and third sentences were preselected because high ranking features such as \"robot\" and \"arm\" appeared for the first time in the respective documents."
],
[
"In addition to various works on sophisticated models for multi-document summarization, other experiments have been done showing that simple modifications to the standard baseline methods can perform quite well.",
" BIBREF7 improved the centroid-based method by representing sentences as sums of word embeddings instead of TF-IDF vectors so that semantic relationships between sentences that have no words in common can be captured. BIBREF10 also evaluated summaries from SumRepo and did experiments on improving baseline systems such as the centroid-based and the KL-divergence method with different anti-redundancy filters. Their best optimized baseline obtained a performance similar to the ICSI method in SumRepo."
],
[
"In this paper we show that simple modifications to the centroid-based method can bring its performance to the same level as state-of-the-art methods on the DUC2004 dataset. The resulting summarization methods are unsupervised, efficient and do not require complicated feature engineering or training.",
"Changing from a ranking-based method to a global optimization method increases performance and makes the summarizer less dependent on explicitly checking for redundancy. This can be useful for input document collections with differing levels of content diversity.",
"The presented methods for restricting the input to a maximum of INLINEFORM0 sentences per document lead to additional improvements while reducing computation effort, if global optimization is being used. These methods could be useful for other summarization models that rely on pairwise similarity computations between all input sentences, or other properties which would slow down summarization of large numbers of input sentences.",
"The modified methods can also be used as strong baselines for future experiments in multi-document summarization. "
]
],
"section_name": [
"Introduction",
"Original Centroid-based Method",
"Modified Summary Selection",
"Preselection of Sentences",
"Datasets",
"Baselines & Evaluation",
"Preprocessing",
"Parameter Tuning",
"Results",
"Example Summaries",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"48417f7617aa9e7d75f2161e8ed850aab95e9852",
"c8c373b48ffe39024c9cd27cd8b9446b44de82d6",
"e0c49156676d25e633b73576164f48ddecb46412"
],
"answer": [
{
"evidence": [
"BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods . We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit BIBREF9 . In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction.",
"Table TABREF9 shows the Rouge scores measured in our experiments.",
"The first two sections show results for baseline and SOTA summaries from SumRepo. The third section shows the summarization variants presented in this paper. \"G\" indicates that the global greedy algorithm was used instead of sentence-level ranking. In the last section, \"- R\" indicates that the method was tested without the anti-redundancy filter.",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"extractive_spans": [],
"free_form_answer": "CLASSY04, ICSI, Submodular, DPP, RegSum",
"highlighted_evidence": [
"BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods .",
"Table TABREF9 shows the Rouge scores measured in our experiments.\n\nThe first two sections show results for baseline and SOTA summaries from SumRepo. ",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF9 shows the Rouge scores measured in our experiments.",
"The first two sections show results for baseline and SOTA summaries from SumRepo. The third section shows the summarization variants presented in this paper. \"G\" indicates that the global greedy algorithm was used instead of sentence-level ranking. In the last section, \"- R\" indicates that the method was tested without the anti-redundancy filter.",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"extractive_spans": [],
"free_form_answer": "CLASSY04, ICSI, Submodular, DPP and RegSum.",
"highlighted_evidence": [
"Table TABREF9 shows the Rouge scores measured in our experiments.\n\nThe first two sections show results for baseline and SOTA summaries from SumRepo. ",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first two sections show results for baseline and SOTA summaries from SumRepo. The third section shows the summarization variants presented in this paper. \"G\" indicates that the global greedy algorithm was used instead of sentence-level ranking. In the last section, \"- R\" indicates that the method was tested without the anti-redundancy filter.",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"extractive_spans": [],
"free_form_answer": "CLASSY04, ICSI, Submodular, DPP, RegSum",
"highlighted_evidence": [
"The first two sections show results for baseline and SOTA summaries from SumRepo. ",
"FLOAT SELECTED: Table 1: Rouge scores on DUC2004."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"47765cb313b5bfcf4fe1cee440d0305c4d701847",
"a99a9939517ac297d24c18c09df657f5a9b74346",
"ba5ee4c6c8b02d0adb4fa08f2a26dc8e93101320"
],
"answer": [
{
"evidence": [
"BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods . We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit BIBREF9 . In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction."
],
"extractive_spans": [
"Rouge-1, Rouge-2 and Rouge-4 recall"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods . We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit BIBREF9 . In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction."
],
"extractive_spans": [],
"free_form_answer": "Rouge-1 recall, Rouge-2 recall, Rouge-4 recall",
"highlighted_evidence": [
"In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BIBREF6 published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods . We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit BIBREF9 . In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction."
],
"extractive_spans": [
"Rouge-1, Rouge-2 and Rouge-4 recall"
],
"free_form_answer": "",
"highlighted_evidence": [
" In order to compare our results to BIBREF6 we use the same Rouge settings as they do and report results for Rouge-1, Rouge-2 and Rouge-4 recall. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"22939c5854f9a0442c03369df6dd1b43c7eb6c2a",
"5b8b16053d2b4b7a91257419cdc12b3e963996b7",
"ec21c6a1d60c041e82559dcd39b4ed32bf602267"
],
"answer": [
{
"evidence": [
"The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection BIBREF5 . The sentences are ranked by their cosine similarity to the centroid vector. This method is often found as a baseline in evaluations where it usually is outperformed BIBREF0 , BIBREF6 ."
],
"extractive_spans": [
"BIBREF0 , BIBREF6"
],
"free_form_answer": "",
"highlighted_evidence": [
"This method is often found as a baseline in evaluations where it usually is outperformed BIBREF0 , BIBREF6 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The original centroid-based model is described by BIBREF5 . It represents sentences as BOW vectors with TF-IDF weighting. The centroid vector is the sum of all sentence vectors and each sentence is scored by the cosine similarity between its vector representation and the centroid vector. Cosine similarity measures how close two vectors INLINEFORM0 and INLINEFORM1 are based on their angle and is defined as follows: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "Original centroid-based model by BIBREF5",
"highlighted_evidence": [
"The original centroid-based model is described by BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection BIBREF5 . The sentences are ranked by their cosine similarity to the centroid vector. This method is often found as a baseline in evaluations where it usually is outperformed BIBREF0 , BIBREF6 ."
],
"extractive_spans": [
"it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection"
],
"free_form_answer": "",
"highlighted_evidence": [
"The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"22309c0f4b96b12206c5d3c802b0664007639622",
"adf782d3547a25cdd4219051c869a95ea1c572be",
"c500c014c4633d57ad52b1d3a761587c718787a7"
],
"answer": [
{
"evidence": [
"We test the following methods of selecting INLINEFORM0 sentences from each document as candidates for the greedy sentence selection algorithm:",
"The first INLINEFORM0 sentences of the document are selected. This results in a mixture of a lead- INLINEFORM1 baseline and the centroid-based method.",
"The sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The INLINEFORM0 best sentences of each document are selected as candidates.",
"Each sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document."
],
"extractive_spans": [],
"free_form_answer": "Using three algorithms: N-first, N-best and New-TF-IDF.",
"highlighted_evidence": [
"We test the following methods of selecting INLINEFORM0 sentences from each document as candidates for the greedy sentence selection algorithm:\n\nThe first INLINEFORM0 sentences of the document are selected. This results in a mixture of a lead- INLINEFORM1 baseline and the centroid-based method.\n\nThe sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The INLINEFORM0 best sentences of each document are selected as candidates.\n\nEach sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We test the following methods of selecting INLINEFORM0 sentences from each document as candidates for the greedy sentence selection algorithm:",
"The first INLINEFORM0 sentences of the document are selected. This results in a mixture of a lead- INLINEFORM1 baseline and the centroid-based method.",
"The sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The INLINEFORM0 best sentences of each document are selected as candidates.",
"Each sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document."
],
"extractive_spans": [],
"free_form_answer": "Sentences are selected using 3 different greedy selection algorithms.",
"highlighted_evidence": [
"We test the following methods of selecting INLINEFORM0 sentences from each document as candidates for the greedy sentence selection algorithm:\n\nThe first INLINEFORM0 sentences of the document are selected. This results in a mixture of a lead- INLINEFORM1 baseline and the centroid-based method.\n\nThe sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The INLINEFORM0 best sentences of each document are selected as candidates.\n\nEach sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A summary is selected by de-queuing the ranked list of sentences in decreasing order until the desired summary length is reached.",
"We implement the selection of top-ranking features for both the original and modified models slightly differently to BIBREF7 : all words in the vocabulary are ranked by their value in the centroid vector. On a development dataset, a parameter is tuned that defines the proportion of the ranked vocabulary that is represented in the centroid vector and the rest is set to zero. This variant resulted in more stable behavior for different amounts of input documents."
],
"extractive_spans": [],
"free_form_answer": "All words in the vocabulary are ranked by their value in the centroid vector. Then the ranked list of sentences is de-queued in decreasing order.",
"highlighted_evidence": [
"A summary is selected by de-queuing the ranked list of sentences in decreasing order until the desired summary length is reached.",
"We implement the selection of top-ranking features for both the original and modified models slightly differently to BIBREF7 : all words in the vocabulary are ranked by their value in the centroid vector. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what state of the art methods are compared to?",
"what are the performance metrics?",
"what is the original model they refer to?",
"how are sentences selected prior to making the summary?"
],
"question_id": [
"0bb97991fc297aa5aed784568de52d5b9121f920",
"7ba6330d105f49c7f71dba148bb73245a8ef2966",
"157de5175259d6f25db703efb299f948dae597b7",
"cf3fab54b2b289b66e7dba4706c47a62569627c5"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Rouge scores on DUC2004.",
"Table 2: Summaries of the cluster d30031 in DUC2004 generated by the modified centroid method using different sentence preselection methods."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png"
]
} | [
"what state of the art methods are compared to?",
"what is the original model they refer to?",
"how are sentences selected prior to making the summary?"
] | [
[
"1708.07690-Results-1",
"1708.07690-4-Table1-1.png",
"1708.07690-Results-0"
],
[
"1708.07690-Introduction-3"
],
[
"1708.07690-Original Centroid-based Method-6",
"1708.07690-Preselection of Sentences-4",
"1708.07690-Preselection of Sentences-3",
"1708.07690-Preselection of Sentences-2",
"1708.07690-Preselection of Sentences-5",
"1708.07690-Original Centroid-based Method-1"
]
] | [
"CLASSY04, ICSI, Submodular, DPP, RegSum",
"Original centroid-based model by BIBREF5",
"All words in the vocabulary are ranked by their value in the centroid vector. Then the ranked list of sentences is de-queued in decreasing order."
] | 93 |
1804.05253 | "With 1 follower I must be AWESOME :P". Exploring the role of irony markers in irony recognition | Conversations in social media often contain the use of irony or sarcasm, when the users say the opposite of what they really mean. Irony markers are the meta-communicative clues that inform the reader that an utterance is ironic. We propose a thorough analysis of theoretically grounded irony markers in two social media platforms: $Twitter$ and $Reddit$. Classification and frequency analysis show that for $Twitter$, typographic markers such as emoticons and emojis are the most discriminative markers to recognize ironic utterances, while for $Reddit$ the morphological markers (e.g., interjections, tag questions) are the most discriminative. | {
"paragraphs": [
[
"With the advent of social media, irony and sarcasm detection has become an active area of research in Natural Language Processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Most computational studies have focused on building state-of-the-art models to detect whether an utterance or comment is ironic/sarcastic or not, sometimes without theoretical grounding. In linguistics and discourse studies, BIBREF4 (2000) and later BIBREF5 (2010) have studied two theoretical aspects of irony in the text: irony factors' and irony markers. Irony factors are characteristics of ironic utterances that cannot be removed without destroying the irony. In contrast, irony markers are a meta-communicative clue that “alert the reader to the fact that a sentence is ironical” BIBREF4 . They can be removed and the utterance is still ironic.",
"In this paper, we examine the role of irony markers in social media for irony recognition. Although punctuations, capitalization, and hyperboles are previously used as features in irony detection BIBREF6 , BIBREF7 , here we thoroughly analyze a set of theoretically-grounded types of irony markers, such as tropes (e.g., metaphors), morpho-syntactic indicators (e.g., tag questions), and typographic markers (e.g., emoji) and their use in ironic utterances. Consider the following two irony examples from INLINEFORM0 and INLINEFORM1 given in Table TABREF2 .",
"Both utterances are labeled as ironic by their authors (using hashtags in INLINEFORM0 and the /s marker in INLINEFORM1 ). In the INLINEFORM2 example, the author uses several irony markers such as Rhetorical question (e.g., “are you telling” ...) and metaphor (e.g., “golden age”). In the INLINEFORM3 example, we notice the use of capitalization (“AWESOME”) and emoticons (“:P” (tongue out)) that the author uses to alert the readers that it is an ironic tweet.",
"We present three contributions in this paper. First, we provide a detailed investigation of a set of theoretically-grounded irony markers (e.g., tropes, morpho-syntactic, and typographic markers) in social media. We conduct the classification and frequency analysis based on their occurrence. Second, we analyze and compare the use of irony markers on two social media platforms ( INLINEFORM0 and INLINEFORM1 ). Third, we provide an analysis of markers on topically different social media content (e.g., technology vs. political subreddits)."
],
[
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased.",
"Reddit: BIBREF10 (2018) introduced an extensive collection of sarcastic and non-sarcastic posts collected from different subreddits. In Reddit, authors mark their sarcastic intent of their posts by adding “/s” at the end of a post/comment. We collected 50K instances from the corpus for our experiments (denoted as INLINEFORM0 ), where the sarcastic and non-sarcastic replies are at least two sentences (i.e., we discard posts that are too short). For brevity, we denote ironic utterances as INLINEFORM1 and non-ironic utterances as INLINEFORM2 . Both INLINEFORM3 and INLINEFORM4 datasets are balanced between the INLINEFORM5 and INLINEFORM6 classes. We uuse 80% of the datasets for training, 10% for development, and the remaining 10% for testing."
],
[
"Three types of markers — tropes, morpho-syntactic, and typographic are used as features."
],
[
"Tropes are figurative use of expressions.",
"Metaphors - Metaphors often facilitate ironic representation and are used as markers. We have drawn metaphors from different sources (e.g., 884 and 8,600 adjective/noun metaphors from BIBREF11 and BIBREF12 , respectively, and used them as binary features. We also evaluate the metaphor detector BIBREF13 over INLINEFORM0 and INLINEFORM1 datasets. We considered metaphor candidates that have precision INLINEFORM2 0.75 (see BIBREF13 (2017)).",
"Hyperbole - Hyperboles or intensifiers are commonly used in irony because speakers frequently overstate the magnitude of a situation or event. We use terms that are denoted as “strong subjective” (positive/negative) from the MPQA corpus BIBREF14 as hyperboles. Apart from using hyperboles directly as the binary feature we also use their sentiment as features.",
"Rhetorical Questions - Rhetorical Questions (for brevity INLINEFORM0 ) have the structure of a question but are not typical information seeking questions. We follow the hypothesis introduced by BIBREF15 (2017) that questions that are in the middle of a comment are more likely to be RQ since since questions followed by text cannot be typical information seeking questions. Presence of INLINEFORM1 is used as a binary feature."
],
[
"This type of markers appear at the morphologic and syntactic levels of an utterance.",
"Exclamation - Exclamation marks emphasize a sense of surprise on the literal evaluation that is reversed in the ironic reading BIBREF5 . We use two binary features, single or multiple uses of the marker.",
"Tag questions - We built a list of tag questions (e.g.,, “didn't you?”, “aren't we?”) from a grammar site and use them as binary indicators.",
"Interjections - Interjections seem to undermine a literal evaluation and occur frequently in ironic utterances (e.g., “`yeah\", `wow”, “yay”,“ouch” etc.). Similar to tag questions we assembled interjections (a total of 250) from different grammar sites."
],
[
"Capitalization - Users often capitalize words to represent their ironic use (e.g., the use of “GREAT\", “SO”, and “WONDERFUL” in the ironic tweet “GREAT i'm SO happy shattered phone on this WONDERFUL day!!!”).",
"Quotation mark - Users regularly put quotation marks to stress the ironic meaning (e.g., “great” instead of GREAT in the above example).",
"Other punctuation marks - Punctuation marks such as “?”, “.”, “;” and their various uses (e.g., single/multiple/mix of two different punctuations) are used as features.",
"Hashtag - Particularly in INLINEFORM0 , hashtags often represent the sentiment of the author. For example, in the ironic tweet “nice to wake up to cute text. #suck”, the hashtag “#suck” depicts the negative sentiment. We use binary sentiment feature (positive or negative) to identify the sentiment of the hashtag, while comparing against the MPQA sentiment lexicon. Often multiple words are combined in a hashtag without spacing (e.g., “fun” and “night” in #funnight). We use an off-the-shelf tool to split words in such hashtags and then checked the sentiment of the words.",
"Emoticon - Emoticons are frequently used to emphasize the ironic intent of the user. In the example “I love the weather ;) #irony”, the emoticon “;)” (wink) alerts the reader to a possible ironic interpretation of weather (i.e., bad weather). We collected a comprehensive list of emoticons (over one-hundred) from Wikipedia and also used standard regular expressions to identify emoticons in our datasets. Beside using the emoticons directly as binary features, we use their sentiment as features as well (e.g., “wink” is regarded as positive sentiment in MPQA).",
"Emoji - Emojis are like emoticons, but they are actual pictures and recently have become very popular in social media. Figure FIGREF22 shows a tweet with two emojis (e.g., “unassumed” and “confounded” faces respectively) used as markers. We use an emoji library of 1,400 emojis to identify the particular emoji used in irony utterances and use them as binary indicators."
],
[
"We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a INLINEFORM0 post) is ironic or non-ironic, exclusively based on the irony marker features. We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . Table TABREF23 and Table TABREF24 present the results of the ablation tests for INLINEFORM1 and INLINEFORM2 . We report Precision ( INLINEFORM3 ), Recall ( INLINEFORM4 ) and INLINEFORM5 scores of both INLINEFORM6 and INLINEFORM7 categories.",
"Table TABREF23 shows that for ironic utterances in INLINEFORM0 , removing tropes have the maximum negative effect on Recall, with a reduction on INLINEFORM1 score by 15%. This is primarily due to the removal of hyperboles that frequently appear in ironic utterances in INLINEFORM2 . Removing typographic markers (e.g., emojis, emoticons, etc.) have the maximum negative effect on the Precision for the irony INLINEFORM3 category, since particular emojis and emoticons appear regularly in ironic utterances (Table TABREF25 ). For INLINEFORM4 , Table TABREF24 shows that removal of typographic markers such as emoticons does not affect the F1 scores, whereas the removal of morpho-syntactic markers, e.g., tag questions, interjections have a negative effect on the F1. Table TABREF25 and Table TABREF26 represent the INLINEFORM5 most discriminative features for both categories based on the feature weights learned during the SVM training for INLINEFORM6 and INLINEFORM7 , respectively. Table TABREF25 shows that for INLINEFORM8 , typographic features such as emojis and emoticons have the highest feature weights for both categories. Interestingly, we observe that for ironic tweets users often express negative sentiment directly via emojis (e.g., angry face, rage) whereas for non-ironic utterances, emojis with positive sentiments (e.g., hearts, wedding) are more familiar. For INLINEFORM9 (Table TABREF26 ), we observe that instead of emojis, other markers such as exclamation marks, negative tag questions, and metaphors are discriminatory markers for the irony category. In contrary, for the non-irony category, positive tag questions and negative sentiment hyperboles are influential features."
],
[
"We also investigate the occurrence of markers in the two platforms via frequency analysis (Table TABREF29 ). We report the mean of occurrence per utterance and the standard deviation (SD) of each marker. Table TABREF29 shows that markers such as hyperbole, punctuations, and interjections are popular in both platforms. Emojis and emoticons, although the two most popular markers in INLINEFORM0 are almost unused in INLINEFORM1 . Exclamations and INLINEFORM2 s are more common in the INLINEFORM3 corpus. Next, we combine each marker with the type they belong to (i.e., either trope, morpho-syntactic and typographic) and compare the means between each pair of types via independent t-tests. We found that the difference of means is significant ( INLINEFORM4 ) for all pair of types across the two platforms."
],
[
"Finally, we collected another set of irony posts from BIBREF10 , but this time we collected posts from specific topical subreddits. We collected irony posts about politics (e.g., subreddits: politics, hillary, the_donald), sports (e.g., subreddits: nba, football, soccer), religion (e.g., subreddits: religion) and technology (e.g., subreddits: technology). Table TABREF27 presents the mean and SD for each genre. We observe that users use tropes such as hyperbole and INLINEFORM0 , morpho-syntactic markers such as exclamation and interjections and multiple-punctuations more in politics and religion than in technology and sports. This is expected since subreddits regarding politics and religion are often more controversial than technology and sports and the users might want to stress that they are ironic or sarcastic using the markers."
],
[
"We provided a thorough investigation of irony markers across two social media platforms: Twitter and Reddit. Classification experiments and frequency analysis suggest that typographic markers such as emojis and emoticons are most frequent for INLINEFORM0 whereas tag questions, exclamation, metaphors are frequent for INLINEFORM1 . We also provide an analysis across different topical subreddits. In future, we are planning to experiment with other markers (e.g., ironic echo, repetition, understatements)."
]
],
"section_name": [
"Introduction",
"Data",
"Irony Markers",
"Tropes:",
"Morpho-syntactic (MS) irony markers:",
"Typographic irony markers:",
"Classification Experiments and Results",
"Frequency analysis of markers",
"Irony markers across topical subreddits",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"232a20a0c74e592c8987aa28722eebbd086c2feb",
"b1c2b588f586820f5f7376ecde119aa19e03f229",
"b5cb087d5d35d03a6ee37cee99855e1229fc7bb0"
],
"answer": [
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased."
],
"extractive_spans": [],
"free_form_answer": "The twitter dataset is English-only; no information for the reddit dataset is given",
"highlighted_evidence": [
"As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"272eeddbcb85f75eec4d03b313cd1eafaed94155",
"77072e3bb5267d98a97f6d6b79194ca4ddb9efcf",
"bab752719494f4389bba94cebd096e50bd0ed85c"
],
"answer": [
{
"evidence": [
"We also investigate the occurrence of markers in the two platforms via frequency analysis (Table TABREF29 ). We report the mean of occurrence per utterance and the standard deviation (SD) of each marker. Table TABREF29 shows that markers such as hyperbole, punctuations, and interjections are popular in both platforms. Emojis and emoticons, although the two most popular markers in INLINEFORM0 are almost unused in INLINEFORM1 . Exclamations and INLINEFORM2 s are more common in the INLINEFORM3 corpus. Next, we combine each marker with the type they belong to (i.e., either trope, morpho-syntactic and typographic) and compare the means between each pair of types via independent t-tests. We found that the difference of means is significant ( INLINEFORM4 ) for all pair of types across the two platforms."
],
"extractive_spans": [
"mean of occurrence per utterance and the standard deviation (SD) of each marker"
],
"free_form_answer": "",
"highlighted_evidence": [
"We report the mean of occurrence per utterance and the standard deviation (SD) of each marker."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We present three contributions in this paper. First, we provide a detailed investigation of a set of theoretically-grounded irony markers (e.g., tropes, morpho-syntactic, and typographic markers) in social media. We conduct the classification and frequency analysis based on their occurrence. Second, we analyze and compare the use of irony markers on two social media platforms ( INLINEFORM0 and INLINEFORM1 ). Third, we provide an analysis of markers on topically different social media content (e.g., technology vs. political subreddits)."
],
"extractive_spans": [
"based on their occurrence"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct the classification and frequency analysis based on their occurrence. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also investigate the occurrence of markers in the two platforms via frequency analysis (Table TABREF29 ). We report the mean of occurrence per utterance and the standard deviation (SD) of each marker. Table TABREF29 shows that markers such as hyperbole, punctuations, and interjections are popular in both platforms. Emojis and emoticons, although the two most popular markers in INLINEFORM0 are almost unused in INLINEFORM1 . Exclamations and INLINEFORM2 s are more common in the INLINEFORM3 corpus. Next, we combine each marker with the type they belong to (i.e., either trope, morpho-syntactic and typographic) and compare the means between each pair of types via independent t-tests. We found that the difference of means is significant ( INLINEFORM4 ) for all pair of types across the two platforms."
],
"extractive_spans": [],
"free_form_answer": "Mean of occurrence per utterance and the standard deviation is calculated for every marker type; the means between each pair of types is compared via independent t-tests",
"highlighted_evidence": [
"We also investigate the occurrence of markers in the two platforms via frequency analysis (Table TABREF29 ). We report the mean of occurrence per utterance and the standard deviation (SD) of each marker.",
"Next, we combine each marker with the type they belong to (i.e., either trope, morpho-syntactic and typographic) and compare the means between each pair of types via independent t-tests. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5ab6095aa679bd779b2db3940d87b1801cceca80",
"67b8d26d76c576225bb670cdf04e703a149c5c62",
"b23b13bc5863d3edb1b146023a3353428663f2ef"
],
"answer": [
{
"evidence": [
"We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a INLINEFORM0 post) is ironic or non-ironic, exclusively based on the irony marker features. We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . Table TABREF23 and Table TABREF24 present the results of the ablation tests for INLINEFORM1 and INLINEFORM2 . We report Precision ( INLINEFORM3 ), Recall ( INLINEFORM4 ) and INLINEFORM5 scores of both INLINEFORM6 and INLINEFORM7 categories."
],
"extractive_spans": [
"Support Vector Machines (SVM) classifier with linear kernel BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a INLINEFORM0 post) is ironic or non-ironic, exclusively based on the irony marker features. We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . Table TABREF23 and Table TABREF24 present the results of the ablation tests for INLINEFORM1 and INLINEFORM2 . We report Precision ( INLINEFORM3 ), Recall ( INLINEFORM4 ) and INLINEFORM5 scores of both INLINEFORM6 and INLINEFORM7 categories."
],
"extractive_spans": [
"Support Vector Machines (SVM) classifier with linear kernel BIBREF16 "
],
"free_form_answer": "",
"highlighted_evidence": [
"We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a INLINEFORM0 post) is ironic or non-ironic, exclusively based on the irony marker features. We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a INLINEFORM0 post) is ironic or non-ironic, exclusively based on the irony marker features. We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . Table TABREF23 and Table TABREF24 present the results of the ablation tests for INLINEFORM1 and INLINEFORM2 . We report Precision ( INLINEFORM3 ), Recall ( INLINEFORM4 ) and INLINEFORM5 scores of both INLINEFORM6 and INLINEFORM7 categories."
],
"extractive_spans": [
"Support Vector Machines (SVM) classifier with linear kernel"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Support Vector Machines (SVM) classifier with linear kernel BIBREF16 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2d590c2dd8e33aa73d41b9aad8f1d99beed76341",
"9b03cf9a49bfab75a7b08c43f1c038f0cf887740",
"d10ed98389b4b42c1c5b21fc7d00e001a58d1507"
],
"answer": [
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased."
],
"extractive_spans": [
"collected using hashtags, such as #irony, #sarcasm, and #sarcastic"
],
"free_form_answer": "",
"highlighted_evidence": [
"The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased.",
"Reddit: BIBREF10 (2018) introduced an extensive collection of sarcastic and non-sarcastic posts collected from different subreddits. In Reddit, authors mark their sarcastic intent of their posts by adding “/s” at the end of a post/comment. We collected 50K instances from the corpus for our experiments (denoted as INLINEFORM0 ), where the sarcastic and non-sarcastic replies are at least two sentences (i.e., we discard posts that are too short). For brevity, we denote ironic utterances as INLINEFORM1 and non-ironic utterances as INLINEFORM2 . Both INLINEFORM3 and INLINEFORM4 datasets are balanced between the INLINEFORM5 and INLINEFORM6 classes. We uuse 80% of the datasets for training, 10% for development, and the remaining 10% for testing."
],
"extractive_spans": [],
"free_form_answer": "Authors of the tweets and reddit posts",
"highlighted_evidence": [
"The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ).",
"In Reddit, authors mark their sarcastic intent of their posts by adding “/s” at the end of a post/comment. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Twitter: We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). As pre-processing, we removed the retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (i.e., we eliminated “#sarcasm is something that I love”). We lowercased the tweets, except the words where all the characters are uppercased.",
"Reddit: BIBREF10 (2018) introduced an extensive collection of sarcastic and non-sarcastic posts collected from different subreddits. In Reddit, authors mark their sarcastic intent of their posts by adding “/s” at the end of a post/comment. We collected 50K instances from the corpus for our experiments (denoted as INLINEFORM0 ), where the sarcastic and non-sarcastic replies are at least two sentences (i.e., we discard posts that are too short). For brevity, we denote ironic utterances as INLINEFORM1 and non-ironic utterances as INLINEFORM2 . Both INLINEFORM3 and INLINEFORM4 datasets are balanced between the INLINEFORM5 and INLINEFORM6 classes. We uuse 80% of the datasets for training, 10% for development, and the remaining 10% for testing."
],
"extractive_spans": [],
"free_form_answer": "Twitter and Reddit users of the original data ",
"highlighted_evidence": [
"The ironic/sarcastic tweets are collected using hashtags, such as #irony, #sarcasm, and #sarcastic whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags, such as #happy, #love, #sad, #hate (similar to BIBREF8 , BIBREF9 ). ",
" In Reddit, authors mark their sarcastic intent of their posts by adding “/s” at the end of a post/comment. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"What type of frequency analysis was used?",
"What type of classifiers were used?",
"Who annotated the Twitter and Reddit data for irony?"
],
"question_id": [
"000549a217ea24432c0656598279dbb85378c113",
"63d2e97657419a0185127534f4ff9d0039cb1a63",
"43f43b135109ebd1d2d1f9af979c64ce550b5f0f",
"e797634fa77e490783b349034f9e095ee570b7a9"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony",
"irony"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Use of irony markers in two social media platforms",
"Figure 1: Utterance with emoji (best in color)",
"Table 4: Irony markers based on feature weights for Twitter",
"Table 2: Ablation Tests of irony markers for Twitter. bold are best scores (in %).",
"Table 5: Irony markers based on feature weights for Reddit",
"Table 3: Ablation Tests of irony markers for Reddit posts. bold are best scores (in %).",
"Table 6: Frequency of irony markers in different genres (subreddits). The mean and the SD (in bracket) are reported.x ∗∗ and x ∗ depict significance on p ≤ 0.005 and p ≤ 0.05, respectively.",
"Table 7: Frequency of irony markers in two platforms. The mean and the SD (in bracket) are reported."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Table4-1.png",
"3-Table2-1.png",
"3-Table5-1.png",
"3-Table3-1.png",
"4-Table6-1.png",
"4-Table7-1.png"
]
} | [
"Do they evaluate only on English datasets?",
"What type of frequency analysis was used?",
"Who annotated the Twitter and Reddit data for irony?"
] | [
[
"1804.05253-Data-0"
],
[
"1804.05253-Introduction-3",
"1804.05253-Frequency analysis of markers-0"
],
[
"1804.05253-Data-0",
"1804.05253-Data-1"
]
] | [
"The twitter dataset is English-only; no information for the reddit dataset is given",
"Mean of occurrence per utterance and the standard deviation is calculated for every marker type; the means between each pair of types is compared via independent t-tests",
"Twitter and Reddit users of the original data "
] | 94 |
1805.11598 | Polyglot Semantic Role Labeling | Previous approaches to multilingual semantic dependency parsing treat languages independently, without exploiting the similarities between semantic structures across languages. We experiment with a new approach where we combine resources from a pair of languages in the CoNLL 2009 shared task to build a polyglot semantic role labeler. Notwithstanding the absence of parallel data, and the dissimilarity in annotations between languages, our approach results in an improvement in SRL performance on multiple languages over a monolingual baseline. Analysis of the polyglot model shows it to be advantageous in lower-resource settings. | {
"paragraphs": [
[
"The standard approach to multilingual NLP is to design a single architecture, but tune and train a separate model for each language. While this method allows for customizing the model to the particulars of each language and the available data, it also presents a problem when little data is available: extensive language-specific annotation is required. The reality is that most languages have very little annotated data for most NLP tasks.",
"ammar2016malopa found that using training data from multiple languages annotated with Universal Dependencies BIBREF1 , and represented using multilingual word vectors, outperformed monolingual training. Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). We train several parsers for each language in the CoNLL 2009 dataset BIBREF0 : a traditional monolingual version, and variants which additionally incorporate supervision from English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages.",
"The CoNLL 2009 dataset includes seven different languages, allowing study of trends across the same. Unlike the Universal Dependencies dataset, however, the semantic label spaces are entirely language-specific, making our task more challenging. Nonetheless, the success of polyglot training in this setting demonstrates that sharing of statistical strength across languages does not depend on explicit alignment in annotation conventions, and can be done simply through parameter sharing. We show that polyglot training can result in better labeling accuracy than a monolingual parser, especially for low-resource languages. We find that even a simple combination of data is as effective as more complex kinds of polyglot training. We include a breakdown into label categories of the differences between the monolingual and polyglot models. Our findings indicate that polyglot training consistently improves label accuracy for common labels."
],
[
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates.",
"Despite the consistency of this format, there are significant differences between the training sets across languages. English uses PropBank role labels BIBREF2 . Catalan, Chinese, English, German, and Spanish include (but are not limited to) labels such as “arg INLINEFORM0 -agt” (for “agent”) or “A INLINEFORM1 ” that may correspond to some degree to each other and to the English roles. Catalan and Spanish share most labels (being drawn from the same source corpus, AnCora; BIBREF3 ), and English and German share some labels. Czech and Japanese each have their own distinct sets of argument labels, most of which do not have clear correspondences to English or to each other.",
"We also note that, due to semi-automatic projection of annotations to construct the German dataset, more than half of German sentences do not include labeled predicate and arguments. Thus while German has almost as many sentences as Czech, it has by far the fewest training examples (predicate-argument structures); see Table TABREF3 ."
],
[
"Given a sentence with a marked predicate, the CoNLL 2009 shared task requires disambiguation of the sense of the predicate, and labeling all its dependent arguments. The shared task assumed predicates have already been identified, hence we do not handle the predicate identification task.",
"Our basic model adapts the span-based dependency SRL model of He2017-deepsrl. This adaptation treats the dependent arguments as argument spans of length 1. Additionally, BIO consistency constraints are removed from the original model— each token is tagged simply with the argument label or an empty tag. A similar approach has also been proposed by marcheggiani2017lstm.",
"The input to the model consists of a sequence of pretrained embeddings for the surface forms of the sentence tokens. Each token embedding is also concatenated with a vector indicating whether the word is a predicate or not. Since the part-of-speech tags in the CoNLL 2009 dataset are based on a different tagset for each language, we do not use these. Each training instance consists of the annotations for a single predicate. These representations are then passed through a deep, multi-layer bidirectional LSTM BIBREF4 , BIBREF5 with highway connections BIBREF6 .",
"We use the hidden representations produced by the deep biLSTM for both argument labeling and predicate sense disambiguation in a multitask setup; this is a modification to the models of He2017-deepsrl, who did not handle predicate senses, and of marcheggiani2017lstm, who used a separate model. These two predictions are made independently, with separate softmaxes over different last-layer parameters; we then combine the losses for each task when training. For predicate sense disambiguation, since the predicate has been identified, we choose from a small set of valid predicate senses as the tag for that token. This set of possible senses is selected based on the training data: we map from lemmatized tokens to predicates and from predicates to the set of all senses of that predicate. Most predicates are only observed to have one or two corresponding senses, making the set of available senses at test time quite small (less than five senses/predicate on average across all languages). If a particular lemma was not observed in training, we heuristically predict it as the first sense of that predicate. For Czech and Japanese, the predicate sense annotation is simply the lemmatized token of the predicate, giving a one-to-one predicate-“sense” mapping.",
"For argument labeling, every token in the sentence is assigned one of the argument labels, or INLINEFORM0 if the model predicts it is not an argument to the indicated predicate."
],
[
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . We trained 300-dimensional vectors, then reduced them to 100 dimensions with principal component analysis for efficiency."
],
[
"In the first polyglot variant, we consider multilingual sharing between each language and English by using pretrained multilingual embeddings. This polyglot model is trained on the union of annotations in the two languages. We use stratified sampling to give the two datasets equal effective weight in training, and we ensure that every training instance is seen at least once per epoch.",
"The basis of our polyglot training is the use of pretrained multilingual word vectors, which allow representing entirely distinct vocabularies (such as the tokens of different languages) in a shared representation space, allowing crosslingual learning BIBREF9 . We produced multilingual embeddings from the monolingual embeddings using the method of ammar2016massively: for each non-English language, a small crosslingual dictionary and canonical correlation analysis was used to find a transformation of the non-English vectors into the English vector space BIBREF10 .",
"Unlike multilingual word representations, argument label sets are disjoint between language pairs, and correspondences are not clearly defined. Hence, we use separate label representations for each language's labels. Similarly, while (for example) eng:look and spa:mira may be semantically connected, the senses look.01 and mira.01 may not correspond. Hence, predicate sense representations are also language-specific."
],
[
"In the second variant, we concatenate a language ID vector to each multilingual word embedding and predicate indicator feature in the input representation. This vector is randomly initialized and updated in training. These additional parameters provide a small degree of language-specificity in the model, while still sharing most parameters."
],
[
"This third variant takes inspiration from the “frustratingly easy” architecture of daumeiii2007easy for domain adaptation. In addition to processing every example with a shared biLSTM as in previous models, we add language-specific biLSTMs that are trained only on the examples belonging to one language. Each of these language-specific biLSTMs is two layers deep, and is combined with the shared biSLTM in the input to the third layer. This adds a greater degree of language-specific processing while still sharing representations across languages. It also uses the language identification vector and multilingual word vectors in the input."
],
[
"We present our results in Table TABREF11 . We observe that simple polyglot training improves over monolingual training, with the exception of Czech, where we observe no change in performance. The languages with the fewest training examples (German, Japanese, Catalan) show the most improvement, while large-dataset languages such as Czech or Chinese see little or no improvement (Figure FIGREF10 ).",
"The language ID model performs inconsistently; it is better than the simple polyglot model in some cases, including Czech, but not in all. The language-specific LSTMs model performs best on a few languages, such as Catalan and Chinese, but worst on others. While these results may reflect differences between languages in the optimal amount of crosslingual sharing, we focus on the simple polyglot results in our analysis, which sufficiently demonstrate that polyglot training can improve performance over monolingual training.",
"We also report performance of state-of-the-art systems in each of these languages, all of which make explicit use of syntactic features, marcheggiani2017lstm excepted. While this results in better performance on many languages, our model has the advantage of not relying on a syntactic parser, and is hence more applicable to languages with lower resources. However, the results suggest that syntactic information is critical for strong performance on German, which has the fewest predicates and thus the least semantic annotation for a semantics-only model to learn from. Nevertheless, our baseline is on par with the best published scores for Chinese, and it shows strong performance on most languages."
],
[
"Recent improvements in multilingual SRL can be attributed to neural architectures. Swayamdipta2016-qt present a transition-based stack LSTM model that predicts syntax and semantics jointly, as a remedy to the reliance on pipelined models. Guo2016-zc and BIBREF11 use deep biLSTM architectures which use syntactic information to guide the composition. marcheggiani2017lstm use a simple LSTM model over word tokens to tag semantic dependencies, like our model. Their model predicts a token's label based on the combination of the token vector and the predicate vector, and saw benefits from using POS tags, both improvements that could be added to our model. marcheggiani2017gcn apply the recently-developed graph convolutional networks to SRL, obtaining state of the art results on English and Chinese. All of these approaches are orthogonal to ours, and might benefit from polyglot training.",
"Other polyglot models have been proposed for semantics. Richardson2018-ov-naacl train on multiple (natural language)-(programming language) pairs to improve a model that translates API text into code signature representations. Duong2017-qy treat English and German semantic parsing as a multi-task learning problem and saw improvement over monolingual baselines, especially for small datasets. Most relevant to our work is Johannsen2015-nb, which trains a polyglot model for frame-semantic parsing. In addition to sharing features with multilingual word vectors, they use them to find word translations of target language words for additional lexical features."
],
[
"In this work, we have explored a straightforward method for polyglot training in SRL: use multilingual word vectors and combine training data across languages. This allows sharing without crosslingual alignments, shared annotation, or parallel data. We demonstrate that a polyglot model can outperform a monolingual one for semantic analysis, particularly for languages with less data."
],
[
"We thank Luke Zettlemoyer, Luheng He, and the anonymous reviewers for helpful comments and feedback. This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under contract HR001115C0113 to BBN. Views expressed are those of the authors alone."
]
],
"section_name": [
"Introduction",
"Data",
"Model",
"Monolingual Baseline",
"Simple Polyglot Sharing",
"Language Identification",
"Language-Specific LSTMs",
"Experiments",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"276a9f30bc9c6b13285cb6f336694507b3d69daa",
"e6ac8aa7eb698957511d53d5a6a483d7313ca9f4",
"ebbd6f30a320cc0444393903adbd78a1106bddfb"
],
"answer": [
{
"evidence": [
"In this work, we have explored a straightforward method for polyglot training in SRL: use multilingual word vectors and combine training data across languages. This allows sharing without crosslingual alignments, shared annotation, or parallel data. We demonstrate that a polyglot model can outperform a monolingual one for semantic analysis, particularly for languages with less data."
],
"extractive_spans": [
"multilingual word vectors",
"training data across languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we have explored a straightforward method for polyglot training in SRL: use multilingual word vectors and combine training data across languages. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The input to the model consists of a sequence of pretrained embeddings for the surface forms of the sentence tokens. Each token embedding is also concatenated with a vector indicating whether the word is a predicate or not. Since the part-of-speech tags in the CoNLL 2009 dataset are based on a different tagset for each language, we do not use these. Each training instance consists of the annotations for a single predicate. These representations are then passed through a deep, multi-layer bidirectional LSTM BIBREF4 , BIBREF5 with highway connections BIBREF6 ."
],
"extractive_spans": [
"a sequence of pretrained embeddings for the surface forms of the sentence tokens",
"annotations for a single predicate",
"CoNLL 2009 dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"The input to the model consists of a sequence of pretrained embeddings for the surface forms of the sentence tokens. Each token embedding is also concatenated with a vector indicating whether the word is a predicate or not. Since the part-of-speech tags in the CoNLL 2009 dataset are based on a different tagset for each language, we do not use these. Each training instance consists of the annotations for a single predicate. These representations are then passed through a deep, multi-layer bidirectional LSTM BIBREF4 , BIBREF5 with highway connections BIBREF6 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The basis of our polyglot training is the use of pretrained multilingual word vectors, which allow representing entirely distinct vocabularies (such as the tokens of different languages) in a shared representation space, allowing crosslingual learning BIBREF9 . We produced multilingual embeddings from the monolingual embeddings using the method of ammar2016massively: for each non-English language, a small crosslingual dictionary and canonical correlation analysis was used to find a transformation of the non-English vectors into the English vector space BIBREF10 .",
"In the second variant, we concatenate a language ID vector to each multilingual word embedding and predicate indicator feature in the input representation. This vector is randomly initialized and updated in training. These additional parameters provide a small degree of language-specificity in the model, while still sharing most parameters."
],
"extractive_spans": [
"multilingual word vectors",
"concatenate a language ID vector to each multilingual word embedding"
],
"free_form_answer": "",
"highlighted_evidence": [
"The basis of our polyglot training is the use of pretrained multilingual word vectors, which allow representing entirely distinct vocabularies (such as the tokens of different languages) in a shared representation space, allowing crosslingual learning BIBREF9 .",
"In the second variant, we concatenate a language ID vector to each multilingual word embedding and predicate indicator feature in the input representation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3a909be7352ee1324dabd8f848b2ce49a2e9ad29",
"3d55e38d4168c71407845001e758713dfa863732",
"58a695497c60c6765b200fc75a0be5e514d51900"
],
"answer": [
{
"evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
"semantic role labeling portion of the CoNLL-2009 shared task BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"ammar2016malopa found that using training data from multiple languages annotated with Universal Dependencies BIBREF1 , and represented using multilingual word vectors, outperformed monolingual training. Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). We train several parsers for each language in the CoNLL 2009 dataset BIBREF0 : a traditional monolingual version, and variants which additionally incorporate supervision from English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages.",
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
"CoNLL 2009 dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train several parsers for each language in the CoNLL 2009 dataset BIBREF0 : a traditional monolingual version, and variants which additionally incorporate supervision from English portion of the dataset.",
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
"semantic role labeling portion of the CoNLL-2009 shared task"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7332f7b9fa4d2e077fd4b0908620f6fe006f4dab",
"98e52bc5f4e87000eec16f3615fb23e5a59be7d1",
"a02ea3e4485868a4c1ae399c958e70efc4c96c8f"
],
"answer": [
{
"evidence": [
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . We trained 300-dimensional vectors, then reduced them to 100 dimensions with principal component analysis for efficiency."
],
"extractive_spans": [],
"free_form_answer": "For each of the shared task languages, they produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection and trained 300-dimensional vectors then reduced them to 100 dimensions with principal component analysis for efficiency.",
"highlighted_evidence": [
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . We trained 300-dimensional vectors, then reduced them to 100 dimensions with principal component analysis for efficiency."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . We trained 300-dimensional vectors, then reduced them to 100 dimensions with principal component analysis for efficiency.",
"Our basic model adapts the span-based dependency SRL model of He2017-deepsrl. This adaptation treats the dependent arguments as argument spans of length 1. Additionally, BIO consistency constraints are removed from the original model— each token is tagged simply with the argument label or an empty tag. A similar approach has also been proposed by marcheggiani2017lstm.",
"The input to the model consists of a sequence of pretrained embeddings for the surface forms of the sentence tokens. Each token embedding is also concatenated with a vector indicating whether the word is a predicate or not. Since the part-of-speech tags in the CoNLL 2009 dataset are based on a different tagset for each language, we do not use these. Each training instance consists of the annotations for a single predicate. These representations are then passed through a deep, multi-layer bidirectional LSTM BIBREF4 , BIBREF5 with highway connections BIBREF6 ."
],
"extractive_spans": [
" basic model adapts the span-based dependency SRL model of He2017-deepsrl"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 .",
"Our basic model adapts the span-based dependency SRL model of He2017-deepsrl",
"The input to the model consists of a sequence of pretrained embeddings for the surface forms of the sentence tokens."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the hidden representations produced by the deep biLSTM for both argument labeling and predicate sense disambiguation in a multitask setup; this is a modification to the models of He2017-deepsrl, who did not handle predicate senses, and of marcheggiani2017lstm, who used a separate model. These two predictions are made independently, with separate softmaxes over different last-layer parameters; we then combine the losses for each task when training. For predicate sense disambiguation, since the predicate has been identified, we choose from a small set of valid predicate senses as the tag for that token. This set of possible senses is selected based on the training data: we map from lemmatized tokens to predicates and from predicates to the set of all senses of that predicate. Most predicates are only observed to have one or two corresponding senses, making the set of available senses at test time quite small (less than five senses/predicate on average across all languages). If a particular lemma was not observed in training, we heuristically predict it as the first sense of that predicate. For Czech and Japanese, the predicate sense annotation is simply the lemmatized token of the predicate, giving a one-to-one predicate-“sense” mapping.",
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . We trained 300-dimensional vectors, then reduced them to 100 dimensions with principal component analysis for efficiency."
],
"extractive_spans": [],
"free_form_answer": "biLSTM with pre-trained GloVe embeddings.",
"highlighted_evidence": [
"We use the hidden representations produced by the deep biLSTM for both argument labeling and predicate sense disambiguation in a multitask setup; this is a modification to the models of He2017-deepsrl, who did not handle predicate senses, and of marcheggiani2017lstm, who used a separate model. ",
"We use pretrained word embeddings as input to the model. For each of the shared task languages, we produced GloVe vectors BIBREF7 from the news, web, and Wikipedia text of the Leipzig Corpora Collection BIBREF8 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7bc717a0d4d35873cb5c7f9fe3f34603f03a7e15",
"a977c2fe432910b8e13a4b4d034aae92a39f8910",
"b86f4e4da4b0a88c8dca20741f86c10a4d2e49e8"
],
"answer": [
{
"evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
"Catalan",
"Chinese",
"Czech",
"English",
"German",
"Japanese",
"Spanish"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
"Catalan",
"Chinese",
"Czech",
"English",
"German",
"Japanese",
"Spanish"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish. For each language, certain tokens in each sentence in the dataset are marked as predicates. Each predicate takes as arguments other words in the same sentence, their relationship marked by labeled dependency arcs. Sentences may contain no predicates."
],
"extractive_spans": [
" Catalan, Chinese, Czech, English, German, Japanese and Spanish"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our system on the semantic role labeling portion of the CoNLL-2009 shared task BIBREF0 , on all seven languages, namely Catalan, Chinese, Czech, English, German, Japanese and Spanish."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what resources are combined to build the labeler?",
"what datasets were used?",
"what is the monolingual baseline?",
"what languages are explored in this paper?"
],
"question_id": [
"475e698a801be0ad9e4f74756d1fff4fe0728009",
"8246d1eee1482555d075127ac84f2e1d0781a446",
"1ec0be667a6594eb2e07c50258b120e693e040a8",
"e3bafa432cd3e1225170ff04de2fdf1ede38c6ef"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Example predicate-argument structures from English, Spanish, and Czech. Note that the argument labels are different in each language.",
"Table 1: Train data statistics. Languages are indicated with ISO 639-3 codes.",
"Figure 2: Improvement in absolute F1 with polyglot training with addition of English. Languages are sorted in order of increasing number of predicates in the training set.",
"Table 2: Semantic F1 scores (including predicate sense disambiguation) on the CoNLL 2009 dataset. State of the art for Catalan and Japanese is from Zhao et al. (2009), for German and Spanish from Roth and Lapata (2016), for English and Chinese from Marcheggiani and Titov (2017). Italics indicate use of syntax.",
"Table 3: Per-label breakdown of F1 scores for Catalan and Spanish. These numbers reflect labels for each argument; the combination is different from the overall semantic F1, which includes predicate sense disambiguation.",
"Table 4: Semantic F1 scores on the English test set for each language pair.",
"Table 5: Unlabeled semantic F1 scores on the CoNLL 2009 dataset."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"what is the monolingual baseline?"
] | [
[
"1805.11598-Model-3",
"1805.11598-Monolingual Baseline-0",
"1805.11598-Model-2",
"1805.11598-Model-1"
]
] | [
"biLSTM with pre-trained GloVe embeddings."
] | 95 |
1610.03955 | Dialogue Session Segmentation by Embedding-Enhanced TextTiling | In human-computer conversation systems, the context of a user-issued utterance is particularly important because it provides useful background information of the conversation. However, it is unwise to track all previous utterances in the current session as not all of them are equally important. In this paper, we address the problem of session segmentation. We propose an embedding-enhanced TextTiling approach, inspired by the observation that conversation utterances are highly noisy, and that word embeddings provide a robust way of capturing semantics. Experimental results show that our approach achieves better performance than the TextTiling and MMD approaches. | {
"paragraphs": [
[
"Human-computer dialog/conversation is one of the most challenging problems in artificial intelligence. Given a user-issued utterance (called a query in this paper), the computer needs to provide a reply to the query. In early years, researchers have developed various domain-oriented dialogue systems, which are typically based on rules or templates BIBREF4 , BIBREF5 , BIBREF6 . Recently, open-domain conversation systems have attracted more and more attention in both academia and industry (e.g., XiaoBing from Microsoft and DuMi from Baidu). Due to high diversity, we can hardly design rules or templates in the open domain. Researchers have proposed information retrieval methods BIBREF7 and modern generative neural networks BIBREF8 , BIBREF9 to either search for a reply from a large conversation corpus or generate a new sentence as the reply.",
"In open-domain conversations, context information (one or a few previous utterances) is particularly important to language understanding BIBREF1 , BIBREF9 , BIBREF10 , BIBREF11 . As dialogue sentences are usually casual and short, a single utterance (e.g., “Thank you.” in Figure FIGREF2 ) does not convey much meaning, but its previous utterance (“...writing an essay”) provides useful background information of the conversation. Using such context will certainly benefit the conversation system.",
"However, tracking all previous utterances as the context is unwise. First, commercial chat-bots usually place high demands on efficiency. In a retrieval-based system, for example, performing a standard process of candidate retrieval and re-ranking for each previous utterance may well exceed the time limit (which is very short, e.g., 500ms). Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. Therefore, it raises the question of session segmentation in conversation systems.",
"Document segmentation for general-purpose corpora has been widely studied in NLP. For example, Hearst BIBREF12 proposes the TextTiling approach; she measures the similarity of neighboring sentences based on bag-of-words features, and performs segmentation by thresholding. However, such approaches are not tailored to the dialogue genre and may not be suitable for conversation session segmentation.",
"In this paper, we address the problem of session segmentation for open-domain conversations. We leverage the classic TextTiling approach, but enhance it with modern embedding-based similarity measures. Compared with traditional bag-of-words features, embeddings map discrete words to real-valued vectors, capturing underlying meanings in a continuous vector space; hence, it is more robust for noisy conversation corpora. Further, we propose a tailored method for word embedding learning. In traditional word embedding learning, the interaction between two words in a query and a reply is weaker than that within an utterance. We propose to combine a query and its corresponding reply as a “virtual sentence,” so that it provides a better way of modeling utterances between two agents."
],
[
"Human-computer dialogue systems can be roughly divided into several categories. Template- and rule-based systems are mainly designed for certain domains BIBREF4 , BIBREF5 , BIBREF13 . Although manually engineered templates can also be applied in the open domain like BIBREF14 , but their generated sentences are subject to 7 predefined forms, and hence are highly restricted. Retrieval methods search for a candidate reply from a large conversation corpus given a user-issued utterance as a query BIBREF7 . Generative methods can synthesize new replies by statistical machine translation BIBREF15 , BIBREF16 or neural networks BIBREF8 .",
"The above studies do not consider context information in reply retrieval or generation. However, recent research shows that previous utterances in a conversation session are important because they capture rich background information. Sordoni et al. BIBREF11 summarize a single previous sentence as bag-of-words features, which are fed to a recurrent neural network for reply generation. Serban et al. BIBREF17 design an attention-based neural network over all previous conversation turns/rounds, but this could be inefficient if a session lasts long in real commercial applications. By contrast, our paper addresses the problem of session segmentation so as to retain near, relevant context utterances and to eliminate far, irrelevant ones.",
"A similar (but different) research problem is topic tracking in conversations, e.g., BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . In these approaches, the goal is typically a classification problem with a few pre-defined conversation states/topics, and hence it can hardly be generalized to general-purpose session segmentation."
],
[
"An early and classic work on text segmentation is TextTiling, proposed in BIBREF12 . The idea is to measure the similarity between two successive sentences with smoothing techniques; then segmentation is accomplished by thresholding of the depth of a “valley.” In the original form of TextTiling, the cosine of term frequency features is used as the similarity measure. Joty et al. BIBREF22 apply divisive clustering instead of thresholding for segmentation. Malioutov et al. BIBREF23 formalize segmentation as a graph-partitioning problem and propose a minimum cut model based on tf INLINEFORM0 idf features to segment lectures. Ye et al. BIBREF24 minimize between-segment similarity while maximizing within-segment similarity. However, the above complicated approaches are known as global methods: when we perform segmentation between two successive sentences, future context information is needed. Therefore, they are inapplicable to real-time chat-bots, where conversation utterances can be viewed as streaming data.",
"In our study, we prefer the simple yet effective TextTiling approach for open-domain dialogue session segmentation, but enhance it with modern advances of word embeddings, which are robust in capturing semantics of words. We propose a tailored algorithm for word embedding learning by combining a query and context as a “virtual document”; we also propose several heuristics for similarity measuring."
],
[
"We apply a TextTiling-like algorithm for session segmentation. The original TextTiling is proposed by Hearst BIBREF12 . The main idea is to measure the similarity of each adjacent sentence pair; then “valleys” of similarities are detected for segmentation.",
"Concretely, the “depth of the valley” is defined by the similarity differences between the peak point in each side and the current position. We may obtain some statistics of depth scores like the mean INLINEFORM0 and standard deviation INLINEFORM1 , and perform segmentation by a cutoff threshold.",
"where INLINEFORM0 is a hyperparameter adjusting the number of segmentation boundaries; INLINEFORM1 and INLINEFORM2 are the average and standard deviation of depth scores, respectively.",
"In the scenario of human-computer conversations, we compute the depth solely by the similarity difference between its left peak (previous context) and the current position. This is because we cannot obtain future utterances during online conversation.",
"Although bag-of-words features work well in the original TextTiling algorithm for general text segmentation, it is not suitable for dialogue segmentation. As argued by Hearst BIBREF12 , text overlap (repetition) between neighboring sentences is a strong hint of semantic coherence, which can be well captured by term frequency or tf INLINEFORM0 idf variants. However, in human-computer conversations, sentences are usually short, noisy, highly diversified, and probably incomplete, which requires a more robust way of similarity measuring. Therefore, we enhance TextTiling with modern word embedding techniques, as will be discussed in the next part."
],
[
"Word embeddings are distributed, real-valued vector representations of discrete words BIBREF25 , BIBREF26 . Compared with one-hot representation, word embeddings are low-dimensional and dense, measuring word meanings in a continuous vector space. Studies show that the offset of two words' embeddings represents a certain relation, e.g., “man” INLINEFORM0 “woman” INLINEFORM1 “king” INLINEFORM2 “queen” BIBREF25 . Hence, it is suitable to use word embeddings to model short and noisy conversation utterances.",
"To train the embeddings, we adopt the word2vec approach. The idea is to map a word INLINEFORM0 and its context INLINEFORM1 to vectors ( INLINEFORM2 and INLINEFORM3 ). Then we estimate the probability of a word by DISPLAYFORM0 ",
"The goal of word embedding learning is to maximize the average probability of all words (suppose we have INLINEFORM0 running words): DISPLAYFORM0 ",
"We used hierarchical softmax to approximate the probability.",
"To model the context, we further adopt the continuous bag-of-words (CBOW) method. The context is defined by the sum of neighboring words' (input) vectors in a fixed-size window ( INLINEFORM0 to INLINEFORM1 ) within a sentence: DISPLAYFORM0 ",
"Notice that the context vector INLINEFORM0 in Equation ( EQREF12 ) and the output vector INLINEFORM1 in Equation ( EQREF9 ) are different as suggested in BIBREF25 , BIBREF26 , but the details are beyond the scope of our paper.",
"Virtual Sentences",
"In a conversation corpus, successive sentences have a stronger interaction than general texts. For example, in Figure FIGREF2 , the words thank and welcome are strongly correlated, but they hardly appear in the a sentence and thus a same window. Therefore, traditional within-sentence CBOW may not capture the interaction between a query and its corresponding reply.",
"In this paper, we propose the concept of virtual sentences to learn word embeddings for conversation data. We concatenate a query INLINEFORM0 and its reply INLINEFORM1 as a virtual sentence INLINEFORM2 . We also use all words (other than the current one) in the virtual sentence as context (Figure 2). Formally, the context INLINEFORM3 of the word INLINEFORM4 is given by DISPLAYFORM0 ",
"In this way, related words across two successive utterances from different agents can have interaction during word embedding learning. As will be shown in Subsection SECREF22 , virtual sentences yield a higher performance for dialogue segmentation."
],
[
"In this part, we introduce several heuristics of similarity measuring based on word embeddings. Notice that, we do not leverage supervised learning (e.g., full neural networks for sentence paring BIBREF27 , BIBREF28 ) to measure similarity, because it is costly to obtain labeled data of high quality.",
"The simplest approach, perhaps, is to sum over all word embeddings in an utterance as sentence-level features INLINEFORM0 . This heuristic is essentially the sum pooling method widely used in neural networks BIBREF29 , BIBREF30 , BIBREF27 . The cosine measure is used as the similarity score between two utterances INLINEFORM1 and INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be their sentence vectors; then we have DISPLAYFORM0 ",
"where INLINEFORM0 is the INLINEFORM1 -norm of a vector.",
"To enhance the interaction between two successive sentences, we propose a more complicated heuristic as follows. Let INLINEFORM0 and INLINEFORM1 be a word in INLINEFORM2 and INLINEFORM3 , respectively. (Embeddings are denoted as bold alphabets.) Suppose further that INLINEFORM4 and INLINEFORM5 are the numbers of words in INLINEFORM6 and INLINEFORM7 . The similarity is given by DISPLAYFORM0 ",
"For each word INLINEFORM0 in INLINEFORM1 , our intuition is to find the most related word in INLINEFORM2 , given by the INLINEFORM3 part; their relatedness is also defined by the cosine measure. Then the sentence-level similarity is obtained by the average similarity score of words in INLINEFORM4 . This method is denoted as heuristic-max.",
"Alternatively, we may substitute the INLINEFORM0 operator in Equation ( EQREF16 ) with INLINEFORM1 , resulting in the heuristic-avg variant, which is equivalent to the average of word-by-word cosine similarity. However, as shown in Subsection SECREF22 , intensive similarity averaging has a “blurring” effect and will lead to significant performance degradation. This also shows that our proposed heuristic-max does capture useful interaction between two successive utterances in a dialogue."
],
[
"In this section, we evaluate our embedding-enhanced TextTiling method as well as the effect of session segmentation. In Subsection SECREF17 , we describe the datasets used in our experiments. Subsection SECREF22 presents the segmentation accuracy of our method and baselines. In Subsection SECREF27 , we show that, with our session segmentation, we can improve the performance of a retrieval-based conversation system."
],
[
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese. We sampled 200 sessions as our experimental corpus. Session segmentation was manually annotated before experiments, serving as the ground truth. The 200 sessions were randomly split by 1:1 for validation and testing. Notice that, our method does not require labeled training samples; massive data with labels of high quality are quite expensive to obtain.",
"We also leveraged an unlabeled massive dataset of conversation utterances to train our word embeddings with “virtual sentences.” The dataset was crawled from the Douban forum, containing 3 million utterances and approximately 150,000 unique words (Chinese terms)."
],
[
"We compared our full method (TextTiling with heuristic-max based on embeddings trained by virtual sentences) with several baselines:",
"Random. We randomly segmented conversation sessions. In this baseline, we were equipped with the prior probability of segmentation.",
"MMD. We applied the MinMax-Dotplotting (MMD) approach proposed by Ye et al. BIBREF24 . We ran the executable program provided by the authors.",
"TextTiling w/ tf INLINEFORM0 idf features. We implemented TextTiling ourselves according to BIBREF12 .",
"We tuned the hyperparameter INLINEFORM0 in Equation ()on the validation set to make the number of segmentation close to that of manual annotation, and reported precision, recall, and the F-score on the test set in Table TABREF18 . As seen, our approach significantly outperforms baselines by a large margin in terms of both precision and recall. Besides, we can see that MMD obtains low performance, which is mainly because the approach cannot be easily adapted to other datasets like short sentences of conversation utterances. In summary, we achieve an INLINEFORM1 -score higher than baseline methods by more than 20%, showing the effectiveness of enhancing TextTiling with modern word embeddings.",
"We further conducted in-depth analysis of different strategies of training word-embeddings and matching heuristics in Table TABREF21 . For word embeddings, we trained them on the 3M-sentence dataset with three strategies: (1) virtual-sentence context proposed in our paper; (2) within-sentence context, where all words (except the current one) within a sentence (either a query or reply) are regarded as the context; (3) window-based context, which is the original form of BIBREF25 : the context is the words in a window (previous 2 words and future 2 words in the sentence). We observe that our virtual-sentence strategy consistently outperforms the other two in all three matching heuristics. The results suggest that combining a query and a reply does provide more information in learning dialogue-specific word embeddings.",
"Regarding matching heuristics, we find that in the second and third strategies of training word embeddings, the complicated heuristic-max method yields higher INLINEFORM0 -scores than simple sum pooling by 2–3%. However, for the virtual-sentence strategy, heuristic-max is slightly worse than the sum pooling. (The degradation is only 0.1% and not significant.) This is probably because both heuristic-max and virtual sentences emphasize the rich interaction between a query and its corresponding reply; combining them does not result in further gain.",
"We also notice that heuristic-avg is worse than other similarity measures. As this method is mathematically equivalent to the average of word-by-word similarity, it may have an undesirable blurring effect.",
"To sum up, our experiments show that both the proposed embedding learning approach and the similarity heuristic are effective for session segmentation. The embedding-enhanced TextTiling approach largely outperforms baselines.",
"We conducted an external experiment to show the effect of session segmentation in dialogue systems. We integrated the segmentation mechanism into a state-of-the-practice retrieval-based system and evaluated the results by manual annotation, similar to our previous work BIBREF27 , BIBREF31 , BIBREF32 .",
"Concretely, we compared our session segmentation with fixed-length context, used in BIBREF11 . That is to say, the competing method always regards two previous utterances as context. We hired three workers to annotate the results with three integer scores (0–2 points, indicating bad, borderline, and good replies, respectively.) We sampled 30 queries from the test set of 100 sessions. For each query, we retrieved 10 candidates and computed p@1 and nDCG scores BIBREF33 (averaged over three annotators). Provided with previous utterances as context, each worker had up to 1000 sentences to read during annotation.",
"Table TABREF26 presents the results of the dialogue system with session segmentation. As demonstrated, our method outperforms the simple fixed-context approach in terms of both metrics. We computed the inner-annotator agreement: std INLINEFORM0 0.309; 3-discrete-class Fleiss' kappa score INLINEFORM1 0.411, indicating moderate agreement BIBREF34 .",
"Case Study. We present a case study on our website: https://sites.google.com/site/sessionsegmentation/. From the case study, we see that the proposed approach is able to segment the dialogue session appropriately, so as to better utilize background information from a conversation session.",
"In this paper, we addressed the problem of session segmentation for open-domain dialogue systems. We proposed an embedding-enhanced TextTiling approach, where we trained embeddings with the novel notion of virtual sentences; we also proposed several heuristics for similarity measure. Experimental results show that both our embedding learning and similarity measuring are effective in session segmentation, and that with our approach, we can improve the performance of a retrieval-based dialogue system.",
"We thank anonymous reviewers for useful comments and Jingbo Zhu for sharing the MMD executable program. This paper is partially supported by the National Natural Science Foundation of China (NSFC Grant Nos. 61272343 and 61472006), the Doctoral Program of Higher Education of China (Grant No. 20130001110032), and the National Basic Research Program (973 Program No. 2014CB340405)."
]
],
"section_name": [
"Introduction",
"Dialogue Systems and Context Modeling",
"Text Segmentation",
"TextTiling",
"Learning Word Embeddings",
"Measuring Similarity",
"Experiments",
"Dataset",
"Segmentation Performance"
]
} | {
"answers": [
{
"annotation_id": [
"29ab5dfa71a8c6180d766527f0c001d45e54d3c8",
"4c696dab9e6357d336f39b518f1f53e87edd2f05",
"a4adaf71f782e889894be087876e4123ecd93ce3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"27c97216df9c3e233568835403ed5fb10b2e4fbf",
"c2554c3fab96b9086d66872908f3f98fb0dec869",
"f4a8a7f4691193f49768b3e92406d0db35666bba"
],
"answer": [
{
"evidence": [
"However, tracking all previous utterances as the context is unwise. First, commercial chat-bots usually place high demands on efficiency. In a retrieval-based system, for example, performing a standard process of candidate retrieval and re-ranking for each previous utterance may well exceed the time limit (which is very short, e.g., 500ms). Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. Therefore, it raises the question of session segmentation in conversation systems."
],
"extractive_spans": [
"ot all sentences in the current conversation session are equally important",
" irrelevant to the current context, and should not be considered when the computer synthesizes the reply"
],
"free_form_answer": "",
"highlighted_evidence": [
"Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. Therefore, it raises the question of session segmentation in conversation systems."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The above studies do not consider context information in reply retrieval or generation. However, recent research shows that previous utterances in a conversation session are important because they capture rich background information. Sordoni et al. BIBREF11 summarize a single previous sentence as bag-of-words features, which are fed to a recurrent neural network for reply generation. Serban et al. BIBREF17 design an attention-based neural network over all previous conversation turns/rounds, but this could be inefficient if a session lasts long in real commercial applications. By contrast, our paper addresses the problem of session segmentation so as to retain near, relevant context utterances and to eliminate far, irrelevant ones."
],
"extractive_spans": [],
"free_form_answer": "To retain near and context relevant dialog session utterances and to discard far, irrelevant ones.",
"highlighted_evidence": [
"By contrast, our paper addresses the problem of session segmentation so as to retain near, relevant context utterances and to eliminate far, irrelevant ones."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The above studies do not consider context information in reply retrieval or generation. However, recent research shows that previous utterances in a conversation session are important because they capture rich background information. Sordoni et al. BIBREF11 summarize a single previous sentence as bag-of-words features, which are fed to a recurrent neural network for reply generation. Serban et al. BIBREF17 design an attention-based neural network over all previous conversation turns/rounds, but this could be inefficient if a session lasts long in real commercial applications. By contrast, our paper addresses the problem of session segmentation so as to retain near, relevant context utterances and to eliminate far, irrelevant ones."
],
"extractive_spans": [],
"free_form_answer": "Retaining relevant contextual information from previous utterances. ",
"highlighted_evidence": [
"The above studies do not consider context information in reply retrieval or generation. However, recent research shows that previous utterances in a conversation session are important because they capture rich background information. ",
"Serban et al. BIBREF17 design an attention-based neural network over all previous conversation turns/rounds, but this could be inefficient if a session lasts long in real commercial applications. By contrast, our paper addresses the problem of session segmentation so as to retain near, relevant context utterances and to eliminate far, irrelevant ones."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2861394e1abbfe4b3bda558a6bbcb979e2ef30f1",
"3445f24a5a1349c519b1b8976b9a034931857f94",
"3a98eda8ea097b699192a0830064c2c9a1d32fed"
],
"answer": [
{
"evidence": [
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese. We sampled 200 sessions as our experimental corpus. Session segmentation was manually annotated before experiments, serving as the ground truth. The 200 sessions were randomly split by 1:1 for validation and testing. Notice that, our method does not require labeled training samples; massive data with labels of high quality are quite expensive to obtain.",
"We also leveraged an unlabeled massive dataset of conversation utterances to train our word embeddings with “virtual sentences.” The dataset was crawled from the Douban forum, containing 3 million utterances and approximately 150,000 unique words (Chinese terms)."
],
"extractive_spans": [
"real-world chatting corpus from DuMi",
"unlabeled massive dataset of conversation utterances"
],
"free_form_answer": "",
"highlighted_evidence": [
"o evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese.",
"We also leveraged an unlabeled massive dataset of conversation utterances to train our word embeddings with “virtual sentences.” The dataset was crawled from the Douban forum, containing 3 million utterances and approximately 150,000 unique words (Chinese terms)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese. We sampled 200 sessions as our experimental corpus. Session segmentation was manually annotated before experiments, serving as the ground truth. The 200 sessions were randomly split by 1:1 for validation and testing. Notice that, our method does not require labeled training samples; massive data with labels of high quality are quite expensive to obtain.",
"We also leveraged an unlabeled massive dataset of conversation utterances to train our word embeddings with “virtual sentences.” The dataset was crawled from the Douban forum, containing 3 million utterances and approximately 150,000 unique words (Chinese terms)."
],
"extractive_spans": [],
"free_form_answer": "chatting corpus from DuMi and conversation data from Douban forum",
"highlighted_evidence": [
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese. ",
"We also leveraged an unlabeled massive dataset of conversation utterances to train our word embeddings with “virtual sentences.” The dataset was crawled from the Douban forum, containing 3 million utterances and approximately 150,000 unique words (Chinese terms)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese. We sampled 200 sessions as our experimental corpus. Session segmentation was manually annotated before experiments, serving as the ground truth. The 200 sessions were randomly split by 1:1 for validation and testing. Notice that, our method does not require labeled training samples; massive data with labels of high quality are quite expensive to obtain."
],
"extractive_spans": [
"chatting corpus from DuMi"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the session segmentation method, we used a real-world chatting corpus from DuMi, a state-of-the-practice open-domain conversation system in Chinese."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Does their model use MFCC?",
"What is the problem of session segmentation?",
"What dataset do they use?"
],
"question_id": [
"dde29d9ea5859aa5a4bcd613dca80aec501ef03a",
"9b1382b44dc69f7ee20acf952f7ceb1c3ef83965",
"3c414f7fbf577dfd3363be6bbc9eba8bdd01f45f"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An example of multiple-turn dialogues.",
"Figure 2: Word embedding learning by the continuous bagof-words model with virtual sentences (the concatenation of a query and its reply). wt is a word in the virtual sentence, either appearing in the query or the reply; the summed embeddings of remaining words are context.",
"Table 1: Dialogue session segmentation performance in terms of precision (P), recall (R) and F -measure (F). Results are in percentage.",
"Table 2: Analysis of word embedding strategies and similarity heuristics. Bold numbers are the highest value in each row; underlined ones are the highest in each column.",
"Table 3: A retrieval dialogue system with fixed context (2 previous utterances) and the proposed sentence segmentation (virtual sentences with heuristic-max)."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What is the problem of session segmentation?",
"What dataset do they use?"
] | [
[
"1610.03955-Dialogue Systems and Context Modeling-1",
"1610.03955-Introduction-2"
],
[
"1610.03955-Dataset-1",
"1610.03955-Dataset-0"
]
] | [
"Retaining relevant contextual information from previous utterances. ",
"chatting corpus from DuMi and conversation data from Douban forum"
] | 96 |
1610.03807 | Question Generation from a Knowledge Base with Web Exploration | Question generation from a knowledge base (KB) is the task of generating questions related to the domain of the input KB. We propose a system for generating fluent and natural questions from a KB, which significantly reduces the human effort by leveraging massive web resources. In more detail, a seed question set is first generated by applying a small number of hand-crafted templates on the input KB, then more questions are retrieved by iteratively forming already obtained questions as search queries into a standard search engine, before finally questions are selected by estimating their fluency and domain relevance. Evaluated by human graders on 500 randomly selected triples from Freebase, questions generated by our system are judged to be more fluent than those of serban-EtAl:2016:P16-1. | {
"paragraphs": [
[
"Question generation is important as questions are useful for student assessment or coaching purposes in educational or professional contexts, and a large-scale corpus of question and answer pairs is also critical to many NLP tasks including question answering, dialogue interaction and intelligent tutoring systems. There has been much literature so far BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 studying question generation from text. Recently people are becoming interested in question generation from KB, since large-scale KBs, such as Freebase BIBREF7 and DBPedia BIBREF8 , are freely available, and entities and their relations are already present in KBs but not for texts.",
"Question generation from KB is challenging as function words and morphological forms for entities are abstracted away when a KB is created. To tackle this challenge, previous work BIBREF9 , BIBREF10 relies on massive human-labeled data. Treating question generation as a machine translation problem, serban-EtAl:2016:P16-1 train a neural machine translation (NMT) system with 10,000 $\\langle $ triple, question $\\rangle $ pairs. At test time, input triples are “translated” into questions with the NMT system. On the other hand, the question part of the 10,000 pairs are human generated, which requires a large amount of human effort. In addition, the grammaticality and naturalness of generated questions can not be guaranteed (as seen in Table 1 ).",
"We propose a system for generating questions from KB that significantly reduces the human effort by leveraging the massive web resources. Given a KB, a small set of question templates are first hand-crafted based on the predicates in the KB. These templates consist of a transcription of the predicate in the KB (e.g. performsActivity $\\Rightarrow $ how to) and placeholders for the subject (#X#) and the object (#Y#). A seed question set is then generated by applying the templates on the KB. The seed question set is further expanded through a search engine (e.g., Google, Bing), by iteratively forming each generated question as a search query to retrieve more related question candidates. Finally a selection step is applied by estimating the fluency and domain relevance of each question candidate.",
"The only human labor in this work is the question template construction. Our system does not require a large number of templates because: (1) the iterative question expansion can produce a large number of questions even with a relatively small number of seed questions, as we see in the experiments, (2) multiple entities in the KB share the same predicates. Another advantage is that our system can easily generate updated questions as web is self-updating consistently. In our experiment, we compare with serban-EtAl:2016:P16-1 on 500 random selected triples from Freebase BIBREF7 . Evaluated by 3 human graders, questions generated by our system are significantly better then serban-EtAl:2016:P16-1 on grammaticality and naturalness."
],
[
"A knowledge base (KB) can be viewed as a directed graph, in which nodes are entities (such as “jigsaw” and “CurveCut”) and edges are relations of entities (such as “performsActivity”). A KB can also be viewed as a list of triples in the format of $\\langle $ subject, predicate, object $\\rangle $ , where subjects and objects are entities, and predicates are relations."
],
[
"Shown in Figure 1 , our system contains the sub-modules of question template construction, seed question generation, question expansion and selection. Given an input KB, a small set of question templates is first constructed such that each template is associated with a predicate, then a seed question set is generated by applying the template set on the input KB, before finally more questions are generated from related questions that are iteratively retrieved from a search engine with already-obtained questions as search queries (section \"Experiments\" ). Taking our in-house KB of power tool domain as an example, template “how to use #X#” is first constructed for predicate “performsActivity”. In addition, seed question “how to use jigsaw” is generated by applying the template on triple “ $\\langle $ jigsaw, performsActivity, CurveCut $\\rangle $ ”, before finally questions (Figure 2 ) are retrieved from Google with the seed question."
],
[
"[t] seed question set $S$ candidate questions $E$ $E \\leftarrow S$ $Q \\leftarrow S$ $I \\leftarrow 0$ len $(Q) > 0$ and $I < I_{max}$ $I = I + 1$ $q_{cur}$ $\\leftarrow $ $E$0 .Pop() $E$1 in WebExp $E$2 not $E$3 .contains $E$4 $E$5 .Append( $E$6 ) $E$7 .Push( $E$8 ) Question expansion method",
"Shown in Algorithm \"Experiments\" , the expanded question set $E$ is initialized as the seed question set (Line 1). In each iteration, an already-obtained question is expanded from web and the retrieved questions are added to $E$ if $E$ does not contain them (Lines 6-10). As there may be a large number of questions generated in the loop, we limit the maximum number of iterations with $I_{max}$ (Line 4).",
"The questions collected from the web search engine may not be fluent or domain relevant; especially the domain relevance drops significantly as the iteration goes on. Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as: ",
"$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7) ",
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as: ",
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8) ",
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
[
"We perform three experiments to evaluate our system qualitatively and quantitatively. In the first experiment, we compare our end-to-end system with the previous state-of-the-art method BIBREF10 on Freebase BIBREF7 , a domain-general KB. In the second experiment, we validate our domain relevance evaluation method on a standard dataset about short document classification. In the final experiment, we run our end-to-end system on a highly specialized in-house KB and present sample results, showing that our system is capable of generating questions from domain specific KBs."
],
[
"We first compare our system with serban-EtAl:2016:P16-1 on 500 randomly selected triples from Freebase BIBREF7 . For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average). 991 seed questions are generated by applying the templates on the triples, and 1529 more questions are retrieved from Google. To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1. We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"We show the averaged human rate in Table 2 , where we can see that our questions are more grammatical and natural than serban-EtAl:2016:P16-1. The naturalness score is less than the grammatical score for both methods. It is because naturalness is a more strict metric since a natural question should also be grammatical.",
"Shown in Table 1 , we compare our questions with serban-EtAl:2016:P16-1 where questions in the same line describe the same entity. We can see that our questions are grammatical and natural as these questions are what people usually ask on the web. On the other hand, questions from serban-EtAl:2016:P16-1 are either ungrammatical (such as “who was someone who was involved in the leukemia ?” and “whats the title of a book of the subject of the bible ?”), unnatural (“what 's one of the mountain where can you found in argentina in netflix ?”) or confusing (“who was someone who was involved in the leukemia ?”)."
],
[
"We test our domain-relevance evaluating method on the web snippet dataset, which is a commonly-used for domain classification of short documents. It contains 10,060 training and 2,280 test snippets (short documents) in 8 classes (domains), and each snippet has 18 words on average. There have been plenty of prior results BIBREF12 , BIBREF13 , BIBREF14 on the dataset.",
"Shown in Table 3 , we compare our domain-relevance evaluation method (section \"Experiments\" ) with previous state-of-the-art methods: phan2008learning first derives latent topics with LDA BIBREF15 from Wikipedia, then uses the topics as appended features to expand the short text. chen2011short further expanded phan2008learning by using multi-granularity topics. ma-EtAl:2015:VSM-NLP adopts a Bayesian model that the probability a document $D$ belongs to a topic $t$ equals to the prior of $t$ times the probability each word $w$ in $D$ comes from $t$ . Our method first concatenates training documents of the same domain into one “domain document”, then calculates each document embedding by averaging word embeddings within it, before finally assigns the label of the nearest (cosine similarity) “domain document” to each test document.",
"Simple as it is, our method outperforms all previous methods proving its effectiveness. The reason can be that word embeddings captures the similarity between distinct words (such as “finance” and “economy”), while it is hard for traditional methods. On the order hand, LDA only learns probabilities of words belonging to topics."
],
[
"The last experiment is on our in-house KB in the power tool domain. It contains 67 distinct predicates, 293 distinct subjects and 279 distinct objects respectively. For the 67 predicates, we hand-craft 163 templates. Here we use the same language model as in our first experiment, and learn a skip-gram model BIBREF11 on Wikipedia for evaluating domain relevance.",
"We generate 12,228 seed questions from which 20,000 more questions are expanded with Google. Shown in Table 4 are some expanded questions from which we can see that most of them are grammatical and relevant to the power tool domain. In addition, most questions are informative and correspond to a specific answer, except the one “do I need a hammer drill” that lacks context information. Finally, in addition to the simple factoid questions, our system generates many complex questions such as “how to cut a groove in wood without a router”."
],
[
"We presented a system to generate natural language questions from a knowledge base. By leveraging rich web information, our system is able to generate domain-relevant questions in wide scope, while human effort is significantly reduced. Evaluated by human graders, questions generated by our system are significantly better than these from serban-EtAl:2016:P16-1 on 500 random-selected triples from Freebase. We also demonstrated generated questions from our in-house KB of power tool domain, which are fluent and domain-relevant in general. Our current system only generates questions without answers, leaving automatic answer mining as our future work."
]
],
"section_name": [
"Introduction",
"Knowledge Base",
"System",
"Question expansion and selection",
"Experiments",
"Evaluation on Freebase",
"Domain Relevance",
"Evaluation on the Domain-specific KB",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"2a1b8f1fc516070fefc7257dce0079e8ade29a0b",
"2c3e7d6057ace540d12d9294375a86f8dfe90f98",
"56247f1efcf1f26eb0b448ba4e0e208ce73c83a1"
],
"answer": [
{
"evidence": [
"The questions collected from the web search engine may not be fluent or domain relevant; especially the domain relevance drops significantly as the iteration goes on. Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:",
"$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)",
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:",
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)",
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The questions collected from the web search engine may not be fluent or domain relevant",
"Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively.",
"For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document.",
"For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. ",
"We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The questions collected from the web search engine may not be fluent or domain relevant; especially the domain relevance drops significantly as the iteration goes on. Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:",
"$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)",
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:",
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)",
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86",
"64535162a1194b06db3080285c566202b651354c"
]
},
{
"annotation_id": [
"acb515644eda44ecda176d46dd98c83a724f53cb",
"ce590983744bc62a69b9ab15c5363a2a936ab915",
"d6e43d49b397b5e2109300fbfc1f7c6f74c8665f"
],
"answer": [
{
"evidence": [
"Shown in Table 3 , we compare our domain-relevance evaluation method (section \"Experiments\" ) with previous state-of-the-art methods: phan2008learning first derives latent topics with LDA BIBREF15 from Wikipedia, then uses the topics as appended features to expand the short text. chen2011short further expanded phan2008learning by using multi-granularity topics. ma-EtAl:2015:VSM-NLP adopts a Bayesian model that the probability a document $D$ belongs to a topic $t$ equals to the prior of $t$ times the probability each word $w$ in $D$ comes from $t$ . Our method first concatenates training documents of the same domain into one “domain document”, then calculates each document embedding by averaging word embeddings within it, before finally assigns the label of the nearest (cosine similarity) “domain document” to each test document.",
"FLOAT SELECTED: Table 3: Precision on the web snippet dataset"
],
"extractive_spans": [
"For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document."
],
"free_form_answer": "",
"highlighted_evidence": [
"Shown in Table 3 , we compare our domain-relevance evaluation method (section \"Experiments\" ) with previous state-of-the-art methods",
"FLOAT SELECTED: Table 3: Precision on the web snippet dataset",
" Our method first concatenates training documents of the same domain into one “domain document”, then calculates each document embedding by averaging word embeddings within it, before finally assigns the label of the nearest (cosine similarity) “domain document” to each test document."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The questions collected from the web search engine may not be fluent or domain relevant; especially the domain relevance drops significantly as the iteration goes on. Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:",
"$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)",
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:"
],
"extractive_spans": [
"the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document"
],
"free_form_answer": "",
"highlighted_evidence": [
"For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The questions collected from the web search engine may not be fluent or domain relevant; especially the domain relevance drops significantly as the iteration goes on. Here we adopt a skip-gram model BIBREF11 and a language model for evaluating the domain relevance and fluency of the expanded questions, respectively. For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:",
"$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)",
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:"
],
"extractive_spans": [
"we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$"
],
"free_form_answer": "",
"highlighted_evidence": [
"For domain relevance, we take the seed question set as the in-domain data $D_{in}$ , the domain relevance of expanded question $q$ is defined as:\n\n$$\\textsc {Rel}(q) = \\cos (v(q),v(D_{in}))$$ (Eq. 7)\n\nwhere $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86",
"759f6545a7882ef402b30242eab2dc84c1bced71"
]
},
{
"annotation_id": [
"283c7ee7edc1392a31242a5cc739ccaef1c3634c",
"4504e77a34040c0cae9d697a3387c9712d0fce1d",
"cda45463eabba594686d2fbb05df5a7ce86632c5"
],
"answer": [
{
"evidence": [
"We first compare our system with serban-EtAl:2016:P16-1 on 500 randomly selected triples from Freebase BIBREF7 . For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average). 991 seed questions are generated by applying the templates on the triples, and 1529 more questions are retrieved from Google. To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1. We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"The last experiment is on our in-house KB in the power tool domain. It contains 67 distinct predicates, 293 distinct subjects and 279 distinct objects respectively. For the 67 predicates, we hand-craft 163 templates. Here we use the same language model as in our first experiment, and learn a skip-gram model BIBREF11 on Wikipedia for evaluating domain relevance."
],
"extractive_spans": [],
"free_form_answer": "269.",
"highlighted_evidence": [
"For the 500 triples, we hand-crafted 106 templates, ",
"For the 67 predicates, we hand-craft 163 templates. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first compare our system with serban-EtAl:2016:P16-1 on 500 randomly selected triples from Freebase BIBREF7 . For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average). 991 seed questions are generated by applying the templates on the triples, and 1529 more questions are retrieved from Google. To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1. We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"The last experiment is on our in-house KB in the power tool domain. It contains 67 distinct predicates, 293 distinct subjects and 279 distinct objects respectively. For the 67 predicates, we hand-craft 163 templates. Here we use the same language model as in our first experiment, and learn a skip-gram model BIBREF11 on Wikipedia for evaluating domain relevance."
],
"extractive_spans": [],
"free_form_answer": "269",
"highlighted_evidence": [
" For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average)",
"For the 67 predicates, we hand-craft 163 templates."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first compare our system with serban-EtAl:2016:P16-1 on 500 randomly selected triples from Freebase BIBREF7 . For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average). 991 seed questions are generated by applying the templates on the triples, and 1529 more questions are retrieved from Google. To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1. We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"The last experiment is on our in-house KB in the power tool domain. It contains 67 distinct predicates, 293 distinct subjects and 279 distinct objects respectively. For the 67 predicates, we hand-craft 163 templates. Here we use the same language model as in our first experiment, and learn a skip-gram model BIBREF11 on Wikipedia for evaluating domain relevance."
],
"extractive_spans": [
"106",
"163"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average)",
"For the 67 predicates, we hand-craft 163 templates."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"594e0b1297abe0ad3e2555ad27eedfb59c442bb9",
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"2ba04269643a704586187b26b60e40bbb1d9b294",
"3571015101664176ed817f4e72023379a0022518",
"9b09fef099510014ce6340ea3054e94a70f5e4d4"
],
"answer": [
{
"evidence": [
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:",
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)",
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"extractive_spans": [
"For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count."
],
"free_form_answer": "",
"highlighted_evidence": [
"For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"where $v(\\cdot )$ is the document embedding defined as the averaged word embedding within the document. For fluency, we define the averaged language model score as:",
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)",
"where $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count. We apply thresholds $t_{rel}$ and $t_{flu}$ for domain relevance and fluency respectively, and filter out questions whose scores are below these thresholds."
],
"extractive_spans": [
"$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count"
],
"free_form_answer": "",
"highlighted_evidence": [
"For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first compare our system with serban-EtAl:2016:P16-1 on 500 randomly selected triples from Freebase BIBREF7 . For the 500 triples, we hand-crafted 106 templates, as these triples share only 53 distinct predicates (we made 2 templates for each predicate on average). 991 seed questions are generated by applying the templates on the triples, and 1529 more questions are retrieved from Google. To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1. We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"FLOAT SELECTED: Table 2: Human ratings of generated questions",
"We show the averaged human rate in Table 2 , where we can see that our questions are more grammatical and natural than serban-EtAl:2016:P16-1. The naturalness score is less than the grammatical score for both methods. It is because naturalness is a more strict metric since a natural question should also be grammatical."
],
"extractive_spans": [
"For fluency, we define the averaged language model score as:\n\n$$\\textsc {AvgLM}(q) = \\frac{\\textsc {Lm}(q)}{\\textsc {Len}(q)}$$ (Eq. 8)\n\nwhere $\\textsc {Lm}(\\cdot )$ is the general-domain language model score (log probability), and $\\textsc {Len}(\\cdot )$ is the word count"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the fluency of the candidate questions, we train a 4-gram language model (LM) on gigaword (LDC2011T07) with Kneser Ney smoothing. Using the averaged language model score as index, the top 500 questions are selected to compare with the results from serban-EtAl:2016:P16-1.",
"We ask three native English speakers to evaluate the fluency and the naturalness of both results based on a 4-point scheme where 4 is the best.",
"FLOAT SELECTED: Table 2: Human ratings of generated questions",
"We show the averaged human rate in Table 2 , where we can see that our questions are more grammatical and natural than serban-EtAl:2016:P16-1"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"594e0b1297abe0ad3e2555ad27eedfb59c442bb9",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86",
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Was the filtering based on fluency and domain relevance done automatically?",
"How was domain relevance estimated?",
"How many hand-crafted templates did they have to make?",
"How was the fluency measured?"
],
"question_id": [
"6157567c5614e1954b801431fec680f044e102c6",
"8ea4a75dacf6a39f9d385ba14b3dce715a47d689",
"1e11e74481ead4b7635922bbe0de041dc2dde28d",
"597d3fc9b8c0c036f58cea5b757d0109d5211b2f"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question generation",
"question generation",
"question generation",
"question generation"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of our framework.",
"Figure 2: Related search results for the question “how to use jigsaw”.",
"Table 2: Human ratings of generated questions",
"Table 1: Comparing generated questions",
"Table 3: Precision on the web snippet dataset",
"Table 4: Example question expanded"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"2-Table2-1.png",
"3-Table1-1.png",
"4-Table3-1.png",
"4-Table4-1.png"
]
} | [
"How many hand-crafted templates did they have to make?"
] | [
[
"1610.03807-Evaluation on Freebase-0",
"1610.03807-Evaluation on the Domain-specific KB-0"
]
] | [
"269"
] | 97 |
1607.03895 | Tie-breaker: Using language models to quantify gender bias in sports journalism | Gender bias is an increasingly important issue in sports journalism. In this work, we propose a language-model-based approach to quantify differences in questions posed to female vs. male athletes, and apply it to tennis post-match interviews. We find that journalists ask male players questions that are generally more focused on the game when compared with the questions they ask their female counterparts. We also provide a fine-grained analysis of the extent to which the salience of this bias depends on various factors, such as question type, game outcome or player rank. | {
"paragraphs": [
[
"There has been an increasing level of attention to and discussion of gender bias in sports, ranging from differences in pay and prize money to different levels of focus on off-court topics in interviews by journalists. With respect to the latter, Cover the Athlete, an initiative that urges the media to focus on sport performance, suggests that female athletes tend to get more “sexist commentary\" and “inappropriate interview questions\" than males do; the organization put out an attention-getting video in 2015 purportedly showing male athletes' awkward reactions to receiving questions like those asked of female athletes. However, it is not universally acknowledged that female athletes attract more attention for off-court activities. For instance, a manual analysis by BIBREF0 [ BIBREF0 ] of online articles revealed significantly more descriptors associated with the physical appearance and personal lives of male basketball players in comparison to female ones.",
"Transcripts of pre- or post-game press conferences offer an opportunity to determine quantitatively and in a data-driven manner how different are the questions which journalists pose to male players from those they pose to female players. Here are examples of a game-related and a non-game-relevant question, respectively, drawn from actual tennis interviews:",
"To quantify gender discrepancies in questions, we propose a statistical language-model-based approach to measure how game-related questions are. In order to make such an approach effective, we restrict our attention in this study to a single sport—tennis—so that mere variations in the lingo of different sports do not introduce extra noise in our language models. Tennis is also useful for our investigation because, as BIBREF1 [ BIBREF1 ] noted, it “marks the only professional sports where male and female athletes generally receive similar amounts of overall broadcast media coverage during the major tournaments.\"",
"Using our methodology, we are able to quantify gender bias with respect to how game-related interview questions are. We also provide a more fine-grained analysis of how gender differences in journalistic questioning are displayed under various scenarios. To help with further analysis of interview questions and answers, we introduce a dataset of tennis post-match interview transcripts along with corresponding match information."
],
[
"In contrast with our work, prior investigations of bias in sport journalism rely on manual coding or are based on simple lists of manually defined keywords. These focus on bias with respect to race, nationality, and gender BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF1 , BIBREF7 ; see BIBREF8 [ BIBREF8 ] for a review.",
"Much of the work on gender bias in sports reporting has focused on “air-time” BIBREF9 , BIBREF10 . Other studies looked at stereotypical descriptions and framing BIBREF11 , BIBREF12 , BIBREF13 , BIBREF0 . For surveys, see BIBREF14 [ BIBREF14 ] or BIBREF15 [ BIBREF15 ], inter alia. Several studies have focused on the particular case of gender-correlated differences in tennis coverage BIBREF16 , BIBREF17 , BIBREF1 . We extend this line of work by proposing an automatic way to quantify gender bias in sport journalism."
],
[
"We collect tennis press-conference transcripts from ASAP Sport's website (http://www.asapsports.com/), whose tennis collection dates back to 1992 and is still updated for current tournaments. For our study, we take post- game interviews for tennis singles matches played between Jan, 2000 to Oct 18, 2015. We also obtain easily-extractable match information from a dataset provided by Tennis-Data, which covers the majority of the matches played on the men's side from 2000-2015 and on the women's side from 2007-2015.",
"We match interview transcripts with game statistics by date and player name, keeping only the question and answer pairs from games where the statistics are successfully merged. This gives us a dataset consisting of 6467 interview transcripts and a total of 81906 question snippets posed to 167 female players and 191 male players. To model tennis-game-specific language, we use live text play-by-play commentaries collected from the website Sports Mole (http://www.sportsmole.co.uk/). These tend to be short, averaging around 40 words. Here is a sample, taken from the Federer-Murray match at the 2015 Wimbledon semi-final:",
"“The serve-and-volley is being used frequently by Federer and it's enabling him to take control behind his own serve. Three game points are earned before an ace down the middle seal [sic] the love hold.”",
"For our analysis, we create a gender-balanced set of commentaries consisting of descriptions for 1981 games played for each gender."
],
[
"As a preliminary step, we apply a word-level analysis to understand if there appear to be differences in word usage when journalists interview male players compared to female players. We then introduce our method for quantifying the degree to which a question is game-related, which we will use to explore gender differences."
],
[
"To compare word usage in questions, we consider, for each word $w$ , the percentage of players who have ever been asked a question containing $w$ . We then consider words with the greatest difference in percentage between male and female players. The top distinguishing words, which are listed below in descending order of percentage difference, seem to suggest that questions journalists pose to male players are more game-related:",
"clay, challenger(s), tie, sets, practiced, tiebreaker, maybe, see, impression, serve, history, volley, chance, height, support, shots, server(s), greatest, way, tiebreaks, tiebreakers, era, lucky, luck;",
"yet, new, nervous, improve, seed, friends, nerves, mom, every, matter, become, meet, winning, type, won, draw, found, champion, stop, fight, wind, though, father, thing, love."
],
[
"To quantify how game-related a question is in a data-driven fashion, we train a bigram language model using KenLM BIBREF18 on the gender-balanced set of live-text play-by-play commentaries introduced in Section \"Dataset Description\" .",
"For an individual question $q$ , we measure its perplexity $PP(q)$ with respect to this game language model $P_{\\textnormal {\\tiny \\tiny commentary}}$ as an indication of how game-related the question is: the higher the perplexity value, the less game-related the question. Perplexity, a standard measure of language-model fit BIBREF19 , is defined as follows for an $N$ -word sequence $w_1 w_2 \\ldots w_N$ : $\nPP(w_1 w_2 ... w_N) = \\@root N \\of {\\displaystyle \\frac{1}{P_{\\textnormal {\\tiny \\tiny commentary}}(w_1\\cdots w_N)}} \\hspace*{2.84544pt}.\n$ ",
"Below are some sample questions of low-perplexity and high-perplexity values:"
],
[
"In this section we use the game language model to quantify gender-based bias in questions. We then compare the extent to which this difference depends of various factors, such as question type, game outcome, or player rank."
],
[
"We first compute perplexities for each individual question and then group the question instances according to the interviewee's gender class. Throughout we use the Mann-Whitney $U$ statistical significance test, unless otherwise noted.",
"Comparing perplexity values between the two groups, we find that the mean perplexity of questions posed to male players is significantly smaller ( $p$ -value $<$ 0.001) than that of questions posed to female players. This suggests that the questions male athletes receive are more game-related.",
"However, the number of interviews each player participates in varies greatly, with highly interviewed players answering as many as thousands of questions while some lesser-known players have fewer than 10 interview questions in the dataset. Thus it is conceivable that the difference is simply explained by questions asked to a few prolific players. To test whether this is the case, or whether the observation is more general, we micro-average the perplexities by player: for each of the 167 male players and 143 females who have at least 10 questions in our dataset, we consider the average perplexities of the questions they receive. Comparing these micro-averages, we find that it is still the case that questions posed to male players are significantly closer to game language ( $p$ -value $<$ 0.05), indicating that the observed gender difference is not simply explained by a few highly interviewed players."
],
[
"We further investigate how the level of gender bias is tied to different factors: how typical the question is (section UID20 ), the ranking of the player (section UID24 ), and whether the player won or lost the match (section UID26 ). For all the following experiments, we use per-question perplexity for comparisons: per-player perplexity is not used due to limited sample size.",
"One might wonder whether the perplexity disparities we see in questions asked of female vs. male players are due to “off-the-wall” queries, rather than to those that are more typical in post-match interviews. We therefore use a data-driven approach to distinguish between typical and atypical questions.",
"For any given question, we consider how frequently its words appear in post-match press conferences in general. Specifically, we take the set of all questions as the set of documents, $D$ . We compute the inverse document frequency for each word (after stemming) that has appeared in our dataset, excluding the set $S$ consisting of stop words and a special token for entity names. For a question $q$ that contains the set of unique words $\\lbrace w_1, w_2, ... , w_N\\rbrace \\notin S$ , we compute its atypicality score $Sc(q)$ as: $\nSc(\\lbrace w_1, w_2, ... , w_N\\rbrace ) = \\displaystyle \\frac{1}{N}\\sum \\limits _{i=1}^{N} \\textnormal {idf}(w_i, D) \\, .\n$ ",
"We use the overall mean atypicality score of the entire question dataset as the cutoff point: questions with scores above the overall mean are considered atypical and the rest are considered typical. Below are some examples:",
"Figure 1 shows that a gender bias with respect to whether game-related language is used exists for both typical and atypical questions. However, additional analysis reveals that the difference in mean perplexity values between genders is highly statistically significantly larger for atypical questions, suggesting that gender bias is more salient among the more unusual queries.",
"Higher ranked players generally attract more media attention, and therefore may be targeted differently by journalists. To understand the effect of player ranking, we divide players into two groups: top 10 players and the rest. For our analysis, we use the ranking of the player at the time the interview was conducted. (It is therefore possible that questions posed to the same player but at different times could fall into different ranking groups due to ranking fluctuations over time.) We find that questions to male players are significantly closer to game language regardless of player ranking ( $p$ -value $<$ 0.001, Figure 2 ).",
"Furthermore, if we focus only on players who have ranked both in and outside the top 10 in our dataset, and pair the questions asked to them when they were higher-ranked to the questions asked when their ranking was lower, we find that there is no significant difference between questions asked to male athletes when they were in different ranking groups (Wilcoxon signed-rank $p$ -value $>$ 0.05). However, the difference is significant for females (Wilcoxon signed-rank $p$ -value $<$ 0.01), suggesting that gender bias may be more salient for lower ranked players as questions to lower-ranked female athletes tend to be less game-related.",
"While one might expect that star players would receive more off-court questions (yielding higher perplexities), the perplexity values for questions posed to top 10 players are actually lower regardless of gender. This may be because the training data for our language model is more focused on specific points played in matches, and may not be representative of tennis-related questions that are more general (e.g., longer-term career goals, personal records, injuries). In other words, our result suggests that journalists may attend more to the specifics of the games of higher ranked players, posing more specific questions about points played in the match during interviews.",
"While it is reasonable to expect that whether the interviewee won or lost would affect how game-related the questions are, the difference in mean perplexity for males and females conditioned on win/loss game outcome are comparable. In addition, for both male players and female players, there is no significant difference observed between the paired set of questions asked in winning interviews and the losing ones (Wilcoxon signed-rank $p$ -value $>$ 0.05), controlling for both player and season. This suggests that that game result may not be a factor affecting how game-related the interview questions are."
],
[
"In this work we propose a language-model based approach to quantify gender bias in the interview questions tennis players receive. We find that questions to male athletes are generally more game-related. The difference is more salient among the unusual questions in press conferences, and for lower-ranked players.",
"However, this preliminary study has a number of limitations. We have considered only a single sport. In addition, our dataset does not contain any information about who asked which question, which makes us unable to control for any idiosyncrasies of specific journalists. For example, it is conceivable that the disparities we observe are explained by differences in the journalists that are assigned to conduct the respective interviews.",
"In this work, we limit our scope to bias in terms of game-related language, not considering differences (or similarities) that may exist in other dimensions. Further studies may use a similar approach to quantify and explore differences in other dimensions, by using language models specifically trained to model other domains of interests, which may provide a more comprehensive view of how questions differ when targeting different groups.",
"Furthermore, our main focus is on questions asked during press conferences; we have not looked at the players' responses. The transcripts data, which we release publicly, may provide opportunities for further studies."
],
[
"We thank the anonymous reviewers and the participants in the Fall 2015 edition of the course “Natural Language Processing and Social Interaction” for helpful comments and discussion. This research was supported in part by a Discovery and Innovation Research Seed award from the Office of the Vice Provost for Research at Cornell."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset Description",
"Method",
"Preliminary Analysis",
"Game Language Model",
"Experiments",
"Main Result: Males vs. Females",
"Relation to Other Factors",
"Concluding discussion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2afe292528d6e31f35723d0a699266af45c51bb7",
"5bb46f0439f133eb9b73176b91cb875b6cb2b005",
"f3073e6639ac7d0c93feeb9f37037f993a24cb41"
],
"answer": [
{
"evidence": [
"Using our methodology, we are able to quantify gender bias with respect to how game-related interview questions are. We also provide a more fine-grained analysis of how gender differences in journalistic questioning are displayed under various scenarios. To help with further analysis of interview questions and answers, we introduce a dataset of tennis post-match interview transcripts along with corresponding match information.",
"We collect tennis press-conference transcripts from ASAP Sport's website (http://www.asapsports.com/), whose tennis collection dates back to 1992 and is still updated for current tournaments. For our study, we take post- game interviews for tennis singles matches played between Jan, 2000 to Oct 18, 2015. We also obtain easily-extractable match information from a dataset provided by Tennis-Data, which covers the majority of the matches played on the men's side from 2000-2015 and on the women's side from 2007-2015."
],
"extractive_spans": [],
"free_form_answer": "Post-match interviews for tennis singles matches from ASAP Sport's website with match information from a dataset provided by Tennis-Data",
"highlighted_evidence": [
"To help with further analysis of interview questions and answers, we introduce a dataset of tennis post-match interview transcripts along with corresponding match information.",
"We collect tennis press-conference transcripts from ASAP Sport's website (http://www.asapsports.com/), whose tennis collection dates back to 1992 and is still updated for current tournaments. For our study, we take post- game interviews for tennis singles matches played between Jan, 2000 to Oct 18, 2015. We also obtain easily-extractable match information from a dataset provided by Tennis-Data, which covers the majority of the matches played on the men's side from 2000-2015 and on the women's side from 2007-2015."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collect tennis press-conference transcripts from ASAP Sport's website (http://www.asapsports.com/), whose tennis collection dates back to 1992 and is still updated for current tournaments. For our study, we take post- game interviews for tennis singles matches played between Jan, 2000 to Oct 18, 2015. We also obtain easily-extractable match information from a dataset provided by Tennis-Data, which covers the majority of the matches played on the men's side from 2000-2015 and on the women's side from 2007-2015."
],
"extractive_spans": [],
"free_form_answer": "post-game interviews from ASAP Sport's website",
"highlighted_evidence": [
"We collect tennis press-conference transcripts from ASAP Sport's website (http://www.asapsports.com/), whose tennis collection dates back to 1992 and is still updated for current tournaments. For our study, we take post- game interviews for tennis singles matches played between Jan, 2000 to Oct 18, 2015. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Using our methodology, we are able to quantify gender bias with respect to how game-related interview questions are. We also provide a more fine-grained analysis of how gender differences in journalistic questioning are displayed under various scenarios. To help with further analysis of interview questions and answers, we introduce a dataset of tennis post-match interview transcripts along with corresponding match information.",
"We match interview transcripts with game statistics by date and player name, keeping only the question and answer pairs from games where the statistics are successfully merged. This gives us a dataset consisting of 6467 interview transcripts and a total of 81906 question snippets posed to 167 female players and 191 male players. To model tennis-game-specific language, we use live text play-by-play commentaries collected from the website Sports Mole (http://www.sportsmole.co.uk/). These tend to be short, averaging around 40 words. Here is a sample, taken from the Federer-Murray match at the 2015 Wimbledon semi-final:"
],
"extractive_spans": [
"tennis post-match interview transcripts",
"live text play-by-play commentaries"
],
"free_form_answer": "",
"highlighted_evidence": [
"To help with further analysis of interview questions and answers, we introduce a dataset of tennis post-match interview transcripts along with corresponding match information.",
"To model tennis-game-specific language, we use live text play-by-play commentaries collected from the website Sports Mole (http://www.sportsmole.co.uk/). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What data is used in this work?"
],
"question_id": [
"f0404673085517eea708c5e91f32fb0f7728fa08"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"gender bias"
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 1: Mean perplexity values for male and female athletes after grouping the questions by how typical they are. Stars indicate high statistical significance (p < 0.001) between the male and female case. The male-female difference for the atypical group is statistically significantly larger than for the typical group.",
"Figure 2: Mean perplexity values for male and female athletes after grouping the questions by the ranking of the player to which they are addressed. Stars indicate high statistical significance (p < 0.001) between the male and female case."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png"
]
} | [
"What data is used in this work?"
] | [
[
"1607.03895-Dataset Description-1",
"1607.03895-Introduction-3",
"1607.03895-Dataset Description-0"
]
] | [
"post-game interviews from ASAP Sport's website"
] | 98 |
1704.06960 | Translating Neuralese | Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language. | {
"paragraphs": [
[
"Several recent papers have described approaches for learning deep communicating policies (DCPs): decentralized representations of behavior that enable multiple agents to communicate via a differentiable channel that can be formulated as a recurrent neural network. DCPs have been shown to solve a variety of coordination problems, including reference games BIBREF0 , logic puzzles BIBREF1 , and simple control BIBREF2 . Appealingly, the agents' communication protocol can be learned via direct backpropagation through the communication channel, avoiding many of the challenging inference problems associated with learning in classical decentralized decision processes BIBREF3 .",
"But analysis of the strategies induced by DCPs has remained a challenge. As an example, fig:teaser depicts a driving game in which two cars, which are unable to see each other, must both cross an intersection without colliding. In order to ensure success, it is clear that the cars must communicate with each other. But a number of successful communication strategies are possible—for example, they might report their exact $(x, y)$ coordinates at every timestep, or they might simply announce whenever they are entering and leaving the intersection. If these messages were communicated in natural language, it would be straightforward to determine which strategy was being employed. However, DCP agents instead communicate with an automatically induced protocol of unstructured, real-valued recurrent state vectors—an artificial language we might call “neuralese,” which superficially bears little resemblance to natural language, and thus frustrates attempts at direct interpretation.",
"We propose to understand neuralese messages by translating them. In this work, we present a simple technique for inducing a dictionary that maps between neuralese message vectors and short natural language strings, given only examples of DCP agents interacting with other agents, and humans interacting with other humans. Natural language already provides a rich set of tools for describing beliefs, observations, and plans—our thesis is that these tools provide a useful complement to the visualization and ablation techniques used in previous work on understanding complex models BIBREF4 , BIBREF5 .",
"While structurally quite similar to the task of machine translation between pairs of human languages, interpretation of neuralese poses a number of novel challenges. First, there is no natural source of parallel data: there are no bilingual “speakers” of both neuralese and natural language. Second, there may not be a direct correspondence between the strategy employed by humans and DCP agents: even if it were constrained to communicate using natural language, an automated agent might choose to produce a different message from humans in a given state. We tackle both of these challenges by appealing to the grounding of messages in gameplay. Our approach is based on one of the core insights in natural language semantics: messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world.",
"Based on this intuition, we introduce a translation criterion that matches neuralese messages with natural language strings by minimizing statistical distance in a common representation space of distributions over speaker states. We explore several related questions:",
"Our translation model and analysis are general, and in fact apply equally to human–computer and human–human translation problems grounded in gameplay. In this paper, we focus our experiments specifically on the problem of interpreting communication in deep policies, and apply our approach to the driving game in fig:teaser and two reference games of the kind shown in fig:bird-examples. We find that this approach outperforms a more conventional machine translation criterion both when attempting to interoperate with neuralese speakers and when predicting their state."
],
[
"A variety of approaches for learning deep policies with communication were proposed essentially simultaneously in the past year. We have broadly labeled these as “deep communicating policies”; concrete examples include Lazaridou16Communication, Foerster16Communication, and Sukhbaatar16CommNet. The policy representation we employ in this paper is similar to the latter two of these, although the general framework is agnostic to low-level modeling details and could be straightforwardly applied to other architectures. Analysis of communication strategies in all these papers has been largely ad-hoc, obtained by clustering states from which similar messages are emitted and attempting to manually assign semantics to these clusters. The present work aims at developing tools for performing this analysis automatically.",
"Most closely related to our approach is that of Lazaridou16LanguageGame, who also develop a model for assigning natural language interpretations to learned messages; however, this approach relies on supervised cluster labels and is targeted specifically towards referring expression games. Here we attempt to develop an approach that can handle general multiagent interactions without assuming a prior discrete structure in space of observations.",
"The literature on learning decentralized multi-agent policies in general is considerably larger BIBREF6 , BIBREF7 . This includes work focused on communication in multiagent settings BIBREF3 and even communication using natural language messages BIBREF8 . All of these approaches employ structured communication schemes with manually engineered messaging protocols; these are, in some sense, automatically interpretable, but at the cost of introducing considerable complexity into both training and inference.",
"Our evaluation in this paper investigates communication strategies that arise in a number of different games, including reference games and an extended-horizon driving game. Communication strategies for reference games were previously explored by Vogel13Grice, Andreas16Pragmatics and Kazemzadeh14ReferIt, and reference games specifically featuring end-to-end communication protocols by Yu16Reinforcer. On the control side, a long line of work considers nonverbal communication strategies in multiagent policies BIBREF9 .",
"Another group of related approaches focuses on the development of more general machinery for interpreting deep models in which messages have no explicit semantics. This includes both visualization techniques BIBREF10 , BIBREF4 , and approaches focused on generating explanations in the form of natural language BIBREF11 , BIBREF12 ."
],
[
"What does it mean for a message $z_h$ to be a “translation” of a message $z_r$ ? In standard machine translation problems, the answer is that $z_h$ is likely to co-occur in parallel data with $z_r$ ; that is, $p(z_h |\nz_r)$ is large. Here we have no parallel data: even if we could observe natural language and neuralese messages produced by agents in the same state, we would have no guarantee that these messages actually served the same function. Our answer must instead appeal to the fact that both natural language and neuralese messages are grounded in a common environment. For a given neuralese message $z_r$ , we will first compute a grounded representation of that message's meaning; to translate, we find a natural-language message whose meaning is most similar. The key question is then what form this grounded meaning representation should take. The existing literature suggests two broad approaches:"
],
[
"In this section, we build on the intuition that messages should be translated via their semantics to define a concrete translation model—a procedure for constructing a natural language $\\leftrightarrow $ neuralese dictionary given agent and human interactions.",
"We understand the meaning of a message $z_a$ to be represented by the distribution $p(x_a|z_a, x_b)$ it induces over speaker states given listener context. We can formalize this by defining the belief distribution $\\beta $ for a message $z$ and context $x_b$ as:",
"Here we have modeled the listener as performing a single step of Bayesian inference, using the listener state and the message generation model (by assumption shared between players) to compute the posterior over speaker states. While in general neither humans nor DCP agents compute explicit representations of this posterior, past work has found that both humans and suitably-trained neural networks can be modeled as Bayesian reasoners BIBREF15 , BIBREF16 .",
"This provides a context-specific representation of belief, but for messages $z$ and $z^{\\prime }$ to have the same semantics, they must induce the same belief over all contexts in which they occur. In our probabilistic formulation, this introduces an outer expectation over contexts, providing a final measure $q$ of the quality of a translation from $z$ to $z^{\\prime }$ : ",
"$$&q(z, z^{\\prime }) = \\mathbb {E}\\big [\\mathcal {D}_{\\textrm {KL}}(\\beta (z, X_b)\\ ||\\ \\beta (z^{\\prime }, X_b))\\ |\\ z, z^{\\prime }\\big ] \\nonumber \\\\\n&= \\sum _{x_a, x_b} p(x_a, x_b | z, z^{\\prime }) \\nonumber \\mathcal {D}_{\\textrm {KL}}(\\beta (z, x_b)\\ ||\\ \\beta (z^{\\prime }, x_b)) \\nonumber \\\\\n&\\propto \\sum _{x_a, x_b} p(x_a, x_b) \\cdot p(z| x_a) \\cdot p(z^{\\prime } | x_a) \\nonumber \\\\[-.9em]\n&\\qquad \\qquad \\ \\cdot \\mathcal {D}_{\\textrm {KL}}(\\beta (z, x_b)\\ ||\\ \\beta (z^{\\prime }, x_b));$$ (Eq. 15) ",
" recalling that in this setting ",
"$$&\\hspace{-8.99994pt}\\mathcal {D}_{\\textrm {KL}}(\\beta \\ ||\\ \\beta ^{\\prime }) = \\sum _{x_a} p(x_a | z, x_b) \\log \\frac{p(x_a\n| z, x_b)}{p(x_a | z^{\\prime }, x_b)}\n\\nonumber \\\\\n&\\hspace{-8.99994pt}\\propto \\sum _{x_a} p(x_a, x_b) p(z| x_a) \\log \\frac{p(z|\nx_a)}{p(z^{\\prime } | x_a)} \\frac{p(z^{\\prime })}{p(z)}$$ (Eq. 16) ",
"which is zero when the messages $z$ and $z^{\\prime }$ give rise to identical belief distributions and increases as they grow more dissimilar. To translate, we would like to compute $\\textit {tr}(z_r) = \\operatornamewithlimits{arg\\,min}_{z_h} q(z_r, z_h)$ and $\\textit {tr}(z_h) = \\operatornamewithlimits{arg\\,min}_{z_r} q(z_h, z_r)$ . Intuitively, eq:q says that we will measure the quality of a proposed translation $z\\mapsto z^{\\prime }$ by asking the following question: in contexts where $z$ is likely to be used, how frequently does $z^{\\prime }$ induce the same belief about speaker states as $z$ ?",
"While this translation criterion directly encodes the semantic notion of meaning described in sec:philosophy, it is doubly intractable: the KL divergence and outer expectation involve a sum over all observations $x_a$ and $x_b$ respectively; these sums are not in general possible to compute efficiently. To avoid this, we approximate eq:q by sampling. We draw a collection of samples $(x_a, x_b)$ from the prior over world states, and then generate for each sample a sequence of distractors $(x_a^{\\prime }, x_b)$ from $p(x_a^{\\prime } | x_b)$ (we assume access to both of these distributions from the problem representation). The KL term in eq:q is computed over each true sample and its distractors, which are then normalized and averaged to compute the final score.",
"[t] given: a phrase inventory $L$ translate $z$ $\\operatornamewithlimits{arg\\,min}_{z^{\\prime } \\in L} \\hat{q}(z, z^{\\prime })$ ",
" $\\hat{q}$ $z, z^{\\prime }$ // sample contexts and distractors $x_{ai}, x_{bi} \\sim p(X_a, X_b) \\textrm { for $ i=1..n $}$ $x_{ai}^{\\prime } \\sim p(X_a | x_{bi})$ // compute context weights $\\tilde{w}_i \\leftarrow p(z | x_{ai}) \\cdot p(z^{\\prime } | x_{ai})$ $w_i \\leftarrow \\tilde{w}_i / \\sum _j \\tilde{w}_j$ // compute divergences $ k_i \\leftarrow \\sum _{x \\in \\lbrace x_a, x_a^{\\prime }\\rbrace } p(z|x) \\log \\frac{p(z|x)}{p(z^{\\prime }|x)}\\frac{p(z^{\\prime })}{p(z)}$ $\\sum _i w_i k_i$ ",
" Translating messages",
"Sampling accounts for the outer $p(x_a, x_b)$ in eq:q and the inner $p(x_a|x_b)$ in eq:kl. The only quantities remaining are of the form $p(z|x_a)$ and $p(z)$ . In the case of neuralese, these are determined by the agent policy $\\pi _r$ . For natural language, we use transcripts of human interactions to fit a model that maps from world states to a distribution over frequent utterances as discussed in sec:formulation. Details of these model implementations are provided in sec:impl, and the full translation procedure is given in alg:translation."
],
[
"The translation criterion in the previous section makes no reference to listener actions at all. The shapes example in sec:philosophy shows that some model performance might be lost under translation. It is thus reasonable to ask whether this translation model of sec:models can make any guarantees about the effect of translation on behavior. In this section we explore the relationship between belief-preserving translations and the behaviors they produce, by examining the effect of belief accuracy and strategy mismatch on the reward obtained by cooperating agents.",
"To facilitate this analysis, we consider a simplified family of communication games with the structure depicted in fig:simplegame. These games can be viewed as a subset of the family depicted in fig:model; and consist of two steps: a listener makes an observation $x_a$ and sends a single message $z$ to a speaker, which makes its own observation $x_b$ , takes a single action $u$ , and receives a reward. We emphasize that the results in this section concern the theoretical properties of idealized games, and are presented to provide intuition about high-level properties of our approach. sec:results investigates empirical behavior of this approach on real-world tasks where these ideal conditions do not hold.",
"Our first result is that translations that minimize semantic dissimilarity $q$ cause the listener to take near-optimal actions:",
"Proposition 1",
"Semantic translations reward rational listeners.Define a rational listener as one that chooses the best action in expectation over the speaker's state: $ U(z, x_b) = \\operatornamewithlimits{arg\\,max}_u \\sum _{x_a} p(x_a | x_b, z) r(x_a, x_b, u) $ ",
"for a reward function $r \\in [0, 1]$ that depends only on the two observations and the action. Now let $a$ be a speaker of a language $r$ , $b$ be a listener of the same language $r$ , and $b^{\\prime }$ be a listener of a different language $h$ . Suppose that we wish for $a$ and $b^{\\prime }$ to interact via the translator $\\textit {tr}:\nz_r \\mapsto z_h$ (so that $a$0 produces a message $a$1 , and $a$2 takes an action $a$3 ). If $a$4 respects the semantics of $a$5 , then the bilingual pair $a$6 and $a$7 achieves only boundedly worse reward than the monolingual pair $a$8 and $a$9 . Specifically, if $r$0 , then ",
"$$&\\mathbb {E}r(X_a, X_b, U(\\textit {tr}(Z)) \\nonumber \\\\\n&\\qquad \\ge \\mathbb {E}r(X_a, X_b, U(Z)) - \\sqrt{2D}$$ (Eq. 21) ",
"So as discussed in sec:philosophy, even by committing to a semantic approach to meaning representation, we have still succeeded in (approximately) capturing the nice properties of the pragmatic approach.",
"sec:philosophy examined the consequences of a mismatch between the set of primitives available in two languages. In general we would like some measure of our approach's robustness to the lack of an exact correspondence between two languages. In the case of humans in particular we expect that a variety of different strategies will be employed, many of which will not correspond to the behavior of the learned agent. It is natural to want some assurance that we can identify the DCP's strategy as long as some human strategy mirrors it. Our second observation is that it is possible to exactly recover a translation of a DCP strategy from a mixture of humans playing different strategies:",
"Proposition 2",
"encoding=*-30Semantic translations find hidden correspondences. encoding=*0Consider a fixed robot policy $\\pi _r$ and a set of human policies $\\lbrace \\pi _{h1}, \\pi _{h2}, \\dots \\rbrace $ (recalling from sec:formulation that each $\\pi $ is defined by distributions $p(z|x_a)$ and $p(u|z,x_b)$ ). Suppose further that the messages employed by these human strategies are disjoint; that is, if $p_{hi}(z|x_a) > 0$ , then $p_{hj}(z|x_a) = 0$ for all $j \\ne i$ . Now suppose that all $q(z_r, z_h) = 0$ for all messages in the support of some $p_{hi}(z|x_a)$ and $\\lbrace \\pi _{h1}, \\pi _{h2}, \\dots \\rbrace $0 for all $\\lbrace \\pi _{h1}, \\pi _{h2}, \\dots \\rbrace $1 . Then every message $\\lbrace \\pi _{h1}, \\pi _{h2}, \\dots \\rbrace $2 is translated into a message produced by $\\lbrace \\pi _{h1}, \\pi _{h2}, \\dots \\rbrace $3 , and messages from other strategies are ignored.",
"This observation follows immediately from the definition of $q(z_r, z_h)$ , but demonstrates one of the key distinctions between our approach and a conventional machine translation criterion. Maximizing $p(z_h | z_r)$ will produce the natural language message most often produced in contexts where $z_r$ is observed, regardless of whether that message is useful or informative. By contrast, minimizing $q(z_h, z_r)$ will find the $z_h$ that corresponds most closely to $z_r$ even when $z_h$ is rarely used.",
"The disjointness condition, while seemingly quite strong, in fact arises naturally in many circumstances—for example, players in the driving game reporting their spatial locations in absolute vs. relative coordinates, or speakers in a color reference game (fig:tasks) discriminating based on lightness vs. hue. It is also possible to relax the above condition to require that strategies be only locally disjoint (i.e. with the disjointness condition holding for each fixed $x_a$ ), in which case overlapping human strategies are allowed, and the recovered robot strategy is a context-weighted mixture of these."
],
[
"In the remainder of the paper, we evaluate the empirical behavior of our approach to translation. Our evaluation considers two kinds of tasks: reference games and navigation games. In a reference game (e.g. fig:tasksa), both players observe a pair of candidate referents. A speaker is assigned a target referent; it must communicate this target to a listener, who then performs a choice action corresponding to its belief about the true target. In this paper we consider two variants on the reference game: a simple color-naming task, and a more complex task involving natural images of birds. For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 . We use standard train / validation / test splits for both of these datasets.",
"The final task we consider is the driving task (fig:tasksc) first discussed in the introduction. In this task, two cars, invisible to each other, must each navigate between randomly assigned start and goal positions without colliding. This task takes a number of steps to complete, and potentially involves a much broader range of communication strategies. To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set.",
"We use the version of the XKCD dataset prepared by McMahan15Colors. Here the input feature vector is simply the LAB representation of each color, and the message inventory taken to be all unigrams that appear at least five times.",
"We use the dataset of Welinder10Birds with natural language annotations from Reed16Birds. The model's input feature representations are a final 256-dimensional hidden feature vector from a compact bilinear pooling model BIBREF24 pre-trained for classification. The message inventory consists of the 50 most frequent bigrams to appear in natural language descriptions; example human traces are generated by for every frequent (bigram, image) pair in the dataset.",
"Driving data is collected from pairs of human workers on Mechanical Turk. Workers received the following description of the task:",
"Your goal is to drive the red car onto the red square. Be careful! You're driving in a thick fog, and there is another car on the road that you cannot see. However, you can talk to the other driver to make sure you both reach your destinations safely.",
"Players were restricted to messages of 1–3 words, and required to send at least one message per game. Each player was paid $0.25 per game. 382 games were collected with 5 different road layouts, each represented as an 8x8 grid presented to players as in fig:drive-examples. The action space is discrete: players can move forward, back, turn left, turn right, or wait. These were divided into a 282-game training set and 100-game test set. The message inventory consists of all messages sent more than 3 times. Input features consists of indicators on the agent's current position and orientation, goal position, and map identity. Data is available for download at http://github.com/jacobandreas/neuralese."
],
[
"A mechanism for understanding the behavior of a learned model should allow a human user both to correctly infer its beliefs and to successfully interoperate with it; we accordingly report results of both “belief” and “behavior” evaluations.",
"To support easy reproduction and comparison (and in keeping with standard practice in machine translation), we focus on developing automatic measures of system performance. We use the available training data to develop simulated models of human decisions; by first showing that these models track well with human judgments, we can be confident that their use in evaluations will correlate with human understanding. We employ the following two metrics:",
"This evaluation focuses on the denotational perspective in semantics that motivated the initial development of our model. We have successfully understood the semantics of a message $z_r$ if, after translating $z_r \\mapsto z_h$ , a human listener can form a correct belief about the state in which $z_r$ was produced. We construct a simple state-guessing game where the listener is presented with a translated message and two state observations, and must guess which state the speaker was in when the message was emitted.",
"When translating from natural language to neuralese, we use the learned agent model to directly guess the hidden state. For neuralese to natural language we must first construct a “model human listener” to map from strings back to state representations; we do this by using the training data to fit a simple regression model that scores (state, sentence) pairs using a bag-of-words sentence representation. We find that our “model human” matches the judgments of real humans 83% of the time on the colors task, 77% of the time on the birds task, and 77% of the time on the driving task. This gives us confidence that the model human gives a reasonably accurate proxy for human interpretation.",
"This evaluation focuses on the cooperative aspects of interpretability: we measure the extent to which learned models are able to interoperate with each other by way of a translation layer. In the case of reference games, the goal of this semantic evaluation is identical to the goal of the game itself (to identify the hidden state of the speaker), so we perform this additional pragmatic evaluation only for the driving game. We found that the most reliable way to make use of human game traces was to construct a speaker-only model human. The evaluation selects a full game trace from a human player, and replays both the human's actions and messages exactly (disregarding any incoming messages); the evaluation measures the quality of the natural-language-to-neuralese translator, and the extent to which the learned agent model can accommodate a (real) human given translations of the human's messages.",
"We compare our approach to two baselines: a random baseline that chooses a translation of each input uniformly from messages observed during training, and a direct baseline that directly maximizes $p(z^{\\prime } | z)$ (by analogy to a conventional machine translation system). This is accomplished by sampling from a DCP speaker in training states labeled with natural language strings."
],
[
"In all below, “R” indicates a DCP agent, “H” indicates a real human, and “H*” indicates a model human player."
],
[
"We have investigated the problem of interpreting message vectors from deep networks by translating them. After introducing a translation criterion based on matching listener beliefs about speaker states, we presented both theoretical and empirical evidence that this criterion outperforms a conventional machine translation approach at recovering the content of message vectors and facilitating collaboration between humans and learned agents.",
"While our evaluation has focused on understanding the behavior of deep communicating policies, the framework proposed in this paper could be much more generally applied. Any encoder–decoder model BIBREF21 can be thought of as a kind of communication game played between the encoder and the decoder, so we can analogously imagine computing and translating “beliefs” induced by the encoding to explain what features of the input are being transmitted. The current work has focused on learning a purely categorical model of the translation process, supported by an unstructured inventory of translation candidates, and future work could explore the compositional structure of messages, and attempt to synthesize novel natural language or neuralese messages from scratch. More broadly, the work here shows that the denotational perspective from formal semantics provides a framework for precisely framing the demands of interpretable machine learning BIBREF22 , and particularly for ensuring that human users without prior exposure to a learned model are able to interoperate with it, predict its behavior, and diagnose its errors."
],
[
"JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei Fellowship. We are grateful to Lisa Anne Hendricks for assistance with the Caltech–UCSD Birds dataset, and to Liang Huang and Sebastian Schuster for useful feedback."
],
[
"Learned agents have the following form:",
"where $h$ is a hidden state, $z$ is a message from the other agent, $u$ is a distribution over actions, and $x$ is an observation of the world. A single hidden layer with 256 units and a $\\tanh $ nonlinearity is used for the MLP. The GRU hidden state is also of size 256, and the message vector is of size 64.",
"Agents are trained via interaction with the world as in Hausknecht15DRQN using the adam optimizer BIBREF28 and a discount factor of 0.9. The step size was chosen as $0.003$ for reference games and $0.0003$ for the driving game. An $\\epsilon $ -greedy exploration strategy is employed, with the exploration parameter for timestep $t$ given by:",
" $\n\\epsilon = \\max {\\left\\lbrace \\begin{array}{ll}\n(1000 - t) / 1000 \\\\\n(5000 - t) / 50000 \\\\\n0\n\\end{array}\\right.}\n$ ",
"As in Foerster16Communication, we found it useful to add noise to the communication channel: in this case, isotropic Gaussian noise with mean 0 and standard deviation 0.3. This also helps smooth $p(z|x_a)$ when computing the translation criterion."
],
[
"As discussed in sec:models, the translation criterion is computed based on the quantity $p(z|x)$ . The policy representation above actually defines a distribution $p(z|x, h)$ , additionally involving the agent's hidden state $h$ from a previous timestep. While in principle it is possible to eliminate the dependence on $h$ by introducing an additional sampling step into alg:translation, we found that it simplified inference to simply learn an additional model of $p(z|x)$ directly. For simplicity, we treat the term $\\log (p(z^{\\prime }) / p(z))$ as constant, those these could be more accurately approximated with a learned density estimator.",
"This model is trained alongside the learned agent to imitate its decisions, but does not get to observe the recurrent state, like so:",
"Here the multilayer perceptron has a single hidden layer with $\\tanh $ nonlinearities and size 128. It is also trained with adam and a step size of 0.0003.",
"We use exactly the same model and parameters to implement representations of $p(z|x)$ for human speakers, but in this case the vector $z$ is taken to be a distribution over messages in the natural language inventory, and the model is trained to maximize the likelihood of labeled human traces."
]
],
"section_name": [
"Introduction",
"Related work",
"What's in a translation?",
"Translation models",
"Belief and behavior",
"Tasks",
"Metrics",
"Results",
"Conclusion",
"Acknowledgments",
"Agents",
"Representational models"
]
} | {
"answers": [
{
"annotation_id": [
"2b2a11067dc8268e0199bc73a86c6d21cc807ae2",
"d842e805b2168b629b8d9b12291dd7996ebefe03",
"abbf1974de8891aea590e8d402c23d5b345177a9"
],
"answer": [
{
"evidence": [
"In the remainder of the paper, we evaluate the empirical behavior of our approach to translation. Our evaluation considers two kinds of tasks: reference games and navigation games. In a reference game (e.g. fig:tasksa), both players observe a pair of candidate referents. A speaker is assigned a target referent; it must communicate this target to a listener, who then performs a choice action corresponding to its belief about the true target. In this paper we consider two variants on the reference game: a simple color-naming task, and a more complex task involving natural images of birds. For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 . We use standard train / validation / test splits for both of these datasets."
],
"extractive_spans": [
"the XKCD color dataset",
"the Caltech–UCSD Birds dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the remainder of the paper, we evaluate the empirical behavior of our approach to translation. Our evaluation considers two kinds of tasks: reference games and navigation games. In a reference game (e.g. fig:tasksa), both players observe a pair of candidate referents. A speaker is assigned a target referent; it must communicate this target to a listener, who then performs a choice action corresponding to its belief about the true target. In this paper we consider two variants on the reference game: a simple color-naming task, and a more complex task involving natural images of birds. For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 . We use standard train / validation / test splits for both of these datasets.",
"The final task we consider is the driving task (fig:tasksc) first discussed in the introduction. In this task, two cars, invisible to each other, must each navigate between randomly assigned start and goal positions without colliding. This task takes a number of steps to complete, and potentially involves a much broader range of communication strategies. To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set."
],
"extractive_spans": [
"XKCD color dataset",
"Caltech–UCSD Birds dataset",
"actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game"
],
"free_form_answer": "",
"highlighted_evidence": [
"For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 .",
"The final task we consider is the driving task (fig:tasksc) first discussed in the introduction.",
"To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the remainder of the paper, we evaluate the empirical behavior of our approach to translation. Our evaluation considers two kinds of tasks: reference games and navigation games. In a reference game (e.g. fig:tasksa), both players observe a pair of candidate referents. A speaker is assigned a target referent; it must communicate this target to a listener, who then performs a choice action corresponding to its belief about the true target. In this paper we consider two variants on the reference game: a simple color-naming task, and a more complex task involving natural images of birds. For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 . We use standard train / validation / test splits for both of these datasets.",
"The final task we consider is the driving task (fig:tasksc) first discussed in the introduction. In this task, two cars, invisible to each other, must each navigate between randomly assigned start and goal positions without colliding. This task takes a number of steps to complete, and potentially involves a much broader range of communication strategies. To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set."
],
"extractive_spans": [],
"free_form_answer": "XKCD color dataset; Caltech-UCSD Birds dataset; game data from Amazon Mechanical Turk workers ",
"highlighted_evidence": [
"For examples of human communication strategies for these tasks, we obtain the XKCD color dataset BIBREF17 , BIBREF18 and the Caltech–UCSD Birds dataset BIBREF19 with accompanying natural language descriptions BIBREF20 .",
"To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What dataset is used?"
],
"question_id": [
"d6b0c71721ed24ef1d9bd31ed3a266b0c7fc9b57"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"pragmatics"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example interaction between a pair of agents in a deep communicating policy. Both cars are attempting to cross the intersection, but cannot see each other. By exchanging message vectors z(t), the agents are able to coordinate and avoid a collision. This paper presents an approach for understanding the contents of these message vectors by translating them into natural language.",
"Figure 2: Overview of our approach—best-scoring translations generated for a reference game involving images of birds. The speaking agent’s goal is to send a message that uniquely identifies the bird on the left. From these translations it can be seen that the learned model appears to discriminate based on coarse attributes like size and color.",
"Figure 3: Schematic representation of communication games. At every timestep t, players a and b make an observation x(t) and receive a message z(t−1), then produce an action u(t) and a new message z(t).",
"Figure 4: Cell implementing a single step of agent communication (compare with Sukhbaatar et al. (2016) and Foerster et al. (2016)). MLP denotes a multilayer perceptron; GRU denotes a gated recurrent unit (Cho et al., 2014). Dashed lines represent recurrent connections.",
"Figure 5: Simplified game representation used for analysis in Section 6. A speaker agent sends a message to a listener agent, which takes a single action and receives a reward.",
"Figure 6: Tasks used to evaluate the translation model. (a–b) Reference games: both players observe a pair of reference candidates (colors or images); Player a is assigned a target (marked with a star), which player b must guess based on a message from a. (c) Driving game: each car attempts to navigate to its goal (marked with a star). The cars cannot see each other, and must communicate to avoid a collision.",
"Table 1: Evaluation results for reference games. (a) The colors task. (b) The birds task. Whether the model human is in a listener or speaker role, translation based on belief matching outperforms both random and machine translation baselines.",
"Figure 7: Best-scoring translations generated for color task.",
"Figure 8: Best-scoring translations generated for driving task generated from the given speaker state.",
"Table 2: Belief evaluation results for the driving game. Driving states are challenging to identify based on messages alone (as evidenced by the comparatively low scores obtained by singlelanguage pairs) . Translation based on belief achieves the best overall performance in both directions.",
"Table 3: Behavior evaluation results for the driving game. Scores are presented in the form “reward / completion rate”. While less accurate than either humans or DCPs with a shared language, the models that employ a translation layer obtain higher reward and a greater overall success rate than baselines."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"3-Figure4-1.png",
"6-Figure5-1.png",
"7-Figure6-1.png",
"8-Table1-1.png",
"8-Figure7-1.png",
"9-Figure8-1.png",
"9-Table2-1.png",
"9-Table3-1.png"
]
} | [
"What dataset is used?"
] | [
[
"1704.06960-Tasks-1",
"1704.06960-Tasks-0"
]
] | [
"XKCD color dataset; Caltech-UCSD Birds dataset; game data from Amazon Mechanical Turk workers "
] | 99 |
1904.03670 | Speech Model Pre-training for End-to-End Spoken Language Understanding | Whereas conventional spoken language understanding (SLU) systems map speech to text, and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model. Achieving high accuracy with these end-to-end models without a large amount of training data is difficult. We propose a method to reduce the data requirements of end-to-end SLU in which the model is first pre-trained to predict words and phonemes, thus learning good features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and show that our method improves performance both when the full dataset is used for training and when only a small subset is used. We also describe preliminary experiments to gauge the model's ability to generalize to new phrases not heard during training. | {
"paragraphs": [
[
"Spoken language understanding (SLU) systems infer the meaning or intent of a spoken utterance BIBREF0 . This is crucial for voice user interfaces, in which the speaker's utterance needs to be converted into an action or query. For example, for a voice-controlled coffee machine, an utterance like “make me a large coffee with two milks and a sugar, please” might have an intent representation like {drink: \"coffee\", size: \"large\", additions: [{type: \"milk\", count: 2}, {type: \"sugar\", count: 1}]}.",
"The conventional SLU pipeline is composed of two modules: an automatic speech recognition (ASR) module that maps the speech to a text transcript, and a natural language understanding (NLU) module that maps the text transcript to the speaker's intent BIBREF1 , BIBREF2 , BIBREF3 . An alternative approach that is beginning to gain popularity is end-to-end SLU BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In end-to-end SLU, a single trainable model maps the speech audio directly to the speaker's intent without explicitly producing a text transcript (Fig. FIGREF4 ). Unlike the conventional SLU pipeline, end-to-end SLU:",
"End-to-end models have been made possible by deep learning, which automatically learns hierarchical representations of the input signal BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . Speech is natural to represent in a hierarchical way: waveform INLINEFORM0 phonemes INLINEFORM1 morphemes INLINEFORM2 words INLINEFORM3 concepts INLINEFORM4 meaning. However, because speech signals are high-dimensional and highly variable even for a single speaker, training deep models and learning these hierarchical representations without a large amount of training data is difficult.",
"The computer vision BIBREF14 , BIBREF15 , natural language processing BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , and ASR BIBREF21 , BIBREF22 communities have attacked the problem of limited supervised training data with great success by pre-training deep models on related tasks for which there is more training data. Following their lead, we propose an efficient ASR-based pre-training methodology in this paper and show that it may be used to improve the performance of end-to-end SLU models, especially when the amount of training data is very small.",
"Our contributions are as follows:"
],
[
"Three key papers describing end-to-end SLU were written by Qian et al. BIBREF4 , Serdyuk et al. BIBREF5 , and Chen et al. BIBREF6 . Serdyuk et al. in BIBREF5 use no pre-training whatsoever. Qian et al. in BIBREF4 use an auto-encoder to initialize the SLU model. Chen et al. BIBREF6 pre-train the first stage of an SLU model to recognize graphemes; the softmax outputs of the first stage are then fed to a classifier second stage. The model proposed in this paper is similar to theirs, but removes the restriction of the softmax bottleneck and uses alternative training targets, as we will describe later.",
"More recently, Haghani et al. in BIBREF7 compare four types of sequence-to-sequence models for SLU, including a direct model (end-to-end with no pre-training) and a multi-task model (uses a shared encoder whose output is ingested by a separate ASR decoder and SLU decoder). The model proposed here is somewhat similar to their multi-task model, although we do not use or require the ASR targets during SLU training.",
"The work listed above deals with very high resource SLU—in BIBREF7 , for instance, the Google Home BIBREF23 dataset consists of 24 million labeled utterances. In contrast, Renkens et al. in BIBREF8 consider the problem of end-to-end SLU with limited training data, and find that capsule networks BIBREF24 , compared to conventional neural network models, are more easily capable of learning end-to-end SLU from scratch. However, they do not consider the effect of pre-training on other speech data.",
"This previous work has all been conducted on datasets that are closed-source or too small to test hypotheses about the amount of data required to generalize well. The lack of a good open-source dataset for end-to-end SLU experiments makes it difficult for most people to perform high-quality, reproducible research on this topic. We therefore created a new SLU dataset, the “Fluent Speech Commands” dataset, which Fluent.ai releases along with this paper."
],
[
"This section describes the structure and creation of Fluent Speech Commands."
],
[
"The dataset is composed of 16 kHz single-channel .wav audio files. Each audio file contains a recording of a single command that one might use for a smart home or virtual assistant, like “put on the music” or “turn up the heat in the kitchen”.",
"Each audio is labeled with three slots: action, object, and location. A slot takes on one of multiple values: for instance, the “location” slot can take on the values “none”, “kitchen”, “bedroom”, or “washroom”. We refer to the combination of slot values as the intent of the utterance. The dataset has 31 unique intents in total. We do not distinguish between domain, intent, and slot prediction, as is sometimes done in SLU BIBREF25 .",
"The dataset can be used as a multi-label classification task, where the goal is to predict the action, object, and location labels. Since the slots are not actually independent of each other, a more careful approach would model the relationship between slots, e.g. using an autoregressive model, as in BIBREF7 . We use the simpler multi-label classification approach in this paper, so as to avoid the issues sometimes encountered training autoregressive models and instead focus on questions related to generalization using a simpler model. Alternately, the 31 distinct intents can be “flattened” and used as 31 distinct labels for a single-label classification task.",
"For each intent, there are multiple possible wordings: for example, the intent {action: \"activate\", object: \"lights\", location: \"none\"} can be expressed as “turn on the lights”, “switch the lights on”, “lights on”, etc.. These phrases were decided upon before data collection by asking employees at Fluent.ai, including both native and non-native English speakers, for various ways in which they might express a particular intent. There are 248 different phrases in total."
],
[
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order. Participants consented to data being released and provided demographic information about themselves. The demographic information about these anonymized speakers (age range, gender, speaking ability, etc.) is included along with the dataset.",
"The data was validated by a separate set of crowdsourcers. All audios deemed by the crowdsourcers to be unintelligible or contain the wrong phrase were removed. The total number of speakers, utterances, and hours of audio remaining is shown in Table TABREF12 ."
],
[
"The utterances are randomly divided into train, valid, and test splits in such a way that no speaker appears in more than one split. Each split contains all possible wordings for each intent, though our code has the option to include data for only certain wordings for different sets, to test the model's ability to recognize wordings not heard during training. The dataset has a .csv file for each split that lists the speaker ID, file path, transcription, and slots for all the .wav files in that split."
],
[
"Here we review some related public datasets and show the gap that Fluent Speech Commands fills.",
"The Google Speech Commands dataset BIBREF26 (to which the name “Fluent Speech Commands” is an homage) is a free dataset of 30 single-word spoken commands (“yes”, “no”, “stop”, “go”, etc.). This dataset is suitable for keyword spotting experiments, but not for SLU.",
"ATIS is an SLU dataset consisting of utterances related to travel planning. This dataset can only be obtained expensively from the Linguistic Data Consortium.",
"The Snips NLU Benchmark BIBREF2 has a rich set of virtual assistant commands, but contains only text, with no audio, and hence is not suitable for end-to-end SLU experiments.",
"The Grabo, Domotica, and Patcor datasets are three related datasets of spoken commands for robot control and card games developed by KU Leuven and used in BIBREF8 . These datasets are free, but have only a small number of speakers and phrases.",
"In contrast to these datasets, Fluent Speech Commands is simultaneously audio-based, reasonably large, and free, and contains several multiple-word commands corresponding to each of the intents."
],
[
"The model proposed in this paper, shown in Fig. FIGREF17 , is a deep neural network consisting of a stack of modules, where the first modules are pre-trained to predict phonemes and words. The word and phoneme classifiers are discarded, and the entire model is then trained end-to-end on the supervised SLU task. In what follows, we justify these design decisions and give more details about the model hyperparameters."
],
[
"ASR models are trained using a variety of targets, including phonemes, graphemes, wordpieces, or more recently whole words BIBREF27 , BIBREF28 , BIBREF29 . We choose whole words as the pre-training targets, since this is what a typical NLU module would expect as input. A typical ASR dataset contains too many unique words (LibriSpeech BIBREF30 has more than 200,000) to assign an output to each one; we only assign a label to the 10,000 most common words. This leaves much of the pre-training data without any labels, which wastes data. By using phonemes as intermediate pre-training targets BIBREF31 , BIBREF19 , BIBREF32 , we are able to pre-train on speech segments with no word label. Additionally, we find that using phonemes as intermediate targets speeds up word-level pre-training BIBREF33 , BIBREF34 .",
"We use the Montreal Forced Aligner BIBREF35 to obtain word- and phoneme-level alignments for LibriSpeech, and we pre-train the model on the entire 960 hours of training data using these alignments INLINEFORM0 . Using force-aligned labels has the additional benefit of enabling pre-training using short, random crops rather than entire utterances, which reduces the computation and memory required to pre-train the model.",
"",
"",
"",
""
],
[
"The first module takes as input the audio signal INLINEFORM0 and outputs INLINEFORM1 , a sequence of hidden representations that are pre-trained to predict phonemes. The phoneme-level logits are computed using a linear classifier: DISPLAYFORM0 ",
"The phoneme module is implemented using a SincNet layer BIBREF36 , BIBREF37 , which processes the raw input waveform, followed by multiple convolutional layers and recurrent layers with pooling and dropout. More detailed hyperparameters can be found in our code."
],
[
"The second module takes as input INLINEFORM0 and outputs INLINEFORM1 . Similar to the phoneme-level module, it uses recurrent layers with dropout and pooling, and is pre-trained to predict words using another linear classifier: DISPLAYFORM0 ",
"Notice that the input to this module is INLINEFORM0 , not INLINEFORM1 , and likewise the output to the next stage is INLINEFORM2 , not INLINEFORM3 . There are two good reasons for forwarding INLINEFORM4 instead of INLINEFORM5 . The first is that we don't want to remove a degree of freedom from the model: the size of INLINEFORM6 is fixed by the number of targets, and this would in turn fix the size of the next layer of the model. The second reason is that computing INLINEFORM7 requires multiplying and storing a large ( INLINEFORM8 2.5 million parameters) weight matrix, and by discarding this matrix after pre-training, we save on memory and computation."
],
[
"The third module, which is not pre-trained, maps INLINEFORM0 to the predicted intent. Depending on the structure of the intent representation, the intent module might take on various forms. Since in this work we use a fixed three-slot intent representation, we implement this module using a recurrent layer, followed by max-pooling to squash the sequence of outputs from the recurrent layer into a single vector of logits corresponding to the different slot values, similar to BIBREF5 ."
],
[
"Although the pre-trained model works well as a frozen feature extractor, it may be preferable to “unfreeze” its weights and finetune them for the SLU task with backpropagation. Similar to ULMFiT BIBREF17 , we find that gradually unfreezing the pre-trained layers works better than unfreezing them all at once. We unfreeze one layer each epoch, and stop at a pre-determined layer, which is a hyperparameter."
],
[
"Here we report results for three experiments on Fluent Speech Commands: using the full dataset, using a subset of the dataset, and using a subset of wordings."
],
[
"We first trained models given the entire SLU training set. The models used one of: 1) no pre-training (randomly initialized), 2) pre-training with no unfreezing, 3) gradually unfreezing only the word layers, or 4) gradually unfreezing both the word layers and phoneme layers. What we report here as “accuracy” refers to the accuracy of all slots for an utterance taken together—that is, if the predicted intent differs from the true intent in even one slot, the prediction is deemed incorrect.",
"The validation accuracy of these models over time is shown in Fig. . The best results are obtained when only the word layers of the pre-trained model are unfrozen. This may be because the model begins to forget the more general phonetic knowledge acquired during pre-training. For the test set, the frozen model and partially unfrozen model perform roughly equally well (Table TABREF28 , “full” column), possibly because the test set is “easier” than the validation set. In all cases, the pre-trained models outperform the randomly initialized model."
],
[
"To simulate a smaller dataset, we randomly selected 10% of the training set, and used this instead of the entire training set. Fig. shows the validation accuracy (on the entire validation set, not a subset) over time. A similar trend is observed as for the entire dataset: unfreezing the word layers works best. The gap in final test accuracy between the randomly initialized model and the pre-trained models increases (Table TABREF28 , “10%” column); the final test accuracy for the pre-trained models drops only slightly, further highlighting the advantage of our proposed method."
],
[
"What happens if new wordings appear in the test data that never appear in the training data? This is an important question, since it is generally impractical to try to imagine every possible wording for a particular intent while gathering training data.",
"To test this, we trained models on three specific phrases, “turn on the lights”, “turn off the lights”, and “switch on the lights” (273 utterances total), and tested on those same phrases, as well as a new phrase: “switch off the lights”. If the model incorrectly infers that utterances that contain “switch” always correspond to turning on the lights, it will incorrectly guess that “switch off the lights” corresponds to turning on the lights; if the model infers that the presence of the word “off” corresponds to turning off the lights, it will generalize to the new phrase. The randomly initialized model was unable to fit this tiny training set, even with a very low learning rate and no regularization. The pre-trained models were able to generalize to the new wording (with 97% accuracy on the validation set, which contains more examples of the new phrase than of the training phrases).",
"However, there are many situations in which our model does not correctly generalize. For example, if the model is trained only with examples containing “bedroom” and “washroom”, but then tested on an example containing “bathroom”, it will guess the intent corresponding to “bedroom” because “bedroom” sounds more similar to “bathroom” than to “washroom”, even though “washroom” is the correct meaning. In text-based NLU, this scenario can be handled using word embeddings, which represent words in such a way that words with similar meanings have similar vector representations BIBREF1 , BIBREF38 . It may be possible to teach the pre-trained part of the model to output “embedding-like” word representations so that the intent module can recognize the meaning of phrases with synonyms."
],
[
"In this paper, we proposed a pre-training methodology for end-to-end SLU models, introduced the Fluent Speech Commands dataset, and used this dataset to show that our pre-training techniques improve performance both for large and small SLU training sets. In the future, we plan to continue using Fluent Speech Commands to explore the limitations of end-to-end SLU, like new wordings and synonyms not observed in the SLU dataset, to see if these limitations can be overcome."
],
[
"We would like to acknowledge the following for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs, and CIFAR.",
"Thanks to Dima Serdyuk and Kyle Kastner at Mila, and Farzaneh Fard, Luis Rodriguez Ruiz, Sam Myer, Mohamed Mhiri, and Arash Rad at Fluent.ai for helpful discussions with us about this work."
]
],
"section_name": [
"Introduction",
"Related work",
"Dataset",
"Audio and labels",
"Data collection",
"Dataset splits",
"Related datasets",
"Model and Pre-training Strategy",
"Which ASR targets to use?",
"Phoneme module",
"Word module",
"Intent module",
"Unfreezing schedule",
"Experiments",
"Full dataset",
"Partial dataset",
"Generalizing to new wordings",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"2dc524e6062c5ede03075ec93f0372844288368d",
"7c84fb594cec81129ed66ae18f32faf7dc845fa6",
"a62fff440f3c40dc441a09f7212e89c4b8a1b1a2"
],
"answer": [
{
"evidence": [
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order. Participants consented to data being released and provided demographic information about themselves. The demographic information about these anonymized speakers (age range, gender, speaking ability, etc.) is included along with the dataset."
],
"extractive_spans": [],
"free_form_answer": "data was collected using crowdsourcing where speakers were recorded saying random ordered phrases for each intent twice",
"highlighted_evidence": [
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order. Participants consented to data being released and provided demographic information about themselves. The demographic information about these anonymized speakers (age range, gender, speaking ability, etc.) is included along with the dataset."
],
"extractive_spans": [
"crowdsourcing"
],
"free_form_answer": "",
"highlighted_evidence": [
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The data was collected using crowdsourcing. Each speaker was recorded saying each wording for each intent twice. The phrases to record were presented in a random order. Participants consented to data being released and provided demographic information about themselves. The demographic information about these anonymized speakers (age range, gender, speaking ability, etc.) is included along with the dataset."
],
"extractive_spans": [
"using crowdsourcing"
],
"free_form_answer": "",
"highlighted_evidence": [
"The data was collected using crowdsourcing. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"How was the dataset collected?"
],
"question_id": [
"63cdac43a643fc1e06da44910458e89b2c7cd921"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: Conventional ASR → NLU system for SLU versus end-to-end SLU.",
"Table 1: Information about the Fluent Speech Commands dataset.",
"Figure 2: The lower layers of the model are pre-trained using ASR targets (words and phonemes). The word and phoneme classifiers are discarded, and the features from the pre-trained part of the model (blue) are used as the input to the subsequent module (white), which is trained using SLU targets.",
"Figure 3: Accuracy on the validation set over time for models trained on (a) the full SLU dataset or (b) 10% of the dataset.",
"Table 2: Accuracy on the test set for different models, given the full training dataset or a 10% subset of the training data."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Table2-1.png"
]
} | [
"How was the dataset collected?"
] | [
[
"1904.03670-Data collection-0"
]
] | [
"data was collected using crowdsourcing where speakers were recorded saying random ordered phrases for each intent twice"
] | 100 |
1601.03313 | Political Speech Generation | In this report we present a system that can generate political speeches for a desired political party. Furthermore, the system allows to specify whether a speech should hold a supportive or opposing opinion. The system relies on a combination of several state-of-the-art NLP methods which are discussed in this report. These include n-grams, Justeson&Katz POS tag filter, recurrent neural networks, and latent Dirichlet allocation. Sequences of words are generated based on probabilities obtained from two underlying models: A language model takes care of the grammatical correctness while a topic model aims for textual consistency. Both models were trained on the Convote dataset which contains transcripts from US congressional floor debates. Furthermore, we present a manual and an automated approach to evaluate the quality of generated speeches. In an experimental evaluation generated speeches have shown very high quality in terms of grammatical correctness and sentence transitions. | {
"paragraphs": [
[
"Many political speeches show the same structures and same characteristics regardless of the actual topic. Some phrases and arguments appear again and again and indicate a certain political affiliation or opinion. We want to use these remarkable patterns to train a system that generates new speeches. Since there are major differences between the political parties we want the system to consider the political affiliation and the opinion of the intended speaker. The goal is to generate speeches where no one can tell the difference to hand-written speeches.",
"In this report we first discuss related works which deal with similar or related methods. Then we describe and analyze the dataset we use. Next, we present the methods we used to implement our system. We also describe investigated methods that were not used in the final implementation. Then we describe a performed experiment and how we evaluated the results. Finally, we conclude our work and give an outlook. The appendix of this report contains the generated speeches from the experiment."
],
[
"Creating models for a corpus that allow retrieving certain information is a major part of this project as well as in the entire NLP domain. Blei et al. UID17 present in their paper a model which is known as latent Dirichlet allocation (LDA). LDA has become one of the most popular topic models in the NLP domain. LDA is generative probabilistic model that discovers automatically the underlying topics. Each document is modeled as a mixture of various topics. These topics can be understood as a collection of words that have different probabilities of appearance. Words with the highest probabilities represent the topics.",
"However, LDA is a bag-of-words model which means that the word orders are not preserved. That means LDA does not capture collocations or multiword named entities. Lau et al. UID18 claim that collocations empirically enhance topic models. In an experiment they replaced the top-ranked bigrams with single tokens, deleted the 200 most frequent terms from the vocabulary and performed ordinary LDA. The results from experiments on four distinct datasets have shown that this bigram-variant is very beneficial for LDA topic models.",
"Fürnkranz UID19 has studied the usage of n-grams in the text-categorization domain. He has shown that using bi- and trigrams in addition to the set-of-word representation improves the classification performance significantly. Furthermore, he has shown that sequences longer than three words reduce the classification performance. That also indicates that collocations play a crucial role when it comes to inferring the latent structure of documents.",
"Cavnar and Trenkle UID20 have also used an n-gram-based approach for text categorization. Their system is based on calculating and comparing profiles of N-gram frequencies. They compute for every category a representing profile from the training data. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles and selects the category whose profile has the smallest distance.",
"Smadja UID21 presents a tool, Xtract, which implements methods to extracts variable-length collocations. The extraction process is done in several stages. In the first stage the system determines the top-ranked bigrams of the corpus. In the second stage Xtract examines the statistical distribution of words and part-of-speech tags around the bigrams from the previous stage. Compounds with a probability above a certain threshold are retained while the others are rejected. In the third stage they enrich the collocations with syntactical information obtained from Cass UID22 . The syntactical information helps to evaluate the candidate collocations and to decide whether they should be rejected or not.",
"Wang et al UID23 propose a topical n-gram model that is capable of extracting meaningful phrases and topics. It combines the bigram topic model UID24 and LDA collocation model UID25 . One of the key features of this model is to decide whether two consecutive words should be treated as a single token or not depending on their nearby context. Compared to LDA the extracted topics are semantically more meaningful. This model shows also really good results in information retrieval (IR) tasks.",
"Justeson and Katz UID26 present a method to extract technical terms from documents. Their approach is not restricted to technical terms but applies to all multiword named entities of length two or three. The foundations of their method are bi- and trigrams which have a certain POS tag structure. That is, they extract all bi- and trigrams from the corpus, identify their POS tags and check them against a predefined list of accepted POS tag patterns. In their experiment this method identifies 99% of the technical multiword terms in the test data.",
"Wacholder UID27 presents an approach for identifying significant topics within a document. The proposed method bases on the identification of Noun Phrases (NPs) and consists of three steps. First, a list of candidate significant topics consisting of all simplex NPs is extracted from the document. Next, these NPs are clustered by head. Finally, a significance measure is obtained by ranking frequency of heads. Those NPs with heads that occur with greater frequency in the document are more significant than NPs whose head occurs less frequently.",
"Blei and Lafferty UID28 propose their Correlated Topic model (CTM). While LDA assumes all latent topics are independent CTM aims to capture correlations between them. They argue that a document about genetics is more likely also about disease than X-ray astronomy. The CTM builds on the LDA model but they use a hierarchical topic model of documents that replaces the Dirichlet distribution of per-document topic proportions with a logistic normal. According to their results the model gives better predictive performance and uncovers interesting descriptive statistics.",
"Ivyer et al. UID35 apply Recursive Neural Networks (RNN) to political ideology detection. The RNNs were initialized with word2vec embeddings. The word vector dimensions were set to 300 to allow direct comparison with other experiments. However, they claim that smaller vector sizes (50, 100) do not significantly change accuracy. They performed experiments on two different dataset: the Convote dataset UID41 and the Ideological Books Corpus (IBC) UID37 . They claim that their model outperforms existing models on these two datasets.",
"There has been a lot of research in the field of Natural Language Generation (NLG). The paper Building Applied Natural Language Generation Systems UID29 discusses the main requirements and tasks of NLG systems. Among others, they investigate a so-called Corpus-based approach. That is, a collection of example inputs is mapped to output texts of the corpus. This is basically what we plan to do because we have already all the speech segments labeled with the political party and the opinion. However, our generator will have a simpler architecture but we will use the described list of tasks as a guideline.",
"Most NLG systems are designed to create a textual representation of some input data. That is, the input data determines the content. For example SumTime-Mousam UID30 generates a textual weather forecast based on numerical weather simulations. Another example is the ModelExplainer system UID31 which takes as input a specification of an object-oriented class model and produces as output a text describing the model. Other NLG systems are used as authoring aid for example to help personnel officers to write job descriptions UID32 or to help technical authors produce instructions for using software UID33 .",
"A NLG system that follows a different approach is SciGen UID38 . SciGen is an automatic computer science research paper generator developed by three MIT students. That is, it creates random papers which show actually a very high quality in terms of structuring and lexicalization, and they even include graphs, figures, and citations. SciGen has become pretty famous after some of its generated papers got accepted at conferences and published in journals. In particular, their paper Rooter: A Methodology for the Typical Unification of Access Points and Redundancy raised a lot of attention because it was accepted to the 2005 World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) and the authors were even invited to speak at the conference. SciGen requires as input only the names of the authors; all the content will be generated randomly. Our generator will follow the same approach since we also do not specify the content of the generated speech. The content is determined by the training data and requires no further specification."
],
[
"The main data source for this project is the Convote data set UID41 . It contains a total of 3857 speech segments from 53 US Congressional floor debates from the year 2005. Each speech segment can be referred to its debate, its speaker, the speaker’s party and the speaker’s vote which serves as the ground-truth label for the speech. The dataset was originally created in the course of the project Get out the vote UID34 . The authors used the dataset to train a classifier in order to determine whether a speech represents support of or opposition to proposed legislation. They did not only analyze the speeches individually but also investigated agreements and disagreements with the opinions of other speakers. That is, they identified references in the speech segments, determined the targets of those references, and decided whether a reference represents an instance of agreement or disagreement. However, we focus only on the individual speech segments and disregard references.",
"For our work we have removed single-sentence speeches, HTML-tags and corrected punctuation marks. In order to enable simple sentence splitting we replaced all sentence delimiters by a stop-token. Furthermore, we inserted special tokens which indicate the start and the end of a speech. Then we divided all the speeches into the four classes given by the combination of possible political parties and speech opinions. Table TABREF1 shows the four speech classes and table TABREF2 gives a quantitative overview of the corpus’ content. It can be seen that the classes RY and DN contain the majority of the speeches."
],
[
"We use a simple statistical language model based on n-grams. In particular, we use 6-grams. That is, for each sequence of six consecutive words we calculate the probability of seeing the sixth word given the previous five ones. That allows us to determine very quickly all words which can occur after the previous five ones and how likely each of them is."
],
[
"For our topic model we use a Justeson and Katz (J&K) POS tag filter for two- and three-word terms UID26 . As suggested by WordHoard UID39 we expanded the list of POS tag patterns by the sequence Noun-Conjunction-Noun. We determined the POS tags for each sentence in the corpus and identified then all two- and three-word terms that match one of the patterns. For the POS tagging we used maxent treebank pos tagging model from the Natural Language Toolkit (NLTK) for Python. It uses the maximum entropy model and was trained on the Wall Street Journal subset of the Penn Tree bank corpus UID40 .",
"Some of the terms are very generic and appear very often in all classes. In order to find those terms that appear particularly often in a certain class we calculate a significance score. Our significance score INLINEFORM0 is defined by the ratio of the probability of seeing a word INLINEFORM1 in a certain class INLINEFORM2 to the probability to see the word in the entire corpus: INLINEFORM3 ",
"This significance score gives information about how often a term occurs in a certain class compared to the entire corpus. That is, every score greater than 1.0 indicates that in the given class a certain term occurs more often than average. We consider all phrases which occur at least 20 times in the corpus and have a ratio greater than 1. These terms represent the topics of the corpus. Table TABREF5 lists the top ten topics of each class ordered by their score. All these terms represent meaningful topics and it seems reasonable that there were debates about them."
],
[
"For the speech generation one has to specify the desired class which consists of the political party and the intended vote. Based on the selected class the corresponding models for the generation are picked. From the language model of the selected class we obtain the probabilities for each 5-gram that starts a speech. From that distribution we pick one of the 5-grams at random and use it as the beginning of our opening sentence. Then the system starts to predict word after word until it predicts the token that indicates the end of the speech. In order to predict the next word we first determine what topics the so far generated speech is about. This is done by checking every topic-term if it appears in the speech. For every occurring term we calculate the topic coverage INLINEFORM0 in our speech. The topic coverage is an indicator of how well a certain topic INLINEFORM1 is represented in a speech INLINEFORM2 . The following equation shows the definition of the topic coverage: INLINEFORM3 ",
"We rank all topics by their topic coverage values and pick the top 3 terms as our current topic set INLINEFORM0 . For these 3 terms we normalize the values of the ratios so that they sum up to 1. This gives us the probability INLINEFORM1 of seeing a topic INLINEFORM2 in our current speech INLINEFORM3 of class INLINEFORM4 .",
"The next step is to find our candidate words. All words which have been seen in the training data following the previous 5-gram are our candidates. For each candidate we calculate the probability of the language model INLINEFORM0 and the probability of the topic model INLINEFORM1 .",
" INLINEFORM0 tells how likely this word is to occur after the previous 5 ones. This value can be directly obtained by the language model of the specified class. INLINEFORM1 tells how likely the word w is to occur in a speech which covers the current topics INLINEFORM2 . The following equation shows the definition of INLINEFORM3 where INLINEFORM4 denotes our dataset and INLINEFORM5 is the subset containing only speeches of class INLINEFORM6 . INLINEFORM7 ",
"The factor INLINEFORM0 prevents divisions by zero is set to a very small value ( INLINEFORM1 ). The probabilities for all candidate words are normalized so that they sum up to 1.",
"With the probabilities from the language model and the topic model we can now calculate the probability of predicting a certain word. This is done by combining those two probabilities. The weighting factor INLINEFORM0 balances the impact of the two probabilities. Furthermore, we want to make sure that a phrase is not repeated again and again. Thus, we check how often the phrase consisting of the previous five words and the current candidate word has already occurred in the generated speech and divide the combined probability by this value squared plus 1. So if this phrase has not been generated yet the denominator of this fraction is 1 and the original probability remains unchanged. The following equation shows how to calculate for a word INLINEFORM1 the probability of being predicted as next word of the incomplete speech INLINEFORM2 : INLINEFORM3 ",
"From the distribution given by the normalized probabilities of all candidate words we pick then one of the words at random. Then the whole procedure starts again with assessing the current topics. This is repeated until the end-of-speech token is generated or a certain word limit is reached.",
"Instead of using the probability distribution of the candidates we could have also just picked the word with the highest probability. But then the method would be deterministic. Using the distribution to pick a word at random enables the generator to produce every time a different speech."
],
[
"In this section we present some alternative approaches which were pursued in the course of this project. These methods have not shown sufficiently good results and were therefore not further pursued."
],
[
"Instead of using n-grams we also considered using Recurrent Neural Networks (RNN) as language models. Our approach was heavily based on the online tutorial from Denny Britz UID42 . The RNN takes as input a sequence of words and outputs the next word. We limited the vocabulary to the 6000 most frequent words. Words were represented by one-hot-encoded feature vectors. The RNN had 50 hidden layers and used tanh as activation function. For assessing the error we used cross-entropy loss function. Furthermore we used Stochastic Gradient Descent (SGD) to minimize the loss and Backpropagation Through Time (BPTT) to calculate the gradients.",
"After training the network for 100 time epochs ( INLINEFORM0 14 h) the results were still pretty bad. Most of the generated sentences were grammatically incorrect. There are many options to improve the performance of RNNs but due to the good performance shown by n-grams, the time-consuming training, and the limited time for this project we have decided to not further purse this approach."
],
[
"As alternative to the J&K POS tag filter we used LDA as topic model. In particular we used the approach from Lau et al. UID18 . That is, we removed all occurrences of stop words, stemmed the remaining words, replaced the 1000 most-frequent bigrams with single tokens, and deleted the 200 most frequent terms from the vocabulary before applying ordinary LDA. Since our dataset contains speech segments from 53 different debates we set the number of underlying topics to 53. Some of the results represented quite meaningful topics. However, the majority did not reveal any useful information. Table TABREF9 shows some examples of good and bad results from LDA. It can be seen that the extracted terms of the bad examples are very generic and do not necessarily indicate a meaningful topic."
],
[
"For the speech generation task we have also pursued a sentence-based approach in the beginning of this project. The idea of the sentence-based approach is to take whole sentences from the training data and concatenate them in a meaningful way. We start by picking a speech of the desired class at random and take the first sentence of it. This will be the start sentence of our speech. Then we pick 20 speeches at random from the same class. We compare our first sentence with each sentence in those 20 speeches by calculating a similarity measure. The next sentence is than determined by the successor of the sentence with the highest similarity. In case no sentence shows sufficient similarity (similarity score below threshold) we just take the successor of our last sentence. In the next step we pick again 20 speeches at random and compare each sentence with the last one in order to find the most similar sentence. This will be repeated until we come across the speech-termination token or the generated speech reaches a certain length.",
"The crucial part of this method is the measure of similarity between two sentences. Our similarity is composed of structural and textual similarity. Both are normalized to a range between 0 and 1 and weighted through a factor INLINEFORM0 . We compute the similarity between two sentences INLINEFORM1 and INLINEFORM2 as follows: INLINEFORM3 ",
"For the structural similarity we compare the POS tags of both sentences and determine the longest sequence of congruent POS tags. The length of this sequence, normalized by the length of the shorter sentence, gives us the structural similarity. The structural similarity measure aims to support smooth sentence transitions. That is, if we find sentences which have a very similar sentence structure, it is very likely that they connect well to either of their following sentences. The textual similarity is defined by the number of trigrams that occur in both sentences, normalized by the length of the longer sentence. This similarity aims to find sentences which use the same words.",
"The obvious advantage of the sentence-based approach is that every sentence is grammatically correct since they originate directly from the training data. However, connecting sentences reasonable is a very challenging task. A further step to improve this approach would be to extend the similarity measure by a topical similarity and a semantic similarity. The topical similarity should measure the topical correspondence of the originating speeches, while the semantic similarity should help to find sentences which express the same meaning although using different words. However, the results from the word-based approach were more promising and therefore we have decided to discard the sentence-based approach."
],
[
"This section describes the experimental setup we used to evaluate our system. Furthermore, we present here two different approach of evaluating the quality of generated speeches."
],
[
"In order to test our implemented methods we performed an experimental evaluation. In this experiment we generated ten speeches, five for class DN and five for class RY. We set the weighting factor INLINEFORM0 to 0.5 which means the topic and the language model have both equal impact on predicting the next word. The quality of the generated speeches was then evaluated. We used two different evaluation methods: a manual evaluation and an automatic evaluation. Both methods will be described in more detail in the following paragraphs of this section. The generated speeches can be found in the appendix of this report."
],
[
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores."
],
[
"The automatic evaluation aims to evaluate both the grammatical correctness and the consistency of the speech in terms of its content. For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually.",
"In order to evaluate the content of the generated speech we determine the mixture of topics covered by the speech and order them by their topic coverage. That gives us information about the primary topic and secondary topics. Then we do the same for each speech in our dataset which is of the same class and compare the topic order with the one of the generated speech. We sum up the topic coverage values of each topic that occurs in both speeches at the same position. The highest achieved value is used as evaluation score. That is, finding a speech which covers the same topics with the same order of significance give us a score of 1."
],
[
"In this section we present the results from our experiments. Table TABREF15 shows the results from the manual evaluation. Note that each criterion scores between 0 and 3 which leads to a maximum total score of 12. The achieved total score range from 5 to 10 with an average of 8.1. In particular, the grammatical correctness and the sentence transitions were very good. Each of them scored on average 2.3 out of 3. The speech content yielded the lowest scores. This indicates that the topic model may need some improvement.",
"Table TABREF16 shows the results from the automatic evaluation. The automatic evaluation confirms pretty much the results from the manual evaluation. Most of the speeches which achieved a high score in the manual evaluation scored also high in the automatic evaluation. Furthermore, it also confirms that the overall the grammatical correctness of the speeches is very good while the content is a bit behind."
],
[
"In this report we have presented a novel approach of training a system on speech transcripts in order to generate new speeches. We have shown that n-grams and J&K POS tag filter are very effective as language and topic model for this task. We have shown how to combine these models to a system that produces good results. Furthermore, we have presented different methods to evaluate the quality of generated texts. In an experimental evaluation our system performed very well. In particular, the grammatical correctness and the sentence transitions of most speeches were very good. However, there are no comparable systems which would allow a direct comparison.",
"Despite the good results it is very unlikely that these methods will be actually used to generate speeches for politicians. However, the approach applies to the generation of all kind of texts given a suitable dataset. With some modifications it would be possible to use the system to summarize texts about the same topic from different source, for example when several newspapers report about the same event. Terms that occur in the report of every newspaper would get a high probability to be generated.",
"All of our source code is available on GitHub UID43 . We explicitly encourage others to try using, modifying and extending it. Feedback and ideas for improvement are most welcome."
],
[
"__START__ mr. speaker , i thank my colleague on the committee on rules . i rise in full support of this resolution and urge my colleagues to support this bill and urge my colleagues to support the bill . mr. speaker , supporting this rule and supporting this bill is good for small business . it is great for american small business , for main street , for jobs creation . we have an economy that has created nearly 2 million jobs in the past couple of months : apparel , textiles , transportation and equipment , electronic components and equipment , chemicals , industrial and commercial equipment and computers , instruments , photographic equipment , metals , food , wood and wood products . virtually every state in the union can claim at least one of these industrial sectors . in fact , one young girl , lucy , wanted to make sure that the economy keeps growing . that should not be done on borrowed money , on borrowed time . it should be done with a growing economy . it is under this restraint , with this discipline , that this budget comes before the house , and we should honor that work . __END__",
"__START__ mr. speaker , for years , honest but unfortunate consumers have had the ability to plead their case to come under bankruptcy protection and have their reasonable and valid debts discharged . the way the system is supposed to work , the bankruptcy court evaluates various factors including income , assets and debt to determine what debts can be paid and how consumers can get back on their feet . stand up for growth and opportunity . pass this legislation . __END__",
"__START__ mr. speaker , i yield back the balance of my time , and i want to commend , finally , the chairman of the committee , there will be vigorous oversight of the department of justice on a regular and on a timely basis , and the answer to how many civil liberties violations have been proven is none . repeatedly they have said there are no civil liberties violations that the inspector general has been able to uncover . further , i resisted a premature repeal or extension of the sunset prior to this congress because i felt it was important that the oversight be done for as long a time as possible so that the congress will be able to vote and a decision can be made today . mr. speaker , i reserve the balance of my time , and i want to thank the gentleman from texas for helping put together this package and for all the work that he and his staff put into this bill . this was an important thing for us to go through , and i think that we produced a good bill at the end of that dark ally over there . and the gentleman says : because there is more light over here . sometimes i think the way we look at these medical issues , instead of looking at the cost savings involved with prevention , we simply are able to look at how much it saves in the long run . again , i look at such things as if we are able to have more people go to federally approved health centers , community health centers in their community instead of showing up in the emergency departments , yes , it may cost money ; the president called for a couple billion dollars to put into those community health centers . but if it is going to relate to state law , that is the discussion that needs to take place . my state may have lucked out because a clerical error in this particular case did not refer specifically to the utah state law ; and , therefore , it may not be applicable . but the fear factor is still there , that in the future he continue that policy . __END__",
"__START__ mr. speaker , for years , honest but unfortunate consumers have had the ability to plead their case to come under bankruptcy protection and have their reasonable and valid debts discharged . the way the system is supposed to work , the bankruptcy court evaluates various factors including income , assets and debt to determine what debts can be paid and how consumers can get back on their feet , they need to have money to pay for child care . they need transportation . it allows them to get reestablished , and we think this is certainly very helpful . and then it also allows faith-based organizations to offer job training service . we think this is critical and has great potential . at the present time , brazil mandates 23 percent of their fuel supply be from ethanol . we certainly could hit 7 or 8 percent in this country . mr. speaker , this is a very modest proposal . i think it is important that this resolution be considered quickly , so that members may be appointed to the task force and can begin their work and produce a report by june 2006 . __END__",
"__START__ mr. speaker , i yield myself the time remaining . mr. speaker , i rise today in support of the rule on h.r. 418 . our nation's immigration policy has been of top concern in recent years , and for good reason . with between eight and twelve million illegal aliens in the united states , the late ronald wilson reagan , enshrined these three words as part of american policy : trust but verify . the legislation on the floor today deals with verification . i say as one who opposed a trading agreement with china that this legislation brings the monitoring capacity necessary to understand what happens in international trade . simply stated , madam speaker , if you want to cut those things , you can put it in your program . if you do not like that , you better go out and lobby against what they are doing in in vitro fertilization clinics throughout the u.s. , about 2 percent are discarded annually – that is about 8 , 000 – 11 , 000 embryos that could be slated for research . allowing the option of donating these excess embryos to research is similar to donating organs for organ transplantation in order to save or improve the quality of another person's life . the bottom line is that class-action reform is badly needed . currently , crafty lawyers are able to game the system by filing large , nationwide class-action suits in certain preferred state courts such as madison county , illinois , where judges are quick to certify classes and quick to approve settlements that give the lawyers millions of dollars in fees . this problem will be addressed by providing greater scrutiny over settlements that involve coupons or very small cash amounts . this legislation also ensures that deserving plaintiffs are able to make full use of the class action system . it allows easier removal of class action cases to federal courts . this is important because class actions tend to affect numerous americans and often involve millions of dollars . federal court is the right place for such large lawsuits . moving more class actions to federal courts also prevents one of the worst problems in class actions today , forum shopping . mr. speaker , while many concessions were made on both sides , this is still a very worthwhile bill that contains many good reforms , and i fully support it and look forward to its enactment into law and also encourage my colleagues to support this bill . __END__",
"__START__ mr. speaker , i yield 2 minutes to the gentleman from illinois ( mr. hyde ) , my dear friend , with whom i agree on some things but not on this issue , although the majority of the bill i know is consistent with the gentleman from california's ( mr. lantos ) and the gentleman from virginia with their very wise substitute give a chance to help the consumer and declare energy independence . i also want to point out that this bill is far from perfect . in many respects it is troubling . this congress has a proven history of lax oversight of the administration , and there is a difference . __END__",
"__START__ mr. speaker , the gentleman is absolutely right . the amazing thing to me when i was listening to the republicans in the last hour is when they were trying to make the analogy to their households and talking about their kids . and one of the most significant broken promises is in the area of making higher educational opportunities more available to minority and low-income students . i am so proud of the fact that every iraqi school child on the opening day of school had received a book bag with the seal of the u.s. , pencils , pads , all kinds of things , free of charge . i had just come back from iraq , and they had been there on the first day of this new congress , the republican majority is publicly demonstrating what has been evident for some time , and that is its arrogance , its pettiness , its shortsighted focus on their political life rather than to decide how we are each of us fit to govern . here is the thing . we have this rules package before us . they did some flash last night so that the press is saying , oh , they blinked . they did blink on a couple of different scores , but the fundamental challenge to the ethical standard of the house being enforced is still in this rules package are destructive , and they are unethical . mr. speaker , i reserve the balance of my time . mr. chairman , this bill frightens me . it scares me . i would hope that we could deal with this in as bipartisan a fashion as possible so that when we send it to the other body that we may have more success there , more success out of conference , and send a bill to the president that will facilitate both energy independence and the effective and efficient discovery , development , and delivery at retail to the consumer of energy options . i do not know if politics was part of that . maybe someone can answer that question . but therein lies the problem , that from time to time need to be recognized . that is what this is about . this bill is opposed by every consumer group , by all the bankruptcy judges , the trustees , law professors , by all of organized labor , by the military groups , by the civil rights organizations , and by every major group concerned about seniors , women , and children are dead ; the fact that hundreds of thousands more have become evacuees in the richest country in the world . our children will then be forced to live with the consequences of an undereducated workforce , a weak economy , and a society where good health and social justice are only afforded to the most privileged . mr. speaker , i reserve the balance of my time to read the resolution that i believe ought to be before us , mr. speaker . the president has a credibility gap when it comes to iraq . we have been misled too often , and it is time to go back and revisit those. ” i would remind the house that it was widely pointed out when that legislation was before us what a remarkable example of bipartisanship and legislative cooperation it was . of course , the defense appropriations bill is of great interest to our members . __END__",
"__START__ mr. speaker , i rise today in opposition to the labor , health and human services and education appropriations conference report before us . one month ago , the house of representatives voted this bill down because it failed to address the priorities of the american people : good jobs , safe communities , quality education , and access to health care . with over 7 million americans out of work . yet the bill cuts $ 437 million out of training and employment services . that is the lowest level of adult training grants in a decade . this bill also cuts the community college initiative , the president's initiative for community colleges , an effort to train workers for high-skill , high-paying jobs . it cuts that effort by INLINEFORM0 125 million from funds provided last year , denying the help that the president was talking about giving to 100 , 000 americans of a continued education to help them get a new job . this bill also cuts job search assistance through the employment service by 11 percent and cut state unemployment insurance and employment service offices are cut $ 245 million eliminating help for 1.9 million people . this bill is no better for those attending college full-time . despite the fact that college costs have increased by $ 3 , 095 , 34 percent , since 2001 . consumers are expected to pay 52 percent more for natural gas , 30 percent more for home heating oil , you are expected to pay three times as much as you did 4 years ago , the first year president bush took office . winter is around the corner , and so are skyrocketing increases in home heating costs . families who heat with natural gas could see their fuel costs increase more than 70 percent in some parts of the country . this honorable response to the tragedy of september 11 puts to shame what has been proposed today in the wake of hurricane katrina , that the workers in the afflicted area who are trying to put that area back together are not even going to be allowed to get a decent prevailing wage that they would otherwise be guaranteed under davis-bacon . and yet while it is chiseling on the wages of those workers , it is bad for those countries that desperately need a middle class , it is bad for those workers , it is saying to the persons who make over $ 400 , 000 a year , and we roll back cuts on the top 2 percent of americans , and by doing so , we have saved almost $ 47 billion that we have used to invest in the human assets of this country , the american people . __END__",
"__START__ mr. speaker , i yield 2 minutes to the gentlewoman from california ( mrs. capps ) pointed out , after the knowledge was available and was continued to pursue the use of this compound as an additive to the fuels of our automobiles . those communities now are stuck with the costs of either cleaning up that drinking water supply , finding an alternative source and dealing with it , and they must do so . to suggest now that we are going to be giving to seniors , to keep them in nursing homes with alzheimer's and with parkinson's disease , just keep cutting it . give more tax breaks to the richest one-tenth of 1 percent . they call it the death tax . i think that is a flaw in the bill . that leads to the second point . the bill specifically mentions weight gain and obesity . well , i think most of us have a sense of what obesity is . weight gain is a whole different issue , and weight gain may occur not from obesity , not from getting fat , not from putting on too many calories ; weight gain can occur for a variety of medical reasons related to a variety of different causes . for example , i mean probably all of us have had a mom or a grandmom or an uncle to whom we say , hey , i noticed your legs are swelling again . fluid retention . fluid retention . now , that can be from a variety of causes . that is not from increased caloric intake . that could have been , for example , from a food additive , maybe a cause that was not known to the public of some kind of additive in something that they had eaten or drank . it may have been something that interfered with one of their medications and led to fluid retention . i am just making up hypotheticals here . or , the hypothetical , perhaps you have something that is actually a heart poison from some food additive that has no calories in it , zero calories in it , but over a period of time does bad things to the ability of under this bill , which i believe is absolutely essential for our health system . at a time when our country has been severely impacted by natural disasters , it is extremely urgent that congress maintain csbg funding at its current level so that the delivery of much needed services to low-income people is not disrupted . we have a responsibility to protect our environment – as well as the diverse forms of life that share it . the bipartisan substitute will help us achieve the goal . i urge my colleagues on both sides of the aisle to protect the benefits that our constituents earned and deserve and to prevent the increase in the number of frivolous filings . __END__",
"__START__ mr. speaker , i yield 2 minutes to the gentlewoman from texas ( ms. jackson-lee ) , the gentleman from new jersey ( mr. andrews ) , for the leadership he has shown on this issue . here we are again , mr. speaker . year after year after year trying to get into federal court . what it also does is minimizes the opportunity of those who can secure their local lawyer to get them into a state court and burdens them with the responsibility of finding some high-priced counsel that they can not afford to buy food . seven million more people , an increase of 12 percent , and what does this combination of reconciliation in order to give tax cuts to people making more than $ 500 , 000 . footnote right there . what about the committees of jurisdiction already in existence in congress . and what about creating a circus atmosphere that drains resources from this congress do you not understand . shamefully , the house will not have an opportunity to vote on the hastings-menendez independent katrina commission legislation , because republicans have blocked us from offering it . just as they always do , republicans block what they can not defeat . despite what republicans will suggest , today's debate is not about politics . it is about the need for truth to assure the american people that we will not allow their retirement checks to be slashed to pay for private accounts . it is time for congress , as part of the national marine sanctuary program , but there have been no hearings on this bill or any other bill to protect our oceans . let us reject this unnecessary task force and get down to some real work . mr. speaker , i reserve the balance of my time to the gentleman from maryland ( mr. cardin ) , who is the ranking member , was part and parcel of that , as well as the gentleman from virginia ( chairman tom davis ) is trying to do to improve the integrity of driver's licenses , but i find it interesting that the state of utah , while the gentleman from utah ( mr. bishop ) is arguing that they are not getting enough money for education , the state of utah legislature passed measures saying they do not want any kind of investigation of themselves . the republicans control the white house , they control the senate , and they control the house of representatives . mr. speaker , is it possible for us to let this young woman take her leave in peace . __END__"
]
],
"section_name": [
"Introduction",
"Related work",
"Data set",
"Language Model",
"Topic Model",
"Speech Generation",
"Alternative Methods",
"Recurrent Neural Networks",
"Latent Dirichlet Allocation",
"Sentence-based approach",
"Experiments",
"Setup",
"Manual Evaluation",
"Automatic Evaluation",
"Results",
"Conclusion",
"Generated speeches from experiment"
]
} | {
"answers": [
{
"annotation_id": [
"7c486b4dfbafa9d4aa3077a0c12279b547e54299",
"c9d85431b4105999d7a2f194acb5e92a2963d2d8",
"ed3c99d4882a1ff3a81cd05950800c39ba0ee232"
],
"answer": [
{
"evidence": [
"The automatic evaluation aims to evaluate both the grammatical correctness and the consistency of the speech in terms of its content. For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually."
],
"extractive_spans": [],
"free_form_answer": "Identify POS tags for each sentence, check whether one sentence from the corpus has the same sequence of POS tags. If the same POS sequence has been found, that points in a certain direction, if not found, the evaluation for that sentence is performed manually.",
"highlighted_evidence": [
" For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The automatic evaluation aims to evaluate both the grammatical correctness and the consistency of the speech in terms of its content. For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually."
],
"extractive_spans": [],
"free_form_answer": "They measure grammatical correctness by checking whether a sentence has the same sequence of POS tags.",
"highlighted_evidence": [
"For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The automatic evaluation aims to evaluate both the grammatical correctness and the consistency of the speech in terms of its content. For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually."
],
"extractive_spans": [
"identify for each sentence of the speech its POS tags",
"Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct.",
"points in a certain direction",
"evaluate those sentences manually"
],
"free_form_answer": "",
"highlighted_evidence": [
"For evaluating the grammatical correctness we identify for each sentence of the speech its POS tags. Then we check all sentences of the entire corpus whether one has the same sequence of POS tags. Having a sentence with the same POS tag structure does not necessarily mean that the grammar is correct. Neither does the lack of finding a matching sentence imply the existence of an error. But it points in a certain direction. Furthermore, we let the system output the sentence for which it could not find a matching sentence so that we can evaluate those sentences manually."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a9ec0dc5fcd4ca84e790bbc61ebf307a95e4637c",
"d3e2b862c49a5585d8253644af015947de64dfc0",
"d48170f5d1ce5f4da96b533876dcd216652a5793"
],
"answer": [
{
"evidence": [
"In this section we present the results from our experiments. Table TABREF15 shows the results from the manual evaluation. Note that each criterion scores between 0 and 3 which leads to a maximum total score of 12. The achieved total score range from 5 to 10 with an average of 8.1. In particular, the grammatical correctness and the sentence transitions were very good. Each of them scored on average 2.3 out of 3. The speech content yielded the lowest scores. This indicates that the topic model may need some improvement."
],
"extractive_spans": [],
"free_form_answer": "Manually, using the criterion score between 0 and 3.",
"highlighted_evidence": [
" Table TABREF15 shows the results from the manual evaluation. Note that each criterion scores between 0 and 3 which leads to a maximum total score of 12. The achieved total score range from 5 to 10 with an average of 8.1. In particular, the grammatical correctness and the sentence transitions were very good. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"extractive_spans": [],
"free_form_answer": "The quality of sentence transition was measured manually by checking how well do consecutive sentences connect",
"highlighted_evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. ",
"Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"extractive_spans": [],
"free_form_answer": "Manually evaluated on scale 0 to 3.",
"highlighted_evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2fcd63e09706b25fc71f79f8d0fab69ca2593b0b",
"98b718bd6d270c88fa84b0ddd7f816e3daa33b41",
"cb866136fbd24237b00f14836995cdc188a7f50d"
],
"answer": [
{
"evidence": [
"The main data source for this project is the Convote data set UID41 . It contains a total of 3857 speech segments from 53 US Congressional floor debates from the year 2005. Each speech segment can be referred to its debate, its speaker, the speaker’s party and the speaker’s vote which serves as the ground-truth label for the speech. The dataset was originally created in the course of the project Get out the vote UID34 . The authors used the dataset to train a classifier in order to determine whether a speech represents support of or opposition to proposed legislation. They did not only analyze the speeches individually but also investigated agreements and disagreements with the opinions of other speakers. That is, they identified references in the speech segments, determined the targets of those references, and decided whether a reference represents an instance of agreement or disagreement. However, we focus only on the individual speech segments and disregard references."
],
"extractive_spans": [
"3857 speech segments"
],
"free_form_answer": "",
"highlighted_evidence": [
"The main data source for this project is the Convote data set UID41 . It contains a total of 3857 speech segments from 53 US Congressional floor debates from the year 2005. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For our work we have removed single-sentence speeches, HTML-tags and corrected punctuation marks. In order to enable simple sentence splitting we replaced all sentence delimiters by a stop-token. Furthermore, we inserted special tokens which indicate the start and the end of a speech. Then we divided all the speeches into the four classes given by the combination of possible political parties and speech opinions. Table TABREF1 shows the four speech classes and table TABREF2 gives a quantitative overview of the corpus’ content. It can be seen that the classes RY and DN contain the majority of the speeches.",
"FLOAT SELECTED: Table 2: Corpus overview"
],
"extractive_spans": [],
"free_form_answer": "2771 speeches containing 50871 sentences",
"highlighted_evidence": [
"Table TABREF1 shows the four speech classes and table TABREF2 gives a quantitative overview of the corpus’ content. ",
"FLOAT SELECTED: Table 2: Corpus overview"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The main data source for this project is the Convote data set UID41 . It contains a total of 3857 speech segments from 53 US Congressional floor debates from the year 2005. Each speech segment can be referred to its debate, its speaker, the speaker’s party and the speaker’s vote which serves as the ground-truth label for the speech. The dataset was originally created in the course of the project Get out the vote UID34 . The authors used the dataset to train a classifier in order to determine whether a speech represents support of or opposition to proposed legislation. They did not only analyze the speeches individually but also investigated agreements and disagreements with the opinions of other speakers. That is, they identified references in the speech segments, determined the targets of those references, and decided whether a reference represents an instance of agreement or disagreement. However, we focus only on the individual speech segments and disregard references."
],
"extractive_spans": [
"3857 speech segments from 53 US Congressional floor debates"
],
"free_form_answer": "",
"highlighted_evidence": [
"It contains a total of 3857 speech segments from 53 US Congressional floor debates from the year 2005."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3c04dabae61540db71bf23e32605cb29c543d4f3",
"7663e07d24313169cb0da55966e2648392a35497",
"c24acdfe866259c3f40a6ca6f3ec8665475a10c3"
],
"answer": [
{
"evidence": [
"In this section we present the results from our experiments. Table TABREF15 shows the results from the manual evaluation. Note that each criterion scores between 0 and 3 which leads to a maximum total score of 12. The achieved total score range from 5 to 10 with an average of 8.1. In particular, the grammatical correctness and the sentence transitions were very good. Each of them scored on average 2.3 out of 3. The speech content yielded the lowest scores. This indicates that the topic model may need some improvement.",
"FLOAT SELECTED: Table 6: Results from manual evaluation"
],
"extractive_spans": [],
"free_form_answer": "Manual evaluation of four evaluation criteria: grammatical correctness, sentence transitions, speech structure, and speech content. ",
"highlighted_evidence": [
" Table TABREF15 shows the results from the manual evaluation. Note that each criterion scores between 0 and 3 which leads to a maximum total score of 12. The achieved total score range from 5 to 10 with an average of 8.1. In particular, the grammatical correctness and the sentence transitions were very good. Each of them scored on average 2.3 out of 3. The speech content yielded the lowest scores. This indicates that the topic model may need some improvement.",
"FLOAT SELECTED: Table 6: Results from manual evaluation"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"extractive_spans": [
"generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores.",
"FLOAT SELECTED: Table 5: Evaluation criteria"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 5: Evaluation criteria",
"Manual Evaluation",
"For the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores."
],
"extractive_spans": [],
"free_form_answer": "The manual evaluation contains 4 criteria to check grammatical correctness, sentence transitions, speech structure, and speech content of the generated speech and assigning a score between 0 to 3 for each criterion",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Evaluation criteria",
"Manual Evaluation\nFor the manual evaluation we have defined a list of evaluation criteria. That is, a generated speech is evaluated by assessing each of the criterion and assigning a score between 0 and 3 to it. Table TABREF13 lists all evaluation criteria and describes the meaning of the different scores."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"how did they measure grammatical correctness?",
"how was quality of sentence transition measured?",
"what is the size of the dataset?",
"what manual evaluation is presented?"
],
"question_id": [
"e6204daf4efeb752fdbd5c26e179efcb8ddd2807",
"95c3907c5e8f57f239f3b031b1e41f19ff77924a",
"b900122c7d6c2d6161bfca8a95eae11952d1cb58",
"5206b6f40a91fc16179829041c1139a6c6d91ce7"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Speech classes",
"Table 2: Corpus overview",
"Table 3: Top topics per class",
"Table 4: Results from LDA",
"Table 5: Evaluation criteria",
"Table 6: Results from manual evaluation",
"Table 7: Results from automatic evaluation"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"9-Table7-1.png"
]
} | [
"how did they measure grammatical correctness?",
"how was quality of sentence transition measured?",
"what is the size of the dataset?",
"what manual evaluation is presented?"
] | [
[
"1601.03313-Automatic Evaluation-0"
],
[
"1601.03313-Manual Evaluation-0",
"1601.03313-Results-0",
"1601.03313-8-Table5-1.png"
],
[
"1601.03313-Data set-1",
"1601.03313-Data set-0",
"1601.03313-4-Table2-1.png"
],
[
"1601.03313-Manual Evaluation-0",
"1601.03313-9-Table6-1.png",
"1601.03313-Results-0",
"1601.03313-8-Table5-1.png"
]
] | [
"They measure grammatical correctness by checking whether a sentence has the same sequence of POS tags.",
"Manually evaluated on scale 0 to 3.",
"2771 speeches containing 50871 sentences",
"The manual evaluation contains 4 criteria to check grammatical correctness, sentence transitions, speech structure, and speech content of the generated speech and assigning a score between 0 to 3 for each criterion"
] | 102 |
1707.06519 | Language Transfer of Audio Word2Vec: Learning Audio Segment Representations without Target Language Data | Audio Word2Vec offers vector representations of fixed dimensionality for variable-length audio segments using Sequence-to-sequence Autoencoder (SA). These vector representations are shown to describe the sequential phonetic structures of the audio segments to a good degree, with real world applications such as query-by-example Spoken Term Detection (STD). This paper examines the capability of language transfer of Audio Word2Vec. We train SA from one language (source language) and use it to extract the vector representation of the audio segments of another language (target language). We found that SA can still catch phonetic structure from the audio segments of the target language if the source and target languages are similar. In query-by-example STD, we obtain the vector representations from the SA learned from a large amount of source language data, and found them surpass the representations from naive encoder and SA directly learned from a small amount of target language data. The result shows that it is possible to learn Audio Word2Vec model from high-resource languages and use it on low-resource languages. This further expands the usability of Audio Word2Vec. | {
"paragraphs": [
[
"Embedding audio word segments into fixed-length vectors has many useful applications in natural language processing such as speaker identification BIBREF0 , audio emotion classification BIBREF1 , and spoken term detection (STD) BIBREF2 , BIBREF3 , BIBREF4 . In these applications, audio segments are usually represented as feature vectors to be applied to a standard classifiers which determines the speaker's identification, emotion or whether the input queries are included. By representing the audio segments in fixed-length vectors instead of using the original segments in variable lengths, we can reduce the effort for indexing, accelerate the speed of calculation, and improve the efficiency for the retrieval task BIBREF5 , BIBREF6 , BIBREF7 .",
"Recently, deep learning has been used for encoding acoustic information into vectors BIBREF8 , BIBREF9 , BIBREF10 . Existing works have shown that it is possible to transform audio word segments into fixed dimensional vectors. The transformation successfully produces vector space where word audio segments with similar phonetic structures are closely located. In BIBREF10 , the authors train a Siamese convolutional neural network with side information to obtain embeddings that separate same-word pairs and different-word pairs. Human annotated data is required under this supervised learning scenario. Besides supervised approaches BIBREF11 , BIBREF10 , BIBREF12 , BIBREF13 , unsupervised approaches are also proposed to reduce the annotation effort BIBREF14 . As for the unsupervised learning for the audio embedding, LSTM-based sequence-to-sequence autoencoder demonstrates a promising result BIBREF14 . The model is trained to minimize the reconstruction error of the input audio sequence and then provides the embedding, namely Audio Word2Vec, from its bottleneck layer. This is done without any annotation effort.",
"Although deep learning approaches have produced satisfactory result, the data-hungry nature of the deep model makes it hard to produce the same performance with low-resource data. Both supervised and unsupervised approaches assume that a large amount of audio data of the target language is available. A question arises whether it is possible to transfer the Audio Word2Vec model learned from a high-resource language into a model targeted at a low-resource language. While this problem is not yet to be fully examined in Audio Word2Vec, works in neural machine translation (NMT) successfully transfer the model learned on high-resource languages to low-resource languages. In BIBREF15 , BIBREF16 , the authors first train a source model with high-resource language pair. The source model is used to initialize the target model which is then trained by low-resource language pairs.",
"For audio, all languages are uttered by human beings with a similar vocal tract structure, and therefore share some common acoustic patterns. This fact implies that knowledge obtained from one spoken language can be transferred onto other languages. This paper verifies that sequence-to-sequence autoencoder is not only able to transform audio word segments into fixed-length vectors, the model is also transferable to the languages it has never heard before. We also demonstrate its promising applications with a query-by-example spoken term detection (STD) experiment. In the query-by-example STD experiment, even without tunning with partial low-resource language segments, the autoencoder can still produce high-quality vectors."
],
[
"The goal for Audio Word2Vec model is to identify the phonetic patterns in acoustic feature sequences such as MFCCs. Given a sequence INLINEFORM0 where INLINEFORM1 is the acoustic feature at time INLINEFORM2 , and INLINEFORM3 is the length, Audio Word2Vec transforms the features into fixed-length vector INLINEFORM4 with dimension INLINEFORM5 based on the phonetic structure."
],
[
"Recurrent Neural Networks (RNNs) has shown great success in many NLP tasks with its capability of capturing sequential information. The hidden neurons form a directed cycle and perform the same task for every element in a sequence. Given a sequence INLINEFORM0 , RNN updates its hidden state INLINEFORM1 according to the current input INLINEFORM2 and the previous INLINEFORM3 . The hidden state INLINEFORM4 acts as an internal memory at time INLINEFORM5 that enables the network to capture dynamic temporal information, and also allows the network to process sequences of variable length. However, in practice, RNN does not seem to learn long-term dependencies due to the vanishing gradient problem BIBREF17 , BIBREF18 . To conquer such difficulties, LSTM BIBREF19 and GRU BIBREF20 , BIBREF21 were proposed. While LSTM achieves many amazing results BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF20 , BIBREF27 , the relative new GRU performs just as well with less parameters and training effort BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 .",
"RNN Encoder-Decoder BIBREF26 , BIBREF32 consists of an Encoder RNN and a Decoder RNN.The Encoder RNN reads the input sequence INLINEFORM0 sequentially and the hidden state INLINEFORM1 of the RNN is updated accordingly. After the last symbol INLINEFORM2 is processed, the hidden state INLINEFORM3 is interpreted as the learned representation of the whole input sequence. Then, by taking INLINEFORM4 as input, the Decoder RNN generates the output sequence INLINEFORM5 sequentially, where INLINEFORM6 and INLINEFORM7 can be different, or the length of INLINEFORM8 and INLINEFORM9 can be different. Such RNN Encoder-Decoder framework is able to handle variable-length input. Although there may exist a considerable time lag between the input symbols and their corresponding output symbols, LSTM and GRU are able to handle such situation well due to their powerfulness in modeling long-term dependencies."
],
[
"Figure FIGREF3 depicts the structure of Sequence-to-sequence Autoencoder ( INLINEFORM0 ), which integrates the RNN Encoder-Decoder framework with Autoencoder for unsupervised learning of audio segment representations. INLINEFORM1 consists of an Encoder RNN (the left part of Figure FIGREF3 ) and a RNN Decoder (the right part). Given an audio segment represented as an acoustic feature sequence INLINEFORM2 of any length INLINEFORM3 , the RNN Encoder reads each acoustic feature INLINEFORM4 sequentially and the hidden state INLINEFORM5 is updated accordingly. After the last acoustic feature INLINEFORM6 has been read and processed, the hidden state INLINEFORM7 of the Encoder RNN is viewed as the learned representation INLINEFORM8 of the input sequence (the purple block in Figure FIGREF3 ). The Decoder RNN takes INLINEFORM9 as the initial state of the RNN cell, and generates a output INLINEFORM10 . Instead of taking INLINEFORM11 as the input of the next time step, a zero vector is fed in as input to generate INLINEFORM12 , and so on. This structure is called the historyless decoder. Based on the principles of Autoencoder BIBREF33 , BIBREF34 , the target of the output sequence INLINEFORM13 is the input sequence INLINEFORM14 . In other words, the RNN Encoder and Decoder are jointly trained by minimizing the reconstruction error, measured by the general mean squared error INLINEFORM15 . Because the input sequence is taken as the learning target, the training process does not need any labeled data. The fixed-length vector representation INLINEFORM16 will be a meaningful representation for the input audio segment INLINEFORM17 because the whole input sequence INLINEFORM18 can be reconstructed from INLINEFORM19 by the RNN Decoder.",
"Using historyless decoder is critical here. We found out that the performance in the STD experiment was undermined despite the low reconstruction error. This shows that the vector representations learned from INLINEFORM0 do not include useful information. This might be caused by a strong decoder as the model focuses less on including more information into the vector representation. We eventually solved the problem by using a historyless decoder. Historyless decoder is a weakened decoder. The input of the decoder is removed, and this forces the model to rely more on the vector representation. The historyless decoder is also used in recent NLP works BIBREF35 , BIBREF36 , BIBREF37 ."
],
[
"In the study of linguistic, scholars define a set of universal phonetic rules which describe how sounds are commonly organized across different languages. Actually, in real life, we often find languages sharing similar phonemes especially the ones spoken in nearby regions. These facts implies that when switching target languages, we do not need to learn the new audio pattern from scratch due to the transferability in spoken languages. Language transfer has shown to be helpful in STD BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 . In this paper, we focus on studying the capability of transfer learning of Audio Word2Vec.",
"In the proposed approach, we first train an INLINEFORM0 using the high-resource source language, as shown in the upper part of Fig. FIGREF4 , and then the encoder is used to transform the audio segment of a low-resource target language. It is also possible to fine-tune the parameters of INLINEFORM1 with the target language. In the following experiments, we found that in some cases the STD performance of the encoder without fine-tuning with the low-resource target language can be as good as the one with fine-tuning."
],
[
"The audio segment representation INLINEFORM0 learned in the last section can be applied in many possible scenarios. Here in the preliminary tests we consider the unsupervised query-by-example STD, whose target is to locate the occurrence regions of the input spoken query term in a large spoken archive without speech recognition. Figure FIGREF5 shows how the representation INLINEFORM1 proposed here can be easily used in this task. This approach is inspired from the previous work BIBREF6 , but completely different in the ways to represent the audio segments. In the upper half of Figure FIGREF5 , the audio archive are segmented based on word boundaries into variable-length sequences, and then the system exploits the trained RNN encoder in Figure FIGREF3 to encode these audio segments into fixed-length vectors. All these are done off-line. In the lower left corner of Figure FIGREF5 , when a spoken query is entered, the input spoken query is similarly encoded by the same RNN encoder into a vector. The system then returns a list of audio segments in the archive ranked according to the cosine similarities evaluated between the vector representation of the query and those of all segments in the archive. Note that the computation requirements for the online process here are extremely low."
],
[
"Here we provide detail of our experiment including the dataset, model setup, and the baseline model."
],
[
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English). In this 960-hour English dataset, 2.2 million audio word segments were used for training while the other 250 thousand segments were used as the database to be retrieved in STD and 1 thousand segments as spoken queries. In Section 6.1, we further sampled 20 thousand segments from 250 thousand segments to form a small database to investigate the influence of database size. English served as the high-resource source language for model pre-training.",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). The four languages from GlobalPhone were used as the low-resource target languages. In Section 6.2, 20 thousand segments for each language were used to calculate the average cosine similarity. For the experiments of STD, the 20 thousands segments served as the database to be retrieved, and the other 1 thousand used for query and 4 thousand for fine-tuning.",
"MFCCs of 39-dim were used as the acoustic features. The length of the input sequence was limited to 50 frames. All datasets were segmented according to the word boundaries obtained by forced alignment with respect to the reference transcriptions. Although the oracle word boundaries were used here for the query-by-example STD in the preliminary tests, the comparison in the following experiment was fair since all approaches used the same segmentation. Mean average precision (MAP) was used as the evaluation measure for query-by-example STD."
],
[
"Both the proposed model ( INLINEFORM0 ) and baseline model ( INLINEFORM1 , described in the next subsection) were implemented with Tensorflow. The network structure and the hyper parameters were set as below:",
"Both RNN Encoder and Decoder consisted one hidden layer of GRU cells BIBREF20 , BIBREF21 . The number of units in the layer would be discussed in the experiment.",
"The networks were trained by SGD without momentum. The initial learning rate was 1 and decayed with a factor of 0.95 every 500 batches."
],
[
"We used naive encoder ( INLINEFORM0 ) as the baseline approach. In this encoder, the input acoustic feature sequence INLINEFORM1 = ( INLINEFORM2 ), where INLINEFORM3 was the 39-dimension MFCC feature vector at time t, were divided into INLINEFORM4 partitions with roughly equal length INLINEFORM5 . Then, we averaged each partition into a single 39-dimension vector, and finally got the vector representation through concatenating the INLINEFORM6 average vectors sequentially into a vector representation of dimensionality INLINEFORM7 . Although INLINEFORM8 is simple, similar approaches have been used in STD and achieved successful results BIBREF2 , BIBREF3 , BIBREF4 ."
],
[
"In this section, we first examine how changing the hidden layer size of the RNN Encoder/Decoder, the dimension of Audio Word2Vec, affects the MAP performance of query-by-example STD (Section 6.1). After obtaining the best hidden layer size, we analyze the transferability of the Audio Word2Vec by comparing the cosine similarity of the learned representations to phoneme sequence edit distance (Section 6.2) . Visualization of multiple word pairs in different target languages is also provided (Section 6.3). Last but not least, we performed the query-by-example STD on target languages (Section 6.4). These experiments together verify that INLINEFORM0 is capable of extracting common phonetic structure in human language and thus is transferable to various languages."
],
[
"Before evaluating the language transfer result, we first experimented on the primary INLINEFORM0 model in the source language (English). The results are shown in Fig. FIGREF12 . Here we compare the representations of INLINEFORM1 and INLINEFORM2 . Furthermore, we examined the influence of the dimension of Audio Word2Vector in terms of MAP. We also compared the MAP results on large testing database (250K segments) and small database (20K).",
"In Fig. FIGREF12 , we varied the dimension of Audio Word2Vector as 100, 200, 400, 600, 800 and 1000. To match up the dimensionality with INLINEFORM0 , we tested INLINEFORM1 with dimensionality 117, 234, 390, 585, 819, 1014 ( INLINEFORM2 ) and denoted them by INLINEFORM3 where INLINEFORM4 is the dimensionality. INLINEFORM5 get higher MAP values than INLINEFORM6 no matter the vector dimension and the size of database. The highest MAP score INLINEFORM7 can achieve is 0.881 ( INLINEFORM8 on small database), while the highest score of the INLINEFORM9 model is 0.490 ( INLINEFORM10 on small database). The size of database has large influence on the results. The MAP scores of the two models both drop in the large database. For example, INLINEFORM11 drops from 0.490 to 0.158, decaying by 68%, and the performance of INLINEFORM12 drops from 0.881 to 0.317, decaying by 64%. As shown in Fig. FIGREF12 , larger dimensionality does not imply better performance in query-by-example STD. The MAP scores gradually improve until reaching the dimensionality of 400 in INLINEFORM13 and 234 in INLINEFORM14 , and start to decrease as the dimension increases. In the rest of the experiments, we would use 400 GRU units in the INLINEFORM15 hidden layer, and set INLINEFORM16 ( INLINEFORM17 )."
],
[
"To evaluate the quality of language transfer, we trained the Audio Word2Vec model by INLINEFORM0 from the source language, English, and applied it on different target languages, French (FRE), German (GER), Czech (CZE), and Spanish (ESP). We computed the average cosine similarity of the vector representations for each pair of the audio segments in the retrieval database of the target languages (20K segments for each language), and compare it with the phoneme sequence edit distance (PSED). The average and variance (the length of the black line on each bar) of the cosine similarity for groups of pairs clustered by the phoneme sequence edit distances (PSED) between the two words are shown in Fig. FIGREF14 . For comparison, we also provide the results obtained from the English retrieval database (250K segments), where the segments were not seen by the model in training procedure.",
"In Fig. FIGREF14 , the cosine similarities of the segment pairs get smaller as the edit distances increase, and the trend is observed in all languages. The gap between each edit distance groups, i.e. (0,1), (1,2), (2,3), (3,4), is obvious. This means that INLINEFORM0 learned from English can successfully encode the sequential phonetic structures into fixed-length vector for the target languages to some good extend even though it has never seen any audio data of the target languages. Another interesting fact is the corresponding variance between languages. In the source language, English, the variances of the five edit distance groups are fixed at 0.030, which means that the cosine similarity in each edit distance group is centralized. However, the variances of the groups in the target languages vary. In French and German, the variance grows from 0.030 to 0.060 as the edit distance increases from 0 to 4. For Czech/Spanish, the variance starts at a larger value of 0.040/0.050 and increases to 0.050/0.073. We suspect that the fluctuating variance is related to the similarity between languages. English, German and French are more similar compared with Czech and Spanish. Among the four target languages, German has the highest lexical similarity with English (0.60) and the second highest is French (0.27), while for Czech and Spanish, the lexical similarity scores is 0 BIBREF48 .",
""
],
[
"In order to further investigate the performance of INLINEFORM0 , we visualize the vector representation of two sets of word pairs differing by only one phoneme from French and German as below:",
"French Word Pairs: (parler, parlons), (noter,notons), (rappeler, rappelons), (utiliser, utilisons)",
"German Word Pairs: (tag, tage), (spiel, spiele), (wenig, wenige), (angriff, angriffe)",
"To show the vector representations in Fig. FIGREF18 , we first obtained the mean value of representations for the audio segments of a specific word, denoted by INLINEFORM0 (word). Then the average representation INLINEFORM1 was projected from 400-dimensional to 2-dimensional using PCA BIBREF49 . The result of the difference vector from each word pair, e.g. INLINEFORM2 (parlons) - INLINEFORM3 (parler), is shown. Although the representations for French and German word audio segments were extracted from the model trained by English audio word segments and never heard any French and German, the direction and magnitude of the different vectors are coherent. In Fig. FIGREF18 , INLINEFORM4 (parlons) - INLINEFORM5 (parler) is close to INLINEFORM6 (utilison) - INLINEFORM7 (utiliser); and INLINEFORM8 (tage) - INLINEFORM9 (tag) is close to INLINEFORM10 (wenige) - INLINEFORM11 (wenig) in Fig. FIGREF18 ."
],
[
"Besides analyzing the cosine similarity of the learned representations, we also apply them to the query-by-example STD task. Here we compare the retrieval performance in MAP of INLINEFORM0 with different levels of accessibility to the low-resource target language along with two baseline models, INLINEFORM1 and INLINEFORM2 trained purely by the target languages. For the four target languages, the total available amount of audio word segments in the training set were 4 thousands for each language. In Table TABREF20 , we took different partitions of the target language training sets to fine tune the INLINEFORM3 pretrained by the source languages. The amount of audio word segments in these partitions are: 1K, 2K, 3K, 4K, and 0, which means no fine-tuning.",
"From Table TABREF20 , INLINEFORM0 trained by source language generally outperforms the INLINEFORM1 trained by the limited amount of target language (\" INLINEFORM2 No Transfer\"), proving that with enough audio segments, INLINEFORM3 can identify and encode universal phonetic structure. Comparing with NE, INLINEFORM4 surpasses INLINEFORM5 in German and French even without fine-tuning, whereas in Czech, INLINEFORM6 also achieves better score than INLINEFORM7 with fine-tuning. However, in Spanish, INLINEFORM8 achieved a MAP score of 0.13 with fine-tuning, slightly lower than 0.17 obtained by INLINEFORM9 . Back to Fig. FIGREF14 , the gap between phoneme sequence edit distances 2 and 3 in Spanish is smaller than other languages. Also, as discussed earlier in Section 6.2, the variance in Spanish is also bigger. The smaller gap and bigger variance together indicate that the model is weaker on Spanish at identifying audio segments of different words and thus affects the MAP performance in Spanish."
],
[
"In this paper, we verify the capability of language transfer of Audio Word2Vec using Sequence-to-sequence Autoencoer ( INLINEFORM0 ). We demonstrate that INLINEFORM1 can learn the sequential phonetic structure commonly appearing in human language and thus make it possible to apply an Audio Word2Vec model learned from high-resource language to low-resource languages. The capability of language transfer in Audio Word2Vec is beneficial to many real world applications, for example, the query-by-example STD shown in this work. For the future work, we are examining the performance of the transferred system in other application scenarios, and exploring the performance of Audio Word2Vec under automatic segmentation."
]
],
"section_name": [
"Introduction",
"Audio Word2Vec",
"RNN Encoder-Decoder Network",
"Sequence-to-sequence Autoencoder",
"Language Transfer",
"An Example Application: Query-by-example STD",
"Experimental Setup",
"Dataset",
"Proposed Model: Sequence Autoencoder (SASA)",
"Baseline: Naive Encoder (NENE)",
"Experiments",
"Analysis on Dimension of Audio Word2Vector",
"Analysis of Language Transfer",
"Visualization",
"Language Transferring on STD",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"3ac79a44f1cdf4e120d453e3560ea92595813cdf",
"6ede898fd2b007aa33c681dd2f92c1bb3d64b9e8",
"d7cdc3f3fd3b56945024b640002110933e01b3ce"
],
"answer": [
{
"evidence": [
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English). In this 960-hour English dataset, 2.2 million audio word segments were used for training while the other 250 thousand segments were used as the database to be retrieved in STD and 1 thousand segments as spoken queries. In Section 6.1, we further sampled 20 thousand segments from 250 thousand segments to form a small database to investigate the influence of database size. English served as the high-resource source language for model pre-training.",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). The four languages from GlobalPhone were used as the low-resource target languages. In Section 6.2, 20 thousand segments for each language were used to calculate the average cosine similarity. For the experiments of STD, the 20 thousands segments served as the database to be retrieved, and the other 1 thousand used for query and 4 thousand for fine-tuning."
],
"extractive_spans": [
"LibriSpeech corpus BIBREF46",
"GlobalPhone corpus BIBREF47"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English).",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English). In this 960-hour English dataset, 2.2 million audio word segments were used for training while the other 250 thousand segments were used as the database to be retrieved in STD and 1 thousand segments as spoken queries. In Section 6.1, we further sampled 20 thousand segments from 250 thousand segments to form a small database to investigate the influence of database size. English served as the high-resource source language for model pre-training.",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). The four languages from GlobalPhone were used as the low-resource target languages. In Section 6.2, 20 thousand segments for each language were used to calculate the average cosine similarity. For the experiments of STD, the 20 thousands segments served as the database to be retrieved, and the other 1 thousand used for query and 4 thousand for fine-tuning."
],
"extractive_spans": [
"LibriSpeech corpus",
"GlobalPhone corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English). ",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two corpora across five languages were used in the experiment. One of the corpora we used is LibriSpeech corpus BIBREF46 (English). In this 960-hour English dataset, 2.2 million audio word segments were used for training while the other 250 thousand segments were used as the database to be retrieved in STD and 1 thousand segments as spoken queries. In Section 6.1, we further sampled 20 thousand segments from 250 thousand segments to form a small database to investigate the influence of database size. English served as the high-resource source language for model pre-training.",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). The four languages from GlobalPhone were used as the low-resource target languages. In Section 6.2, 20 thousand segments for each language were used to calculate the average cosine similarity. For the experiments of STD, the 20 thousands segments served as the database to be retrieved, and the other 1 thousand used for query and 4 thousand for fine-tuning."
],
"extractive_spans": [
"LibriSpeech",
"GlobalPhone"
],
"free_form_answer": "",
"highlighted_evidence": [
"One of the corpora we used is LibriSpeech corpus BIBREF46 (English). ",
"The other dataset is the GlobalPhone corpus BIBREF47 , which includes French (FRE), German (GER), Czech (CZE), and Spanish (ESP). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"51d24c8d4a126527271ea40feafa87d3ea057458",
"e816bd3da90853d021855cae3a5c59edae895054",
"f66b6ad81959434944edbda8aa4b03c8b4339d6c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Retrieval Performance in MAP. Dim is the dimension of the vector representation.Small DB is the small database with 20000 examples, Large DB is the large database with 250000 examples"
],
"extractive_spans": [],
"free_form_answer": "They compare retrieval performance in MAP.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Retrieval Performance in MAP. Dim is the dimension of the vector representation.Small DB is the small database with 20000 examples, Large DB is the large database with 250000 examples"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we first examine how changing the hidden layer size of the RNN Encoder/Decoder, the dimension of Audio Word2Vec, affects the MAP performance of query-by-example STD (Section 6.1). After obtaining the best hidden layer size, we analyze the transferability of the Audio Word2Vec by comparing the cosine similarity of the learned representations to phoneme sequence edit distance (Section 6.2) . Visualization of multiple word pairs in different target languages is also provided (Section 6.3). Last but not least, we performed the query-by-example STD on target languages (Section 6.4). These experiments together verify that INLINEFORM0 is capable of extracting common phonetic structure in human language and thus is transferable to various languages.",
"Analysis on Dimension of Audio Word2Vector",
"Before evaluating the language transfer result, we first experimented on the primary INLINEFORM0 model in the source language (English). The results are shown in Fig. FIGREF12 . Here we compare the representations of INLINEFORM1 and INLINEFORM2 . Furthermore, we examined the influence of the dimension of Audio Word2Vector in terms of MAP. We also compared the MAP results on large testing database (250K segments) and small database (20K)."
],
"extractive_spans": [],
"free_form_answer": "They compare MAP performance of query-by-example STD using representations obtained from naive encoder and their method",
"highlighted_evidence": [
"In this section, we first examine how changing the hidden layer size of the RNN Encoder/Decoder, the dimension of Audio Word2Vec, affects the MAP performance of query-by-example STD (Section 6.1).",
"Analysis on Dimension of Audio Word2Vector\nBefore evaluating the language transfer result, we first experimented on the primary INLINEFORM0 model in the source language (English). The results are shown in Fig. FIGREF12 . Here we compare the representations of INLINEFORM1 and INLINEFORM2 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Before evaluating the language transfer result, we first experimented on the primary INLINEFORM0 model in the source language (English). The results are shown in Fig. FIGREF12 . Here we compare the representations of INLINEFORM1 and INLINEFORM2 . Furthermore, we examined the influence of the dimension of Audio Word2Vector in terms of MAP. We also compared the MAP results on large testing database (250K segments) and small database (20K)."
],
"extractive_spans": [
"MAP",
"MAP results on large testing database (250K segments)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here we compare the representations of INLINEFORM1 and INLINEFORM2 . Furthermore, we examined the influence of the dimension of Audio Word2Vector in terms of MAP. We also compared the MAP results on large testing database (250K segments) and small database (20K)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d976d4a5a969f53907ac4cc87840f15ca000d434",
"d131ad8a0dfcc041d3ce36d69129d8799c2e0d6e",
"e93bf437ad2140d0d1447a8aed94dde1d9312128"
],
"answer": [
{
"evidence": [
"In Fig. FIGREF14 , the cosine similarities of the segment pairs get smaller as the edit distances increase, and the trend is observed in all languages. The gap between each edit distance groups, i.e. (0,1), (1,2), (2,3), (3,4), is obvious. This means that INLINEFORM0 learned from English can successfully encode the sequential phonetic structures into fixed-length vector for the target languages to some good extend even though it has never seen any audio data of the target languages. Another interesting fact is the corresponding variance between languages. In the source language, English, the variances of the five edit distance groups are fixed at 0.030, which means that the cosine similarity in each edit distance group is centralized. However, the variances of the groups in the target languages vary. In French and German, the variance grows from 0.030 to 0.060 as the edit distance increases from 0 to 4. For Czech/Spanish, the variance starts at a larger value of 0.040/0.050 and increases to 0.050/0.073. We suspect that the fluctuating variance is related to the similarity between languages. English, German and French are more similar compared with Czech and Spanish. Among the four target languages, German has the highest lexical similarity with English (0.60) and the second highest is French (0.27), while for Czech and Spanish, the lexical similarity scores is 0 BIBREF48 ."
],
"extractive_spans": [
"German and French"
],
"free_form_answer": "",
"highlighted_evidence": [
"English, German and French are more similar compared with Czech and Spanish."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the quality of language transfer, we trained the Audio Word2Vec model by INLINEFORM0 from the source language, English, and applied it on different target languages, French (FRE), German (GER), Czech (CZE), and Spanish (ESP). We computed the average cosine similarity of the vector representations for each pair of the audio segments in the retrieval database of the target languages (20K segments for each language), and compare it with the phoneme sequence edit distance (PSED). The average and variance (the length of the black line on each bar) of the cosine similarity for groups of pairs clustered by the phoneme sequence edit distances (PSED) between the two words are shown in Fig. FIGREF14 . For comparison, we also provide the results obtained from the English retrieval database (250K segments), where the segments were not seen by the model in training procedure.",
"In Fig. FIGREF14 , the cosine similarities of the segment pairs get smaller as the edit distances increase, and the trend is observed in all languages. The gap between each edit distance groups, i.e. (0,1), (1,2), (2,3), (3,4), is obvious. This means that INLINEFORM0 learned from English can successfully encode the sequential phonetic structures into fixed-length vector for the target languages to some good extend even though it has never seen any audio data of the target languages. Another interesting fact is the corresponding variance between languages. In the source language, English, the variances of the five edit distance groups are fixed at 0.030, which means that the cosine similarity in each edit distance group is centralized. However, the variances of the groups in the target languages vary. In French and German, the variance grows from 0.030 to 0.060 as the edit distance increases from 0 to 4. For Czech/Spanish, the variance starts at a larger value of 0.040/0.050 and increases to 0.050/0.073. We suspect that the fluctuating variance is related to the similarity between languages. English, German and French are more similar compared with Czech and Spanish. Among the four target languages, German has the highest lexical similarity with English (0.60) and the second highest is French (0.27), while for Czech and Spanish, the lexical similarity scores is 0 BIBREF48 ."
],
"extractive_spans": [],
"free_form_answer": "English paired with any of the following: French, German, Czech, Spanish.",
"highlighted_evidence": [
"To evaluate the quality of language transfer, we trained the Audio Word2Vec model by INLINEFORM0 from the source language, English, and applied it on different target languages, French (FRE), German (GER), Czech (CZE), and Spanish (ESP).",
"This means that INLINEFORM0 learned from English can successfully encode the sequential phonetic structures into fixed-length vector for the target languages to some good extend even though it has never seen any audio data of the target languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In Fig. FIGREF14 , the cosine similarities of the segment pairs get smaller as the edit distances increase, and the trend is observed in all languages. The gap between each edit distance groups, i.e. (0,1), (1,2), (2,3), (3,4), is obvious. This means that INLINEFORM0 learned from English can successfully encode the sequential phonetic structures into fixed-length vector for the target languages to some good extend even though it has never seen any audio data of the target languages. Another interesting fact is the corresponding variance between languages. In the source language, English, the variances of the five edit distance groups are fixed at 0.030, which means that the cosine similarity in each edit distance group is centralized. However, the variances of the groups in the target languages vary. In French and German, the variance grows from 0.030 to 0.060 as the edit distance increases from 0 to 4. For Czech/Spanish, the variance starts at a larger value of 0.040/0.050 and increases to 0.050/0.073. We suspect that the fluctuating variance is related to the similarity between languages. English, German and French are more similar compared with Czech and Spanish. Among the four target languages, German has the highest lexical similarity with English (0.60) and the second highest is French (0.27), while for Czech and Spanish, the lexical similarity scores is 0 BIBREF48 ."
],
"extractive_spans": [
"English, German and French"
],
"free_form_answer": "",
"highlighted_evidence": [
"English, German and French are more similar compared with Czech and Spanish. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which datasets do they use?",
"How do they compare representations performance obtained from a naive encoder versus ones learned from large amount of source language data?",
"Which pairs of languages do they consider similar enough to capture phonetic structure?"
],
"question_id": [
"c7ffef8bf0100eb6148bd932d0409b21759060b1",
"1ff0ffeb2d0b2e150abdb2f559d8b31f4dd8aa2c",
"3cc0d773085dc175b85955e95911a2cfaab2cdc4"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 2: Language Transfer Mechanism.",
"Fig. 1: Sequence-to-sequence Autoencoder (SA).",
"Fig. 3: Spoken Term Detection Application.",
"Table 2: Retrieval Performance in MAP. Dim is the dimension of the vector representation.Small DB is the small database with 20000 examples, Large DB is the large database with 250000 examples",
"Table 1: Number of segments used for training or fine-tuning, STD database, and STD query in each corpus. For GlobalPhone, the amount shown is retrieved from each language.",
"Table 3: The average(µ)/variance(σ2) of the cosine similarity between vector representations for all segment pairs in the target languages testing set, clustered by the phoneme sequence edit distances (PSED).",
"Table 4: The retrieval performance ofNE, SA trained by the target language only (denoted as SA No Transfer), and SA of the source language tuning with different amounts of data.",
"Fig. 4: Difference between average vectors for word pairs differing by one edit distance in (a) French and (b) German."
],
"file": [
"2-Figure2-1.png",
"2-Figure1-1.png",
"2-Figure3-1.png",
"3-Table2-1.png",
"3-Table1-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Figure4-1.png"
]
} | [
"How do they compare representations performance obtained from a naive encoder versus ones learned from large amount of source language data?",
"Which pairs of languages do they consider similar enough to capture phonetic structure?"
] | [
[
"1707.06519-Experiments-0",
"1707.06519-3-Table2-1.png",
"1707.06519-Analysis on Dimension of Audio Word2Vector-0"
],
[
"1707.06519-Analysis of Language Transfer-1",
"1707.06519-Analysis of Language Transfer-0"
]
] | [
"They compare MAP performance of query-by-example STD using representations obtained from naive encoder and their method",
"English paired with any of the following: French, German, Czech, Spanish."
] | 104 |
1912.06905 | Long-length Legal Document Classification | One of the principal tasks of machine learning with major applications is text classification. This paper focuses on the legal domain and, in particular, on the classification of lengthy legal documents. The main challenge that this study addresses is the limitation that current models impose on the length of the input text. In addition, the present paper shows that dividing the text into segments and later combining the resulting embeddings with a BiLSTM architecture to form a single document embedding can improve results. These advancements are achieved by utilising a simpler structure, rather than an increasingly complex one, which is often the case in NLP research. The dataset used in this paper is obtained from an online public database containing lengthy legal documents with highly domain-specific vocabulary and thus, the comparison of our results to the ones produced by models implemented on the commonly used datasets would be unjustified. This work provides the foundation for future work in document classification in the legal field. | {
"paragraphs": [
[
"Text classification is a problem in library, information and computer science and one of the most classical and prominent tasks in Natural Language Processing (NLP). In particular, document classification is a procedure of assigning one or more labels to a document from a predetermined set of labels. Automatic document classification tasks can be divided into three categories: supervised, unsupervised and semi-supervised. This study focuses on supervised document classification.",
"Research so far has focused on short text BIBREF0, BIBREF1, BIBREF2, BIBREF3, whereas the main objective of this paper is to address the classification of lengthy legal documents. In fact, pre-existing models could not be applied on our corpus, which consists of excessively lengthy legal documents. In the legal field, companies manage millions of documents per year, depending on the size of the company. Therefore, automatic categorisation of documents into different groups significantly enhances the efficiency of document management and decreases the time spent by legal experts analysing documents.",
"Recently, several quite sophisticated frameworks have been proposed to address the document classification task. However, as proven by BIBREF3 regarding the document classification task, complex neural networks such as Bidirectional Encoder Representations from Transformers (BERT; BIBREF4) can be distilled and yet achieve similar performance scores. In addition, BIBREF5 shows that complex architectures are more sensitive to hyperparameter fluctuations and are susceptible to domains that consist of data with dissimilar characteristics. In this study, rather than employing an overly complex neural architecture, we focus on a relatively simpler neural structure that, in short, creates text embeddings using Doc2Vec BIBREF6 and then passes them through a Bi-directional LSTM (BiLSTM) with attention before making the final prediction.",
"Furthermore, an important contribution of this paper to automatic document classification is the concept of dividing documents into chunks before processing. It is demonstrated that the segmentation of lengthy documents into smaller chunks of text allows the context of each document to be encapsulated in an improved way, leading to enhanced results. The intuition behind this idea was formed by investigating automatic audio segmentation research. Audio segmentation (also known as audio classification) is an essential pre-processing step in audio analysis that separates different types of sound (e.g. speech, music, silence etc.) and splits audio signals into chunks in order to further improve the comprehension of these signals BIBREF7. Analogously, the present paper shows that splitting overly lengthy legal documents into smaller parts before processing them, boosts the final results."
],
[
"In several industries that produce or handle colossal amounts of text data such as the legal industry, document categorisation is still often performed manually by human experts. Automatic categorisation of documents is highly beneficial for reducing the human effort spent on time-consuming operations. In particular, deep neural networks have achieved state-of-the-art results in document classification over the last few years, outperforming the human classifiers in numerous cases."
],
[
"The majority of researchers evaluate their document classifying models on the following four datasets: Reuters-21578 BIBREF8, ArXiv Academic Paper Dataset - AAPD BIBREF9, IMDB reviews BIBREF10, and Yelp 2014 reviews BIBREF11. However, these commonly used datasets do not contain large documents, which conflicts with one of the main objectives of this study. Note that our definition of `document' in this specific context is a document that has at least 5000 words.",
"For that purpose, we use a dataset provided by the U.S Securities and Exchange Commission (SEC), namely EDGAR BIBREF12. As anticipated, most models that have achieved inspiring results have very poor performance or even fail when they are tested on large documents from the EDGAR corpus. As shown in Table TABREF1 and Table TABREF3, the differences between the commonly used datasets and the EDGAR dataset are evident."
],
[
"The application of deep neural networks in the field of computer vision has achieved great success. Following this success, several well-known DNN models attained remarkable results when applied on the document classification task. One of the most popular models is the Hierarchical Attention Network (HAN) proposed by BIBREF0. HAN used word and sentence-level attention in order to extract meaningful features of the documents and ultimately classify them. However, the fact that this architecture is based on a Gated Recurrent Unit (GRU) framework combined with the excessive size of the documents in our corpus would severely affect the results. Concretely, using overly large documents would result in a vast number of time steps and the vanishing gradient problem would be detrimental to performance.",
"A different yet powerful framework, namely BERT BIBREF4, has achieved state-of-the art results on a large amount of NLP tasks. BERT architecture employs self-attention instead of general attention, thus making the neural network even more complex. Nevertheless, BIBREF3 have established groundbreaking results and demonstrated that sophisticated architectures such as BERT are not necessary to succeed in the document classification task. Furthermore, it is worth mentioning that both the aforementioned models were trained on a rather different corpora. The main difference between the datasets used by those researchers and the EDGAR dataset is the size of the documents, which explains why these models could not be utilised in the present study. In particular, BERT was incompatible with our dataset due to the maximum input sequence length that imposes, namely the 512 terms threshold."
],
[
"The novelty of this work is the application of audio segmentation used for speech recognition BIBREF13 in document classification. The ultimate purpose of audio segmentation is to divide the signal into segments, each of which contains distinct audio information. In our case, the same occurs during the document segmentation, where the split chunks become the inputs of our neural network.",
"From a human perspective, when reading a rather long document or book, we are constantly storing and updating our memory with the essential parts or information of that record. Once enough information is stored in our memory we can form connections so as to gain a deeper understanding of the context and potentially extract valuable insight. In the same way, instead of passing the whole document to Doc2Vec, we split the document into multiple chunks (Figure FIGREF5). Hence, the machine can imitate human behaviour by identifying and determining the relevance of each chunk.",
"We create different models with respect to the number of chunks that we divide the initial text into, in order to observe how the different number of chunks affect the efficiency of the final model. These chunks are then used to train Doc2Vec. In short, the intuition behind Doc2Vec is analogous to the intuition behind Word2Vec, where the",
"words are used to make predictions about the target word (central word). The additional part of Doc2Vec is that it also considers the document ID when predicting a word. Ultimately, after the training each chunk has the form of an embedding.",
"In the next phase, we aggregate the different chunk embeddings of a document into one vector through the use of a BiLSTM (see Figure FIGREF10). First, the different chunk embeddings $E_{i}^1, E_{i}^2,..., E_{i}^n$ of a document are sequentially fed to the BiLSTM model. Then, the outputs of the forward and the backward layer are concatenated; $h_{it}=[\\overrightarrow{h_{it}}\\overleftarrow{h_{it}}]$. $h_{it}$ denotes the resulting vectors.",
"The final classification is subjected to the various features that each chunk contains. Thus, the attention mechanisms are introduced so as to enable the assignment of different weights to each chunk, depending on how strong of a class indicator this chunk is. In particular, the attention scores are assigned to the corresponding hidden state outputs as follows:",
"Here $\\alpha _{it}$ is the attention score assigned to hidden state $h_{it}$ of document $i$ at time step $t$. This score is determined by the similarity between $u_{it}$ and $u_{w}$, where $u_{it}$ is a mere non-linear transformation of $h_{it}$ and $u_{w}$ is the context (category) vector BIBREF1. During the following steps, the products of the hidden states and their corresponding attention scores are calculated and the document vector $d_{i}$ is formed from the summation of those products. Note that $u_{w}$ is randomly initialised and then constantly updated during the training process.",
"Ultimately, we try different classifiers in order to assess the impact of the segmentation method. As part of the models of the first type, the resulting document vector is output from a batch normalisation layer. A linear transformation is then applied to that and this output is passed through a softmax classifier in order to acquire the multi-class probabilities. This final process is summarised in the following formula:",
"where $W\\in R^{c\\times d}$ is the weight matrix, $c$ and $d$ are the number of classes and the number of dimensions of the hidden states respectively and $b\\in R^d$ is the bias term. Hence, the final vector $s_{i}$ is a c-dimension vector comprising the probability of that document belonging to each class.",
"The models of the second type are based on a strong machine learning classifier, namely Support Vector Machine (SVM). SVM also performs document classification by utilising the resulting document embeddings. The main parameters used to train SVM were obtained by optimising each model separately (see Section SECREF14)."
],
[
"We evaluate the proposed model on a document classification dataset; 70% of the data is used for the training and the remaining 30% is equally divided and used for tuning and testing our model.",
"During the pre-processing stage where the documents are split into chunks, we utilise a cluster of Azure Virtual Machines with 32 GB RAM and 16 cores, which are optimised for CPU usage. A similar cluster is used during the hyperparameter optimisation, however, with 112 GB RAM. Reading from the remote disk (Azure Blob Storage) is rather time-consuming, since the corpus comprises lengthy documents. Thus, to accelerate the training, we chose nodes with abundant memory in order to load everything in memory just once (required roughly one hour for that process).",
"We use Pytorch 1.2.0 as the backend framework, Scikit-learn 0.20.3 for SVM and dataset splits, and gensim 3.8.1 for Doc2Vec model.",
"Regulation S-K is an official regulation under the US Securities Act of 1933 that establishes reporting regulations for a variety SEC filings used by public companies."
],
[
"The data we use to evaluate our model is a set of documents downloaded from EDGAR, an online public database from the U.S. Securities and Exchange Commission (SEC). EDGAR is the primary system for submissions by companies and others who are required by law to file information with the SEC. These documents can be grouped according to filing types, which determines the substantial content to fulfill their filing obligation. To work on as many documents as possible, we choose the following types: “10-Q”, “10-K”, “EX-99.1”, “EX-10.1” and “EX-101.INS”. The total number of documents is 28,445 and there are 5,689 documents for each filing type. We summarise the statistics of this dataset in Table TABREF11.",
"Almost all documents of type “10-K” begin with lines that contain identical headings. In order to enable the machine to truly comprehend why a document of type “10-K” should be categorised to that filing type, we remove the first six lines where the identical text is located. The model is then able to focus on finding common features that exist in documents of the same filing type, rather than focusing on just capturing the few sentences that are the same in almost all of the documents of type “10-K”. A similar procedure is followed with the documents of type “10-Q”."
],
[
"As Table TABREF13 shows, we create seven different models that correspond to the number of chunks that the text is divided into before passing through Doc2Vec. Each model is optimised separately to ensure fair comparison.",
"For the optimisation of the BiLSTM with attention model, we use Adam optimiser with a learning rate of 0.001, batch size of 1,000 and distinct values for each one of the other hyperparameters. Analogously, the SVM classifier consists of the Radial Basis Function (RBF) as the kernel function and a different value of gamma and the penalty parameter for each different model. The intention of the distinct values used for each model is to optimise each model separately so as to enable them to reach their best performance.",
"Furthermore, we observe that Doc2Vec requires only a small portion of the corpus to train accurately. Indeed, when training Doc2Vec on more documents we observe a substantial decrease in accuracy. It is well-known that legal documents contain several domain-specific words that are often repeated not only among different documents, but also within the same document. Training Doc2Vec on more documents introduced undesirable noise that results from company names, numbers such as transaction amounts and dates, job titles and addresses. Consequently, Doc2Vec is proven to generate more accurate document embeddings when trained on just 150 randomly chosen documents (30 for each filing type)."
],
[
"Recently, reproducibility is becoming a growing concern for the NLP community BIBREF14. In fact, the majority of the papers we consider in this study fail to report the validation set results. To address these issues, apart from the F1 scores on the test sets we also report the F1 scores for the validation sets.",
"Legal documents contain domain-specific vocabulary and each type of document is normally defined in a very unambiguous way. Hence, even simple classifiers can achieve relatively high accuracy when classifying different documents. Nevertheless, even the slightest improvement of 1% or less will result in the correct classification of thousands of additional documents, which is crucial in the legal industry when handling large numbers of documents. This research allows these simple classifiers to achieve even greater results, by combining them with different architectures.",
"As Table TABREF13 and Table TABREF15 indicate, dividing the document in chunks - up to certain thresholds - results in improved models compared to those where the whole document is input into the classifier. Note that the model with one chunk denotes the model which takes as input the whole document to produce the document embedding and thereby is used as a benchmark in order to be able to identify the effectiveness of the document segmentation method.",
"More specifically, splitting the document into chunks yields higher test accuracy than having the whole document as input. Our first model with the BiLSTM based framework and the linear classifier reaches a 97.97% accuracy with a 1.1% improvement upon the benchmark model. Similarly, the second model with the SVM classifier reaches a remarkable 98.11% accuracy with a 0.4% improvement upon the benchmark model.",
"A more thorough investigation of the test accuracy scores indicate that documents of type “EX-99.1\" are the ones that get misclassified the most, whereas the remaining four types of documents are in general classified correctly at a considerably higher rate. As confusion matrix plot in Figure FIGREF18 highlights, there are cases that documents of type “EX-10.1\" are misclassified as “EX-99.1\", however, the reverse occurs more frequently. Further exploration of documents of type “EX-99.1\" reveals that these documents often contain homogeneous agreements or clauses with the ones embodied in documents of type “EX-10.1\".",
"Ultimately, Figure FIGREF16 and Figure FIGREF17 demonstrate the increase of the efficiency of the document embeddings after the use of BiLSTM. These vector representations of each cluster have noticeably more robustly defined boundaries after they are passed through the BiLSTM network compared to the ones that are only passed through the mere Doc2Vec."
],
[
"The main contribution of this paper is to overcome the document length limitations that are imposed by most modern architectures. It also shows that dividing documents into chunks before inputting them into Doc2Vec can result in enhanced models. Nonetheless, these advancements are accomplished with a relatively simplified structure, rather than a significantly more sophisticated architecture than its predecessors, which is often the case in NLP research.",
"One potential extension of this work would be to apply powerful yet computationally expensive pre-processing techniques to the various documents. Techniques such as Named Entity Recognition (NER) could enable the training of the whole corpus in Doc2Vec by removing the undesired noise. Furthermore, the projections of the document embeddings at the end of our pipeline are shown to have clearly defined boundaries and thus they can be valuable for different NLP tasks, such as estimating document similarities. In the legal industry, this can contribute to identifying usages of legal templates and clauses."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Document Classification Datasets",
"Related Work ::: Document Classification Approaches",
"Methods",
"Experimental Setup",
"Experimental Setup ::: Dataset",
"Experimental Setup ::: Model Configuration",
"Results and Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"3f55edf660724bee3d08f376b6710777cf4af1a0",
"7f032d3be5ee04812eeb7b6d13c15a7bea90b668",
"ae7377b6441e4ec6ecaa0ec3fe57d8f0eb00d4c3"
],
"answer": [
{
"evidence": [
"More specifically, splitting the document into chunks yields higher test accuracy than having the whole document as input. Our first model with the BiLSTM based framework and the linear classifier reaches a 97.97% accuracy with a 1.1% improvement upon the benchmark model. Similarly, the second model with the SVM classifier reaches a remarkable 98.11% accuracy with a 0.4% improvement upon the benchmark model."
],
"extractive_spans": [
"98.11% accuracy with a 0.4% improvement upon the benchmark model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our first model with the BiLSTM based framework and the linear classifier reaches a 97.97% accuracy with a 1.1% improvement upon the benchmark model. Similarly, the second model with the SVM classifier reaches a remarkable 98.11% accuracy with a 0.4% improvement upon the benchmark model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Performance of models of the first type (simple linear classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"FLOAT SELECTED: Table 5: Performance of models of the second type (SVM classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"More specifically, splitting the document into chunks yields higher test accuracy than having the whole document as input. Our first model with the BiLSTM based framework and the linear classifier reaches a 97.97% accuracy with a 1.1% improvement upon the benchmark model. Similarly, the second model with the SVM classifier reaches a remarkable 98.11% accuracy with a 0.4% improvement upon the benchmark model."
],
"extractive_spans": [
" BiLSTM based framework and the linear classifier reaches a 97.97% accuracy",
"SVM classifier reaches a remarkable 98.11% accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Performance of models of the first type (simple linear classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"FLOAT SELECTED: Table 5: Performance of models of the second type (SVM classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"Our first model with the BiLSTM based framework and the linear classifier reaches a 97.97% accuracy with a 1.1% improvement upon the benchmark model. Similarly, the second model with the SVM classifier reaches a remarkable 98.11% accuracy with a 0.4% improvement upon the benchmark model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Performance of models of the first type (simple linear classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"FLOAT SELECTED: Table 5: Performance of models of the second type (SVM classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold."
],
"extractive_spans": [],
"free_form_answer": "F1 score of 97.97 for a linear classifier and 98.11 for a SVM classifier",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Performance of models of the first type (simple linear classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"FLOAT SELECTED: Table 5: Performance of models of the second type (SVM classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"54427ee5b36aa2f5fb28b25b941d29e2239c8ac8",
"6974020449e854c2f4aaf37cbbee24bfa7aed01f",
"aa111c322b02224d42d2c45fea2f7be7018c4906"
],
"answer": [
{
"evidence": [
"We create different models with respect to the number of chunks that we divide the initial text into, in order to observe how the different number of chunks affect the efficiency of the final model. These chunks are then used to train Doc2Vec. In short, the intuition behind Doc2Vec is analogous to the intuition behind Word2Vec, where the"
],
"extractive_spans": [
"dividing documents into chunks before processing"
],
"free_form_answer": "",
"highlighted_evidence": [
"We create different models with respect to the number of chunks that we divide the initial text into, in order to observe how the different number of chunks affect the efficiency of the final model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"From a human perspective, when reading a rather long document or book, we are constantly storing and updating our memory with the essential parts or information of that record. Once enough information is stored in our memory we can form connections so as to gain a deeper understanding of the context and potentially extract valuable insight. In the same way, instead of passing the whole document to Doc2Vec, we split the document into multiple chunks (Figure FIGREF5). Hence, the machine can imitate human behaviour by identifying and determining the relevance of each chunk.",
"In the next phase, we aggregate the different chunk embeddings of a document into one vector through the use of a BiLSTM (see Figure FIGREF10). First, the different chunk embeddings $E_{i}^1, E_{i}^2,..., E_{i}^n$ of a document are sequentially fed to the BiLSTM model. Then, the outputs of the forward and the backward layer are concatenated; $h_{it}=[\\overrightarrow{h_{it}}\\overleftarrow{h_{it}}]$. $h_{it}$ denotes the resulting vectors."
],
"extractive_spans": [],
"free_form_answer": "They simply split document in chunks, get embedding for each chunk and train BiLSTM models with embeddings.",
"highlighted_evidence": [
" In the same way, instead of passing the whole document to Doc2Vec, we split the document into multiple chunks (Figure FIGREF5).",
"In the next phase, we aggregate the different chunk embeddings of a document into one vector through the use of a BiLSTM (see Figure FIGREF10).",
"First, the different chunk embeddings $E_{i}^1, E_{i}^2,..., E_{i}^n$ of a document are sequentially fed to the BiLSTM model. Then, the outputs of the forward and the backward layer are concatenated; $h_{it}=[\\overrightarrow{h_{it}}\\overleftarrow{h_{it}}]$. $h_{it}$ denotes the resulting vectors."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are their results on this task?",
"How is the text segmented?"
],
"question_id": [
"2e70d25f14357ad74c085a9454a2ce33bb988a6f",
"de84972c5d1bbf664d0f8b702fce5f161449ec23"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Summary of the commonly used datasets for document classification. C denotes the number of classes in the dataset, N the number of samples and W and S the average number of words and sentences per document respectively.",
"Table 2: Summary of EDGAR dataset. N denotes the number of samples and W and S the average number of words and sentences per document respectively.",
"Figure 1: Overall architecture of proposed BiLSTM model.",
"Table 3: Description of different filing type contents.",
"Figure 2: Document embedding process through BiLSTM framework.",
"Table 4: Performance of models of the first type (simple linear classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"Table 5: Performance of models of the second type (SVM classifier) reported on validation and test set. Wc denotes the average words per chunk and best scores are shown in bold.",
"Figure 3: t-SNE plot of projections of document embeddings, using vanilla Doc2Vec.",
"Figure 4: t-SNE plot of projections of document embeddings, using Doc2Vec + BiLSTM.",
"Figure 5: Confusion matrix plot of classification results for 7-chunk model on test set."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png",
"4-Figure2-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"7-Figure5-1.png"
]
} | [
"What are their results on this task?",
"How is the text segmented?"
] | [
[
"1912.06905-Results and Discussion-3",
"1912.06905-5-Table5-1.png",
"1912.06905-5-Table4-1.png"
],
[
"1912.06905-Methods-4",
"1912.06905-Methods-1",
"1912.06905-Methods-2"
]
] | [
"F1 score of 97.97 for a linear classifier and 98.11 for a SVM classifier",
"They simply split document in chunks, get embedding for each chunk and train BiLSTM models with embeddings."
] | 106 |
1908.05731 | Simple and Effective Noisy Channel Modeling for Neural Machine Translation | Previous work on neural noisy channel modeling relied on latent variable models that incrementally process the source and target sentence. This makes decoding decisions based on partial source prefixes even though the full source is available. We pursue an alternative approach based on standard sequence to sequence models which utilize the entire source. These models perform remarkably well as channel models, even though they have neither been trained on, nor designed to factor over incomplete target sentences. Experiments with neural language models trained on billions of words show that noisy channel models can outperform a direct model by up to 3.2 BLEU on WMT'17 German-English translation. We evaluate on four language pairs and our channel models consistently outperform strong alternatives such as right-to-left reranking models and ensembles of direct models. | {
"paragraphs": [
[
"Sequence to sequence models directly estimate the posterior probability of a target sequence $y$ given a source sequence $x$ BIBREF0, BIBREF1, BIBREF2, BIBREF3 and can be trained with pairs of source and target sequences. Unpaired sequences can be leveraged by data augmentation schemes such as back-translation, but direct models cannot naturally take advantage of unpaired data BIBREF4, BIBREF5.",
"The noisy channel approach is an alternative which is used in statistical machine translation BIBREF6, BIBREF7. It entails a channel model probability $p(x|y)$ that operates in the reverse direction as well as a language model probability $p(y)$. The language model can be estimated on unpaired data and can take a separate form to the channel model. Noisy channel modeling mitigates explaining away effects that result in the source being ignored for highly likely output prefixes BIBREF8.",
"Previous work on neural noisy channel modeling relied on a complex latent variable model that incrementally processes source and target prefixes BIBREF9. This trades efficiency for accuracy because their model performs significantly less well than a vanilla sequence to sequence model. For languages with similar word order, it can be sufficient to predict the first target token based on a short source prefix, but for languages where word order differs significantly, we may need to take the entire source sentence into account to make a decision.",
"In this paper, we show that a standard sequence to sequence model is an effective parameterization of the channel probability. We train the model on full sentences and apply it to score the source given an incomplete target sentence. This bases decoding decisions on scoring the entire source sequence and it is very simple and effective (§SECREF2). We analyze this approach for various target prefix sizes and find that it is most accurate for large target context sizes. Our simple noisy channel approach consistently outperforms strong baselines such as online ensembles and left-to-right re-ranking setups (§SECREF3)."
],
[
"The noisy channel approach applies Bayes' rule to model $p(y|x) = p(x|y) p(y)/ p(x)$, that is, the channel model $p(x|y)$ operating from the target to the source and a language model $p(y)$. We do not model $p(x)$ since it is constant for all $y$. We compute the channel model probabilities as follows:",
"We refer to $p(y|x)$ as the direct model. A critical choice in our approach is to model $p(x|y)$ with a standard Transformer architecture BIBREF3 as opposed to a model which factors over target prefixes BIBREF9. This setup presents a clear train/test mismatch: we train $p(x|y)$ on complete sentence-pairs and perform inference with incomplete target prefixes of varying size $k$, i.e., $p(x|y_1,\\dots ,y_k)$. However, we find standard sequence to sequence models to be very robust to this mismatch (§SECREF3)."
],
[
"To generate $y$ given $x$ with the channel model, we wish to compute $\\operatornamewithlimits{arg\\,max}_y \\log p(x|y) + \\log p(y)$. However, naïve decoding in this way is computationally expensive because the channel model $p(x|y)$ is conditional on each candidate target prefix. For the direct model, it is sufficient to perform a single forward pass over the network parameterizing $p(y|x)$ to obtain output word probabilities for the entire vocabulary. However, the channel model requires separate forward passes for each vocabulary word."
],
[
"To mitigate this issue, we perform a two-step beam search where the direct model pre-prunes the vocabulary BIBREF9. For beam size $k_1$, and for each beam, we collect $k_2$ possible next word extensions according to the direct model. Next, we score the resulting $k_1 \\times k_2$ partial candidates with the channel model and then prune this set to size $k_1$. Other approaches to pre-pruning may be equally beneficial but we adopt this approach for simplicity. A downside of online decoding with the channel model approach is the high computational overhead: we need to invoke the channel model $k_1 \\times k_2$ times compared to just $k_1$ times for the direct model."
],
[
"The model of BIBREF9 factorizes over source and target prefixes. During decoding, their model alternates between incrementally reading the target prefix and scoring a source prefix, resulting in a runtime of $O(n+m)$, where $n$ and $m$ are the source and target lengths, respectively. In comparison, our approach repeatedly scores the entire source for each target prefix, resulting in $O(mn)$ runtime. Although our approach has greater time complexity, the practical difference of scoring the tokens of a single source sentence instead of just one token is likely to be negligible on modern GPUs since all source tokens can be scored in parallel. Inference is mostly slowed down by the autoregressive nature of decoding. Scoring the entire source enables capturing more dependencies between the source and target, since the beginning of the target must explain the entire source, not just the beginning. This is especially critical when the word order between the source and target language varies considerably, and likely accounts for the lower performance of the direct model of BIBREF9 in comparison to a standard seq2seq model."
],
[
"Since the direct model needs to be evaluated for pre-pruning, we also include these probabilities in making decoding decisions. We use the following linear combination of the channel model, the language model and the direct model for decoding:",
"where $t$ is the length of the target prefix $y$, $s$ is the source sentence length and $\\lambda $ is a tunable weight. Initially, we used separate weights for $p(x|y)$ and $p(y)$ but we found that a single weight resulted in the same accuracy and was easier to tune. Scaling by $t$ and $s$ makes the scores of the direct and channel model comparable to each other throughout decoding. In n-best re-ranking, we have complete target sentences which are of roughly equal length and therefore do not use per word scores."
],
[
"For English-German (En-De) we train on WMT'17 data, validate on news2016 and test on news2017. For reranking, we train models with a 40K joint byte pair encoding vocabulary (BPE; BIBREF11). To be able to use the language model during online decoding, we use the vocabulary of the langauge model on the target side. For the source vocabulary, we learn a 40K byte pair encoding on the source portion of the bitext; we find using LM and bitext vocabularies give similar accuracy. For Chinese-English (Zh-En), we pre-process WMT'17 data following BIBREF12, we develop on dev2017 and test on news2017. For IWSLT'14 De-En we follow the setup of BIBREF13 and measure case-sensitive tokenized BLEU. For WMT De-En, En-De and Zh-En we measure detokenized BLEU BIBREF14."
],
[
"We train two big Transformer language models with 12 blocks BIBREF15: one on the German newscrawl data distributed by WMT'18 comprising 260M sentences and another one on the English newscrawl data comprising 193M sentences. Both use a BPE vocabulary of 32K types. We train on 32 Nvidia V100 GPUs with 16-bit floating point operations BIBREF16 and training took four days."
],
[
"For En-De, De-En, Zh-En we use big Transformers and for IWSLT De-En a base Transformer BIBREF3 as implemented in fairseq BIBREF17. For online decoding experiments, we do not share encoder and decoder embeddings since the source and target vocabularies were learned separately. We report average accuracy of three random initializations of a each configuration. We generally use $k_1=5$ and $k_2=10$. We tune $\\lambda _1$, and a length penalty on the validation set."
],
[
"We first motivate a standard sequence to sequence model to parameterize $p(x|y)$ as opposed to a model that is trained to operate over prefixes. We train Transformer models to translate from the target to the source (En-De) and compare two variants: i) a standard sequence to sequence model trained to predict full source sentences based on full targets (seq2seq). ii) a model trained to predict the full source based on a prefix of the target; we train on all possible prefixes of a target sentence, each paired with the full source (prefix-model).",
"Figure FIGREF12 shows that the prefix-model performs slightly better for short target prefixes but this advantage disappears after 15 tokens. On full target sentences seq2seq outperforms the prefix model by 5.7 BLEU. This is likely because the prefix-model needs to learn how to process both long and short prefixes which results in lower accuracy. The lower performance on long prefixes is even more problematic considering our subsequent finding that channel models perform over-proportionally well on long target prefixes (§SECREF18). The seq2seq model has not been trained to process incomplete targets but empirically it provides a simple and effective parameterization of $p(x|y)$."
],
[
"The model of BIBREF9 uses a latent variable to incrementally score the source for prefixes of the target. Although this results in a faster run time, the model makes decoding decisions based on source prefixes even though the full source is available. In order to quantify the benefit of scoring the entire source instead of a learned prefix length, we simulate different fractions of the source and target in an n-best list reranking setup.",
"The n-best list is generated by the direct model and we re-rank the list in setups where we only have a fraction of the candidate hypothesis and the source sentence. We report BLEU of the selected full candidate hypothesis.",
"Figure FIGREF15 shows that for any given fraction of the target, scoring the entire source (src 1) has better or comparable performance than all other source prefix lengths. It is therefore beneficial to have a channel model that scores the entire source sentence."
],
[
"Next, we evaluate online decoding with a noisy channel setup compared to just a direct model () as well as an ensemble of two direct models (). Table TABREF16 shows that adding a language model to () gives a good improvement BIBREF18 over a single direct model but ensembling two direct models is slightly more effective (). The noisy channel approach () improves by 1.9 BLEU over on news2017 and by 0.9 BLEU over the ensemble. Without per word scores, accuracy drops because the direct model and the channel model are not balanced and their weight shifts throughout decoding. Our simple approach outperforms strong online ensembles which illustrates the advantage over incremental architectures BIBREF9 that do not match vanilla seq2seq models by themselves."
],
[
"Using the channel model in online decoding enables searching a much larger space compared to n-best list re-ranking. However, online decoding is also challenging because the channel model needs to score the entire source sequence given a partial target which can be hard. To measure this, we simulate different target prefix lengths in an n-best list re-ranking setup. The n-best list is generated by the direct model and we re-rank it for different target prefixes of the candidate hypothesis. As in SECREF14, we measure BLEU of the selected full candidate hypothesis. Figure FIGREF19 shows that the channel model enjoys much larger benefits from more target context than re-ranking with just the direct model and an LM () or re-ranking with a direct ensemble (). This experiment shows the importance of large context sizes for the channel approach to work well. It indicates that the channel approach may not be able to effectively exploit the large search space in online decoding due to the limited conditioning context provided by partial target prefixes."
],
[
"Next, we switch to n-best re-ranking where we have the full target sentence available compared to online decoding. Noisy channel model re-ranking has been used by the top ranked entries of the WMT 2019 news translation shared task for English-German, German-English, Englsh-Russian and Russian-English BIBREF19. We compare to various baselines including right-to-left sequence to sequence models which are a popular choice for re-ranking and regularly feature in successful WMT submissions BIBREF20, BIBREF21, BIBREF22.",
"Table TABREF20 shows that the noisy channel model outperforms the baseline () by up to 4.0 BLEU for very large beams, the ensemble by up to 2.9 BLEU () and the best right-to-left configuration by 1.4 BLEU (). The channel approach improves more than other methods with larger n-best lists by adding 2.4 BLEU from $k_1=5$ to $k_1=100$. Other methods improve a lot less with larger beams, e.g., has the next largest improvement of 1.4 BLEU when increasing the beam size but this is still significantly lower than for the noisy channel approach. Adding a language model benefits all settings (, , ) but the channel approach benefits most ( vs ). The direct model with a language model () performs better than for online decoding, likely because the constrained re-ranking setup mitigates explaining away effects (cf. Table TABREF16).",
"Interestingly, both or give only modest improvements compared to . Although previous work demonstrated that reranking with can improve over , we show that the channel model is important to properly leverage the language model without suffering from explaining away effects BIBREF23, BIBREF24. Test results on all language directions confirm that performs best (Table TABREF21)."
],
[
"Previous work relied on incremental channel models which do not make use of the entire source even though it is available and, as we demonstrate, beneficial. Standard sequence to sequence models are a simple parameterization for the channel probability that naturally exploits the entire source. This parameterization outperforms strong baselines such as ensembles of direct models and right-to-left models. Channel models are particularly effective with large context sizes and an interesting future direction is to iteratively refine the output while conditioning on previous contexts."
]
],
"section_name": [
"Introduction",
"Approach",
"Approach ::: Decoding.",
"Approach ::: Approximation.",
"Approach ::: Complexity.",
"Approach ::: Model combinaton.",
"Experiments ::: Datasets.",
"Experiments ::: Language Models.",
"Experiments ::: Sequence to Sequence Model training.",
"Experiments ::: Simple Channel Model",
"Experiments ::: Effect of Scoring the Entire Source Given Partial Target Prefixes",
"Experiments ::: Online Decoding",
"Experiments ::: Analysis",
"Experiments ::: Re-ranking",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"70ad416974b799c259fe0389bf811acf7753060d",
"7524ef95724de3a5820b2b6d9962c7159f7567f6",
"f3b2215996cdde79f708588dd27c9a7cad5f2a16"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"48a8b84ced6a51c63cb142d68f7a39a6363a591f",
"78e0fc33400319fe94911124aeedd13b1cb5a631",
"cd18a8fa647339785ad0b62ed202f4e8ac6924c6"
],
"answer": [
{
"evidence": [
"For English-German (En-De) we train on WMT'17 data, validate on news2016 and test on news2017. For reranking, we train models with a 40K joint byte pair encoding vocabulary (BPE; BIBREF11). To be able to use the language model during online decoding, we use the vocabulary of the langauge model on the target side. For the source vocabulary, we learn a 40K byte pair encoding on the source portion of the bitext; we find using LM and bitext vocabularies give similar accuracy. For Chinese-English (Zh-En), we pre-process WMT'17 data following BIBREF12, we develop on dev2017 and test on news2017. For IWSLT'14 De-En we follow the setup of BIBREF13 and measure case-sensitive tokenized BLEU. For WMT De-En, En-De and Zh-En we measure detokenized BLEU BIBREF14."
],
"extractive_spans": [
"English-German",
"Chinese-English"
],
"free_form_answer": "",
"highlighted_evidence": [
"For English-German (En-De) we train on WMT'17 data, validate on news2016 and test on news2017. ",
"For Chinese-English (Zh-En), we pre-process WMT'17 data following BIBREF12, we develop on dev2017 and test on news2017."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For English-German (En-De) we train on WMT'17 data, validate on news2016 and test on news2017. For reranking, we train models with a 40K joint byte pair encoding vocabulary (BPE; BIBREF11). To be able to use the language model during online decoding, we use the vocabulary of the langauge model on the target side. For the source vocabulary, we learn a 40K byte pair encoding on the source portion of the bitext; we find using LM and bitext vocabularies give similar accuracy. For Chinese-English (Zh-En), we pre-process WMT'17 data following BIBREF12, we develop on dev2017 and test on news2017. For IWSLT'14 De-En we follow the setup of BIBREF13 and measure case-sensitive tokenized BLEU. For WMT De-En, En-De and Zh-En we measure detokenized BLEU BIBREF14."
],
"extractive_spans": [],
"free_form_answer": "English-German; Chinese-English; German-English",
"highlighted_evidence": [
"For English-German (En-De) we train on WMT'17 data, validate on news2016 and test on news2017.",
"For Chinese-English (Zh-En), we pre-process WMT'17 data following BIBREF12, we develop on dev2017 and test on news2017. For IWSLT'14 De-En we follow the setup of BIBREF13 and measure case-sensitive tokenized BLEU."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For En-De, De-En, Zh-En we use big Transformers and for IWSLT De-En a base Transformer BIBREF3 as implemented in fairseq BIBREF17. For online decoding experiments, we do not share encoder and decoder embeddings since the source and target vocabularies were learned separately. We report average accuracy of three random initializations of a each configuration. We generally use $k_1=5$ and $k_2=10$. We tune $\\lambda _1$, and a length penalty on the validation set.",
"Next, we switch to n-best re-ranking where we have the full target sentence available compared to online decoding. Noisy channel model re-ranking has been used by the top ranked entries of the WMT 2019 news translation shared task for English-German, German-English, Englsh-Russian and Russian-English BIBREF19. We compare to various baselines including right-to-left sequence to sequence models which are a popular choice for re-ranking and regularly feature in successful WMT submissions BIBREF20, BIBREF21, BIBREF22."
],
"extractive_spans": [
"En-De",
"De-En",
"Zh-En",
"Englsh-Russian and Russian-English"
],
"free_form_answer": "",
"highlighted_evidence": [
"For En-De, De-En, Zh-En we use big Transformers and for IWSLT De-En a base Transformer BIBREF3 as implemented in fairseq BIBREF17. ",
"Noisy channel model re-ranking has been used by the top ranked entries of the WMT 2019 news translation shared task for English-German, German-English, Englsh-Russian and Russian-English BIBREF19."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How many parameters does their noisy channel model have?",
"Which language pairs do they evaluate on?"
],
"question_id": [
"11dd2913d1517a1d47b367acb29fe9d79a9c95d1",
"8701ec7345ccc2c35eca4e132a8e16d58585cd63"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"neural",
"neural"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Comparison of two channel models: a standard seq2seq model trained on full sentence-pairs and a model trained on all possible target prefixes with the full source (prefix-model). We measure accuracy of predicting the full sourcewith increasing target prefixes for both models. Results are on news2016.",
"Figure 3: Impact of target prefix length for the channel model (CH+DIR+LM), direct model + LM (DIR+LM) and a direct ensemble (DIR ENS). We show detokenized BLEU on WMT De-En news2016 with beam 10.",
"Figure 2: For any given target prefix fraction, scoring the entire source has the best or comparable performance compared to other source prefixes. We show detokenized BLEU on the dev set of WMT17 Zh-En with beam 50.",
"Table 1: Online decoding accuracy for a direct model (DIR), ensembling two direct models (DIR ENS) and the channel approach (CH+DIR+LM). We ablate the impact of using per word scores. Results are on WMT De-En. Table 4 in the appendix shows standard deviations.",
"Table 2: Re-ranking BLEU with different n-best list sizes on news2016 of WMT De-En. We compare to decoding with a direct model only (DIR) and decoding with an ensemble of direct models (DIR ENS). Table 5 in the appendix shows standard deviations.",
"Table 3: Re-ranking accuracy with k1 = 50 on four language directions on the respective test sets. See Table 6 in the appendix for standard deviations.",
"Table 4: Online decoding accuracy for a direct model (DIR), ensembling two direct models (DIR ENS) and the channel approach (CH+DIR+LM). We ablate the impact of length normalization. Results are on news2017 of WMT De-En.",
"Table 5: Re-ranking BLEU with different n-best list sizes on news2016 of WMT De-En.",
"Table 6: Re-ranking accuracy with k1 = 50 on four language directions on the respective test sets."
],
"file": [
"3-Figure1-1.png",
"4-Figure3-1.png",
"4-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png"
]
} | [
"Which language pairs do they evaluate on?"
] | [
[
"1908.05731-Experiments ::: Sequence to Sequence Model training.-0",
"1908.05731-Experiments ::: Datasets.-0",
"1908.05731-Experiments ::: Re-ranking-0"
]
] | [
"English-German; Chinese-English; German-English"
] | 108 |
1808.10059 | Zero-Shot Adaptive Transfer for Conversational Language Understanding | Conversational agents such as Alexa and Google Assistant constantly need to increase their language understanding capabilities by adding new domains. A massive amount of labeled data is required for training each new domain. While domain adaptation approaches alleviate the annotation cost, prior approaches suffer from increased training time and suboptimal concept alignments. To tackle this, we introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes the slot description for transferring reusable concepts across domains, and enjoys efficient training without any explicit concept alignments. Extensive experimentation over a dataset of 10 domains relevant to our commercial personal digital assistant shows that our model outperforms previous state-of-the-art systems by a large margin, and achieves an even higher improvement in the low data regime. | {
"paragraphs": [
[
"Recently, there is a surge of excitement in adding numerous new domains to conversational agents such as Alexa, Google Assistant, Cortana and Siri to support a myriad of use cases. However, building a slot tagger, which is a key component for natural language understanding (NLU) BIBREF0 , for a new domain requires massive amounts of labeled data, hindering rapid development of new skills. To address the data-intensiveness problem, domain adaptation approaches have been successfully applied. Previous approaches are roughly categorized into two groups: data-driven approaches BIBREF1 , BIBREF2 and model-driven approaches BIBREF3 , BIBREF4 .",
"In the data-driven approach, new target models are trained by combining target domain data with relevant data from a repository of arbitrary labeled datasets using domain adaptation approaches such as feature augmentation BIBREF1 . A disadvantage of this approach is the increase in training time as the amount of reusable data grows. The reusable data might contain hundreds of thousands of samples, making iterative refinement prohibitive. In contrast, the model-driven approach utilizes “expert\" models for summarizing the data for reusable slots BIBREF3 , BIBREF4 . The outputs of the expert models are directly used when training new domains, allowing for faster training. A drawback of this approach is that it requires explicit concept alignments which itself is not a trivial task, potentially missing lots of reusable concepts. Additionally, it's not easy to generalize these models to new, unseen slots.",
"In this paper, we present a new domain adaptation technique for slot tagging inspired by recent advances in zero-shot learning. Traditionally, slot tagging is formulated as a sequence labeling task using the BIO representation (Figure 1 ). Our approach formulates this problem as detecting spans that contain values for each slot as shown in Figure 1 . For implicit transfer of reusable concepts across domains, we represent slots in a shared latent semantic space by embedding the slot description. With the shared latent space, domain adaptation can simply be done by fine-tuning a base model, which is trained on massive data, with a handful of target domain data without any explicit concept alignments. A similar idea of utilizing zero-shot learning for slot tagging has been proven to work in semi-supervised settings BIBREF5 . Our zero-shot model architecture differs from this by adding: 1) an attention layer to produce the slot-aware representations of input words, 2) a CRF layer to better satisfy global consistency constraints, 3) character-level embeddings to incorporate morphological information. Despite its simplicity, we show that our model outperforms all existing methods including the previous zero-shot learning approach in domain adaptation settings.",
"We first describe our approach called Zero-Shot Adaptive Transfer model (ZAT) in detail. We then describe the dataset we used for our experiments. Using this data, we conduct experiments comparing our ZAT model with a set of state-of-the-art models: Bag-of-Expert (BoE) models and their non-expert counterparts BIBREF4 , and the Concept Tagger model BIBREF5 , showing that our model can lead to significant F1-score improvements. This is followed by an in-depth analysis of the results. We then provide a survey of related work and concluding remarks."
],
[
"Our Zero-Shot Adaptive Transfer model for slot tagging is a hierarchical model with six layers (Figure 2 )."
],
[
"For our experiments, we collected data from a set of ten diverse domains. Table 1 shows the domains along with some statistics and sample utterances. Since these are new domains for our digital assistant, we did not have enough data for these domains in our historical logs. Therefore, the data was collected using crowdsourcing from human judges. For each domain, several prompts were created to crowdsource utterances for a variety of intents. These utterances were then annotated through our standard data annotation pipeline after several iterations of measuring interannotator agreement and refining the annotation guidelines. We collected at least 5000 instances for each domain, with more data collected for some domains based on business priority.",
"For each of the domains, we sampled 80% of the data as training and 10% each as dev and test sets. Further samples of 2000, 1000, and 500 training samples were taken to compare our approach with previous methods. All samples were obtained by stratified sampling based on the annotated intents of the utterances."
],
[
"In order to compare our method against the state-of-the-art models, we compare against the models presented in BIBREF4 , including the BoE models and their non-BoE variants. We also compare our method with another zero-shot model for slot tagging BIBREF5 in domain adaptation settings.",
"Following BIBREF4 , we concatenate the output of 25 dimensional character-level bidirectional LSTMs with pre-trained word embeddings to obtain morphology-sensitive embeddings. We then use a 100 dimensional word-level bidirectional LSTM layer to obtain contextualized word representations. Finally, the output of this layer is passed on to a dense feed forward layer with a softmax activation to predict the label probabilities for each word. We train using stochastic gradient descent with Adam BIBREF11 . To avoid overfitting, we also apply dropout to the output of each layer, with a default dropout keep probability of 0.8.",
"The LSTM-BoE architecture is similar to the LSTM model with the exception that we use the output vectors of the word-level bidirectional LSTM layer of each expert model to obtain enriched word embeddings. Specifically, let $e_1 ... e_k \\in E$ be the set of reusable expert domains. For each expert $e_j$ , we train a separate LSTM model. Let $h^{e_j}_i$ be the word-level bi-directional LSTM output for expert $e_j$ on word $w_i$ . When training on a target domain, for each word $w_i$ , we first compute a BoE representation for this word as $h^E = \\sum _{e_i \\in E} h^{e_j}_i$ . The input to the word-level LSTM for word $w_i$ in the target domain is now a concatenation of the character-level LSTM outputs, the pre-trained word embedding, and the BoE representation.",
"Following BIBREF4 , We use two expert domains containing reusable slots: timex and location. The timex domain consists of utterances containing the slots $date$ , $time$ and $duration$ . The location domain consists of utterances containing $location$ , $location\\_type$ and $place\\_name$ slots. Both of these types of slots appear in more than 20 of a set of 40 domains developed for use in our commercial personal assistant, making them ideal candidates for reuse. Data for these domains was sampled from the input utterances from our commercial digital assistant. Each reusable domain contains about a million utterances. There is no overlap between utterances in the target domains used for our experiments and utterances in the reusable domains. The data for the reusable domains is sampled from other domains available to the digital assistant, not including our target domains. Models trained on the timex and location data have F1-scores of 96% and 89% respectively on test data from their respective domains.",
"We use a standard linear-chain CRF architecture with n-gram and context features. In particular, for each token, we use unigram, bigram and trigram features, along with previous and next unigrams, bigrams, and trigrams for context length of up to 3 words. We also use a skip bigram feature created by concatenating the current unigram and skip-one unigram. We train our CRF using stochastic gradient descent with L1 regularization to prevent overfitting. The L1 coefficient was set to 0.1 and we use a learning rate of 0.1 with exponential decay for learning rate scheduling BIBREF12 .",
"Similar to the LSTM-BoE model, we first train a CRF model $c_j$ for each of the reusable expert domains $e_j \\in E$ . When training on a target domain, for every query word $w_i$ , a one-hot label vector $l^j_i$ is emitted by each expert CRF model $c_j$ . The length of the label vector $l^j_i$ is the number of labels in the expert domain, with the value corresponding to the label predicted by $c_j$ for word $w_i$ set to 1, and values for all other labels set to 0. For each word, the label vectors for all the expert CRF models are concatenated and provided as features for the target domain CRF training, along with the n-gram features.",
"For comparison with a state-of-the-art zero-shot model, we implement the concept tagger (CT) BIBREF5 . The CT model consists of a single 256 dimensional bidirectional LSTM layer that takes pre-trained word embeddings as input to produce contextual word representations. This is followed by a feed forward layer where the contextual word representations are combined with a slot encoding to produce vectors of 128 dimensions. The slot encoding is the average vector of the word embeddings for the slot description. This feeds into another 128 dimensional bi-directional LSTM layer followed by a softmax layer that outputs the prediction for that slot."
],
[
"For domain adaptation with zero-shot models, we first construct a joint training dataset by combining the training datasets of size 2000 from all domains except for a target domain. We then train a base model on the joint dataset. We sample input examples during training and evaluation for each slot to include both positive examples (which have the slot) and negative examples (which do not have the slot) with a ratio of 1 to 3. After the base model is trained, domain adaptation is simply done by further training the base model on varying amounts of the training data of the target domain. Note that the size of the joint dataset for each target domain is 18,000, which is dramatically smaller than millions of examples used for training expert models in the BoE approach. Furthermore, there are a lot of utterances in the joint dataset where no slots from the target domain is present."
],
[
"Table 2 shows the F1-scores obtained by the different methods for each of the 10 domains. LSTM based models in general perform better than the CRF based models. Both the CRF-BoE and LSTM-BoE outperform the basic CRF and LSTM models. Both zero-shot models, CT and ZAT, again surpass the BoE models. ZAT has a statistically significant mean improvement of $4.04$ , $5.37$ and $3.27$ points over LSTM-BoE with training size 500, 1000 and 2000, respectively. ZAT also shows a statistically significant average improvement of $2.58$ , $2.44$ and $2.5$ points over CT, another zero-shot model with training size 500, 1000 and 2000, respectively. Looking at results for individual domains, the highest improvement for BoE models are seen for transportation and travel. This can be explained by these domains having a high frequency of $timex$ and $location$ slots. But BoE models show a regression in the shopping domain, and a reason could be the low frequency of expert slots. In contrast, ZAT consistently outperforms non-adapted models (CRF and LSTM) by a large margin. This is because ZAT can benefit from other reusable slots than $timex$ and $location$ . Though not as popular as $5.37$0 and $5.37$1 , slots such as $5.37$2 , $5.37$3 , $5.37$4 , and $5.37$5 appear across many domains.",
"We plot the averaged performances on varying amounts of training data for each target domain in Figure 3 . Note that the improvements are even higher for the experiments with smaller training data. In particular, ZAT shows an improvement of $14.67$ in absolute F1-score over CRF when training with 500 instances. ZAT achieves an F1-score of 76.04% with only 500 training instances, while even with 2000 training instances the CRF model achieves an F1-score of only 75%. Thus the ZAT model achieves better F1-score with only one-fourth the training data.",
"Table 3 shows the performances of CT and ZAT when no target domain data is available. Both models are able to achieve reasonable zero-shot performance for most domains, and ZAT shows an average improvement of $5.07$ over CT."
],
[
"In Table 4 , we ablate our full model by removing the CRF layer ( $-CRF$ ) and character-level word embeddings ( $-CHAR$ ). Without CRF, the model suffers a loss of 1%-1.8% points. The character-level word embeddings are also important: without this, the performance is down by 0.5%-2.7%. We study the impact of fine-tuning the pre-trained word embeddings ( $+WEFT$ ). When there is no target domain data available, fine-tuning hurts performance. But, with a moderate amount of target domain data, fine-tuning improves performance."
],
[
"To better understand our model, in Figure 7 , we visualize the attention weights for the input sentence \"Can I wear jeans to a casual dinner?\" with different slots: (a) category, (b) item, and (c) time. From (a) and (b), it is clear that the attention is concentrated on the relevant words of the input and slot description. In contrast, there is no salient attention when the slot is not present in the input sentence.",
"To analyze the impact of context, we compute the error rate with respect to span start position in the input sentence. Figure 4 shows that error rate tends to degrade for span start positions further from the beginning. This highlights opportunities to reduce a significant amount of errors by considering previous context.",
"As shown in Figure 5 , our model makes more errors for longer spans. This can be improved by consulting spans detected by parsers or other span-based models such as coreference resolution systems BIBREF13 .",
"Finally, we compute the percentage of POS tags that are tied to labeling errors. Figure 6 shows POS tags which occurs more than 10,000 times and contributes to more than 10% of errors. It is not surprising that there are many errors for ADJ, ADV and NOUN. Our system suffers in handling conjunctive structures, for instance “Help me find my $[black\\text{ }and\\text{ }tan]_{described\\_as}$ $[jacket]_{item}$ ”, and parsing information can be helpful at enforcing structural consistencies. The NUM category is associated with a variety of concepts and diverse surface forms. Thus it is a probably good idea to have an expert model focusing on the NUM category."
],
[
"A number of deep learning approaches have been applied to the problem of language understanding in recent years BIBREF14 , BIBREF15 , BIBREF16 . For a thorough overview of deep learning methods in conversational language understanding, we refer the readers to BIBREF17 .",
"As the digital assistants increase in sophistication, an increasing number of slot models have to be trained, making scalability of these models a concern. Researchers have explored several directions for data efficient training of new models. One of the directions has been multi-task learning, where a joint model across multiple tasks and domains might be learned BIBREF18 , BIBREF19 , BIBREF20 . As a recent example, BIBREF21 presented an approach for multi-task learning across the tasks of language understanding and dialog state tracking. BIBREF22 presented a multi-task learning approach for language understanding that consists of training a shared representation over multiple domains, with additional fine-tuning applied for new target domains by replacing the affine transform and softmax layers.",
"Another direction has been domain adaptation and transfer learning methods. Early focus was on data driven adaptation techniques where data from multiple source domains was combined BIBREF1 . Such data-driven approaches offer model improvements at the cost of increased training time. More recently, model-driven approaches have shown success BIBREF3 , BIBREF4 . These approaches follow the strategy of first training expert models on the source data, and then using the output of these models when training new target models. A benefit of these approaches over data-driven adaptation techniques is the improved training time that scales well as the number of source domains increase.",
"However, both these transfer learning approaches require concept alignment to map the new labels to existing ones, and cannot generalize to unseen labels. This has led researchers to investigate zero-shot learning techniques, where a model is learned against label representations as opposed to a fixed set of labels.",
"Several researchers have explored zero-shot models for domain and intent classification. BIBREF23 described a zero-shot model for domain classification of input utterances by using query click logs to learn domain label representations. BIBREF24 also learn a zero-shot model for domain classification. BIBREF25 learn a zero-shot model for intent classification using a DSSM style model for learning semantic representations for intents.",
"Slot tagging using zero-shot models has also been explored. BIBREF26 presented a zero-shot approach for slot tagging based on a knowledge base and word representations learned from unlabeled data. BIBREF5 also applied zero-shot learning to slot-filling by implicitly linking slot representations across domains by using the label descriptions of the slots. Our method is similar to their approach, but we use an additional attention layer to produce the slot-aware representations of input words, leading to better performance as demonstrated by our empirical results.",
"More recently, zero-shot learning has also been applied to other tasks. For example, BIBREF27 apply zero-shot learning for training language understanding models for multiple languages and show good results. BIBREF28 presented a zero-shot model for question generation from knowledge graphs, and BIBREF29 presented a model for zero-shot transfer learning for event extraction."
],
[
"In this paper, we introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes the slot description for transferring reusable concepts across domains to avoid some drawbacks of prior approaches such as increased training time and suboptimal concept alignments. Experiment results show that our model performs significantly better than state-of-the-art systems by a large margin of 7.24% in absolute F1-score when training with 2000 instances per domain, and achieves an even higher improvement of 14.57% when only 500 training instances are used. We provide extensive analysis of the results to shed light on future work. We plan to extend our model to consider more context and utilize exogenous resources like parsing information."
]
],
"section_name": [
"Introduction",
"Adaptive Transfer",
"Data",
"Baseline Systems",
"Domain Adaptation using Zero-Shot Model",
"Comparative Results",
"Model Variants",
"Analysis",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4db87d26b04d4039f1d71f0f672ad57af3dc7dac",
"95afb1b8256d1c8e65334ccae164618e7e3e0dd1",
"e8452b2ab9f77c69d379725d98909157b1aee5b3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: F1-scores obtained by each of the six models for the 10 domains, with the highest score in each row marked as bold. Table (a), (b) and (c) report the results for 2000, 1000 and 500 training instances, respectively. The average improvement is computed over the CRF model, with the ones marked ∗ being statistically significant with p-value < 0.05."
],
"extractive_spans": [],
"free_form_answer": "+7.24 for train size of 2000, +11.03 for train size of 1000, and +14.67 for train size of 500",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: F1-scores obtained by each of the six models for the 10 domains, with the highest score in each row marked as bold. Table (a), (b) and (c) report the results for 2000, 1000 and 500 training instances, respectively. The average improvement is computed over the CRF model, with the ones marked ∗ being statistically significant with p-value < 0.05."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: F1-scores with zero training instances for target domain."
],
"extractive_spans": [],
"free_form_answer": "Average F1 improvement of 5.07",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: F1-scores with zero training instances for target domain."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: F1-scores obtained by each of the six models for the 10 domains, with the highest score in each row marked as bold. Table (a), (b) and (c) report the results for 2000, 1000 and 500 training instances, respectively. The average improvement is computed over the CRF model, with the ones marked ∗ being statistically significant with p-value < 0.05.",
"FLOAT SELECTED: Table 3: F1-scores with zero training instances for target domain."
],
"extractive_spans": [],
"free_form_answer": "+7.24, +11.03, +14.67, +5.07 for 2000, 1000, 500 and zero training instances respectively",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: F1-scores obtained by each of the six models for the 10 domains, with the highest score in each row marked as bold. Table (a), (b) and (c) report the results for 2000, 1000 and 500 training instances, respectively. The average improvement is computed over the CRF model, with the ones marked ∗ being statistically significant with p-value < 0.05.",
"FLOAT SELECTED: Table 3: F1-scores with zero training instances for target domain."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"somewhat"
],
"question": [
"How large the improvement margin is?"
],
"question_id": [
"d20fd6330cb9d03734e2632166d6c8f780359a94"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: (a) Traditional slot tagging approaches with the BIO representation. (b) For each slot, zero-shot models independently detect spans that contain values for the slot. Detected spans are then merged to produce a final prediction.",
"Figure 2: Network architecture for the Zero-Shot Adaptive Transfer model.",
"Table 1: List of domains we experimented with. 80% of the data is sampled for building the training sets, with 10% each for dev and test sets.",
"Table 2: F1-scores obtained by each of the six models for the 10 domains, with the highest score in each row marked as bold. Table (a), (b) and (c) report the results for 2000, 1000 and 500 training instances, respectively. The average improvement is computed over the CRF model, with the ones marked ∗ being statistically significant with p-value < 0.05.",
"Figure 5: Error rate with respect to span length",
"Figure 3: Performance curves with varying amounts of training data for target domain.",
"Table 3: F1-scores with zero training instances for target domain.",
"Table 4: Model variants.",
"Figure 6: Error rate with respect to POS tag",
"Figure 4: Error rate with respect to span position",
"Figure 7: Visualization of attention weights for the input sentence ”Can I wear jeans to a casual dinner?” with different slots: (a) category, (b) item, and (c) time."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure5-1.png",
"6-Figure3-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Figure6-1.png",
"6-Figure4-1.png",
"7-Figure7-1.png"
]
} | [
"How large the improvement margin is?"
] | [
[
"1808.10059-6-Table3-1.png",
"1808.10059-5-Table2-1.png"
]
] | [
"+7.24, +11.03, +14.67, +5.07 for 2000, 1000, 500 and zero training instances respectively"
] | 109 |
1911.12848 | Sentiment Analysis On Indian Indigenous Languages: A Review On Multilingual Opinion Mining | An increase in the use of smartphones has led to increased use of the internet and social media platforms. The most commonly used social media platforms are Twitter, Facebook, WhatsApp and Instagram. People share their personal experiences, reviews and feedback on the web. The information available on the web is unstructured and enormous, so there is huge scope for research on understanding the sentiment of this data. Sentiment Analysis (SA) can be carried out on the reviews, feedback and discussions available on the web. Extensive research has been carried out on SA in the English language, but the data on the web also contains many other languages that should be analyzed. This paper aims to analyze, review and discuss the approaches, algorithms and challenges faced by researchers while carrying out SA on indigenous languages. | {
"paragraphs": [
[
"SA is the process of extracting the opinions of people and use it to understand the people’s attitude, reactions expressed on the web regarding the various issues in the world and is also known as opinion mining. Nowadays with the increasing use of the internet a lot of information is available on the web which is about the different products, movies, books, technologies etc. People express their views, opinions etc on the different products,services,books etc on the web. For e.g. customer has bought a smart phone, as soon as the customer starts using the phone, he/she gives the feedback about whether they liked the phone, which features they liked or disliked. This type of reviews or feedback from the customers or users have become a boon to the industry. These views can help the industry or a company to improve their services i.e. if the reviews are negative then the aspects can be improved and if the reviews are positive, then that aspect can be kept in mind while creating a newer version of the service.",
"According to the authors Medagoda et al. BIBREF0 there has being a continuous research going on in the English language but the research carried out in the indigenous languages is less. Also, the researches in indigenous languages follow the techniques used for the English language but this has one disadvantage which is, techniques have properties which are specific to a language. Hence It is really important to understand and analyze Indigenous language data because it can give meaningful insights to the companies. For example, India and China have world's largest population and are rich in diverse languages, analysing these indigenous language will be useful to companies because they have large share of users in India and China. In the current study, the types of languages i.e. indigenous languages and code mix languages are discussed prior to the approaches, methodologies used by the researchers and challenges faced by them."
],
[
"Indigenous languages are the languages that are native to a region or spoken by a group of people in a particular state. It is not necessarily a national language. For e.g. Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil are the indigenous languages."
],
[
"Code-mixing is mixing two or more languages while communicating in person or over the web. Code-mixing is basically observed in the multilingual speakers. Code-mixed languages are a challenge to the sentiment analysis problem. A classic example of the code-mix language is Hinglish which is combination of English and Hindi words present in a sentence. Hinglish is widely used language in India to communicate over the web. For e.g. movie review in Hinglish is “yeh movie kitni best hai.. Awesome.” In this sentence movie, best and awesome are English words but the remaining words are Hindi words, so the language identification becomes the first step in code mix languages followed by the SA which indirectly increases the overhead for the researchers and becomes time consuming process.",
"The remaining paper is structured as follows. Section II explains about the the process carried out in SA. Section III describes about SA levels and the different work done in each level. Section IV is about the current trending techniques in Natural Language Processing(NLP). Section V describes about the data-sets used by the researchers. Section VI explains about the SA techniques and the work done by the researchers using the different techniques. Section VII is about the challenges and limitations faced by the researches. Section VIII is the discussions and analysis about the papers been studied. Section IX is conclusion and future scope."
],
[
"The process of SA is carried out in 6 major steps which are data extraction, annotation, pre-processing, feature extraction, modelling, evaluation. Figure FIGREF3 shows the steps in the SA task and the explanation of each step is as follows."
],
[
"The first step of any SA task is data extraction. The data can be extracted either manually or automatically. Different web scraping algorithms help in automatically extracting the data from the web. One of the popular web scraping technique is Text pattern matching, which extracts only the information which matches the search criteria mentioned in the algorithm. Also, different Application Programming Interface (API) offered by social media platforms like Twitter, YouTube, Facebook etc. help in the data extraction process."
],
[
"Once the data extraction step is completed it is important to label the data. Annotation is process to add comments, observations, notes, questions related to the data at a particular point in the document. Labeling is a part of annotation and is used to classify the data as positive, negative or neutral. Labelling can be carried out manually or automatically. Majority of the researchers have done manual labelling on the dataset BIBREF1, BIBREF2. Data collected from the web is raw and unstructured. It is essential for the researchers to carry out the pre-processing step which is as follows."
],
[
"Pre-processing is the process of converting the raw and unstructured data into the understandable and structured form. There are 3 major steps which are involved in pre-processing which are data cleaning, data transformation, data reduction. Each step is explained as follows."
],
[
"In this step the missing values and the noisy data is handled. Missing values in the data can be handled by filling the missing values manually or by finding the attribute mean or probability values. Noisy data can be due to data collection , data entry errors.Noisy data can be handled by using clustering algorithm. In clustering algorithm similar data is grouped together to form one cluster and the noisy data which is usually an outlier lies outside the clusters."
],
[
"Data is sometimes not in the suitable form for mining process therefore some type of transformation is required. Normalization, attribute derivation are ways of data transformation. Normalization is a process of scaling the data values in a specific scale ( 0 to 1 , -1 to 1). Attribute derivation is process of extracting the data from multiple attributes and creating new attribute. For e.g. age can be a derived attribute from the date of birth of customer."
],
[
"Data which is available on the web is huge. In order to process the data lot of efforts and time is required. There are some attributes in the data which are not that important and can be removed. Data reduction process can be carried out using attribute selection, numerosity reduction technique. Attribute selection is a process of selecting only the important and relevant attributes from the dataset and discarding the remaining attributes. Numerosity reduction stores the model of the data instead of the whole data. There are different pre-processing techniques used by the researchers and the most common ones are tokenization, stop-word removal, Parts Of Speech Tagging (POS), stemming and lemmatization. Tokenization splits the data into individual words known as tokens BIBREF3. Tokenization can be explained using Figure FIGREF10 which is as follows.",
"Stop words are frequent words used in the sentences. Removal of stop words will not effect the sentiment polarity. Common stop words for English language are “is”, “was”, “there”, “that”, “they”,” he”,” she” etc. POS tagging is the technique where the words are tagged based on the parts of speech they are present in . For e.g. “ She is beautiful” for this sentence the POS Tagger will tag words as follows ‘She’- pronoun , ‘is’- verb , ‘beautiful’- adjective BIBREF3.",
"Stemming is the process to reduced words to the root form by removing the suffix and the prefix present in the word. For e.g. “occurring” is the word the stem form of it is “occur” because the suffix “ing” is removed from it. One disadvantage of stemming is that sometimes the words do not have any dictionary meaning.",
"Lemmitization solves the problem of stemming. It first tries to find the root form of the word and then only the prefix and suffix of the words are removed. For e.g “leaves” is the word. The stem form of it is “leav” and the lemmitized form of it is “leaf”.",
"Different feature extraction techniques can be applied on this pre-processed data which is explained in detail as follow."
],
[
"Text vectorization is the process of converting the textual attributes into the numeric format. Machine learning algorithms usually work with the numeric data and hence there is a need to convert the textual data into the numeric or vector format. The most common vectorization techniques are bag of words, Term Frequency and Inverse Term frequency (TF-IDF) and count vectorizer. Bag-of-Words (BOW) is the most common vectorization technique. In this technique the pre-defined list of words i.e BOW is maintained and the words in the BOW are compared to the sentences. If the word in the sentence is present in the BOW list, it is marked as 1 else it is marked as 0. The vector created is of the size of the BOW. Figure FIGREF12 explains the BOW in detail.",
"TF-IDF is the very common feature extraction technique. It is a statistical measure to find how important the word is in the document. Term Frequency (TF) calculates the occurance of the word in the single document by the total number of words in the document, where as inverse term frequency (IDF) tries to find how important the word is in all documents BIBREF4.",
"Statistically TF and IDF are represented in equations DISPLAY_FORM13 and DISPLAY_FORM14 respectively.",
"Count Vectorization is a vectorization technique in which a document matrix is maintained. The document matrix contains the words present in each document with the frequency of occurrence of that word in the document. Figure FIGREF15 explains the count vectorization with an example."
],
[
"Classification of the data can be done by 3 approaches which are machine learning approach, lexicon based approach, rule based approaches."
],
[
"These are the approaches in which different supervised, unsupervised and semi-supervised learning algorithms are applied on the dataset to carry out the analysis and predictions."
],
[
"In this approach the dictionary or corpora is used to carry out the SA task. In this approach the dictionary or the corpus words have polarity values assigned to each one of them. The words in the dataset are searched in the lexicon and if the word match is found the polarity of the word is assigned. For e.g the task is to find out the list of computer programming languages in the sentences which can be done using lexicon based approach by maintaining the predefined list of the programming language as a dictionary and then searching the words from the sentences in it."
],
[
"It is the traditional approach in which the set of rules are defined to carry out the SA task. For e.g the task is to find out the list of computer programming languages in the sentences. The rule developers scan the sentences and try to define rules which can perfectly predict the languages. Rule defined by developers is to extract all the capital words in the sentence except the first capital word. Test sentence is “Language above is Python”. The rule based approach will correctly identify the language but it will be failed when the sentence is “Java is programming language”.",
"Figure FIGREF20 represents the different classification techniques."
],
[
"Once the model is validated and the results are available the different models are evaluated using different performance metrics. The most common performance evaluation metrics are accuracy , precision , recall , F1-score.",
"Accuracy:",
"It is the number of correct predictions over the total number of the instances of data BIBREF4.",
"Precision:",
"It is the number of the correct positive results over the total number of positive predicted results BIBREF4.",
"Recall:",
"It is number of correct predicted results over the total number of actual positive results BIBREF4.",
"F1 score:",
"It is the weighed average of precision and recall BIBREF4.",
"Statistically accuracy, precision, recall and F1-score are represented in equations DISPLAY_FORM26, DISPLAY_FORM27, DISPLAY_FORM28, DISPLAY_FORM29 respectively.",
"where , TP = Truly predicted positives, TN = Truly predicted negatives , FP = Falsely predicted positives , FN = Falsely predicted negatives."
],
[
"SA can be carried out at 3 levels. document level, sentence level and aspect level."
],
[
"In this process the SA is carried out on the document or paragraph as a whole. Whenever a document is about a single subject it is best to carry out document level SA. Examples of document level SA datasets are speeches of the word leaders, movie review, mobile review etc.",
"SentiWordNet(SWN) is a opinion based lexicon derived from the WordNets. WordNets are the lexical database which consist of words with short definition and example. SWN consist of dictionary words and the numeric positive and negative sentiment score of each word. WordNets and SWNs are researchers common choice when carrying out SA on document level. Pundlik et al. BIBREF5 were working on multi-domain Hindi language dataset. The architecture implemented in the paper BIBREF5 contained two steps. Domain classification which was the first step was performed using ontology based approach. Sentiment classification being the second step was performed using HNSW and Language Model (LM) Classifier. There was a comparative study done on the results by the HNSW and HNSW + LM Classifiers. The combination of HNSW and LM Classifier gave better classification results as compared to HNSW BIBREF5.",
"The work by Yadav et al. BIBREF6 showed that SA for the mix-Hindi language can be performed using three approaches. The first approach was to perform classification based on neural network on the predefined words. Second approach used IIT Bombay HNSW. Third approach performed classification using neural network on the predefined Hindi sentences. The approaches in BIBREF6 are explained in detail as follows. The first approach maintained the positive and negative word list. The mix-Hindi words were converted into pure Hindi words and were searched in the positive and negative list which was created manually. If the word was found in the positive word list the positive word count was incremented and if the negative word was found the negative word counter was incremented. In second approach instead of the positive and negative word list the HNSW was used remaining all the steps were same as in the first approach. In third approach seven features were created and applied on the sentences. The features are as follows, to find the frequency of the word, adjective, noun, verb, adverb, total positive polarity and negative polarity of the sentence. These features were send to the neural network for testing and the polarity of the word was detected. After the comparison of all approaches it was found that the second approach had the best accuracy which was 71.5%.",
"Ansari et al. BIBREF7 introduced an architecture for two code mix languages Hindi and Marathi. The architecture included language identification, feature generation and sentiment classification as major steps. Hindi and English WordNet’s and SWNs were used as there was no SWN for Marathi. The Marathi words were first translated into English and the sentiment score of the English words were found and assigned to the words. Also, classification algorithms like Random Forest, Naïve Bayes, Support Vector Machine (SVM) were used for finding the polarity in the final step. Slang identification and emoticons were also crucial steps in the study. Slang are a group of words which are used informally and in a particular language. Emoticons are the representation of different facial expressions. SVM performed the best among all the algorithms with accuracy of 90% and 70% for Marathi and Hindi language.",
"In the paper, Jha et al. BIBREF8 explains that there is a lot of research done in the English language for SA, but little for the Hindi language. The system developed by the authors carried out the SA in Hindi language using two approaches. In first approach, supervised machine learning algorithm Naïve Bayes was used for document classification and in the second approach, the parts of speech (POS) tagging was done using TnT POS Tagger and using the rule-based approach the classification of opinionated words was completed. 200 positive and 200 negative movie review documents are web scraping for testing the system. Accuracy of 80% was achieved by the system."
],
[
"Sentence level SA identifies the opinions on the sentence and classify the sentence as positive, negative or neutral. There are two types of sentences, subjective and objective sentences which are required to be identified while performing sentence level SA. Subjective sentences carry opinions, expressions and emotions in them. Objective sentences are the factual information. Sentence level SA can be carried out only on the subjective sentences hence it is important to first filter out objective sentences.",
"SWN is a most common lexicon-based approach used by the researchers. Haithem et al. BIBREF9 developed the Irish SWN whose accuracy was 6% greater than the accuracy obtained by transliteration of the Irish Tweets into English language. The lexicon was manually created. The accuracy difference between the systems was because of the translation carried out into the English language BIBREF9. Naidu et al. BIBREF10 carried out the SA on Telugu e-newspapers. Their system was divided in two steps. First step was subjectivity classification. Second step was sentiment classification. In the first step the sentences were divided as subjective and objective sentences. In the second step only, the subjective sentences were further classified as positive, negative and neutral. Both the steps were performed using the SWN which gave the accuracy of 74% and 81% BIBREF10.",
"Nanda et al. BIBREF11 used the SWN to automatically annotate the movie review dataset. Machine learning algorithms Random Forest and SVM were used to carry out the sentiment classification. Random Forest performed better than SVM giving the accuracy of 91%. Performance metrics used to evaluate the algorithms were accuracy, precision, recall, F1-score BIBREF11.",
"Pandey et al. BIBREF12 defined a framework to carry out the SA task on the Hindi movie reviews. BIBREF12 observed that the lower accuracy was obtained by using SWN as a classification technique and hence suggested using synset replacement algorithm along with the SWN. Synset replacement algorithms groups the synonymous words having same concepts together. It helped in increasing the accuracy of the system because if the word was not present in the Hindi SWN then it found the closest word and assigned the score of that word BIBREF12. In the study, Bhargava et al. BIBREF13 completed the SA task on the FIRE 2015 dataset. The dataset consisted of code-mixed sentences in English along with 4 Indian languages (Hindi, Bengali, Tamil, Telugu). The architecture consisted of 2 main steps Language Identification and Sentiment Classification. Punctuations, hashtags were identified and handled by the CMU Ark tagger. Machine learning techniques like logistic regression and SVM were used for language identification. SWN’s of each language were used for sentiment classification. The results of the implemented system were compared with the previous language translation technique and 8% better precision was observed BIBREF13.",
"Kaur, Mangat and Krail BIBREF14 carried out their SA task on Hinglish language, which is code mix language highly popular in India. It is mainly used for the social media communication. The authors [10] had created a Hinglish corpus which contained movie reviews domain specific Hindi words. Stop-word removal, tokenization were the pre-processing techniques used in the system, along with TF-IDF as the vectorization technique. Classification algorithms like SVM and Naïve Bayes where used to carry out the classification task. As a future work, the authors in BIBREF14 are trying to find the best feature and classifier combination.",
"SVM is the machine learning algorithm which is among the top choice by researchers nowadays. The researchers have even compared the results of the different deep learning models with SVM Sun et al. BIBREF15. In BIBREF15 SA task performed on Tibetan microblog. Word2vec was the vectorization technique used. It converts the words into the numeric vector. After the vectorization step the classification of the data was carried out by the different machine learning and deep learning algorithms like SVM, Convolution Neural Network (CNN), Long short-term memory (LSTM), CNN-LSTM. CNN is a type of neural network having 4 layers. Input layer, convolution layer, global max pooling layer, output layer. Convolutional layer is the main layer because feature extraction is done in this layer. LSTM is the variant of the RNN (Recurrent Neural Network) which are capable of learning long term dependencies and detecting patterns in the data. The comparative study of different algorithm displays CNN-LSTM model as the best model with the accuracy of 86.21% BIBREF15.",
"Joshi et al. BIBREF16 carried out SA on the Gujarati tweets. Stopword removal, stemming were the pre-processing techniques used in the implemented model. Feature extraction technique Parts of Speech (POS) tagging and the classification algorithm SVM was used in the system. SVM performed very well and gave the accuracy of 92%. Sharma et al. BIBREF17 tried to predict the Indian election results by extracting the Hindi tweets for political domain. The tweets were mainly for 5 major political parties. Three approaches where implemented to predict the winner in the election. First approach was dictionary based in which n-gram was used as a pre-processing technique and TF-IDF was used as a vectorization technique. SWN was used to classify the data and assign the polarity score to the words. Naïve Bayes algorithm and SVM were the remaining two approaches which were used. SVM and Naïve Bayes predicted party BJP (Bhartiya Janta Party) as the winner. SVM had the accuracy of 78.4% which was highest among the three implemented approaches.",
"The authors, Phani et al. BIBREF18 carried out SA in three different languages Hindi, Tamil and Bengali. Feature extraction techniques n-grams and surface features were explored in detail because they were language independent, simple and robust. 12 surface features where considered in the study in which some of them were number of the words in tweet, number of hashtags in the tweet, number of characters in the tweet etc. Comparative study was carried out to find out which feature extraction and sentiment classifier algorithm worked best together. The classifiers like Multinomial Naïve Bayes, Logical Regression (LR), Decision Trees, Random Forest, SVM SVC and SVM Linear SVC were applied on the dataset. Majority of the languages worked best with the word unigram and LR algorithm. Highest accuracy of 81.57% was for Hindi BIBREF18. Research by Sahu et al. BIBREF19 was carried out on movie reviews in Odia language. Naïve Bayes, Logistic Regression, SVM were used for the purpose of classification. Comparison of the results of different algorithms was done using performance metrics like accuracy, precision and recall. Logistic Regression performed the best with the accuracy of 88% followed by Naïve Bayes with accuracy of 81% and SVM with the accuracy of 60% BIBREF19.",
"In paper by, Guthier et al. BIBREF20 proposed the language independent approach for SA. An emoticon dictionary was created and score were assigned to the emoticons. When the tweet contained the combination of Hashtags and emoticon, The hashtags were also added in the dictionary. A graph-based approach was implemented in the study. The graph-based approach worked on the principle, if multiple hashtags were present in the sentence then all the hashtags would have the same sentiment score. Also, all the hashtags present in the same sentence could be linked with each other. The work was tested on 5 different languages and the accuracy obtained was above 75%. Average accuracy of the model was 79.8%. The approach worked fairly with the single word hashtags and the hashtags which formed the sentences and accuracy for them were 98.3% and 84.5% respectively.",
"Kaur et al. BIBREF21 worked on the Hinglish language dataset. YouTube comments of two popular cookery channels were extracted and analysis was carried on them. Pre-processing techniques like stop words removal, null values removal, spell errors removal, tokenization and stemming were performed. DBSCAN which is the unsupervised learning clustering algorithm was used and 7 clusters were formed for the entire dataset. Dataset was manually annotated with the labels of 7 classes. 8 machine learning algorithms were used to perform the sentiment classification. Logistic regression along with term frequency vectorization outperforms the other classification techniques with the accuracy of 74.01% for one dataset and 75.37% for the other dataset. Statistical testing was also being carried out to confirm the accuracy of the classifiers.",
"Both document level and sentence level SA extract the sentiments for the given text but the feature for which the sentiment is expressed cannot be found out. This shortcoming is fulfilled by aspect level SA."
],
[
"Aspect level SA is carried out in two steps. First step is to find the features or the components in the text and the second step is to find polarity of sentiments attached to each feature. For e.g. Mobile reviews are given in the series of the tweets. The companies first find out which part or feature of the mobile the users are talking about and then find out the emotions related to that feature.",
"In the paper by Ekbal et al. BIBREF22 the aspect level SA was carried out on the product reviews. Dataset was obtained by web scrapping on different websites. Multi-domain product reviews obtained were analyzed in two steps process, first step was aspect extraction i.e. the aspects(features) in the review were extracted using the Condition Random Field Algorithm. In the second step SVM was used to carry out the SA task. Performance evaluation metrics like F-measure and accuracy were used. SVM gave the accuracy of 54.05% for sentiment classification.",
"The proposed work by Ray et al. BIBREF23 is SA of twitter data. POS tagging was used as feature extraction technique. Word embedding was used as the vectorization technique. Word embedding is the method where the words of sentences are converted into vectors of real numbers. Aspect were not directly labelled instead aspects were tagged to predefined list of categories. Classification of the data was done using three approaches CNN, Rule based approach, CNN + Rule based approach. The hybrid model of CNN + Rule based approach gave the accuracy of 87%. Table 1 is the representation of the work done by different researchers in indigenous language."
],
[
"The traditional machine learning and lexicon-based approaches did not give the expected results. With the emergence of the deep learning techniques like CNN, RNN, LSTM the performance improvements in the results was observed. The main problem of the deep learning algorithms is that they have high complexity and computational cost. BERT, ELMo are few pre-trained classifiers which solved the problems of the deep learning models and also outperformed them. This section identifies the different papers in which deep learning models and advanced models like BERT, ELMo etc. are used.",
"In the paper by, Hoang et al. BIBREF27 aspect-based sentiment analysis on the SemEval-2016 - Task 5 was performed. There were three models implemented in the paper, the aspect classification model which identified whether the aspect was related or not to the text. Sentiment Classifier which classified the text into the three sentiment classes positive, negative, neutral. Both of the classifiers follow the structure of the sentence pair classifier which takes two inputs, the classifier token and the separation token which were added to the beginning and end of the sentences respectively. Final classifier implemented was the combined model which identified the sentiments of the text as well as the aspect of the text. The sentence pair classifier is the part of the Bidirectional encoder representation from transformer (BERT) model. BERT is a bidirectional and unsupervised language representation model. It considers the context of a word from both left to right and right to left simultaneously and provide better features compared to the traditional models. The performance of the combined model was better than the traditional approaches and was tested on 18 different datasets.",
"Khatua et al. BIBREF24 performed SA on the twitter to understand user’s response on the supreme court verdict of the decimalization of the LGBT. The authors had extracted 0.58 million tweets and used different machine learning and deep learning classifiers like Naïve Bayes, SVM-R, SVM-P, BLM, multi layer perceptron (MLP), Long short-term memory (LSTM), Bi- LSTM and CNN. Bi-LSTM are special type of LSTM in which the information is available from forward to backward and backward to forward that is in both directions. Bi – LSTM outperforms with the accuracy of 90%.",
"In this study, Rani et al. BIBREF26 have performed SA on the Hindi movie reviews collected from e-newspapers and different online websites. The classification technique used in the paper was CNN. CNN gave the accuracy of 95% which was much higher than the other traditional algorithms.",
"In the paper, Godino et al. BIBREF25 carried out SA on Spanish tweets using three different classifier models which are feature classifier, FastText classifier, BERT classifier. Feature classifier extracted the important features from the tweets such as the length of the tweets, number of hashtags etc. and applied these features to the traditional machine learning algorithms to carry out the sentiment classification. The traditional algorithms used where: Logistic Regression, Multinomial Naive Bayes, Decision Tree, Support Vector Machines, Random Forest, Extra Trees, AdaBoost and Gradient Boost. FastText Classifier was developed by Facebook AI research and it internally works on the neural network architecture. BERT Classifier was also applied on the tweets. The output of the three classifiers were combined using the averaging assembling. The model was evaluated using the F1 score. F1 score of 45% and 46% was obtained on the train and test data of the implemented model."
],
[
"With the increasing use of the web there is a lot of User Generated Content (UGC) available on different websites. Lot of research is carried out for the English language. Work done for the indigenous languages is less as compared to the English language. By studying different papers on SA, it can be found out that researchers have started working on the indigenous languages. Data for the indigenous languages is available across the web but is mainly collected from social media platforms like Twitter, Facebook and YouTube.",
"Some researchers have extracted their data from Twitter BIBREF9, BIBREF16, BIBREF17, BIBREF20, BIBREF23, BIBREF24, BIBREF25, while some have opted for extracting the data manually or by performing web scrapping on different websites like Facebook, microblogs, e-commerce websites, YouTube etc. BIBREF7, BIBREF8, BIBREF11, BIBREF12, BIBREF14, BIBREF22. Authors in BIBREF13 have accessed the FIRE 2015 dataset. The dataset has 792 utterances and has 8 different languages other than English. Researchers in BIBREF19 collected 3000 positive and 3000 negative Odia movie reviews. Authors in BIBREF10 collected 1400 Telugu sentences from e-Newspapers from data 1st December 2016 to 31st December 2016.",
"The study in BIBREF5 contained the speeches of different leaders who spoke about different domain topics like festivals, environment, society etc. The dataset was manually created. BIBREF15 performed SA on the Tibetan language and hence collected the data from the Tibetan micro-blog. In BIBREF6 112 Hindi text file pertaining to different domains have been collected for analysis. Authors in BIBREF18 have used the SAIL Dataset which consist of training and test data for three different languages. Approximately 1000 tweets for each language was present as a training data. BIBREF21 extracted the data from the YouTube comments. The data extracted was related to the cookery website from 2 channels. Total of 9800 comments were collected.",
"Major observations made in this paper are listed below. Not many researches have carried out SA on the large dataset, Majority of the research work is done on Facebook, Twitter, YouTube data, Extensive research is mainly carried out only on 2 domains which are movie reviews and politics. Very few researches are done on the cookery websites, medical data, multi-domain data. Data is not extracted from the popular social media platforms like Instagram, LinkedIn in spite of Instagram and LinkedIn being among the top websites used by the people."
],
[
"Sentiment Analysis is the Natural language processing task. Machine Learning, Deep learning and Lexicon based approach are mainly used to classify the data based on the sentiments. Rule based approaches which were once used for the SA task are now used to carry out the pre-processing and feature extraction on the data.",
"Machine learning based approaches split the data into the training and test set. The training set trains the different machine learning algorithms so that they can understand the patterns present in the data and helps in finding the association between the different attributes in the data which can further help for future predictions. After the machine learning algorithms are trained the test set helps the algorithm to check the accuracy of the model. Accuracy helps us to understand how much the algorithm was able to learn from the training set and perform on the unknown data (test set). In the lexicon-based approach the words present in the dataset are searched in the SWN’s. Lexicon based approach is considered as an unsupervised learning technique because it does not require any prior knowledge about the data. Rule Based approaches are approaches which have a set of rules which are to be applied to the dataset to carry out the SA task.",
"In various studies machine learning algorithms were used to carry out the SA task BIBREF7, BIBREF8, BIBREF11, BIBREF16, BIBREF19, BIBREF21, BIBREF22. It was observed that SVM performed very well for the sentiment classification followed by LR and Naïve Bayes algorithm. Deep learning algorithms like CNN, LSTM, Bi-LSTM were applied on the datasets to find out the performance improvement over the traditional machine learning algorithms. From the final analysis it was concluded that the CNN-LSTM and Bi-LSTM performed the best as compared to the other algorithms BIBREF15, BIBREF23, BIBREF24, BIBREF28.",
"In some paper’s lexicon-based approach was used to carry out the classification task BIBREF9, BIBREF10, BIBREF12, BIBREF14, BIBREF18, BIBREF20. SWN’s of different languages were created and improved to carry out the task effectively. Some studies suggested use of both Lexicon and Machine Learning approaches to carry out SA task. Also, suggestions to compare the algorithms and find the best algorithm was given by BIBREF5, BIBREF6, BIBREF17. In BIBREF13 Machine learning algorithms LR and SVM were used for Language detection and SWN was used for classification of sentiments. SVM outperformed LR in Language detection.",
"With the advancement of techniques various advanced deep learning algorithms like BERT, ELMo, FastText Classifier were applied on the datasets BERT classifier performed the best BIBREF27, BIBREF25. Different rule-based approach has been used for pre-processing of the data because without the pre-processing the accuracy of the model cannot be found out correctly."
],
[
"The main challenges faced by the authors are the availability of the annotated corpora, poor quality SWNs or no SWNs, no stop word list for languages. Along with these challenges some of the individual specific challenges faced by the authors are listed below. In BIBREF5 Document having more than 1000 and less than 500 words could not be classified by the implemented model. Ontology was also manually created which can affect the accuracy of the system. In BIBREF11 the data was classified based on only 2 sentiments positive and negative. Neutral polarity was not considered which could affect the analysis to greater extent. In BIBREF13 transliteration of words caused issues. Authors in BIBREF14 faced issue in automatic detection of the topic hashtags because the context was no provided to the system. In BIBREF22 Multi word aspect terms were not detected and the accuracy of the negative class was low."
],
[
"After the detailed review of different papers, few points that can be considered for discussion further are mentioned below.",
"Small Dataset:",
"There is no substantial research carried out for the sentiment analysis in indigenous language for larger dataset. All the datasets have size in between 10k-20k. Usually the data available on the internet is of millions and millions of rows and hence the models which are not tested on the larger dataset can have accuracy problems.",
"Less Usage of Deep Learning Algorithms:",
"Majority of the research carried out for indigenous languages is performed using Machine Learning algorithms except the research carried out by the authors in BIBREF12, BIBREF24, BIBREF26, BIBREF25. Deep learning algorithms have time and again proved to be much better than the traditional machine learning techniques.",
"Non-Availability of corpus:",
"The datasets for many of the indigenous languages are not available easily. Many of the researches have to manually collected the data and hence this becomes one of the reasons for the smaller dataset.",
"Non-Availability of the SWNs and WordNet’s:",
"There are lot of Indian Languages which don’t have the WordNet’s and SWNs developed hence some of the researchers had to create the WordNet’s and SWN manually. Also, WordNet’s and SWNs are constantly in the evolving state and are not stable.",
"Code-Mix Languages:",
"There is a lot of code-mix language used especially in India on the social media. As multiple languages are mixed it takes large computation time to first perform the language identification and second perform the SA task. There are no resources like WordNet’s, POS Taggers etc. for the code-mix languages. Hence the research in such languages is limited and still evolving.",
"Less Development on the Aspect Level SA:",
"There are very few research papers available on the SA at the aspect level on the indigenous languages."
],
[
"In this review paper, the main aim is to understand the recent work that has been done in SA for indigenous languages. 23 papers are being studied to find the trends in the field of SA. 67% of the papers reviewed have used Machine learning, deep learning and advanced deep learning algorithms. Only 29% of researchers have used lexicon-based approach. SVM (Support Vector Machine) and LR (Logical Regression) performed the best among the machine learning approach. CNN performed the best in the deep learning techniques and BERT was the choice by the researchers in the advanced deep learning techniques. The code-mix languages are the new non official language which we can see on the web. There isn’t much work done on code-mix language data. Also, a lot of work is done in SA of Hindi language as compared to the other Indian languages like Gujarati, Marathi, Telugu. There is a lot of work carried out in the sentence level of sentiment analysis. There is a need for more SA work to be carried out at document level or aspect. Also, there are very few papers which have multi domain dataset. In majority of the papers, analysis is carried out on the movie reviews and the political domain data. There is a need for research on the other domains like festivals, development, education, sociology etc. Also, there is negligible research done on the data collected from Instagram and LinkedIn. BERT model can be considered for classification of code-mix languages because there has been no such research carried out so far.",
"The future work will involve the investigation on using the advance deep learning model such as Bert in mix code language classification. We have collected over 20000 reviews (combination of Marathi and English). We would be comparing the state of the art methods discussed in the current paper during our investigation and discussed the insightful."
]
],
"section_name": [
"Introduction",
"Introduction ::: Indigenous Languages",
"Introduction ::: Code Mix Languages",
"Sentiment Analysis Process",
"Sentiment Analysis Process ::: Data Extraction",
"Sentiment Analysis Process ::: Annotation",
"Sentiment Analysis Process ::: Pre-processing",
"Sentiment Analysis Process ::: Pre-processing ::: Data Cleaning",
"Sentiment Analysis Process ::: Pre-processing ::: Data Transformation",
"Sentiment Analysis Process ::: Pre-processing ::: Data Reduction",
"Sentiment Analysis Process ::: Data Vectorization",
"Sentiment Analysis Process ::: Classification Techniques",
"Sentiment Analysis Process ::: Classification Techniques ::: Machine Learning approaches",
"Sentiment Analysis Process ::: Classification Techniques ::: Lexicon based approach",
"Sentiment Analysis Process ::: Classification Techniques ::: Rule based approach",
"Sentiment Analysis Process ::: Evaluation",
"Sentiment Analysis Levels",
"Sentiment Analysis Levels ::: Document Level",
"Sentiment Analysis Levels ::: Sentence Level",
"Sentiment Analysis Levels ::: Aspect Level",
"Current Trending Techniques in NLP",
"Datasets",
"Classification Techniques",
"Challenges and Limitations",
"Discussions and Analysis",
"Conclusion and Future Scope"
]
} | {
"answers": [
{
"annotation_id": [
"4dd06b475699025134560620dce7976cd17be08d",
"9ca3a376f638d8701280eb545bfe0e30650640ae",
"fa2e149bd12d1695db3a5fc99b2a91c168b84907"
],
"answer": [
{
"evidence": [
"Indigenous languages are the languages that are native to a region or spoken by a group of people in a particular state. It is not necessarily a national language. For e.g. Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil are the indigenous languages.",
"Code-mixing is mixing two or more languages while communicating in person or over the web. Code-mixing is basically observed in the multilingual speakers. Code-mixed languages are a challenge to the sentiment analysis problem. A classic example of the code-mix language is Hinglish which is combination of English and Hindi words present in a sentence. Hinglish is widely used language in India to communicate over the web. For e.g. movie review in Hinglish is “yeh movie kitni best hai.. Awesome.” In this sentence movie, best and awesome are English words but the remaining words are Hindi words, so the language identification becomes the first step in code mix languages followed by the SA which indirectly increases the overhead for the researchers and becomes time consuming process.",
"Pandey et al. BIBREF12 defined a framework to carry out the SA task on the Hindi movie reviews. BIBREF12 observed that the lower accuracy was obtained by using SWN as a classification technique and hence suggested using synset replacement algorithm along with the SWN. Synset replacement algorithms groups the synonymous words having same concepts together. It helped in increasing the accuracy of the system because if the word was not present in the Hindi SWN then it found the closest word and assigned the score of that word BIBREF12. In the study, Bhargava et al. BIBREF13 completed the SA task on the FIRE 2015 dataset. The dataset consisted of code-mixed sentences in English along with 4 Indian languages (Hindi, Bengali, Tamil, Telugu). The architecture consisted of 2 main steps Language Identification and Sentiment Classification. Punctuations, hashtags were identified and handled by the CMU Ark tagger. Machine learning techniques like logistic regression and SVM were used for language identification. SWN’s of each language were used for sentiment classification. The results of the implemented system were compared with the previous language translation technique and 8% better precision was observed BIBREF13."
],
"extractive_spans": [],
"free_form_answer": "Irish, Tibetian, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil, Hinglish, Bengali,Arabic, French, German, Odia",
"highlighted_evidence": [
"For e.g. Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil are the indigenous languages.",
"A classic example of the code-mix language is Hinglish which is combination of English and Hindi words present in a sentence.",
" The dataset consisted of code-mixed sentences in English along with 4 Indian languages (Hindi, Bengali, Tamil, Telugu)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Ansari et al. BIBREF7 introduced an architecture for two code mix languages Hindi and Marathi. The architecture included language identification, feature generation and sentiment classification as major steps. Hindi and English WordNet’s and SWNs were used as there was no SWN for Marathi. The Marathi words were first translated into English and the sentiment score of the English words were found and assigned to the words. Also, classification algorithms like Random Forest, Naïve Bayes, Support Vector Machine (SVM) were used for finding the polarity in the final step. Slang identification and emoticons were also crucial steps in the study. Slang are a group of words which are used informally and in a particular language. Emoticons are the representation of different facial expressions. SVM performed the best among all the algorithms with accuracy of 90% and 70% for Marathi and Hindi language.",
"SWN is a most common lexicon-based approach used by the researchers. Haithem et al. BIBREF9 developed the Irish SWN whose accuracy was 6% greater than the accuracy obtained by transliteration of the Irish Tweets into English language. The lexicon was manually created. The accuracy difference between the systems was because of the translation carried out into the English language BIBREF9. Naidu et al. BIBREF10 carried out the SA on Telugu e-newspapers. Their system was divided in two steps. First step was subjectivity classification. Second step was sentiment classification. In the first step the sentences were divided as subjective and objective sentences. In the second step only, the subjective sentences were further classified as positive, negative and neutral. Both the steps were performed using the SWN which gave the accuracy of 74% and 81% BIBREF10.",
"Pandey et al. BIBREF12 defined a framework to carry out the SA task on the Hindi movie reviews. BIBREF12 observed that the lower accuracy was obtained by using SWN as a classification technique and hence suggested using synset replacement algorithm along with the SWN. Synset replacement algorithms groups the synonymous words having same concepts together. It helped in increasing the accuracy of the system because if the word was not present in the Hindi SWN then it found the closest word and assigned the score of that word BIBREF12. In the study, Bhargava et al. BIBREF13 completed the SA task on the FIRE 2015 dataset. The dataset consisted of code-mixed sentences in English along with 4 Indian languages (Hindi, Bengali, Tamil, Telugu). The architecture consisted of 2 main steps Language Identification and Sentiment Classification. Punctuations, hashtags were identified and handled by the CMU Ark tagger. Machine learning techniques like logistic regression and SVM were used for language identification. SWN’s of each language were used for sentiment classification. The results of the implemented system were compared with the previous language translation technique and 8% better precision was observed BIBREF13.",
"Kaur, Mangat and Krail BIBREF14 carried out their SA task on Hinglish language, which is code mix language highly popular in India. It is mainly used for the social media communication. The authors [10] had created a Hinglish corpus which contained movie reviews domain specific Hindi words. Stop-word removal, tokenization were the pre-processing techniques used in the system, along with TF-IDF as the vectorization technique. Classification algorithms like SVM and Naïve Bayes where used to carry out the classification task. As a future work, the authors in BIBREF14 are trying to find the best feature and classifier combination.",
"The study in BIBREF5 contained the speeches of different leaders who spoke about different domain topics like festivals, environment, society etc. The dataset was manually created. BIBREF15 performed SA on the Tibetan language and hence collected the data from the Tibetan micro-blog. In BIBREF6 112 Hindi text file pertaining to different domains have been collected for analysis. Authors in BIBREF18 have used the SAIL Dataset which consist of training and test data for three different languages. Approximately 1000 tweets for each language was present as a training data. BIBREF21 extracted the data from the YouTube comments. The data extracted was related to the cookery website from 2 channels. Total of 9800 comments were collected.",
"In the paper, Godino et al. BIBREF25 carried out SA on Spanish tweets using three different classifier models which are feature classifier, FastText classifier, BERT classifier. Feature classifier extracted the important features from the tweets such as the length of the tweets, number of hashtags etc. and applied these features to the traditional machine learning algorithms to carry out the sentiment classification. The traditional algorithms used where: Logistic Regression, Multinomial Naive Bayes, Decision Tree, Support Vector Machines, Random Forest, Extra Trees, AdaBoost and Gradient Boost. FastText Classifier was developed by Facebook AI research and it internally works on the neural network architecture. BERT Classifier was also applied on the tweets. The output of the three classifiers were combined using the averaging assembling. The model was evaluated using the F1 score. F1 score of 45% and 46% was obtained on the train and test data of the implemented model.",
"Joshi et al. BIBREF16 carried out SA on the Gujarati tweets. Stopword removal, stemming were the pre-processing techniques used in the implemented model. Feature extraction technique Parts of Speech (POS) tagging and the classification algorithm SVM was used in the system. SVM performed very well and gave the accuracy of 92%. Sharma et al. BIBREF17 tried to predict the Indian election results by extracting the Hindi tweets for political domain. The tweets were mainly for 5 major political parties. Three approaches where implemented to predict the winner in the election. First approach was dictionary based in which n-gram was used as a pre-processing technique and TF-IDF was used as a vectorization technique. SWN was used to classify the data and assign the polarity score to the words. Naïve Bayes algorithm and SVM were the remaining two approaches which were used. SVM and Naïve Bayes predicted party BJP (Bhartiya Janta Party) as the winner. SVM had the accuracy of 78.4% which was highest among the three implemented approaches.",
"Indigenous languages are the languages that are native to a region or spoken by a group of people in a particular state. It is not necessarily a national language. For e.g. Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil are the indigenous languages."
],
"extractive_spans": [
"Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil"
],
"free_form_answer": "",
"highlighted_evidence": [
"Ansari et al. BIBREF7 introduced an architecture for two code mix languages Hindi and Marathi.",
"BIBREF9 developed the Irish SWN whose accuracy was 6% greater than the accuracy obtained by transliteration of the Irish Tweets into English language.",
"The dataset consisted of code-mixed sentences in English along with 4 Indian languages (Hindi, Bengali, Tamil, Telugu).",
"Kaur, Mangat and Krail BIBREF14 carried out their SA task on Hinglish language, which is code mix language highly popular in India.",
"The dataset was manually created. BIBREF15 performed SA on the Tibetan language and hence collected the data from the Tibetan micro-blog.",
" BIBREF25 carried out SA on Spanish tweets using three different classifier models which are feature classifier, FastText classifier, BERT classifier.",
"BIBREF16 carried out SA on the Gujarati tweets.",
"For e.g. Irish, Tibetan, Spanish, Hindi, Marathi, Gujarati, Telugu, Tamil are the indigenous languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Review Papers.11"
],
"extractive_spans": [],
"free_form_answer": "Irish, Gujarati, Hindi, Arabic, English, Spanish, French, German, Tamil, Bengali, Odia, Marathi, Telugu, Hinglish",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Review Papers.11"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
""
],
"paper_read": [
"no"
],
"question": [
"Which languages do they explore?"
],
"question_id": [
"1a1d94c981c58e2f2ee18bdfc4abc69fd8f15e14"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Sentiment Analysis Process.",
"Figure 2: Tokenization",
"Figure 3: Bag of Words Steps",
"Figure 4: Count Vectorizer",
"Figure 5: Classification techniques",
"Table 1: Review Papers.11"
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"7-Figure5-1.png",
"11-Table1-1.png"
]
} | [
"Which languages do they explore?"
] | [
[
"1911.12848-Sentiment Analysis Levels ::: Sentence Level-6",
"1911.12848-11-Table1-1.png",
"1911.12848-Sentiment Analysis Levels ::: Sentence Level-4",
"1911.12848-Sentiment Analysis Levels ::: Sentence Level-3",
"1911.12848-Sentiment Analysis Levels ::: Document Level-3",
"1911.12848-Current Trending Techniques in NLP-4",
"1911.12848-Datasets-2",
"1911.12848-Introduction ::: Indigenous Languages-0",
"1911.12848-Introduction ::: Code Mix Languages-0",
"1911.12848-Sentiment Analysis Levels ::: Sentence Level-1"
]
] | [
"Irish, Gujarati, Hindi, Arabic, English, Spanish, French, German, Tamil, Bengali, Odia, Marathi, Telugu, Hinglish"
] | 110 |
1911.01770 | Self-Attention and Ingredient-Attention Based Model for Recipe Retrieval from Image Queries | Direct computer vision based-nutrient content estimation is a demanding task, due to deformation and occlusions of ingredients, as well as high intra-class and low inter-class variability between meal classes. In order to tackle these issues, we propose a system for recipe retrieval from images. The recipe information can subsequently be used to estimate the nutrient content of the meal. In this study, we utilize the multi-modal Recipe1M dataset, which contains over 1 million recipes accompanied by over 13 million images. The proposed model can operate as a first step in an automatic pipeline for the estimation of nutrition content by supporting hints related to ingredient and instruction. Through self-attention, our model can directly process raw recipe text, making the upstream instruction sentence embedding process redundant and thus reducing training time, while providing desirable retrieval results. Furthermore, we propose the use of an ingredient attention mechanism, in order to gain insight into which instructions, parts of instructions or single instruction words are of importance for processing a single ingredient within a certain recipe. Attention-based recipe text encoding contributes to solving the issue of high intra-class/low inter-class variability by focusing on preparation steps specific to the meal. The experimental results demonstrate the potential of such a system for recipe retrieval from images. A comparison with respect to two baseline methods is also presented. | {
"paragraphs": [
[
"Social media and designated online cooking platforms have made it possible for large populations to share food culture (diet, recipes) by providing a vast amount of food-related data. Despite the interest in food culture, global eating behavior still contributes heavily to diet-related diseases and deaths, according to the Lancet BIBREF0. Nutrition assessment is a demanding, time-consuming and expensive task. Moreover, the conventional approaches for nutrition assessment are cumbersome and prone to errors. A tool that enables users to easily and accurately estimate the nutrition content of a meal, while at the same time minimize the need for tedious work is of great importance for a number of different population groups. Such a tool can be utilized for promoting a healthy lifestyle, as well as to support patients suffering food-related diseases such as diabetes. To this end, a number of computer vision approaches have been developed, in order to extract nutrient information from meal images by using machine learning. Typically, such systems detect the different food items in a picture BIBREF1, BIBREF2, BIBREF3, estimate their volumes BIBREF4, BIBREF5, BIBREF6 and calculate the nutrient content using a food composition database BIBREF7. In some cases however, inferring the nutrient content of a meal from an image can be really challenging - due to unseen ingredients (e.g. sugar, oil) or the structure of the meal (mixed food, soups, etc.).",
"Humans often use information from diverse sensory modalities (visual, auditory, haptic) to infer logical conclusions. This kind of multi-sensory integration helps us process complex tasks BIBREF8. In this study, we investigate the use of recipe information, in order to better estimate nutrient content of complex meal compositions. With the aim to develop a pipeline for holistic dietary assessment, we present and evaluate a method based on machine learning to retrieve recipe information from images, as a first step towards more accurate nutrient estimation. Such recipe information can then be utilized together with the volume of the food item to enhance an automatic system to estimate the nutrient content of complex meals, such as lasagna, crock pot or stew.",
"The performance of approaches based on machine learning relies heavily on the quantity and quality of the available data. To this end, a number of efforts have been made to compile informative datasets to be used for machine learning approaches. Most of the early released food databases were assembled only by image data for a special kind of meal. In particular, the first publicly available database was the Pittsburgh Fast-Food Image Dataset (PFID) BIBREF9, which contains only fast food images taken under laboratory conditions. After the recent breakthrough in deep learning models, a number of larger databases were introduced. Bossard et al. BIBREF10 introduced the Food-101 dataset, which is composed of 101 food categories represented by 101'000 food images. This was followed by several image-based databases, such as the UEC-100 BIBREF11 and its augmented version, the UEC-256 BIBREF12 dataset, with 9060 food images referring to 100 Japanese food types and 31651 food images referring to 256 Japanese food types, respectively. Xu et al. BIBREF13 developed a specialized dataset by including geolocation and external information about restaurants to simplify the food recognition task. Wang et al. BIBREF14 introduced the UPMC Food-101 multi-modal dataset, that shares the same 101 food categories with the popular Food-101 dataset, but contains textual information in addition. A number of studies have been carried out utilizing the aforementioned databases, mainly for the task of food recognition. Salvador et al. BIBREF15 published Recipe1M, the largest publicly available multi-modal dataset, that consists of 1 million recipes together with the accompanying images.",
"The emergence of multi-modal databases has led to novel approaches for meal image analysis. The fusion of visual features learned from images by deep Convolution Neural Networks (CNN) and textual features lead to outstanding results in food recognition applications. An early approach for recipe retrieval was based on jointly learning to predict food category and its ingredients using deep CNN BIBREF16. In a following step, the predicted ingredients are matched against a large corpus of recipes. More recent approach is proposed by BIBREF15 and is based on jointly learning recipe-text and image representations in a shared latent space. Recurrent Neural Networks (RNN) and CNN are mainly used to map text and image into the shared space. To align the text and image embedding vectors between matching recipe-image pairs, cosine similarity loss with margin was applied. Carvalho et al. BIBREF17 proposed a similar multi-modal embedding method for aligning text and image representations in a shared latent space. In contrast to Salvador et al. BIBREF15, they formulated a joint objective function which incorporates the loss for the cross-modal retrieval task and a classification loss, instead of using the latent space for a multitask learning setup. To address the challenge of encoding long sequences (like recipe instructions), BIBREF15 chose to represent single instructions as sentence embedding using the skip-thought technique BIBREF18. These encoded instruction sentences are referred to as skip-instructions and their embedding is not fine tuned when learning the image-text joint embedding.",
"In this study, we present a method for the joint learning of meal image and recipe embedding, using a multi-path structure that incorporates natural language processing paths, as well as image analysis paths. The main contribution of the proposed method is threefold: i) the direct encoding of the instructions, ingredients and images during training, making the need of skip instruction embedding redundant; ii) the utilization of multiple attention mechanisms (i.e. self-attention and ingredient-attention), and iii) a lightweight architecture."
],
[
"The proposed method is trained and evaluated on Recipe1M BIBREF15, the largest publicly available multi-modal food database. Recipe1M provides over 1 million recipes (ingredients and instructions), accompanied by one or more images per recipe, leading to 13 million images. The large corpus is supplemented with semantic information (1048 meal classes) for injecting an additional source of information in potential models. In the table in Figure FIGREF1, the structure of recipes belonging to different semantic classes is displayed. Using a slightly adjusted pre-processing than that in BIBREF15 (elimination of noisy instruction sentences), the training set, validation set and test set contain 254,238 and 54,565 and 54,885 matching pairs, respectively. In BIBREF15, the authors chose the overall amount of instructions per recipe as one criterion for a valid matching pair. But we simply removed instruction sentences that contain only punctuation and gained some extra data for training and validation."
],
[
"The proposed model architecture is based on a multi-path approach for each of the involved input data types namely, instructions, ingredients and images, similarly to BIBREF19. In Figure FIGREF4, the overall structure is presented. For the instruction encoder, we utilized a self-attention mechanism BIBREF20, which learns which words of the instructions are relevant with a certain ingredient. In order to encode the ingredients, a bidirectional RNN is used, since ingredients are an unordered list of words. All RNNs in the ingredients path were implemented with Long Short-Term Memory (LSTM) cells BIBREF21. We fixed the ingredient representation to have a length of 600, independent of the amount of ingredients. Lastly, the outputs of the self-attention-instruction encoder with ingredient attention and the output of the bidirectional LSTM ingredient-encoder are concatenated and mapped to the joint embedding space. The image analysis path is composed of a ResNet-50 model BIBREF22, pretrained on the ImageNet Dataset BIBREF23, with a custom top layer for mapping the image features to the joint embedding space. All word embeddings are pretrained with the word2vec algorithm BIBREF24 and fine tuned during the joint embedding learning phase. We chose 512-dimensional word embedding for our model with self-attention, whereas BIBREF19 and BIBREF17 chose a vector length of 300. In the following sections, more details about the aforementioned paths are presented."
],
[
"The instruction encoder follows a transformer based encoder, as suggested by BIBREF20. Since we do not focus on syntactic rules, but mostly on weak sentence semantics or single words, we built a more shallow encoder containing only 2 stacked layers, where each of this layers contains two sub-layers. The first is the multi-head attention layer, and the second is a position-wise densely connected feed-forward network (FFN). Due to recipes composed of over 600 words as instructions, we decided to trim words per instruction sentence to restrict the overall words per recipe to 300. In order to avoid removing complete instructions at the end of the instruction table, we removed a fraction of words from each instruction, based on this instruction's length and the overall recipe-instruction length. This strategy reinforces the neglect of syntactic structures in the instruction encoding process. With such a model, we can directly perform the instruction encoding during the learning process for the joint embedding, thus saving training time and reducing disk space consumption. The transformer-like encoder does not make use of any recurrent units, thus providing the opportunity for a more lightweight architecture. By using self-attention BIBREF20, the model learns to focus on instructions relevant to recipe-retrieval-relevant, parts of instructions or single instruction-words. Furthermore we gain insight into which instructions are important to distinguish recipes with similar ingredients but different preparation styles.",
"The instruction encoder transforms the sequence of plain word representations with added positional information to a sequence of similarity-based weighted sum of all word representations. The outputted sequence of the encoder exhibits the same amount of positions as the input to the instruction encoder (in our experiments 300). Each of this positions is represented by a 512-dimensional vector. To obtain a meaningful representation without a vast number of parameters, we reduced the number of word representations before the concatenation with the ingredient representation. For this reduction step, we implemented a recipe-embedding specific attention layer where the ingredient representation is used to construct $n$ queries, where $n$ is the amount of new instruction representation vectors. Each of these new representations is a composition of all previous word representations weighted by the ingredient attention score. Following, the ingredient attention process is formulated mathematically and is visually portrayed in Figure FIGREF4.",
"where $K(inst)$ and $V(inst)$ are linear mappings of the encoded instruction words, and $Q(ing)$ is a linear mapping of the ingredient representation and $d_k$ is the dimensionality of linearly projected position vectors.",
"where $b$ is the batch-size, $p$ is the amount of word embeddings, $w$ is the dimensionality of the wort embedding, $h$ is the dimensionality of the space to where we project the word embeddings and queries, $q$ is the dimensionality of the ingredient representation and $n$ is the amount of Ingredient Attention-based instruction representations. Ingredient Attention can be performed step-wise, similarly to the well known dimensionality reduction in convolution neural networks."
],
[
"To align text and image embeddings of matching recipe-image pairs alongside each other, we maximize the cosine distance between positive pairs and minimize it between negative pairs.",
"We have trained our model using cosine similarity loss with margin as in BIBREF19 and with the triplet loss proposed by BIBREF17. Both objective functions and the semantic regularization by BIBREF19 aim at maximizing intra-class correlation and minimizing inter-class correlation.",
"Let us define the text query embedding as $\\phi ^q$ and the embedding of the image query as $\\phi ^d$, then the cosine embedding loss can be defined as follows:",
"where $cos(x,y)$ is the normalized cosine similarity and $\\alpha $ is a margin ($-1\\leqslant \\alpha \\leqslant 1)$, that determines how similar negative pairs are allowed to be. Positive margins allow negative pairs to share at maximum $\\alpha $ similarity, where a maximum margin of zero or negative margins allow no correlation between non matching embedding vectors or force the model to learn antiparallel representations, respectively. $\\phi ^d$ is the corresponding image counterpart to $\\phi ^q$ if $y=1$ or a randomly chosen sample $\\phi ^d \\in S \\wedge \\phi ^d \\ne \\phi ^{d(q)}$ if $y=-1$, where $\\phi ^{d(q)}$ is the true match for $\\phi ^q$ and $S$ is the dataset we sample from it. Furthermore, we complement the cosine similarity with cross-entropy classification loss ($L_{reg}$), leading to the applied objective function.",
"with $c_r$ and $c_v$ as semantic recipe-class and semantic image-class, respectively, while $c_r=c_v$ if the food image and recipe text are a positive pair.",
"For the triplet loss, we define $\\phi ^q$ as query embedding, $\\phi ^{d+}$ as matching image counterpart and $\\phi ^{d-}$ as another random sample taken from $S$. Further $\\phi ^{d_{sem}+} \\in S \\wedge \\phi ^{d_{sem}+} \\ne \\phi ^{d(q)}$ is a sample from $S$ sharing the same semantic class as $\\phi ^q$ and $\\phi ^{d_{sem}-}$ is a sample from any other class. The triplet loss is formulated as follows:",
"where $\\beta \\in [0,1]$ weights between quadratic and linear loss, $\\alpha \\in [0,2]$ is the margin and $\\gamma \\in [0,1]$ weights between semantic- and sample-loss. The triplet loss encourages the embedding vectors of a matching pair to be larger by a margin above its non-matching counterpart. Further, the semantic loss encourages the model to form clusters of dishes, sharing the same class. We chose $\\beta $ to be $0.1$, $\\alpha $ to be $0.3$ and $\\gamma $ to be $0.3$."
],
[
"We used Adam BIBREF25 optimizer with an initial learning rate of $10^{-4}$. At the beginning of the training session, we freeze the pretrained ResNet-50 weights and optimize only the text-processing branch until we do no longer make progress. Then, we alternate train image and text branch until we switched modality for 10 times. Lastly, we fine-tune the overall model by releasing all trainable parameters in the model. Our optimization strategy differs from BIBREF19 in that we use an aggressive learning rate decay, namely exponential decay, so that the learning rate is halved all 20 epochs. Since the timing of freezing layers proved not to be of importance unless the recipe path is trained first, we used the same strategy under the cosine distance objective BIBREF19 and for the triplet loss BIBREF17."
],
[
"Recipe1M is already distributed in three parts, the training, validation and testing sets. We did not make any changes to these partitions. Except with our more sensitive preprocessing algorithm, we accept more recipes from the raw corpus. BIBREF19 used 238,399 samples for their effective training set and for the validation and testing set 51,119 and 51,303 samples, respectively. By filtering out noisy instructions sentences (e.g. instructions containing only punctuation) we increased the effective dataset size to 254,238 samples for the training set and 54,565 and 54,885 for the validation and testing sets, respectively.",
"Similarly to BIBREF19 and BIBREF17, we evaluated our model on 10 subsets of 1000 samples each. One sample of these subsets is composed of text embedding and image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as query against all text embeddings. By ranking the query and the candidate embeddings according to their cosine distance, we estimate the median rank. The model's performance is best, if the matching text embedding is found at the first rank. Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"Both BIBREF19 and BIBREF17 use time-consuming instruction text preprocessing over the skip-thought technique BIBREF18. This process doubles the overall training time from three days to six days using two Nvidia Titan X GPU's. By using online-instruction encoding with the self-attention encoder, we were able train the model for its main task in under 30 hours. Furthermore, the proposed approach offers more flexibility for dataset alterations.",
"Qualitative results such as recipe retrieval, quality of the cluster formation in the joint embedding space and heat maps of instruction words are more important than the previously mentioned benchmarking scores. Depending on meal type, all baseline implementations as well as our Ingredient Attention based model exhibit a broad range of retrieval accuracy. In Figure FIGREF16 we present a few typical results on the intended recipe retrieval task.",
"AdaMine BIBREF17 creates more distinct class clusters than in BIBREF19. In Figure FIGREF12, we demonstrate the difference in cluster formation using the aforementioned Methods for our Ingredient Attention. We visualize the top ten most common recipe classes in Recipe1M using t-SNE BIBREF26. Since chocolate chip, peanut butter, cream cheese and/or ice cream are used as ingredients in desserts, due to semantic regularization inside the triplet loss, clusters of sweet meals are close together (Figure FIGREF12 top right corner).",
"We use heat maps on instruction words as tool to visualize words relevant to ingredient-lists in plain instruction text. In Figure FIGREF15, we demonstrate how easily we can achieve insight into the models decision making."
],
[
"In this paper, we have introduced self-attention for instruction encoding in the context of the recipe retrieval task and ingredient attention for disclosing ingredient dependent meal preparation steps. Our main contribution is the aforementioned ingredient attention, empowering our model to solve the recipe retrieval without any upstream skip instruction embedding, as well as the light-weight architecture provided by the transformer-like instruction encoder. On the recipe retrieval task, our method performs similarly to our baseline implementation of BIBREF17. Regarding training time on the other hand, we increased the efficiency significantly for cross-modal based retrieval methods. There is no need for a maximum number of instructions for a recipe to be considered as valid for training or testing; only for total words, making more samples of the large Recipe1M corpus usable for training. Through ingredient attention, we are able to unveil internal focus in the text processing path by observing attention weights. Incorporation of new samples in the train set can be done by retraining just one model. Overall, an accurate and flexible method for recipe retrieval from meal images could provide downstream models (e.g. automatic nutrient content estimation) with decisive information and significantly improve their results."
]
],
"section_name": [
"Introduction",
"Materials and Methods ::: Database",
"Materials and Methods ::: Model Architecture",
"Materials and Methods ::: Attention Mechanisms",
"Materials and Methods ::: Loss function",
"Materials and Methods ::: Training configuration",
"Experimental Setup and Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"569ed3122c5615d88ffb403cbac2043f19e0859b",
"8873421c87a4d8d2796cadb73147dd72d160f5e0",
"e4196c2864fc1ebee666aaa46bfcfe07e59151bf"
],
"answer": [
{
"evidence": [
"Similarly to BIBREF19 and BIBREF17, we evaluated our model on 10 subsets of 1000 samples each. One sample of these subsets is composed of text embedding and image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as query against all text embeddings. By ranking the query and the candidate embeddings according to their cosine distance, we estimate the median rank. The model's performance is best, if the matching text embedding is found at the first rank. Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet",
"The proposed model architecture is based on a multi-path approach for each of the involved input data types namely, instructions, ingredients and images, similarly to BIBREF19. In Figure FIGREF4, the overall structure is presented. For the instruction encoder, we utilized a self-attention mechanism BIBREF20, which learns which words of the instructions are relevant with a certain ingredient. In order to encode the ingredients, a bidirectional RNN is used, since ingredients are an unordered list of words. All RNNs in the ingredients path were implemented with Long Short-Term Memory (LSTM) cells BIBREF21. We fixed the ingredient representation to have a length of 600, independent of the amount of ingredients. Lastly, the outputs of the self-attention-instruction encoder with ingredient attention and the output of the bidirectional LSTM ingredient-encoder are concatenated and mapped to the joint embedding space. The image analysis path is composed of a ResNet-50 model BIBREF22, pretrained on the ImageNet Dataset BIBREF23, with a custom top layer for mapping the image features to the joint embedding space. All word embeddings are pretrained with the word2vec algorithm BIBREF24 and fine tuned during the joint embedding learning phase. We chose 512-dimensional word embedding for our model with self-attention, whereas BIBREF19 and BIBREF17 chose a vector length of 300. In the following sections, more details about the aforementioned paths are presented.",
"The emergence of multi-modal databases has led to novel approaches for meal image analysis. The fusion of visual features learned from images by deep Convolution Neural Networks (CNN) and textual features lead to outstanding results in food recognition applications. An early approach for recipe retrieval was based on jointly learning to predict food category and its ingredients using deep CNN BIBREF16. In a following step, the predicted ingredients are matched against a large corpus of recipes. More recent approach is proposed by BIBREF15 and is based on jointly learning recipe-text and image representations in a shared latent space. Recurrent Neural Networks (RNN) and CNN are mainly used to map text and image into the shared space. To align the text and image embedding vectors between matching recipe-image pairs, cosine similarity loss with margin was applied. Carvalho et al. BIBREF17 proposed a similar multi-modal embedding method for aligning text and image representations in a shared latent space. In contrast to Salvador et al. BIBREF15, they formulated a joint objective function which incorporates the loss for the cross-modal retrieval task and a classification loss, instead of using the latent space for a multitask learning setup. To address the challenge of encoding long sequences (like recipe instructions), BIBREF15 chose to represent single instructions as sentence embedding using the skip-thought technique BIBREF18. These encoded instruction sentences are referred to as skip-instructions and their embedding is not fine tuned when learning the image-text joint embedding."
],
"extractive_spans": [],
"free_form_answer": "Joint Neural Embedding (JNE)\nAdaMine",
"highlighted_evidence": [
"Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet",
"BIBREF19 ",
"BIBREF17 "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table1 merged with Figure 3) Joint Neural\nEmbedding (JNE) and AdaMine",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"extractive_spans": [],
"free_form_answer": "JNE and AdaMine",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"90d9be80786b2e19e661c775a3447c038f23fc0a",
"957d7ef8dd9d38f75cedfd18c54a8286c39f7d4a",
"b33b3803f350578e779615f5cc1a15dab28cb899"
],
"answer": [
{
"evidence": [
"Similarly to BIBREF19 and BIBREF17, we evaluated our model on 10 subsets of 1000 samples each. One sample of these subsets is composed of text embedding and image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as query against all text embeddings. By ranking the query and the candidate embeddings according to their cosine distance, we estimate the median rank. The model's performance is best, if the matching text embedding is found at the first rank. Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"extractive_spans": [],
"free_form_answer": "The model outperforms the two baseline models, since it has higher recall values. ",
"highlighted_evidence": [
"The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Similarly to BIBREF19 and BIBREF17, we evaluated our model on 10 subsets of 1000 samples each. One sample of these subsets is composed of text embedding and image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as query against all text embeddings. By ranking the query and the candidate embeddings according to their cosine distance, we estimate the median rank. The model's performance is best, if the matching text embedding is found at the first rank. Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table1 part of Figure 3):\nProposed vs Best baseline result\n- Median Rank: 2.9 vs 3.0 (lower better)\n- Rank 1 recall: 34.6 vs 33.1 (higher better)",
"highlighted_evidence": [
"Similarly to BIBREF19 and BIBREF17, we evaluated our model on 10 subsets of 1000 samples each. One sample of these subsets is composed of text embedding and image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as query against all text embeddings. By ranking the query and the candidate embeddings according to their cosine distance, we estimate the median rank. The model's performance is best, if the matching text embedding is found at the first rank. Further, we estimate the recall percentage at the top K percent over all queries. The recall percentage describes the quantity of queries ranked amid the top K closest results. In Table TABREF11 the results are presented, in comparison to baseline methods.",
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"extractive_spans": [],
"free_form_answer": "The model improved over the baseline with scores of 34.6, 66.0 and 76.6 for Recall at 1, 5 and 10 respectively",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are two baseline methods?",
"How does model compare to the baselines?"
],
"question_id": [
"5d790459b05c5a3e6f1e698824444e55fc11890c",
"1ef6471cc3e1eb10d2e92656c77020ca1612f08e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Recipe samples from the Recipe1M Dataset.",
"Figure 2: Text-image embeddingmodelwith optional semantic classifier for semantic regularization according to [17] and with Ingredient Attention based instruction encoding",
"Figure 3: (a) Visualization of the joint embedding space under the cosine distance with semantic regularization objective. (b) organization of the joint embedding space under the triplet",
"Figure 4: Ingredient-Attention based focus on instruction sentences. We use two different mapping matrices for the two ingredient based queries.",
"Figure 5: The retrieval performance of ourmodel depends heavily on themeal type.Wemarkedmatching retrieved ingredients or those of the same family in green. The Ingredient Attention model performed well on Sample 1, and acceptably on Sample 2. On Sample 3, the model missed the main ingredient in all top three retrievals."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png"
]
} | [
"What are two baseline methods?",
"How does model compare to the baselines?"
] | [
[
"1911.01770-Experimental Setup and Results-1",
"1911.01770-Materials and Methods ::: Model Architecture-0",
"1911.01770-Introduction-3",
"1911.01770-5-Figure3-1.png"
],
[
"1911.01770-Experimental Setup and Results-1",
"1911.01770-5-Figure3-1.png"
]
] | [
"JNE and AdaMine",
"The model improved over the baseline with scores of 34.6, 66.0 and 76.6 for Recall at 1, 5 and 10 respectively"
] | 111 |
2002.12699 | Automatic Section Recognition in Obituaries | Obituaries contain information about people's values across times and cultures, which makes them a useful resource for exploring cultural history. They are typically structured similarly, with sections corresponding to Personal Information, Biographical Sketch, Characteristics, Family, Gratitude, Tribute, Funeral Information and Other aspects of the person. To make this information available for further studies, we propose a statistical model which recognizes these sections. To achieve that, we collect a corpus of 20058 English obituaries from The Daily Item, Remembering.CA and The London Free Press. The evaluation of our annotation guidelines with three annotators on 1008 obituaries shows a substantial agreement of Fleiss k = 0.87. Formulated as an automatic segmentation task, a convolutional neural network outperforms bag-of-words and embedding-based BiLSTMs and BiLSTM-CRFs with a micro F1 = 0.81. | {
"paragraphs": [
[
"An obituary, typically found in newspapers, informs about the recent death of a person, and usually includes a brief biography of the deceased person, which sometimes recounts detailed life stories and anecdotes. Structural elements, styles, formats, and information presented vary slightly from culture to culture or from community to community BIBREF0. Obituaries can be considered to be short essays and contain information on the living family members and information about the upcoming funeral, such as visitation, burial service, and memorial information as well as the cause of death BIBREF0.",
"Similarly to biographies, obituaries represent an interesting type of text because the information contained is usually focused on the values and the qualities of a given human being that is part of a particular community BIBREF1, BIBREF2, BIBREF3. From the digital humanities perspective investigating obituaries also provides an understanding of how the community who writes the obituaries decides what is relevant about life and death.",
"Potential applications that are enabled by having access to large collections of obituaries are finding such themes that are relevant while discussing life and death, investigation of different aspects of social memory BIBREF4, BIBREF5 (finding what is being remembered or chosen to be excluded from an obiturary), investigation of correlations between work or other different themes and the cause of death, analysis of linguistic, structural or cultural differences BIBREF6, and the investigation of different biases and values within a community BIBREF7, BIBREF8, BIBREF9, BIBREF10.",
"More recently, obituaries have been published on dedicated social networks where the mourners who write the obituaries express their emotions and tell stories of the deceased in comments to the obituaries (e. g. Legacy.com, Remembering.CA). These networks facilitate interactions between readers and the family of the deceased BIBREF11. With this paper, we focus on online publications of obituaries which are available online and are in English.",
"Research that builds on top of such data is presumably mostly concerned with a part of the information contained in obituaries. For example, when investigating mortality records BIBREF12, one might only be interested in the Personal Information section. Therefore, we propose to perform zoning as a preprocessing step and publish a corpus and trained models for the sections Personal information (including names of the deceased, birth date, date of death, and cause of death), Biographical sketch, Tribute, Family, and Funeral Information (such as time, place, and date of the funeral). No such resource is currently available to the research community.",
"Our main contributions are therefore (1) to annotate a collection of obituaries, (2) to analyze the corpus and to formulate the task of automatic recognition of structures, (3) to evaluate which models perform best on this task, and (4) to compare the models' results qualitatively and quantitatively. To achieve our goals and as additional support for future research, we publish information how to obtain the data and the annotated dataset as well as the models at http://www.ims.uni-stuttgart.de/data/obituaries."
],
[
"Research on obituaries can be structured by research area, namely language studies, cultural studies, computational linguistics, psychology studies, and medical studies."
],
[
"One of the common topics that are studied in the context of cultural studies and obituaries is religion. herat2014 investigate how certain language expressions are used in obituaries in Sri Lanka, how religion and culture play a role in the conceptualization of death, and how language reflects social status. They find that the conceptualization of death is in terms of a journey in the Buddhist and Hindu communities whereas death is conceptualized as an end in Christian and Muslim communities. They show that the language of obituaries appears to be conditioned by the religious and cultural identity of the deceased.",
"ergin2012 look into Turkish obituary data from Hürriyet, a major Turkish daily newspaper, from 1970 to 2009, with the goal of finding expressions of religiosity and constructions of death in relation to gender and temporal variations together with markers of status. Their results show that the obituaries considered are relying on “an emotional tone of loss” and that the spiritual preferences are linked to the status and appartenance to a specific social class.",
"Next to religion, elements of the obituary language are in the focus of various works across countries and cultures. metaphors2019 undertake a qualitative analysis of metaphors in 150 obituaries of professional athletes published in various newspapers. They find traditional metaphors of death but also creative metaphors that describe death euphemistically. Some of the creative metaphors have a connection to sports but not necessarily to the sport practiced by the deceased athlete.",
"The language of obituaries is also investigated in the context of gender analysis by malesvsfemales who test the hypothesis that obituaries are less emotional in the language used for females than for males. They collect 703 obituaries from a local newspaper from US and investigate whether the person is described to have “died” or “passed away”. Their results show that the deaths of females are more likely to be described as “passing away”.",
"Furthermore, the perception of women in leading positions in communist and post-communist Romania is researched by gender2011 by analyzing the content of obituaries published in the Romanian newspaper România Liberă from 1975 to 2003. They show that the gender gap in management widened after the fall of communism.",
"epstein2013 study the relationship between career success, terminal disease frequency, and longevity using New York Times obituaries. Their results show that obituaries written in the memory of men are more prevalent and the mean age of death was higher for males than females. They concluded that “smoking and other risk behaviours may be either the causes or effects of success and/or early death”, and fame and achievement in performance-related careers correlate with a shorter life span expectancy.",
"rusu2017 also look at famous people, and the posthumous articles written about them to test whether the deceased are protected from negative evaluations within their community. They find out that more than one fifth of the articles do contain negative evaluations of the deceased.",
"barth2013 gains insights into how different communities deal with death according to their respective norms. They study the differences between German and Dutch obituaries in terms of visual and textual elements, information about the deceased, and funeral-related information. Their study shows that German obituaries use illustrations more than the Dutch ones and that the Dutch obituaries provide more information than the German ones.",
"Another cross-cultural study is made by hubbard2009 who investigate whether obituaries placed by families reflect specific societal attitudes towards aging and dementia. They use discourse analysis of obituaries in newspapers from Canada and the UK and show that donations to dementia charities were more common in obituaries from Canada than in the UK.",
"themesopiod study the public perception on the opioid epidemic in obituaries from the US where the cause of death is related to overdose. They investigated emotion related themes and categories by using the IBM Watson Tone Analyzer and show that joy and sadness are the most prevalent emotion categories with the most common emotion being love. The terms that are most used to describe death are “accidental” and “addiction”. Shame and stigma are less prevalent “which might suggest that addiction is perceived as a disease rather than a criminal behaviour”.",
"usobi investigate the shared values of the community of neurosurgeons in the US by doing a text analysis on obituaries from Neurosurgery, Journal of Neurosurgery and the New York Times. Their study analyzes frequent terms and derives the relative importance of various concepts: innovation, research, training and family. Within this work, the sentiment of the obituaries within the Neurosurgery research community is being annotated. A result of this study is that the obituaries of neurosurgeons written by the research community put a greater emphasis on professional leadership and residency training and that the family mentions occured more in the lay press.",
"vital develop a methodology to link mortality data from internet sources with administrative data from electronic health records. To do so they implement and evaluate the performance of different linkage methods. The electronic health records are from patients in Rennes, France and the extracted obituaries are all available online obituaries from French funeral home websites. They evaluate three different linkage methods and obtain almost perfect precisions with all methods. They conclude that using obituaries published online could address the problem of long delays in the sharing of mortality data whereas online obituaries could be considered as reliable data source for real-time suveillance of mortality in patients with cancer."
],
[
"With a focus on computational linguistics, obituarymining1 analyze text data from obituary websites, with the intention to use it to prevent identity theft. The goal was to evaluate how “often and how accurately name and address fragments extracted from these notices developed into complete name and address information corresponding to the deceased individual”. They use a knowledge base with name and address information, extracte the name and address fragments from the text and match them against the knowledge base to create a set of name and address candidates. This result set is then compared to an authoritative source in order to determine which of the candidate records actually correspond to the name and address of an individual reported as deceased.",
"alfano2018 collect obituaries from various newspapers, to get a better understanding of people's values. They conduct three studies in which the obituaries are annotated with age at death, gender and general categories that summarize traits of the deceased (a trait like hiker would be summarized by the category “nature-lover”). All studies are analyzed from a network perspective: when the deceased is described as having the traits X and Y, then an edge between the two traits is created with the weight of the edge being the total number of persons described as having both traits. The first study is done on obituaries collected from local newspapers. They find that women's obituaries focus more on family and “care-related affairs” in contrast to men's obituaries which focus on “public and political matters”. In the second study they explore the New York Times Obituaries and find that the network of the second study differs from the first study in terms of network density, mean clustering coefficient and modularity. The last study is done on data from ObituaryData.com and the annotation with traits is performed in a semi-automatic manner.",
"obi1 extract various facts about persons from obituaries. They use a feature scoring method that uses prior knowledge. Their method achieved high performance for the attributes person name, affiliation, position (occupation), age, gender, and cause of death.",
"bamman2014 present an unsupervised model for learning life event classes from biographical texts in Wikipedia along with the structure that connects them. They discover evidence of systematic bias in the presentation of male and female biographies in which female biographies placed a significantly disproportionate emphasis on the personal events of marriage and divorce. This work is of interest here because it handled biographical information (Wikipedia biographies), of which obituaries are also a part.",
"simonson2016 investigate the distribution of narrative schemas BIBREF13 throughout different categories of documents and show that the structure of the narrative schemas are conditioned by the type of document. Their work uses the New York Times corpus, which makes the work relevant for us, because obituary data is part of the NYT library and a category of document the work focuses on. Their results show that obituaries are narratologically homogeneous and therefore more rigid in their wording and the events they describe.",
"The stability of narrative schemas is explored in a follow up paper by simonson2018. Their goal was to test whether small changes in the corpus would produce small changes in the induced schemas. The results confirm the distinction between the homogeneous and heterogeneous articles and show that homogeneous categories produced more stable batches of schemas than the heterogeneous ones. This is not surprising but supports that obituaries have a coherent structure which could be turned into a stable narrative schema.",
"he2019 propose using online obituaries as a new data source for doing named entity recognition and relation extraction to capture kinship and family relation information. Their corpus consists of 1809 obituaries annotated with a novel tagging scheme. Using a joint neural model they classify to 57 kinships each with 10 or more examples in 10-fold cross-validation experiment."
],
[
"Many NLP tasks focus on the extraction and abstraction of specific types of information in documents. To make searching and retrieving information in documents accessible, the logical structure of documents in titles, headings, sections, arguments, and thematically related parts must be recognized BIBREF14.",
"A notable amount of work focuses on the argumentative zoning of scientific documents BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. zoning2 stated that readers of scientific work may be looking for “information about the objective of the study in question, the methods used in the study, the results obtained, or the conclusions drawn by authors”.",
"The recognition of document structures generally makes use of two sources of information. On one side, text layout enables recognition of relationships between the various structural units such as headings, body text, references, figures, etc. On the other side, the wording and content itself can be used to recognize the connections and semantics of text passages. Most methods use section names, argumentative zoning, qualitative dimensions, or the conceptual structure of documents BIBREF22.",
"Common to all the works that focus on zoning of scientific articles is the formulation or use of an annotation scheme, which in this case relies on the form and meaning of the argumentative aspects found in text rather than on the layout or contents. In contrast to argumentative zoning, our work does not make use of an annotation scheme of categories that relate to rhetorical moves of argumentation BIBREF15, but focuses instead on content."
],
[
"We collected obituaries from three websites: The Daily Item, where obituaries from the USA are published, Remembering.CA, which covers obituaries from Canada, and The London Free Press, which covers obituaries from London (see Table TABREF5). The obituaries on The Daily Item and The London Free Press are dedicated websites where people could publish their obituaries. Remembering.CA is an aggregator and shows obituaries published from different sources. The total set consists of 20058 obituaries."
],
[
"In each obituary, we can find certain recurring elements, some factual, such as the statement that announces the death which contains the names of the deceased, age, date of death, information about career, information about the context and the cause of death (detailed if the person was young or suffering of a specific disease). The life events and career steps are sketched after that. This is usually followed by a list of hobbies and interests paired with accomplishments and expressions of gratitude or a tribute from the community of the deceased. Towards the end of the obituary, there are mentions of family members (through names and type of relation). The obituaries commonly end with details about the funeral BIBREF0.",
"Therefore, we define the following eight classes: Personal information, Biographical sketch, Characteristics, Tribute, Expression of gratitude, Family, Funeral information, and Other to structure obituaries at the sentence level. An example of these classes in context of one obituary is depicted in Table TABREF1.",
"The Personal Information class serves the purpose to classify most of the introductory clauses in obituaries. We have chosen to refer to a sentence as Personal Information when it includes the name of the deceased, the date of death, the cause of death, or the place of death. For example John Doe, 64, of Newport, found eternal rest on Nov. 22, 2018.",
"The Biographical sketch is similar to a curriculum vitae. Sections in a person's life fall into this category. However, it should not be regarded exclusively as a curriculum vitae, since it forms the superset of personal information. We decided to label a sentence as Biographical sketch if it includes the place of birth, the date of birth, the last place of residence, the wedding date, the duration of the marriage, the attended schools, the occupations, or the further events in life. An example is He entered Bloomsburg State Teachers College in 1955 and graduated in 1959.",
"The class Characteristics is recognizable by the fact that the deceased person is described through character traits or things the dead person loved to do. Apart from hobbies and interests, the deceased's beliefs are also part of the characteristics. An example is He enjoyed playing basketball, tennis, golf and Lyon's softball.",
"Sentences about major achievements and contributions to society are labeled as Tribute. An example is His work was a credit to the Ukrainian community, elevating the efforts of its arts sector beyond its own expectations.",
"Sentences in obituaries are labeled as an expression of Gratitude if any form of gratitude occurs in it, be it directed to doctors, friends, or other people. In most cases, it comes from the deceased's family. An example is We like to thank Leamington Hospital ICU staff, Windsor Regional Hospital ICU staff and Trillium for all your great care and support.",
"The class Family is assigned to all sentences that address the survivors or in which previously deceased close relatives, such as siblings or partners, are mentioned. The mentioning of the wedding date is not covered by this category, because we consider it an event and as such, it falls under the Biographical sketch category. If the precedence of those persons is mentioned it falls in this category. If a marriage is mentioned without the wedding date or the duration it falls into the Family category. An example is: Magnus is survived by his daughter Marlene (Dwight), son Kelvin (Patricia), brother Otto (Jean) and also by numerous grandchildren & great grandchildren, nieces and nephews.",
"Sentences are labeled as Funeral information when they contain information related to the funeral, such as date of the funeral, time of the funeral, place of the funeral, and where to make memorial contributions. An example is A Celebration of Life will be held at the Maple Ridge Legion 12101-224th Street, Maple Ridge Saturday December 8, 2018 from 1 to 3 p.m.",
"Everything that does not fall into the above-mentioned classes is assigned the class Other. An example is: Dad referred to Lynda as his Swiss Army wife."
],
[
"Our overall annotated data set consists of 1008 obituaries which are randomly sampled from the overall crawled data. For the evaluation of our annotation guidelines, three students of computer science at the University of Stuttgart (all of age 23) annotate a subset of 99 obituaries from these 1008 instances. The first and second annotator are male and the third is female. The mother tongue of the first annotator is Italian and the mother tongue of the second and third annotator is German. All pairwise Kappa scores as well as the overall Fleiss' kappa scores are .87 (except for the pairwise Kappa between the first and the second annotator, being .86). Based on this result, the first annotator continued to label all 1008 instances.",
"Table TABREF13 reports the agreement scores by country and category. Annotated obituaries from the UK have the lowest $\\kappa =$$0.59$ and the ones from the US the highest $\\kappa =$$0.88$. Category-wise, we observed difficulties to classify some of the rarer categories that appeared, such as examples from the class Tribute or Other. Another quite difficult distinction is the one between the class Family and the class Biographical sketch due to the occurrence of a wedding date, which we considered an event, in connection with the other family criteria. Furthermore we found difficult to decide on the border between Personal Information and Biographical sketch zones."
],
[
"Table TABREF14 shows the analysis of our 1008 annotated obituaries from three different sources which form altogether 11087 sentences (where the longest sentence as 321 words). 475 obituaries are from The Daily Item (USA), 445 obituaries are from Remembering.CA (Canada), and 88 obituaries are from The London Free Press (UK). Most sentences in the dataset are labeled as Biographical sketch (3041), followed by Funeral information (2831) and Family (2195). The least assigned label is Tribute, with 11 sentences, followed by Gratitude with 144 sentences.",
"Sentences of class Biographical Sketch and Characteristics are more frequent in obituaries from the US than from Canada and UK. On the other side, Family is a more dominant class ins UK than in the other sources.",
"Surprisingly, the class Funeral information is also not equally distributed across locations, which is dominated by the UK.",
"Finally, Canada has a substantially higher section of sentences labeled with Other. A manual inspection of the annotation showed that this is mostly because it seems to be more common than in other locations to mention that the person will be remembered."
],
[
"To answer the question whether or not we can recognize the structure in obituaries we formulate the task as sentence classification, where each sentence will be assigned to one of the eight classes we defined previously. We evaluate four different models."
],
[
"Convolutional Neural Networks (CNN) BIBREF23, BIBREF24 have been succesfully applied to practical NLP problems in the recent years. We use the sequential model in Keras where each sentence is represented as a sequence of one-hot embeddings of its words. We use three consecutive pairs of convolutional layers with 128 output channels, the ReLu activation function and max pooling followed by the output layer with softmax as activation function and with cross entropy as loss. This model does not have access to information of neighboring sentences."
],
[
"The BiLSTM models are structurally different from the CNN. The CNN predicts on the sentence-level without having access to neighboring information. For the BiLSTM models we opt for a token-based IOB scheme in which we map the dominantly predicted class inside of one sentence to the whole sentence. Our BiLSTM (BOW) model BIBREF25, BIBREF26 uses 100 memory units, a softmax activation function and categorical cross entropy as the loss function. The BiLSTM (W2V) model uses pre-trained word embeddings (Word2Vec on Google News) BIBREF27 instead of the bag of words. The BiLSTM-CRF is an extension of the BiLSTM (W2V) which uses a conditional random field layer for the output."
],
[
"We split our 1008 obituaries into training set (70 %) and test set (30 %). From the training set, 10 % are used for validation. The batch size is set to 8 and the optimizer to rmsprop for all experiments. We do not perform hyperparameter tuning."
],
[
"The CNN model has the highest macro average $\\textrm {F}_1$ score with a value of 0.65. This results from the high values for the classes Family and Funeral information. The $\\textrm {F}_1$ score for the class Other is 0.52 in contrast with the $\\textrm {F}_1$ of the other three models, which is lower than 0.22. The macro average $\\textrm {F}_1$ for the BiLSTM (BOW) model is 0.58. It also has highest F1-scores for the classes Personal Information and Biographical Sketch among all models. For the classes Family, and Funeral information has comparable scores to the CNN model. Interestingly this model performs the best among the BiLSTM variants. The BiLSTM (W2V) model performs overall worse than the one which makes use only of a BOW. It also has the worst macro average $\\textrm {F}_1$ together with the BiLSTM-CRF with a value of 0.50. The BiLSTM-CRF performs better than the other BiLSTM variants on the rare classes Gratitude and Other.",
"Since we have few samples labelled as Tribute none of our models predict a sentence as such, resulting in precision, recall, and $\\textrm {F}_1$ value of 0 for each model.",
"From the results we conclude that the CNN model works best. Apart from the high $\\textrm {F}_1$ it is also the only model that predicts the class Gratitude as well as the class Other better than the other models."
],
[
"We investigate the best performing model by making use of the confusion matrix (see Figure FIGREF20) and by inspecting all errors made by the model on the test set (see Table TABREF21).",
"In Figure FIGREF20, we observe that the diagonal has relatively high numbers with more correctly labeled instances than confused ones for all classes, with the exception of class Tribute (the rarest class). Secondly, the confusions are not globally symmetric. However, we observe that the lower left corner formed by the classes Family, Characteristics and Biographical Sketch is almost symmetric in its confusions, which led us to inspect and classify the types of errors.",
"Therefore, we investigated all errors manually and classified them in three main types of errors: errors due to Ambiguity (39%), errors due to wrong Annotation (18%) and errors tagged as Other (42%) where the errors are more difficult to explain (see last column in Table TABREF21).",
"The errors due to Ambiguity are those where a test sentence could be reasonably assigned multiple different zones, and both the annotated class and the predicted class would be valid zones of the sentence. Such cases are most common between the zones Biographical Sketch, Personal Information, Characteristics, Other, and Family and occur even for the rare zones Tribute and Gratitude. An example of this error type is sentence 7 in Table TABREF21, which shows that there is a significant event that happened in the life of the deceased that changed their characteristics.",
"Another pattern we observe emerging within the Ambiguity class of errors is that borders between the classes confused are not as rigid, and sometimes parts of one class could be entailed in another. An example of this is when the class Other being entailed in Funeral Information or Characteristics as a quote, as a wish in sentence 5 (e. g., “may you find comfort...”) or as a last message from the family to the deceased (e. g. “You are truly special.”) in sentence 14.",
"The errors we mark as being errors of Annotation are those where the model is actually right in its prediction. Such cases are spread among all classes. The class that is the most affected by these errors is the class Characteristics, for which there are 23 cases of sentences wrongly annotated as being in the class Other or Biographical Sketch (e. g. sentences 9, 12). The second most affected class by this type of error is Biographical Sketch where the sentences are also wrongly annotated as Other. The rare class Gratitude is also 13 time wrongly annotated as Other, Personal Information or Biographical Sketch. This might explain why the model confuses these classes as well (Figure FIGREF20) Other examples for this type of error we can see for sentence 2, 6 and 16.",
"The rest of the errors, labeled here as Other, are diverse and more difficult to categorize. However, we see a pattern within this group of errors as well, such as when the model appears to be mislead by the presence of words that are strong predictive features of other classes. This could be seen for instance in sentence 19 where Gratitude in confused with Family due to the presence of words like “family”, “love”, “support”. This type of error can be also seen in sentence 11, 19. Another pattern that shows for errors of the type Other is when the model fails to predict the correct class because is not able to do coreference resolution as in sentences 10 and 15.",
"Regarding Gratitude, the confusion matrix shows that it is confounded with Family, Other, and Funeral Information. Inspecting these cases shows that the wrongly classified cases are due to the presence of strong predictive features of other classes, like family mentions or locations which are more prevalent in other classes as in the sentences 18 and 19.",
"Further, the class Funeral Information is confounded the most with Other, followed by Personal Information and Characteristics. We see a high number of confusions between Funeral Information and Gratitude as well, and since Gratitude is one of the rare classes we decide to have a closer look at these cases. We find that most of the misclassified sentences include expressions of gratitude and are therefore wrongly annotated, which shows that the model correctly learned that expressions like “would like to thank”, “thanks”, “thank you” include predictive features for the class Gratitude (see sentence 6).",
"When the class Characteristics is confounded with Other, this happens mostly due to presence of words related to memory “we will miss”, “we will always remember”, “our memories”, “will be deeply missed” which are most occurring within the class Other. This hints to a potential improvement in the Annotation Scheme, where one could add the class Societal Memory where all the sentences that mention what the community will miss due to the loss would belong to. We think that another improvement would be if the class Other would be further divided into Wish and Quote as well, this would eliminate the issue of entailed sentences of Other in other classes."
],
[
"This work addresses the question of how to automatically structure obituaries. Therefore, we acquire a new corpus consisting of 20058 obituaries of which 1008 are annotated. To tackle the task of assigning zones to sentences and uncover the structure of obituaries, four segmentation models are implemented and tested: a CNN, a BiLSTM network using a BOW model and one using word embeddings, and a BiLSTM-CRF. The models are then compared based on precision, recall, and F1-score. From our results, we conclude that the CNN text classifier produced the best results with a macro F1-score of 0.81, considering the experimental settings, and the highest macro average F1-score of 0.65. The BiLSTM (BOW) model produced comparable results and even better regarding the classes Personal information and Biographical sketch, which makes it also a valid baseline for the task.",
"Our work enables future research, showing that automatic recognition of structures in obituaries is a viable task. Through performing zoning on the raw obituaries, it is becoming possible to address other research questions: whether there is a correlation between the occupation of the deceased and the cause of death, what are the cultural and structural differences between obituaries from different countries.",
"Another open question is if the annotation scheme is the best. Given the errors we found, we argue that the annotation scheme could be refined and that the class Other could be split into at least two different new classes. We leave to future work developing a new annotation scheme. Further, one could annotate obituaries across cultures, optimize the parameters of our models for the structuring task or improve over the existing models. It might be an interesting direction to compare our defined structure with one of a topic modeler. Also possible is to postannotate the dataset with emotion classes and investigate the emotional connotation of different zones."
]
],
"section_name": [
"Introduction and Motivation",
"Related Work",
"Related Work ::: Obituaries in Cultural and Medical Studies",
"Related Work ::: Obituaries as a Data Source in Various Tasks of Computational Linguistics",
"Related Work ::: Zoning",
"Data ::: Collection",
"Data ::: Annotation Scheme and Guidelines",
"Data ::: Annotation Procedure and Inter-Annotator Agreement",
"Data ::: Analysis",
"Methods",
"Methods ::: CNN",
"Methods ::: BiLSTM",
"Experimental Setup",
"Experimental Setup ::: Results",
"Experimental Setup ::: Error Analysis",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"6e738c940b55a6d6899c89f06b51226a1dc59d02",
"8dddb11572ff364313ba3a169e58f7092316a609",
"e0706424b7d73ef2cf16cd8001ee649b5a25fbb5"
],
"answer": [
{
"evidence": [
"The CNN model has the highest macro average $\\textrm {F}_1$ score with a value of 0.65. This results from the high values for the classes Family and Funeral information. The $\\textrm {F}_1$ score for the class Other is 0.52 in contrast with the $\\textrm {F}_1$ of the other three models, which is lower than 0.22. The macro average $\\textrm {F}_1$ for the BiLSTM (BOW) model is 0.58. It also has highest F1-scores for the classes Personal Information and Biographical Sketch among all models. For the classes Family, and Funeral information has comparable scores to the CNN model. Interestingly this model performs the best among the BiLSTM variants. The BiLSTM (W2V) model performs overall worse than the one which makes use only of a BOW. It also has the worst macro average $\\textrm {F}_1$ together with the BiLSTM-CRF with a value of 0.50. The BiLSTM-CRF performs better than the other BiLSTM variants on the rare classes Gratitude and Other."
],
"extractive_spans": [],
"free_form_answer": "In terms of macro F1 score their model has 0.65 compared to 0.58 of best other model.",
"highlighted_evidence": [
"The CNN model has the highest macro average $\\textrm {F}_1$ score with a value of 0.65.",
"The macro average $\\textrm {F}_1$ for the BiLSTM (BOW) model is 0.58.",
"The BiLSTM (W2V) model performs overall worse than the one which makes use only of a BOW. It also has the worst macro average $\\textrm {F}_1$ together with the BiLSTM-CRF with a value of 0.50"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 5: Comparison of the models using Precision, Recall, and F1-score (macro and micro)"
],
"extractive_spans": [],
"free_form_answer": "Their model outperforms other models by 0.01 micro F1 and 0.07 macro F1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Comparison of the models using Precision, Recall, and F1-score (macro and micro)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"by how much did their model outperform the other models?"
],
"question_id": [
"46146ff3ef3430924e6b673a28df96ccb869dee4"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Example of an annotated obituary together with specific constituent elements for each of the zones.",
"Table 2: Overview of the sources of obituary data.",
"Table 3: Inter-annotator agreement scores with Fleiss’ κ. PI: Personal information, BS: Biographical Sketch, FA: Family, C: Characteristics, T: Tribute, G: Gratitude, FI: Funeral",
"Table 4: Information on full annotated dataset of obituaries. PI: Personal information, BS: Biographical Sketch, FA: Family, C: Characteristics, T: Tribute, G: Gratitude, FI: Funeral Information, O: Other # sent. denotes number of sentences. % denotes the relative counts in each class.",
"Table 5: Comparison of the models using Precision, Recall, and F1-score (macro and micro)",
"Table 6: Example of errors done by the CNN model. PI: Personal information, BS: Biographical Sketch, FA: Family, C: Characteristics, T: Tribute, G: Gratitude, FI: Funeral Information, O: Other",
"Figure 1: Confusion matrix of the CNN model."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"7-Figure1-1.png"
]
} | [
"by how much did their model outperform the other models?"
] | [
[
"2002.12699-Experimental Setup ::: Results-0",
"2002.12699-6-Table5-1.png"
]
] | [
"Their model outperforms other models by 0.01 micro F1 and 0.07 macro F1"
] | 114 |
1901.01590 | Improving Unsupervised Word-by-Word Translation with Language Model and Denoising Autoencoder | Unsupervised learning of cross-lingual word embedding offers elegant matching of words across languages, but has fundamental limitations in translating sentences. In this paper, we propose simple yet effective methods to improve word-by-word translation of cross-lingual embeddings, using only monolingual corpora but without any back-translation. We integrate a language model for context-aware search, and use a novel denoising autoencoder to handle reordering. Our system surpasses state-of-the-art unsupervised neural translation systems without costly iterative training. We also analyze the effect of vocabulary size and denoising type on the translation performance, which provides better understanding of learning the cross-lingual word embedding and its usage in translation. | {
"paragraphs": [
[
"Building a machine translation (MT) system requires lots of bilingual data. Neural MT models BIBREF0 , which become the current standard, are even more difficult to train without huge bilingual supervision BIBREF1 . However, bilingual resources are still limited to some of the selected language pairs—mostly from or to English.",
"A workaround for zero-resource language pairs is translating via an intermediate (pivot) language. To do so, we need to collect parallel data and train MT models for source-to-pivot and pivot-to-target individually; it takes a double effort and the decoding is twice as slow.",
"Unsupervised learning is another alternative, where we can train an MT system with only monolingual corpora. Decipherment methods BIBREF2 , BIBREF3 are the first work in this direction, but they often suffer from a huge latent hypothesis space BIBREF4 .",
"Recent work by unmt-artetxe and unmt-facebook train sequence-to-sequence MT models of both translation directions together in an unsupervised way. They do back-translation BIBREF5 back and forth for every iteration or batch, which needs an immensely long time and careful tuning of hyperparameters for massive monolingual data.",
"Here we suggest rather simple methods to build an unsupervised MT system quickly, based on word translation using cross-lingual word embeddings. The contributions of this paper are:",
"The proposed models can be efficiently trained with off-the-shelf softwares with little or no changes in the implementation, using only monolingual data. The provided analyses help for better learning of cross-lingual word embeddings for translation purpose. Altogether, our unsupervised MT system outperforms the sequence-to-sequence neural models even without training signals from the opposite translation direction, i.e. via back-translation."
],
[
"As a basic step for unsupervised MT, we learn a word translation model from monolingual corpora of each language. In this work, we exploit cross-lingual word embedding for word-by-word translation, which is state-of-the-art in terms of type translation quality BIBREF6 , BIBREF7 .",
"Cross-lingual word embedding is a continuous representation of words whose vector space is shared across multiple languages. This enables distance calculation between word embeddings across languages, which is actually finding translation candidates.",
"We train cross-lingual word embedding in a fully unsupervised manner:",
"Once we have the cross-lingual mapping, we can transform the embedding of a given source word and find a target word with the closest embedding, i.e. nearest neighbor search. Here, we apply cross-domain similarity local scaling BIBREF7 to penalize the word similarities in dense areas of the embedding distribution.",
"We further refine the mapping obtained from Step 2 as follows BIBREF6 :"
],
[
"In translating sentences, cross-lingual word embedding has several drawbacks. We describe each of them and our corresponding solutions."
],
[
"The word translation using nearest neighbor search does not consider context around the current word. In many cases, the correct translation is not the nearest target word but other close words with morphological variations or synonyms, depending on the context.",
"The reasons are in two-fold: 1) Word embedding is trained to place semantically related words nearby, even though they have opposite meanings. 2) A hubness problem of high-dimensional embedding space hinders a correct search, where lots of different words happen to be close to each other BIBREF10 .",
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:",
" $\nL(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h)\n$ ",
"Here, $q(f,e)$ is a lexical score defined as:",
" $\nq(f,e) = \\frac{d(f,e) + 1}{2}\n$ ",
" where $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance.",
"Accumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
[
"Even when we have correctly translated words for each position, the output is still far from an acceptable translation. We adopt sequence denoising autoencoder BIBREF11 to improve the translation output of Section \"Context-aware Beam Search\" . The main idea is to train a sequence-to-sequence neural network model that takes a noisy sentence as input and produces a (denoised) clean sentence as output, both of which are of the same (target) language. The model was originally proposed to learn sentence embeddings, but here we use it directly to actually remove noise in a sentence.",
"Training label sequences for the denoising network would be target monolingual sentences, but we do not have their noisy versions at hand. Given a clean target sentence, the noisy input should be ideally word-by-word translation of the corresponding source sentence. However, such bilingual sentence alignment is not available in our unsupervised setup.",
"Instead, we inject artificial noise into a clean sentence to simulate the noise of word-by-word translation. We design different noise types after the following aspects of word-by-word translation.",
"Word-by-word translation always outputs a target word for every position. However, there are a plenty of cases that multiple source words should be translated to a single target word, or that some source words are rather not translated to any word to make a fluent output. For example, a German sentence “Ich höre zu.” would be translated to “I'm listening to.” by a word-by-word translator, but “I'm listening.” is more natural in English (Figure 1 ).",
"We pretend to have extra target words which might be translation of redundant source words, by inserting random target words to a clean sentence:",
"For each position $i$ , sample a probability $p_i \\sim \\text{Uniform}(0,1)$ .",
"If $p_i < p_\\text{ins}$ , sample a word $e$ from the most frequent $V_\\text{ins}$ target words and insert it before position $i$ .",
"We limit the inserted words by $V_\\text{ins}$ because target insertion occurs mostly with common words, e.g. prepositions or articles, as the example above. We insert words only before—not after—a position, since an extra word after the ending word (usually a punctuation) is not probable.",
"Similarly, word-by-word translation cannot handle the contrary case: when a source word should be translated into more than one target words, or a target word should be generated from no source words for fluency. For example, a German word “im” must be “in the” in English, but word translation generates only one of the two English words. Another example is shown in Figure 2 .",
"To simulate such situations, we drop some words randomly from a clean target sentence BIBREF11 :",
"For each position $i$ , sample a probability $p_i \\sim \\text{Uniform}(0,1)$ .",
"If $p_i < p_\\text{del}$ , drop the word in the position $i$ .",
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 .",
"From a clean target sentence, we corrupt its word order by random permutations. We limit the maximum distance between an original position and its new position like unmt-facebook:",
"For each position $i$ , sample an integer $\\delta _i$ from $[0,d_\\text{per}]$ .",
"Add $\\delta _i$ to index $i$ and sort the incremented indices $i + \\delta _i$ in an increasing order.",
"Rearrange the words to be in the new positions, to which their original indices have moved by Step 2.",
"This is a generalized version of swapping two neighboring words BIBREF11 . Reordering is highly dependent of each language, but we found that this noise is generally close to word-by-word translation outputs.",
"Insertion, deletion, and reordering noises were applied to each mini-batch with different random seeds, allowing the model to see various noisy versions of the same clean sentence over the epochs.",
"Note that the deletion and permutation noises are integrated in the neural MT training of unmt-artetxe and unmt-facebook as additional training objectives. Whereas we optimize an independent model solely for denoising without architecture change. It allows us to easily train a larger network with a larger data. Insertion noise is of our original design, which we found to be the most effective (Section \"Ablation Study: Denoising\" )."
],
[
"We applied the proposed methods on WMT 2016 German $\\leftrightarrow $ English task and WMT 2014 French $\\leftrightarrow $ English task. For German/English, we trained word embeddings with 100M sentences sampled from News Crawl 2014-2017 monolingual corpora. For French, we used News Crawl 2007-2014 (around 42M sentences). The data was lowercased and filtered to have a maximum sentence length 100. German compound words were splitted beforehand. Numbers were replaced with category labels and recovered back after decoding by looking at the source sentence.",
"fasttext BIBREF8 was used to learn monolingual embeddings for only the words with minimum count 10. MUSE BIBREF7 was used for cross-lingual mappings with $V_\\text{cross-train}$ = 100k and 10 refinement iterations (Step 3-5 in Section \"Cross-lingual Word Embedding\" ). Other parameters follow the values in cross-facebook. With the same data, we trained 5-gram count-based LMs using KenLM BIBREF14 with its default setting.",
"Denoising autoencoders were trained using Sockeye BIBREF15 on News Crawl 2016 for German/English and News Crawl 2014 for French. We considered only top 50k frequent words for each language and mapped other words to <unk>. The unknowns in the denoised output were replaced with missing words from the noisy input by a simple line search.",
"We used 6-layer Transformer encoder/decoder BIBREF16 for denoisers, with embedding/hidden layer size 512, feedforward sublayer size 2048 and 8 attention heads.",
"As a validation set for the denoiser training, we used newstest2015 (German $\\leftrightarrow $ English) or newstest2013 (French $\\leftrightarrow $ English), where the input/output sides both have the same clean target sentences, encouraging a denoiser to keep at least clean part of word-by-word translations. Here, the noisy input showed a slight degradation of performance; the model seemed to overfit to specific noises in the small validation set.",
"Optimization of the denoising models was done with Adam BIBREF17 : initial learning rate 0.0001, checkpoint frequency 4000, no learning rate warmup, multiplying 0.7 to the learning rate when the perplexity on the validation set did not improve for 3 checkpoints. We stopped the training if it was not improved for 8 checkpoints.",
"In decoding, we used $\\lambda _\\text{embed} = 1$ and $\\lambda _\\text{LM} = 0.1$ with beam size 10. We only translated top frequent 50k source words and merely copied other words to target side. For each position, only the nearest 100 target words were considered.",
"Table 1 shows the results. LM improves word-by-word baselines consistently in all four tasks, giving at least +3% Bleu. When our denoising model is applied on top of it, we have additional gain around +3% Bleu. Note that our methods do not involve any decoding steps to generate pseudo-parallel training data, but still perform better than unsupervised MT systems that rely on repetitive back-translations BIBREF13 , BIBREF12 by up to +3.9% Bleu. The total training time of our method is only 1-2 days with a single GPU."
],
[
"To examine the effect of each noise type in denoising autoencoder, we tuned each parameter of the noise and combined them incrementally (Table 2 ). Firstly, for permutations, a significant improvement is achieved from $d_\\text{per} = 3$ , since a local reordering usually involves a sequence of 3 to 4 words. With $d_\\text{per} > 5$ , it shuffles too many consecutive words together, yielding no further improvement. This noise cannot handle long-range reordering, which is usually a swap of words that are far from each other, keeping the words in the middle as they are.",
"Secondly, we applied the deletion noise with different values of $p_\\text{del}$ . 0.1 gives +0.8% Bleu, but we immediately see a degradation with a larger value; it is hard to observe one-to-many translations more than once in each sentence pair.",
"Finally, we optimized $V_\\text{ins}$ for the insertion noise, fixing $p_\\text{ins} = 0.1$ . Increasing $V_\\text{ins}$ is generally not beneficial, since it provides too much variations in the inserted word; it might not be related to its neighboring words. Overall, we observe the best result (+1.5% Bleu) with $V_\\text{ins} = 50$ ."
],
[
"We also examined how the translation performance varies with different vocabularies of cross-lingual word embedding in Table 3 . The first three rows show that BPE embeddings perform worse than word embeddings, especially with smaller vocabulary size. For short BPE tokens, the context they meet during the embedding training is much more various than a complete word, and a direct translation of such token to a BPE token of another language would be very ambiguous.",
"For word level embeddings, we compared different vocabulary sizes used for training the cross-lingual mapping (the second step in Section \"Cross-lingual Word Embedding\" ). Surprisingly, cross-lingual word embedding learned only on top 20k words is comparable to that of 200k words in the translation quality. We also increased the search vocabulary to more than 200k but the performance only degrades. This means that word-by-word translation with cross-lingual embedding depends highly on the frequent word mappings, and learning the mapping between rare words does not have a positive effect."
],
[
"In this paper, we proposed a simple pipeline to greatly improve sentence translation based on cross-lingual word embedding. We achieved context-aware lexical choices using beam search with LM, and solved insertion/deletion/reordering problems using denoising autoencoder. Our novel insertion noise shows a promising performance even combined with other noise types. Our methods do not need back-translation steps but still outperforms costly unsupervised neural MT systems. In addition, we proved that for general translation purpose, an effective cross-lingual mapping can be learned using only a small set of frequent words, not on subword units. Our implementation of the LM integration and the denoising autoencoder is available online."
],
[
"This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 694537 (SEQCLAS). The GPU computing cluster was partially funded by Deutsche Forschungsgemeinschaft (DFG) under grant INST 222/1168-1 FUGG. The work reflects only the authors' views and neither ERC nor DFG is responsible for any use that may be made of the information it contains."
]
],
"section_name": [
"Introduction",
"Cross-lingual Word Embedding",
"Sentence Translation",
"Context-aware Beam Search",
"Denoising",
"Experiments",
"Ablation Study: Denoising",
"Ablation Study: Vocabulary",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"71ecb82f2bb5a14966380b4accbbd3e11eea388d",
"94901c163f406948403ac65fbd88b171f98b1603",
"9c213374f161b3f083588e2ec77c2c7d44944bfc"
],
"answer": [
{
"evidence": [
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 .",
"FLOAT SELECTED: Figure 3: Example of denoising the reordering noise."
],
"extractive_spans": [],
"free_form_answer": "changing the order of the word-by-word translation so it matches the target language",
"highlighted_evidence": [
"Also, translations generated word-by-word are not in an order of the target language.",
"FLOAT SELECTED: Figure 3: Example of denoising the reordering noise."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 ."
],
"extractive_spans": [],
"free_form_answer": "Changing the word order of the translation so it is in the right order of the target language.",
"highlighted_evidence": [
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 .",
"From a clean target sentence, we corrupt its word order by random permutations. We limit the maximum distance between an original position and its new position like unmt-facebook:"
],
"extractive_spans": [],
"free_form_answer": "Re-arranging translated words so that they are in the correct order in the target language",
"highlighted_evidence": [
"Also, translations generated word-by-word are not in an order of the target language. In our beam search, LM only assists in choosing the right word in context but does not modify the word order. A common reordering problem of German $\\rightarrow $ English is illustrated in Figure 3 .",
"From a clean target sentence, we corrupt its word order by random permutations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
},
{
"annotation_id": [
"7f6b3044d15268c9fee1d58f0be9e920f7e6eb7f",
"8055015a2d805546da5be07d109f5a8ab2c0e387",
"a5f194c30674babd27fab0e1e7e968b4355c5b99"
],
"answer": [
{
"evidence": [
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:",
"$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $",
"Here, $q(f,e)$ is a lexical score defined as:",
"$ q(f,e) = \\frac{d(f,e) + 1}{2} $",
"where $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance."
],
"extractive_spans": [],
"free_form_answer": "the language model is combined with cross-lingual word embedding to obtain context information in the word-by-word translation",
"highlighted_evidence": [
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:\n\n$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $\n\nHere, $q(f,e)$ is a lexical score defined as:\n\n$ q(f,e) = \\frac{d(f,e) + 1}{2} $\n\nwhere $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:",
"$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $",
"Here, $q(f,e)$ is a lexical score defined as:",
"$ q(f,e) = \\frac{d(f,e) + 1}{2} $",
"where $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance.",
"Accumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
"extractive_spans": [
"combining a language model (LM) with cross-lingual word embedding",
"Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:\n\n$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $\n\nHere, $q(f,e)$ is a lexical score defined as:\n\n$ q(f,e) = \\frac{d(f,e) + 1}{2} $\n\nwhere $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ .",
"Accumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:\n\n$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $\n\nHere, $q(f,e)$ is a lexical score defined as:\n\n$ q(f,e) = \\frac{d(f,e) + 1}{2} $\n\nwhere $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance.\n\nAccumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The word translation using nearest neighbor search does not consider context around the current word. In many cases, the correct translation is not the nearest target word but other close words with morphological variations or synonyms, depending on the context.",
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:",
"$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $",
"Here, $q(f,e)$ is a lexical score defined as:",
"$ q(f,e) = \\frac{d(f,e) + 1}{2} $",
"where $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability. In our experiments, we found that this simple linear scaling is better than sigmoid or softmax functions in the final translation performance.",
"Accumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
"extractive_spans": [],
"free_form_answer": "It is used to calculate the probability of a possible target word given the history of target words that come before it.",
"highlighted_evidence": [
"The word translation using nearest neighbor search does not consider context around the current word. In many cases, the correct translation is not the nearest target word but other close words with morphological variations or synonyms, depending on the context.",
"In this paper, we integrate context information into word-by-word translation by combining a language model (LM) with cross-lingual word embedding. Let $f$ be a source word in the current position and $e$ a possible target word. Given a history $h$ of target words before $e$ , the score of $e$ to be the translation of $f$ would be:\n\n$ L(e;f,h) = \\lambda _\\text{emb}\\log q(f,e) + \\lambda _\\text{LM}\\log p(e|h) $\n\nHere, $q(f,e)$ is a lexical score defined as:\n\n$ q(f,e) = \\frac{d(f,e) + 1}{2} $\n\nwhere $d(f,e) \\in [-1,1]$ is a cosine similarity between $f$ and $e$ . It is transformed to the range $[0,1]$ to make it similar in scale with the LM probability.",
"Accumulating the scores per position, we perform a beam search to allow only reasonable translation hypotheses."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c7d4a630661cd719ea504dba56393f78278b296b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is reordering in the context of the paper?",
"How does the paper use language model for context aware search?"
],
"question_id": [
"3499d5feeb3a45411d8e893516adbdc14e72002a",
"d0048ef1cba3f63b5d60c568d5d0ba62ac4d7e75"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"language model",
"language model"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 2: Example of denoising a deletion noise.",
"Figure 1: Example of denoising an insertion noise.",
"Figure 3: Example of denoising the reordering noise.",
"Table 1: Translation results on German↔English newstest2016 and French↔English newstest2014.",
"Table 3: Translation results with different vocabularies for German→English (without denoising).",
"Table 2: Translation results with different values of denoising parameters for German→English."
],
"file": [
"3-Figure2-1.png",
"3-Figure1-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"5-Table3-1.png",
"5-Table2-1.png"
]
} | [
"What is reordering in the context of the paper?",
"How does the paper use language model for context aware search?"
] | [
[
"1901.01590-3-Figure3-1.png",
"1901.01590-Denoising-12",
"1901.01590-Denoising-13"
],
[
"1901.01590-Context-aware Beam Search-0",
"1901.01590-Context-aware Beam Search-2",
"1901.01590-Context-aware Beam Search-4",
"1901.01590-Context-aware Beam Search-7"
]
] | [
"Re-arranging translated words so that they are in the correct order in the target language",
"It is used to calculate the probability of a possible target word given the history of target words that come before it."
] | 115 |
1708.01065 | Reader-Aware Multi-Document Summarization: An Enhanced Model and The First Dataset | We investigate the problem of reader-aware multi-document summarization (RA-MDS) and introduce a new dataset for this problem. To tackle RA-MDS, we extend a variational auto-encodes (VAEs) based MDS framework by jointly considering news documents and reader comments. To conduct evaluation for summarization performance, we prepare a new dataset. We describe the methods for data collection, aspect annotation, and summary writing as well as scrutinizing by experts. Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the proposed dataset. The annotated dataset for RA-MDS is available online. | {
"paragraphs": [
[
"The goal of multi-document summarization (MDS) is to automatically generate a brief, well-organized summary for a topic which describes an event with a set of documents from different sources. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . In the typical setting of MDS, the input is a set of news documents about the same topic. The output summary is a piece of short text document containing several sentences, generated only based on the input original documents.",
"With the development of social media and mobile equipments, more and more user generated content is available. Figure FIGREF2 is a snapshot of reader comments under the news report “The most important announcements from Google's big developers' conference”. The content of the original news report talks about some new products based on AI techniques. The news report generally conveys an enthusiastic tone. However, while some readers share similar enthusiasms, some others express their worries about new products and technologies and these comments can also reflect their interests which may not be very salient in the original news reports. Unfortunately, existing MDS approaches cannot handle this issue. We investigate this problem known as reader-aware multi-document summarization (RA-MDS). Under the RA-MDS setting, one should jointly consider news documents and reader comments when generating the summaries.",
"One challenge of the RA-MDS problem is how to conduct salience estimation by jointly considering the focus of news reports and the reader interests revealed by comments. Meanwhile, the model should be insensitive to the availability of diverse aspects of reader comments. Another challenge is that reader comments are very noisy, not fully grammatical and often expressed in informal expressions. Some previous works explore the effect of comments or social contexts in single document summarization such as blog summarization BIBREF7 , BIBREF8 . However, the problem setting of RA-MDS is more challenging because the considered comments are about an event which is described by multiple documents spanning a time period. Another challenge is that reader comments are very diverse and noisy. Recently, BIBREF9 employed a sparse coding based framework for RA-MDS jointly considering news documents and reader comments via an unsupervised data reconstruction strategy. However, they only used the bag-of-words method to represent texts, which cannot capture the complex relationship between documents and comments.",
"Recently, BIBREF6 proposed a sentence salience estimation framework known as VAESum based on a neural generative model called Variational Auto-Encoders (VAEs) BIBREF10 , BIBREF11 . During our investigation, we find that the Gaussian based VAEs have a strong ability to capture the salience information and filter the noise from texts. Intuitively, if we feed both the news sentences and the comment sentences into the VAEs, commonly existed latent aspect information from both of them will be enhanced and become salient. Inspired by this consideration, to address the sentence salience estimation problem for RA-MDS by jointly considering news documents and reader comments, we extend the VAESum framework by training the news sentence latent model and the comment sentence latent model simultaneously by sharing the neural parameters. After estimating the sentence salience, we employ a phrase based compressive unified optimization framework to generate a final summary.",
"There is a lack of high-quality dataset suitable for RA-MDS. Existing datasets from DUC and TAC are not appropriate. Therefore, we introduce a new dataset for RA-MDS. We employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To our best knowledge, this is the first dataset for RA-MDS.",
"Our contributions are as follows: (1) We investigate the RA-MDS problem and introduce a new dataset for the problem of RA-MDS. To our best knowledge, it is the first dataset for RA-MDS. (2) To tackle the RA-MDS, we extend a VAEs-based MDS framework by jointly considering news documents and reader comments. (3) Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the dataset."
],
[
"As shown in Figure FIGREF7 , our reader-aware news sentence salience framework has three main components: (1) latent semantic modeling; (2) comment weight estimation; (3) joint reconstruction. Consider a dataset INLINEFORM0 and INLINEFORM1 consisting of INLINEFORM2 news sentences and INLINEFORM3 comment sentences respectively from all the documents in a topic (event), represented by bag-of-words vectors. Our proposed news sentence salience estimation framework is extended from VAESum BIBREF6 , which can jointly consider news documents and reader comments. One extension is that, in order to absorb more useful information and filter the noisy data from comments, we design a weight estimation mechanism which can assign a real value INLINEFORM4 for a comment sentence INLINEFORM5 . The comment weight INLINEFORM6 is integrated into the VAEs based sentence modeling and data reconstruction component to handle comments."
],
[
"Variational Autoencoders (VAEs) BIBREF10 , BIBREF11 is a generative model based on neural networks which can be used to conduct latent semantic modeling. BIBREF6 employ VAEs to map the news sentences into a latent semantic space, which is helpful in improving the MDS performance. Similarly, we also employ VAEs to conduct the semantic modeling for news sentences and comment sentences. Assume that both the prior and posterior of the latent variables are Gaussian, i.e., INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 denote the variational mean and standard deviation respectively, which can be calculated with a multilayer perceptron (MLP). VAEs can be divided into two phases, namely, encoding (inference), and decoding (generation). All the operations are depicted as follows: DISPLAYFORM0 ",
"Based on the reparameterization trick in Equation EQREF9 , we can get the analytical representation of the variational lower bound INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 denotes a general sentence, and it can be a news sentence INLINEFORM1 or a comment sentnece INLINEFORM2 .",
"By feeding both the news documents and the reader comments into VAEs, we equip the model a ability of capturing the information from them jointly. However, there is a large amount of noisy information hidden in the comments. Hence we design a weighted combination mechanism for fusing news and comments in the VAEs. Precisely, we split the variational lower bound INLINEFORM0 into two parts and fuse them using the comment weight INLINEFORM1 : DISPLAYFORM0 ",
"The calculation of INLINEFORM0 will be discussed later.",
"The news sentence salience estimation is conducted by an unsupervised data reconstruction framework. Assume that INLINEFORM0 are INLINEFORM1 latent aspect vectors used for reconstructing all the latent semantic vectors INLINEFORM2 . Thereafter, the variational-decoding progress of VAEs can map the latent aspect vector INLINEFORM3 to INLINEFORM4 , and then produce INLINEFORM5 new aspect term vectors INLINEFORM6 : DISPLAYFORM0 ",
"VAESum BIBREF6 employs an alignment mechanism BIBREF12 , BIBREF13 to recall the lost detailed information from the input sentence. Inspired this idea, we design a jointly weighted alignment mechanism by considering the news sentence and the comment sentence simultaneously. For each decoder hidden state INLINEFORM0 , we align it with each news encoder hidden state INLINEFORM1 by an alignment vector INLINEFORM2 . We also align it with each comments encoder hidden state INLINEFORM3 by an alignment vector INLINEFORM4 . In order to filter the noisy information from the comments, we again employ the comment weight INLINEFORM5 to adjust the alignment vector of comments: DISPLAYFORM0 ",
"The news-based context vector INLINEFORM0 and the comment-based context vector INLINEFORM1 can be obtained by linearly blending the input hidden states respectively. Then the output hidden state can be updated based on the context vectors: DISPLAYFORM0 ",
"Then we can generate the updated output aspect vectors based on INLINEFORM0 . We add a similar alignment mechanism into the output layer.",
" INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 can be used to reconstruct the space to which they belong respectively. In order to capture the information from comments, we design a joint reconstruction approach here. Let INLINEFORM3 be the reconstruction coefficient matrix for news sentences, and INLINEFORM4 be the reconstruction coefficient matrix for comment sentences. The optimization objective contains three reconstruction terms, jointly considering the latent semantic reconstruction and the term vector space reconstruction for news and comments respectively: DISPLAYFORM0 ",
"This objective is integrated with the variational lower bound of VAEs INLINEFORM0 and optimized in a multi-task learning fashion. Then the new optimization objective is: DISPLAYFORM0 ",
"where INLINEFORM0 is a set of all the parameters related to this task. We define the magnitude of each row of INLINEFORM1 as the salience scores for the corresponding news sentences.",
"We should note that the most important variable in our framework is the comment weight vector INLINEFORM0 , which appears in all the three components of our framework. The basic idea for calculating INLINEFORM1 is that if the comment sentence is more similar to the news content, then it contains less noisy information. For all the news sentences INLINEFORM2 and all the comment sentences INLINEFORM3 , calculate the relation matrix INLINEFORM4 by: DISPLAYFORM0 ",
"Then we add an average pooling layer to get the coefficient value for each comment sentence: DISPLAYFORM0 ",
"Finally, we add a sigmoid function to adjust the coefficient value to INLINEFORM0 : DISPLAYFORM0 ",
"Because we have different representations from different vector space for the sentences, therefore we can calculate the comment weight in different semantic vector space. Here we use two spaces, namely, latent semantic space obtained by VAEs, and the original bag-of-words vector space. Then we can merge the weights by a parameter INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the comment weight calculated from latent semantic space and term vector space. Actually, we can regard INLINEFORM2 as some gates to control the proportion of each comment sentence absorbed by the framework."
],
[
"In order to produce reader-aware summaries, inspired by the phrase-based model in BIBREF5 and BIBREF9 , we refine this model to consider the news sentences salience information obtained by our framework. Based on the parsed constituency tree for each input sentence, we extract the noun-phrases (NPs) and verb-phrases (VPs). The overall objective function of this optimization formulation for selecting salient NPs and VPs is formulated as an integer linear programming (ILP) problem: DISPLAYFORM0 ",
"where INLINEFORM0 is the selection indicator for the phrase INLINEFORM1 , INLINEFORM2 is the salience scores of INLINEFORM3 , INLINEFORM4 and INLINEFORM5 is co-occurrence indicator and the similarity a pair of phrases ( INLINEFORM6 , INLINEFORM7 ) respectively. The similarity is calculated with the Jaccard Index based method. In order to obtain coherent summaries with good readability, we add some constraints into the ILP framework. For details, please refer to BIBREF14 , BIBREF5 , and BIBREF9 . The objective function and constraints are linear. Therefore the optimization can be solved by existing ILP solvers such as simplex algorithms BIBREF15 . In the implementation, we use a package called lp_solve."
],
[
"In this section, we describe the preparation process of the dataset. Then we provide some properties and statistics."
],
[
"The definition of the terminology related to the dataset is given as follows.",
"Topic: A topic refers to an event and it is composed of a set of news documents from different sources.",
"Document: A news article describing some aspects of the topic. The set of documents in the same topic typically span a period, say a few days.",
"Category: Each topic belongs to a category. There are 6 predefined categories: (1) Accidents and Natural Disasters, (2) Attacks (Criminal/Terrorist), (3) New Technology, (4) Health and Safety, (5) Endangered Resources, and (6) Investigations and Trials (Criminal/Legal/Other).",
"Aspect: Each category has a set of predefined aspects. Each aspect describes one important element of an event. For example, for the category “Accidents and Natural Disasters”, the aspects are “WHAT”, “WHEN”, “WHERE”, “WHY”, “WHO_AFFECTED”, “DAMAGES”, and “COUNTERMEASURES”.",
"Aspect facet: An aspect facet refers to the actual content of a particular aspect for a particular topic. Take the topic “Malaysia Airlines Disappearance” as an example, facets for the aspect “WHAT” include “missing Malaysia Airlines Flight 370”, “two passengers used passports stolen in Thailand from an Austrian and an Italian.” etc. Facets for the aspect “WHEN” are “ Saturday morning”, “about an hour into its flight from Kuala Lumpur”, etc.",
"Comment: A piece of text written by a reader conveying his or her altitude, emotion, or any thought on a particular news document."
],
[
"The first step is to select topics. The selected topics should be in one of the above categories. We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 .",
"For some news websites, in addition to provide news articles, they offer a platform to allow readers to enter comments. Regarding the collection of news documents, for a particular topic, one consideration is that reader comments can be easily found. Another consideration is that all the news documents under a topic must be collected from different websites as far as possible. Similar to the methods used in DUC and TAC, we also capture and store the content using XML format.",
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words.",
"After finishing the summary writing procedure, we employed another expert for scrutinizing the summaries. Each summary is checked from five linguistic quality perspectives: grammaticality, non-redundancy, referential clarity, focus, and coherence. Finally, all the model summaries are stored in XML files."
],
[
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
[
"The properties of our own dataset are depicted in Section SECREF28 . We use ROUGE score as our evaluation metric BIBREF16 with standard options. F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported."
],
[
"To evaluate the performance of our dataset and the proposed framework RAVAESum for RA-MDS, we compare our model with the following methods:",
"RA-Sparse BIBREF9 : It is a framework to tackle the RA-MDS problem. A sparse-coding-based method is used to calculate the salience of the news sentences by jointly considering news documents and reader comments.",
"Lead BIBREF17 : It ranks the news sentences chronologically and extracts the leading sentences one by one until the length limit.",
"Centroid BIBREF18 : It summarizes clusters of news articles automatically grouped by a topic detection system, and then it uses information from the centroids of the clusters to select sentences.",
"LexRank BIBREF1 and TextRank BIBREF19 : Both methods are graph-based unsupervised framework for sentence salience estimation based on PageRank algorithm.",
"Concept BIBREF5 : It generates abstractive summaries using phrase-based optimization framework with concept weight as salience estimation. The concept set contains unigrams, bigrams, and entities. The weighted term-frequency is used as the concept weight.",
"We can see that only the method RA-Sparse can handle RA-MDS. All the other methods are only for traditional MDS without comments."
],
[
"The input news sentences and comment sentences are represented as BoWs vectors with dimension INLINEFORM0 . The dictionary INLINEFORM1 is created using unigrams, bigrams and named entity terms. INLINEFORM2 and INLINEFORM3 are the number of news sentences and comment sentences respectively. For the number of latent aspects used in data reconstruction, we let INLINEFORM4 . For the neural network framework, we set the hidden size INLINEFORM5 and the latent size INLINEFORM6 . For the parameter INLINEFORM7 used in comment weight, we let INLINEFORM8 . Adam BIBREF20 is used for gradient based optimization with a learning rate 0.001. Our neural network based framework is implemented using Theano BIBREF21 on a single GPU."
],
[
"The results of our framework as well as the baseline methods are depicted in Table TABREF40 . It is obvious that our framework RAVAESum is the best among all the comparison methods. Specifically, it is better than RA-Sparse significantly ( INLINEFORM0 ), which demonstrates that VAEs based latent semantic modeling and joint semantic space reconstruction can improve the MDS performance considerably. Both RAVAESum and RA-Sparse are better than the methods without considering reader comments."
],
[
"To further investigate the effectiveness of our proposed RAVAESum framework, we adjust our framework by removing the comments related components. Then the model settings of RAVAESum-noC are similar to VAESum BIBREF6 . The evaluation results are shown in Table TABREF42 , which illustrate that our framework with reader comments RAVAESum is better than RAVAESum-noC significantly( INLINEFORM0 ).",
"Moreover, as mentioned in VAESum BIBREF6 , the output aspect vectors contain the word salience information. Then we select the top-10 terms for event “Sony Virtual Reality PS4”, and “`Bitcoin Mt. Gox Offlile”' for model RAVAESum (+C) and RAVAESum-noC (-C) respectively, and the results are shown in Table TABREF43 . It is obvious that the rank of the top salience terms are different. We check from the news documents and reader comments and find that some terms are enhanced by the reader comments successfully. For example, for the topic “Sony Virtual Reality PS4”, many readers talked about the product of “Oculus”, hence the word “oculus” is assigned a high salience by our model."
],
[
"Based on the news and comments of the topic “Sony Virtual Reality PS4”, we generate two summaries with our model considering comments (RAVAESum) and ignoring comments (RAVAESum-noC) respectively. The summaries and ROUGE evaluation are given in Table TABREF45 . All the ROUGE values of our model considering comments are better than those ignoring comments with large gaps. The sentences in italic bold of the two summaries are different. By reviewing the comments of this topic, we find that many readers talked about “Oculus”, the other product with virtual reality techniques. This issue is well identified by our model and select the sentence “Mr. Yoshida said that Sony was inspired and encouraged to do its own virtual reality project after the enthusiastic response to the efforts of Oculus VR and Valve, another game company working on the technology.”."
],
[
"We investigate the problem of reader-aware multi-document summarization (RA-MDS) and introduce a new dataset. To tackle the RA-MDS, we extend a variational auto-encodes (VAEs) based MDS framework by jointly considering news documents and reader comments. The methods for data collection, aspect annotation, and summary writing and scrutinizing by experts are described. Experimental results show that reader comments can improve the summarization performance, which demonstrate the usefulness of the proposed dataset."
]
],
"section_name": [
"Introduction",
"Overview",
"Reader-Aware Salience Estimation",
"Summary Construction",
"Data Description",
"Background",
"Data Collection",
"Data Properties",
"Dataset and Metrics",
"Comparative Methods",
"Experimental Settings",
"Results on Our Dataset",
"Further Investigation of Our Framework ",
"Case Study",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"40f24b6fa41276359f1e14fe829141ca32b65569",
"9abe8f09f626e289cdef024ca9ad4fdb5be0adb0",
"ce0c35de939c07ca730cd45122e195af37476ad7"
],
"answer": [
{
"evidence": [
"The properties of our own dataset are depicted in Section SECREF28 . We use ROUGE score as our evaluation metric BIBREF16 with standard options. F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported."
],
"extractive_spans": [
"F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use ROUGE score as our evaluation metric BIBREF16 with standard options. F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The properties of our own dataset are depicted in Section SECREF28 . We use ROUGE score as our evaluation metric BIBREF16 with standard options. F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported."
],
"extractive_spans": [
"ROUGE"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use ROUGE score as our evaluation metric BIBREF16 with standard options."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The properties of our own dataset are depicted in Section SECREF28 . We use ROUGE score as our evaluation metric BIBREF16 with standard options. F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported.",
"Comparative Methods"
],
"extractive_spans": [
"ROUGE-1",
"ROUGE-2 ",
"ROUGE-SU4"
],
"free_form_answer": "",
"highlighted_evidence": [
". F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported.\n\nComparative Methods",
"F-measures of ROUGE-1, ROUGE-2 and ROUGE-SU4 are reported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2a9ccae73203e12eb414a73e6bf532cb7a8e5845",
"6a0ee75e4934b2edf83eb5395a948c5e8ae98096",
"fe893403e89e81b3526552ada697b13d8a5fbe60"
],
"answer": [
{
"evidence": [
"The first step is to select topics. The selected topics should be in one of the above categories. We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 ."
],
"extractive_spans": [
"Google News",
"follow the related tags on Twitter",
"scan the list of event archives on the Web, such as earthquakes happened in 2017"
],
"free_form_answer": "",
"highlighted_evidence": [
"We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first step is to select topics. The selected topics should be in one of the above categories. We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 .",
"For some news websites, in addition to provide news articles, they offer a platform to allow readers to enter comments. Regarding the collection of news documents, for a particular topic, one consideration is that reader comments can be easily found. Another consideration is that all the news documents under a topic must be collected from different websites as far as possible. Similar to the methods used in DUC and TAC, we also capture and store the content using XML format."
],
"extractive_spans": [],
"free_form_answer": "Topics were taken from category names in Google News, tags on Twitter, event archives on the Web. News articles were taken from news websites.",
"highlighted_evidence": [
"The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 .",
"For some news websites, in addition to provide news articles, they offer a platform to allow readers to enter comments. Regarding the collection of news documents, for a particular topic, one consideration is that reader comments can be easily found."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first step is to select topics. The selected topics should be in one of the above categories. We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter. One more useful method is to scan the list of event archives on the Web, such as earthquakes happened in 2017 ."
],
"extractive_spans": [
" Google News",
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
" We make use of several ways to find topics. The first way is to search the category name using Google News. The second way is to follow the related tags on Twitter."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3e45e5476fbf01dd22a9c86d9b74459f37426c86",
"932ed17474882b222851346184a85a61cf2c3801",
"b7a894b2296171994dbc205337cf5850b1b30200"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Summarization performance."
],
"extractive_spans": [],
"free_form_answer": "The proposed RAVAESum method improves from 0.001 to 0.059 Rouge1.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summarization performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Summarization performance."
],
"extractive_spans": [],
"free_form_answer": "They improved by 0.007 on average across R-1, R-2, R-SU4 over the best baseline.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summarization performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"13e58d6000f163a01a8a0d01910070b210e85fdc",
"54ab2010fe225555ec4ea8254f8f606e0422d0aa",
"ab28b76260de2f45c66f7a86f19ef4e01251062f"
],
"answer": [
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words."
],
"extractive_spans": [
"Each topic is assigned to 4 experts"
],
"free_form_answer": "",
"highlighted_evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words.",
"After finishing the summary writing procedure, we employed another expert for scrutinizing the summaries. Each summary is checked from five linguistic quality perspectives: grammaticality, non-redundancy, referential clarity, focus, and coherence. Finally, all the model summaries are stored in XML files."
],
"extractive_spans": [],
"free_form_answer": "5",
"highlighted_evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. ",
"After finishing the summary writing procedure, we employed another expert for scrutinizing the summaries."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words.",
"After finishing the summary writing procedure, we employed another expert for scrutinizing the summaries. Each summary is checked from five linguistic quality perspectives: grammaticality, non-redundancy, referential clarity, focus, and coherence. Finally, all the model summaries are stored in XML files."
],
"extractive_spans": [],
"free_form_answer": "5",
"highlighted_evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. ",
"After finishing the summary writing procedure, we employed another expert for scrutinizing the summaries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6cec83f96373cb36b8a8f1ee30954f420fe92668",
"4558da2606def9cd197e707b0a375a27c49bc8b2",
"fc40f968ceecb1f72b902f98c2be73ab759fe4a5"
],
"answer": [
{
"evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"extractive_spans": [],
"free_form_answer": " The dataset contains 19k annotated aspect facets, 45 topics, 6 predefined categories, 450 news document, 180 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words",
"highlighted_evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"extractive_spans": [],
"free_form_answer": "19000",
"highlighted_evidence": [
"The dataset contains 19k annotated aspect facets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"extractive_spans": [
"45 topics from those 6 predefined categories",
"On average, each topic contains 215 pieces of comments and 940 comment sentences.",
"19k annotated aspect facets"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset contains 45 topics from those 6 predefined categories.",
"Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"298975b0d034457c84f741d5e3a76f3d02dbdc0d",
"942641fcd5ad774e88b733a384249b339c949e93",
"d21e042e74dccdc93fcfe87061686bd6afee34c1"
],
"answer": [
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets"
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"15b9ce0a908086189282765300890374c19c6b4a",
"1fcdb7bb1a64b2412e39023cbfe2936fe139c45b",
"58fa98ac34ab3cf92649a52f615b342205464c57"
],
"answer": [
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words."
],
"extractive_spans": [],
"free_form_answer": "Experts identified aspect facets and wrote summaries.",
"highlighted_evidence": [
"The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There is a lack of high-quality dataset suitable for RA-MDS. Existing datasets from DUC and TAC are not appropriate. Therefore, we introduce a new dataset for RA-MDS. We employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To our best knowledge, this is the first dataset for RA-MDS."
],
"extractive_spans": [
"employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To our best knowledge, this is the first dataset for RA-MDS."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets. When selecting facets, one consideration is those facets that are popular in both news documents and reader comments have higher priority. Next, the facets that are popular in news documents have the next priority. The generated summary should cover as many aspects as possible, and should be well-organized using complete sentences with a length restriction of 100 words."
],
"extractive_spans": [],
"free_form_answer": "Each topic is assigned to 4 experts to conduct the summary writing in two phases: facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets.",
"highlighted_evidence": [
"Each topic is assigned to 4 experts, who are major in journalism, to conduct the summary writing. The task of summary writing is divided into two phases, namely, aspect facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ba70be19e03a250f8402ac08bae121fd58b06d1d",
"dbc3b44d655a314c73da77eb5d34d90722e84122"
],
"answer": [
{
"evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"extractive_spans": [
"topics",
"categories",
"news documents",
"model summaries",
" comments",
"annotated aspect facets"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset contains 45 topics from those 6 predefined categories.",
"Each topic contains 10 news documents and 4 model summaries. ",
" On average, each topic contains 215 pieces of comments and 940 comment sentences. ",
"The dataset contains 19k annotated aspect facets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"extractive_spans": [
"45 topics from those 6 predefined categories",
"Each topic contains 10 news documents and 4 model summaries",
"On average, each topic contains 215 pieces of comments and 940 comment sentences",
"Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words.",
"dataset contains 19k annotated aspect facets"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset contains 45 topics from those 6 predefined categories. Some examples of topics are “Malaysia Airlines Disappearance”, “Flappy Bird”, “Bitcoin Mt. Gox”, etc. All the topics and categories are listed in Appendix SECREF7 . Each topic contains 10 news documents and 4 model summaries. The length limit of the model summary is 100 words (slitted by space). On average, each topic contains 215 pieces of comments and 940 comment sentences. Each news document contains an average of 27 sentences, and each sentence contains an average of 25 words. 85% of non-stop model summary terms (entities, unigrams, bigrams) appeared in the news documents, and 51% of that appeared in the reader comments. The dataset contains 19k annotated aspect facets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"what evaluation metrics were used?",
"what is the source of their dataset?",
"by how much did the performance improve?",
"how many experts were there?",
"what is the size of the data collected?",
"did they use a crowdsourcing platform?",
"how was annotation conducted?",
"what does their dataset contain?"
],
"question_id": [
"4e63454275380787ebd0e38aa885977332ab33af",
"dfaeb8faf04505a4178945c933ba217e472979d8",
"342ada55bd4d7408e1fcabf1810b92d84c1dbc41",
"86d1c990c1639490c239c3dbf5492ecc44ab6652",
"b065c2846817f3969b39e355d5d017e326d6f42e",
"9536e4a2455008007067f23cc873768374c8f664",
"cfa44bb587b0c05906d8325491ca9e0f024269e8",
"b3dc9a35e8c3ed7abcc4ca0bf308dea75be9c016"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Reader comments of the news “The most important announcements from Google’s big developers’ conference (May, 2017)”.",
"Figure 2: Our proposed framework. Left: Latent semantic modeling via variation auto-encoders for news sentence xd and comment sentence xc. Middle: Comment sentence weight estimation. Right: Salience estimation by a joint data reconstruction method. Ad is a news reconstruction coefficient matrix which contains the news sentence salience information.",
"Table 1: Summarization performance.",
"Table 2: Further investigation of RAVAESum.",
"Table 3: Top-10 terms extracted from each topic according to the word salience values",
"Table 5: All the topics and the corresponding categories. The 6 predefined categories are: (1) Accidents and Natural Disasters, (2) Attacks (Criminal/Terrorist), (3) New Technology, (4) Health and Safety, (5) Endangered Resources, and (6) Investigations and Trials (Criminal/Legal/Other)."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Table5-1.png"
]
} | [
"what is the source of their dataset?",
"by how much did the performance improve?",
"how many experts were there?",
"what is the size of the data collected?",
"how was annotation conducted?"
] | [
[
"1708.01065-Data Collection-0",
"1708.01065-Data Collection-1"
],
[
"1708.01065-7-Table1-1.png"
],
[
"1708.01065-Data Collection-3",
"1708.01065-Data Collection-2"
],
[
"1708.01065-Data Properties-0"
],
[
"1708.01065-Introduction-4",
"1708.01065-Data Collection-2"
]
] | [
"Topics were taken from category names in Google News, tags on Twitter, event archives on the Web. News articles were taken from news websites.",
"They improved by 0.007 on average across R-1, R-2, R-SU4 over the best baseline.",
"5",
"19000",
"Each topic is assigned to 4 experts to conduct the summary writing in two phases: facet identification, and summary generation. For the aspect facet identification, the experts read and digested all the news documents and reader comments under the topic. Then for each aspect, the experts extracted the related facets from the news document. The summaries were generated based on the annotated aspect facets."
] | 117 |
1905.07562 | Human-like machine thinking: Language guided imagination | Human thinking requires the brain to understand the meaning of language expressions and to properly organize the flow of thoughts using language. However, current natural language processing models are primarily limited to word probability estimation. Here, we propose a Language guided imagination (LGI) network that incrementally learns the meaning and usage of numerous words and syntaxes, aiming to form a human-like machine thinking process. LGI contains three subsystems: (1) a vision system, with an encoder that disentangles input or imagined scenarios into abstract population representations, and an imagination decoder that reconstructs an imagined scenario from higher-level representations; (2) a language system, with a binarizer that transfers symbolic texts into binary vectors, an IPS (mimicking the human IntraParietal Sulcus, implemented by an LSTM) that extracts quantity information from the input texts, and a textizer that converts binary vectors back into text symbols; (3) a PFC (mimicking the human PreFrontal Cortex, implemented by an LSTM) that combines inputs from both the language and vision representations and predicts text symbols and manipulated images accordingly. LGI has incrementally learned eight different syntaxes (or tasks), with which a machine thinking loop has been formed and validated by the proper interaction between the language and vision systems. The paper provides a new architecture that lets a machine learn, understand, and use language in a human-like way, which could ultimately enable a machine to construct fictitious 'mental' scenarios and possess intelligence. | {
"paragraphs": [
[
"Human thinking is regarded as ‘mental ideas flow guided by language to achieve a goal’. For instance, after seeing heavy rain, you may say internally ‘holding an umbrella could avoid getting wet’, and then you will take an umbrella before leaving. In the process, we know that the visual input of ‘water drop’ is called rain, and can imagine ‘holding an umbrella’ could keep off the rain, and can even experience the feeling of being wet. This continual thinking capacity distinguishes us from the machine, even though the latter can also recognize images, process language, and sense rain-drops. Continual thinking requires the capacity to generate mental imagination guided by language, and extract language representations from a real or imagined scenario.",
"Modern natural language processing (NLP) techniques can handle question answering etc. tasks, such as answering that ‘Cao Cao’s nickname is Meng De’ based on the website knowledge [1]. However, the NLP network is just a probability model [2] and does not know whether Cao Cao is a man or cat. Indeed, it even does not understand what is a man. On the other hand, human being learns Cao Cao with his nickname via watching TV. When presented the question ‘what’s Cao Cao’s nickname?’, we can give the correct answer of ‘Meng De’ while imagining the figure of an actor in the brain. In this way, we say the machine network does not understand it, but the human does.",
"Human beings possess such thinking capacity due to its cumulative learning capacity accompanying the neural developmental process. Initially, parent points to a real apple and teaches the baby ‘this is an apple’. After gradually assimilating the basic meanings of numerous nouns, children begin to learn some phrases and finally complicated syntaxes. Unlike the cumulative learning, most NLP techniques normally choose to learn by reading and predicting target words. After consuming billions of words in corpus materials [2], the NLP network can predict ‘Trump’ following ‘Donald’, but it is merely a probability machine.",
"The human-like thinking system often requires specific neural substrates to support the corresponding functionalities. The most important brain area related to thinking is the prefrontal cortex (PFC), where the working memory takes place, including but not confined to, the maintenance and manipulation of particular information [3]. With the PFC, human beings can analyze and execute various tasks via ‘phonological loop’ and ‘visuospatial scratchpad’ etc. [4,5]. Inspired by the human-like brain organization, we build a ‘PFC’ network to combine language and vision streams to achieve tasks such as language controlled imagination, and imagination based thinking process. Our results show that the LGI network could incrementally learn eight syntaxes rapidly. Based on the LGI, we present the first language guided continual thinking process, which shows considerable promise for the human-like strong machine intelligence."
],
[
"Our goal is to build a human-like neural network by removing components unsupported by neuroscience from AI architecture while introducing novel neural mechanisms and algorithms into it. Taking the convolution neural network (CNN) as an example, although it has reached human-level performance in image recognition tasks [6], animal neural systems do not support such kernel scanning operation across retinal neurons, and thus the neuronal responses measured on monkeys do not match that of CNN units [7,8]. Therefore, instead of CNN, we used fully connected (FC) module [9] to build our neural network, which achieved more resemblance to animal neurophysiology in term of the network development, neuronal firing patterns, object recognition mechanism, learning and forgetting mechanisms, as illustrated in our concurrent submission [10]. In addition, the error backpropagation technique is generally used to modify network weights to learn representation and achieve training objectives [11]. However, in neuroscience, it is the activity-dependent molecular events (e.g. the inflow of calcium ion and the switching of glutamate N-methyl-D-aspartate receptor etc.) that modify synaptic connections [12, 13]. Indeed, the real neural feedback connection provides the top-down imagery information [14], which is usually ignored by AI network constructions due to the concept of error backpropagation. What’s more, our concurrent paper [10] demonstrates that the invariance property of visual recognition under the rotation, scaling, and translation of an object is supported by coordinated population coding rather than the max-pooling mechanism [15]. The softmax classification is usually used to compute the probability of each category (or word) in the repository (or vocabulary) before prediction. However, in reality, we never evaluate all fruit categories in mind before saying ‘it is an apple’, let alone the complicated computation of the normalization term in the softmax. In this paper, we demonstrate object classification is directly output by neurons via a simple rounding operation, rather than the neuroscience unsupported softmax classification [16].",
"Modern autoencoder techniques could synthesize an unseen view for the desired viewpoint. Using car as an example [17], during training, the autoencoder learns the 3D characteristics of a car with a pair of images from two views of the same car together with the viewpoint of the output view. During testing, the autoencoder could predict the desired image from a single image of the car given the expected viewpoint. However, this architecture is task-specific, namely that the network can only make predictions on cars' unseen views. To include multiple tasks, we added an additional PFC layer that can receive task commands conveyed via language stream and object representation via the visual encoder pathway, and output the modulated images according to task commands and the desired text prediction associated with the images. In addition, by transmitting the output image from the decoder to the encoder, an imagination loop is formed, which enables the continual operation of a human-like thinking process involving both language and image."
],
[
"As is shown in Figure 1, the LGI network contains three main subsystems including the vision, language and PFC subsystems. The vision autoencoder network was trained separately, whose characteristics of development, recognition, learning, and forgetting can be referred to [10]. After training, the autoencoder is separated into two parts: the encoder (or recognition) part ranges from the image entry point to the final encoding layer, which functions as human anterior inferior temporal lobe (AIT) to provide the high-level abstract representation of the input image [18]; the decoder (or imagination) part ranges from the AIT to image prediction point. The activity vectors of the third encoding layer INLINEFORM0 and AIT layer INLINEFORM1 are concatenated with language activity vectors INLINEFORM2 as input signals to the PFC. We expect, after acquiring the language command, the PFC could output a desired visual activation vector INLINEFORM3 , based on which the imagination network could reconstruct the predicted image. Finally, the predicted or imagined image is fed back to the encoder network for the next thinking iteration.",
"The language processing component first binarizes the input text symbol-wise into a sequence of binary vectors INLINEFORM0 , where T is the text length. To improve the language command recognition, we added one LSTM layer to extract the quantity information of the text (for example, suppose text = ‘move left 12’, the expected output INLINEFORM1 is 1 dimensional quantity 12 at the last time point). This layer mimics the number processing functionality of human Intra-Parietal Sulcus (IPS), so it is given the name IPS layer. The PFC outputs the desired activation of INLINEFORM2 , which can either be decoded by the ‘texitizer’ into predicted text or serve as INLINEFORM3 for the next iteration of the imagination process. Here, we propose a textizer (a rounding operation, followed by symbol mapping from binary vector, whose detailed discussion can be referred to the Supplementary section A) to classify the predicted symbol instead of softmax operation which has no neuroscience foundation.",
"The PFC subsystem contains a LSTM and a full connected layer. It receives inputs from both language and vision subsystems in a concatenated form of INLINEFORM0 at time t, and gives a prediction output INLINEFORM1 , which is expected to be identical to INLINEFORM2 at time t+1. This has been achieved with a next frame prediction (NFP) loss function as, INLINEFORM3 . So given an input image, the PFC can predict the corresponding text description; while given an input text command the PFC can predict the corresponding manipulated image. This NFP loss function has neuroscience foundation, since the molecular mediated synaptic plasticity always takes place after the completion of an event, when the information of both t and t+1 time points have been acquired and presented by the neural system. The strategy of learning by predicting its own next frame is essentially an unsupervised learning.",
"For human brain development, the visual and auditory systems mature in much earlier stages than the PFC [19]. To mimic this process, our PFC subsystem was trained separately after vision and language components had completed their functionalities. We have trained the network to accumulatively learn eight syntaxes, and the related results are shown in the following section. Finally, we demonstrate how the network forms a thinking loop with text language and imagined pictures."
],
[
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. Figure 5 illustrates that LGI could correctly categorize whether the digit size was small or big with proper text output. And we witness that, based on the syntax of ‘the size is big/small’ (train steps =1000), the negative adverb ‘not’ in the language text ‘the size is not small/big’ was much easier to be learned (train steps =200, with same hyper-parameters). This is quite similar to the cumulative learning process of the human being.",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. After training (5000 steps), LGI could generate a correct digit figure given the language command ‘give me a [number]’ (Figure 6.A). The generated digit instance is somewhat the ‘averaged’ version of all training examples of the same digit identity. In the future, the generative adversarial network (GAN) technique could be included to generate object instances with specific details. However, using more specific language, such as ‘give me a red Arial big 9’ to generate the characterized instance can better resemble the human thinking process than GAN. LGI can also learn to change the size and orientation of an imagined object. Figure 6.B-C illustrates the morphology of the final imagined instance could be kept unchanged after experiencing various manipulations. Some other syntaxes or tasks could be integrated into LGI in a similar way.",
"Finally, in Figure 7, we illustrate how LGI performed the human-like language-guided thinking process, with the above-learned syntaxes. (1) LGI first closed its eyes, namely, that no input images were fed into the vision subsystem (all the subsequent input images were generated through the imagination process). (2) LGI said to itself ‘give me a 9’, then the PFC produced the corresponding encoding vector INLINEFORM0 , and finally one digit ‘9’ instance was reconstructed via the imagination network. (3) LGI gave the command ‘rotate 180’, then the imagined digit ‘9’ was rotated upside down. (4) Following the language command ‘this is ’, LGI automatically predicted that the newly imaged object was the digit ‘6’. (5) LGI used ‘enlarge’ command to make the object bigger. (6) Finally, LGI predicted that the size was ‘big’ according to the imagined object morphology. This demonstrates that LGI can understand the verbs and nouns by properly manipulating the imagination, and can form the iterative thinking process via the interaction between vision and language subsystems through the PFC layer. The human thinking process normally would not form a concrete imagination through the full visual loop, but rather a vague and rapid imagination through the short-cut loop by feeding back INLINEFORM1 to AIT directly. On the other hand, the full path of clear imagination may explain the dream mechanism. Figure 7.B shows the short cut imagination process, where LGI also regarded the rotated ‘9’ as digit 6, which suggests the AIT activation does not encode the digit identity, but the untangled features of input image or imagined image. Those high level cortices beyond visual cortex could be the place for identity representation."
],
[
"Language guided imagination is the nature of human thinking and intelligence. Normally, the real-time tasks or goals are conveyed by language, such as ‘to build a Lego car’. To achieve this goal, first, an agent (human being or machine) needs to know what’s car, and then imagine a vague car instance, based on which the agent can plan to later collect wheel, window and chassis blocks for construction. Imagining the vague car is the foundation for decomposing future tasks. We trained the LGI network with a human-like cumulative learning process, from learning the meaning of words, to understanding complicated syntaxes, and finally organizing the thinking process with language. We trained the LGI to associate object name with corresponding instances by ‘this is …’ syntax; and trained the LGI to produce a digit instance, when there comes the sentence ‘give me a [number]’. In contrast, traditional language models could only serve as a word dependency predictor rather than really understand the sentence.",
"Language is the most remarkable characteristics distinguishing mankind from animals. Theoretically, all kinds of information such as object properties, tasks and goals, commands and even emotions can be described and conveyed by language [21]. We trained with LGI eight different syntaxes (in other word, eight different tasks), and LGI demonstrates its understanding by correctly interacting with the vision system. After learning ‘this is 9’, it is much easier to learn ‘give me a 9’; after learning the ‘size is big’, it is much easier to learn ‘the size is not small’. Maybe some digested words or syntaxes were represented by certain PFC units, which could be shared with the following sentence learning.",
"Imagination is another key component of human thinking. For the game Go [22, 23], the network using a reinforcement learning strategy has to be trained with billions of games in order to acquire a feeling (Q value estimated for each potential action) to move the chess. As human beings, after knowing the rule conveyed by language, we can quickly start a game with proper moves using a try-in-imagination strategy without requiring even a single practice. With imagination, people can change the answering contents (or even tell good-will lies) by considering or imagining the consequence of the next few output sentences. Machine equipped with the unique ability of imagination could easily select clever actions for multiple tasks without being trained heavily.",
"In the future, many more syntaxes and functionalities can be added to LGI in a similar way, such as math reasoning, intuitive physics prediction and navigation [24, 25, 26]. Insights of human audition processing could be leveraged to convert sound wave into language text as a direct input for LGI [27, 28]. And the mechanisms of human value systems in the striatum [29] may also endow LGI with motivation and emotion. The PFC cortex consists of many sub-regions interacted within the PFC and across the whole brain areas [3, 30], and the implementation of these features might finally enable LGI to possess real machine intelligence."
],
[
"In this paper, we first introduced a PFC layer to involve representations from both language and vision subsystems to form a human-like thinking system (the LGI system). The LGI contains three subsystems: the vision, language, and PFC subsystem, which are trained separately. The development, recognition and learning mechanism is discussed in the cocurrent paper [10]. In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. We propose to train the LGI with the NFP loss function, which endows the capacity to describe the image content in form of symbol text and manipulated images according to language commands. LGI shows its ability to learn eight different syntaxes or tasks in a cumulative learning way, and form the first machine thinking loop with the interaction between imagined pictures and language text."
],
[
"[1] Wei, M., He, Y., Zhang, Q. & Si, L. (2019). Multi-Instance Learning for End-to-End Knowledge Base Question Answering. arXiv preprint arXiv:1903.02652.",
"[2] Devlin, J., Chang, M. W., Lee, K. & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"[3] Miller, E. K. & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual review of neuroscience, 24(1), 167-202.",
"[4] Baddeley, A., Gathercole, S. & Papagno, C. (1998). The phonological loop as a language learning device. Psychological review, 105(1), 158.",
"[5] Finke, K., Bublak, P., Neugebauer, U. & Zihl, J. (2005). Combined processing of what and where information within the visuospatial scratchpad. European Journal of Cognitive Psychology, 17(1), 1-22.",
"[6] Simonyan, K. & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.",
"[7] DiCarlo, J. J., Zoccolan, D. & Rust, N. C. (2012). How does the brain solve visual object recognition?. Neuron, 73(3), 415-434.",
"[8] Freiwald, W. A. & Tsao, D. Y. (2010). Functional compartmentalization and viewpoint generalization within the macaque face-processing system. Science, 330(6005), 845-851.",
"[9] Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6), 386.",
"[10] Anonymous A. (2019). The development, recognition, and learning mechanisms of animal-like neural network. Advances in Neural Information Processing Systems, in submission",
"[11] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive modeling, 5(3), 1.",
"[12] Yasuda, R., Sabatini, B. L. & Svoboda, K. (2003). Plasticity of calcium channels in dendritic spines. Nature neuroscience, 6(9), 948.",
"[13] Liu, L., Wong, T. P., Pozza, M. F., Lingenhoehl, K., Wang, Y., Sheng, M. & Wang, Y. T. (2004). Role of NMDA receptor subtypes in governing the direction of hippocampal synaptic plasticity. Science, 304(5673), 1021-1024.",
"[14] Pearson, J., Naselaris, T., Holmes, E. A. & Kosslyn, S. M. (2015). Mental imagery: functional mechanisms and clinical applications. Trends in cognitive sciences, 19(10), 590-602.",
"[15] Boureau, Y. L., Ponce, J. & LeCun, Y. (2010). A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th international conference on machine learning (ICML-10) (pp. 111-118).",
"[16] LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436.",
"[17] Zhou, T., Brown, M., Snavely, N. & Lowe, D. G. (2017). Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1851-1858).",
"[18] Ralph, M. A. L., Jefferies, E., Patterson, K. & Rogers, T. T. (2017). The neural and computational bases of semantic cognition. Nature Reviews Neuroscience, 18(1), 42.",
"[19] Petanjek, Z., Judaš, M., Kostović, I. & Uylings, H. B. (2007). Lifespan alterations of basal dendritic trees of pyramidal neurons in the human prefrontal cortex: a layer-specific pattern. Cerebral cortex, 18(4), 915-929.",
"[20] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S. & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).",
"[21] Wittgenstein, L. (2013). Tractatus logico-philosophicus.",
"[22] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.",
"[23] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. & Chen, Y. (2017). Mastering the game of go without human knowledge. Nature, 550(7676), 354.",
"[24] Saxton, D., Grefenstette, E., Hill, F. & Kohli, P. (2019). Analysing Mathematical Reasoning Abilities of Neural Models. arXiv preprint arXiv:1904.01557.",
"[25] Battaglia, P., Pascanu, R., Lai, M. & Rezende, D. J. (2016). Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems (pp. 4502-4510).",
"[26] Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P. & Wayne, G. (2018). Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705), 429.",
"[27] Jasmin, K., Lima, C. F. & Scott, S. K. (2019). Understanding rostral–caudal auditory cortex contributions to auditory perception. Nature Reviews Neuroscience, in press.",
"[28] Afouras, T., Chung, J. S. & Zisserman, A. (2018). The conversation: Deep audio-visual speech enhancement. arXiv preprint arXiv:1804.04121.",
"[29] Husain, M. & Roiser, J. (2018). Neuroscience of apathy and anhedonia: a transdiagnostic approach. Nature Reviews Neuroscience, 19, 470-484.",
"[30] Barbas, H. (2015). General cortical and special prefrontal connections: principles from structure to function. Annual review of neuroscience, 38, 269-289."
]
],
"section_name": [
"Introduction",
"Related work",
"Architecture",
"Experiment",
"Discussion",
"Conclusion",
"References"
]
} | {
"answers": [
{
"annotation_id": [
"0f16f51a71c8eb451630fd56aad68d181f0a4e66",
"83d16a90685b94fe9a92a83e74724fe0ba832912",
"b5ed2c9c72e2686cbb2a5acdbc9c2fa1edd38326"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"FLOAT SELECTED: Figure 3: Mental manipulation of images based on syntaxes of ‘move left x’ and ‘move right x’, where x is a random number, ranging from 0 to 28. LGI has the capacity to correctly predict the next text symbols and image manipulation (with correct morphology, position, direction) at the proper time point. It can recognize the sentence with flexible text length and digit length."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. ",
"FLOAT SELECTED: Figure 3: Mental manipulation of images based on syntaxes of ‘move left x’ and ‘move right x’, where x is a random number, ranging from 0 to 28. LGI has the capacity to correctly predict the next text symbols and image manipulation (with correct morphology, position, direction) at the proper time point. It can recognize the sentence with flexible text length and digit length."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. Figure 5 illustrates that LGI could correctly categorize whether the digit size was small or big with proper text output. And we witness that, based on the syntax of ‘the size is big/small’ (train steps =1000), the negative adverb ‘not’ in the language text ‘the size is not small/big’ was much easier to be learned (train steps =200, with same hyper-parameters). This is quite similar to the cumulative learning process of the human being."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%).",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1999dddf19376bf9bd837a3b3f35956ad1aae114",
"333dc3036033cc0e801ec121a24637256cd0b534"
],
"answer": [
{
"evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation."
],
"extractive_spans": [
"precision",
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. ",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"Finally, in Figure 7, we illustrate how LGI performed the human-like language-guided thinking process, with the above-learned syntaxes. (1) LGI first closed its eyes, namely, that no input images were fed into the vision subsystem (all the subsequent input images were generated through the imagination process). (2) LGI said to itself ‘give me a 9’, then the PFC produced the corresponding encoding vector INLINEFORM0 , and finally one digit ‘9’ instance was reconstructed via the imagination network. (3) LGI gave the command ‘rotate 180’, then the imagined digit ‘9’ was rotated upside down. (4) Following the language command ‘this is ’, LGI automatically predicted that the newly imaged object was the digit ‘6’. (5) LGI used ‘enlarge’ command to make the object bigger. (6) Finally, LGI predicted that the size was ‘big’ according to the imagined object morphology. This demonstrates that LGI can understand the verbs and nouns by properly manipulating the imagination, and can form the iterative thinking process via the interaction between vision and language subsystems through the PFC layer. The human thinking process normally would not form a concrete imagination through the full visual loop, but rather a vague and rapid imagination through the short-cut loop by feeding back INLINEFORM1 to AIT directly. On the other hand, the full path of clear imagination may explain the dream mechanism. Figure 7.B shows the short cut imagination process, where LGI also regarded the rotated ‘9’ as digit 6, which suggests the AIT activation does not encode the digit identity, but the untangled features of input image or imagined image. Those high level cortices beyond visual cortex could be the place for identity representation."
],
"extractive_spans": [
"classify figures in various morphology with correct identity (accuracy = 72.7%)",
"demonstrates that LGI can understand the verbs and nouns"
],
"free_form_answer": "",
"highlighted_evidence": [
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%).",
"Finally, in Figure 7, we illustrate how LGI performed the human-like language-guided thinking process, with the above-learned syntaxes. (1) LGI first closed its eyes, namely, that no input images were fed into the vision subsystem (all the subsequent input images were generated through the imagination process). (2) LGI said to itself ‘give me a 9’, then the PFC produced the corresponding encoding vector INLINEFORM0 , and finally one digit ‘9’ instance was reconstructed via the imagination network. (3) LGI gave the command ‘rotate 180’, then the imagined digit ‘9’ was rotated upside down. (4) Following the language command ‘this is ’, LGI automatically predicted that the newly imaged object was the digit ‘6’. (5) LGI used ‘enlarge’ command to make the object bigger. (6) Finally, LGI predicted that the size was ‘big’ according to the imagined object morphology. This demonstrates that LGI can understand the verbs and nouns by properly manipulating the imagination, and can form the iterative thinking process via the interaction between vision and language subsystems through the PFC layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"50e5e185f41e48ded335f65d80ddc186df117245",
"8d3266c4039fd4f93d602833d15a0a9642ea2f84",
"b5b83c529ef9d0d14697f05bc9085024abf908c0"
],
"answer": [
{
"evidence": [
"For human brain development, the visual and auditory systems mature in much earlier stages than the PFC [19]. To mimic this process, our PFC subsystem was trained separately after vision and language components had completed their functionalities. We have trained the network to accumulatively learn eight syntaxes, and the related results are shown in the following section. Finally, we demonstrate how the network forms a thinking loop with text language and imagined pictures.",
"Experiment",
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. Figure 5 illustrates that LGI could correctly categorize whether the digit size was small or big with proper text output. And we witness that, based on the syntax of ‘the size is big/small’ (train steps =1000), the negative adverb ‘not’ in the language text ‘the size is not small/big’ was much easier to be learned (train steps =200, with same hyper-parameters). This is quite similar to the cumulative learning process of the human being.",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. After training (5000 steps), LGI could generate a correct digit figure given the language command ‘give me a [number]’ (Figure 6.A). The generated digit instance is somewhat the ‘averaged’ version of all training examples of the same digit identity. In the future, the generative adversarial network (GAN) technique could be included to generate object instances with specific details. However, using more specific language, such as ‘give me a red Arial big 9’ to generate the characterized instance can better resemble the human thinking process than GAN. LGI can also learn to change the size and orientation of an imagined object. Figure 6.B-C illustrates the morphology of the final imagined instance could be kept unchanged after experiencing various manipulations. Some other syntaxes or tasks could be integrated into LGI in a similar way."
],
"extractive_spans": [
"move left",
"move right",
"this is …",
"the size is big/small",
"give me a …",
"enlarge/shrink",
"rotate …"
],
"free_form_answer": "",
"highlighted_evidence": [
"age and imagined pictures.\n\nExperiment\nThe first syntaxes that LG",
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%).",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’.",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. After training (5000 steps), LGI could generate a correct digit figure given the language command ‘give me a [number]’ (Figure 6.A)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. Figure 5 illustrates that LGI could correctly categorize whether the digit size was small or big with proper text output. And we witness that, based on the syntax of ‘the size is big/small’ (train steps =1000), the negative adverb ‘not’ in the language text ‘the size is not small/big’ was much easier to be learned (train steps =200, with same hyper-parameters). This is quite similar to the cumulative learning process of the human being.",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. After training (5000 steps), LGI could generate a correct digit figure given the language command ‘give me a [number]’ (Figure 6.A). The generated digit instance is somewhat the ‘averaged’ version of all training examples of the same digit identity. In the future, the generative adversarial network (GAN) technique could be included to generate object instances with specific details. However, using more specific language, such as ‘give me a red Arial big 9’ to generate the characterized instance can better resemble the human thinking process than GAN. LGI can also learn to change the size and orientation of an imagined object. Figure 6.B-C illustrates the morphology of the final imagined instance could be kept unchanged after experiencing various manipulations. Some other syntaxes or tasks could be integrated into LGI in a similar way."
],
"extractive_spans": [
"move left",
"move right",
"this is …",
"the size is big/small",
"the size is not small/big",
"give me a …",
"enlarge/shrink",
"rotate …"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. ",
"Based on the same network, LGI continued to learn syntax ‘this is …’.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. ",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. After 50000 steps training, LGI could not only reconstruct the input image with high precision but also predict the 'mentally' moved object with specified morphology, correct manipulated direction and position just after the command sentence completed. The predicted text can complete the word ‘move’ given the first letter ‘m’ (till now, LGI has only learned syntaxes of ‘move left or right’). LGI tried to predict the second word ‘right’ with initial letter ‘r’, however, after knowing the command text is ‘l’, it turned to complete the following symbols with ‘eft’. It doesn’t care if the sentence length is 12 or 11, the predicted image and text just came at proper time and position. Even if the command asked to move out of screen, LGI still could reconstruct the partially occluded image with high fidelity.",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). Note that, the classification process is not performed by softmax operation, but by directly textizing operation (i.e. rounding followed by a symbol mapping operation), which is more biologically plausible than the softmax operation.",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. Figure 5 illustrates that LGI could correctly categorize whether the digit size was small or big with proper text output. And we witness that, based on the syntax of ‘the size is big/small’ (train steps =1000), the negative adverb ‘not’ in the language text ‘the size is not small/big’ was much easier to be learned (train steps =200, with same hyper-parameters). This is quite similar to the cumulative learning process of the human being.",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. After training (5000 steps), LGI could generate a correct digit figure given the language command ‘give me a [number]’ (Figure 6.A). The generated digit instance is somewhat the ‘averaged’ version of all training examples of the same digit identity. In the future, the generative adversarial network (GAN) technique could be included to generate object instances with specific details. However, using more specific language, such as ‘give me a red Arial big 9’ to generate the characterized instance can better resemble the human thinking process than GAN. LGI can also learn to change the size and orientation of an imagined object. Figure 6.B-C illustrates the morphology of the final imagined instance could be kept unchanged after experiencing various manipulations. Some other syntaxes or tasks could be integrated into LGI in a similar way."
],
"extractive_spans": [
"move left",
"move right",
"this is …",
"the size is big/small’",
"the size is not small/big",
"give me a …",
"enlarge/shrink",
"rotate …’"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first syntaxes that LGI has learned are the ‘move left’ and ‘move right’ random pixels, with the corresponding results shown in Figure 3. ",
"Based on the same network, LGI continued to learn syntax ‘this is …’. Just like a parent teaching child numbers by pointing to number instances, Figure 4 demonstrates that, after training of 50000 steps, LGI could classify figures in various morphology with correct identity (accuracy = 72.7%). ",
"After that, LGI learned the syntax ‘the size is big/small’, followed by ‘the size is not small/big’. ",
"And then, LGI rapidly learned three more syntaxes: ‘give me a …’, ‘enlarge/shrink’, and ‘rotate …’, whose results are shown in Figure 6. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"908ce25dec40707ddf712388260105dd17ab803f",
"21d05cfbc032de442b7d304fa61c30e9c57f8692",
"970736929d0b524d868f0d6152ca5cf30406fc03"
],
"answer": [
{
"evidence": [
"In this paper, we first introduced a PFC layer to involve representations from both language and vision subsystems to form a human-like thinking system (the LGI system). The LGI contains three subsystems: the vision, language, and PFC subsystem, which are trained separately. The development, recognition and learning mechanism is discussed in the cocurrent paper [10]. In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. We propose to train the LGI with the NFP loss function, which endows the capacity to describe the image content in form of symbol text and manipulated images according to language commands. LGI shows its ability to learn eight different syntaxes or tasks in a cumulative learning way, and form the first machine thinking loop with the interaction between imagined pictures and language text."
],
"extractive_spans": [
"the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output"
],
"free_form_answer": "",
"highlighted_evidence": [
" In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The human-like thinking system often requires specific neural substrates to support the corresponding functionalities. The most important brain area related to thinking is the prefrontal cortex (PFC), where the working memory takes place, including but not confined to, the maintenance and manipulation of particular information [3]. With the PFC, human beings can analyze and execute various tasks via ‘phonological loop’ and ‘visuospatial scratchpad’ etc. [4,5]. Inspired by the human-like brain organization, we build a ‘PFC’ network to combine language and vision streams to achieve tasks such as language controlled imagination, and imagination based thinking process. Our results show that the LGI network could incrementally learn eight syntaxes rapidly. Based on the LGI, we present the first language guided continual thinking process, which shows considerable promise for the human-like strong machine intelligence."
],
"extractive_spans": [],
"free_form_answer": "It combines language and vision streams similar to the human prefrontal cortex.",
"highlighted_evidence": [
"The most important brain area related to thinking is the prefrontal cortex (PFC), where the working memory takes place, including but not confined to, the maintenance and manipulation of particular information [3].",
" Inspired by the human-like brain organization, we build a ‘PFC’ network to combine language and vision streams to achieve tasks such as language controlled imagination, and imagination based thinking process."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"004e97935b383dd4a88ed63e02b283bb09cdd4fc",
"33b95bea850306e67a819aeda39fafa4e8959cfe",
"a4bfd10bb200832bb3390606fbcbe74e0271ec70"
],
"answer": [
{
"evidence": [
"In this paper, we first introduced a PFC layer to involve representations from both language and vision subsystems to form a human-like thinking system (the LGI system). The LGI contains three subsystems: the vision, language, and PFC subsystem, which are trained separately. The development, recognition and learning mechanism is discussed in the cocurrent paper [10]. In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. We propose to train the LGI with the NFP loss function, which endows the capacity to describe the image content in form of symbol text and manipulated images according to language commands. LGI shows its ability to learn eight different syntaxes or tasks in a cumulative learning way, and form the first machine thinking loop with the interaction between imagined pictures and language text."
],
"extractive_spans": [
" mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we first introduced a PFC layer to involve representations from both language and vision subsystems to form a human-like thinking system (the LGI system). The LGI contains three subsystems: the vision, language, and PFC subsystem, which are trained separately. The development, recognition and learning mechanism is discussed in the cocurrent paper [10]. In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier. We propose to train the LGI with the NFP loss function, which endows the capacity to describe the image content in form of symbol text and manipulated images according to language commands. LGI shows its ability to learn eight different syntaxes or tasks in a cumulative learning way, and form the first machine thinking loop with the interaction between imagined pictures and language text."
],
"extractive_spans": [
"textizer to produce text symbols output",
"extract the quantity information from language text "
],
"free_form_answer": "",
"highlighted_evidence": [
"In the language subsystem, we use an LSTM layer to mimic the human IPS to extract the quantity information from language text and proposed a biologically plausible textizer to produce text symbols output, instead of traditional softmax classifier."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The language processing component first binarizes the input text symbol-wise into a sequence of binary vectors INLINEFORM0 , where T is the text length. To improve the language command recognition, we added one LSTM layer to extract the quantity information of the text (for example, suppose text = ‘move left 12’, the expected output INLINEFORM1 is 1 dimensional quantity 12 at the last time point). This layer mimics the number processing functionality of human Intra-Parietal Sulcus (IPS), so it is given the name IPS layer. The PFC outputs the desired activation of INLINEFORM2 , which can either be decoded by the ‘texitizer’ into predicted text or serve as INLINEFORM3 for the next iteration of the imagination process. Here, we propose a textizer (a rounding operation, followed by symbol mapping from binary vector, whose detailed discussion can be referred to the Supplementary section A) to classify the predicted symbol instead of softmax operation which has no neuroscience foundation."
],
"extractive_spans": [],
"free_form_answer": "It mimics the number processing functionality of human Intra-Parietal Sulcus.",
"highlighted_evidence": [
"To improve the language command recognition, we added one LSTM layer to extract the quantity information of the text (for example, suppose text = ‘move left 12’, the expected output INLINEFORM1 is 1 dimensional quantity 12 at the last time point). This layer mimics the number processing functionality of human Intra-Parietal Sulcus (IPS), so it is given the name IPS layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"044289bf875bcdce7288f806b37a0c582449031c",
"90cdc990261ecd4bfbedf92974c38f903087efe4",
"eefaa3033143834a30df3c0a7fff4138c97557c5"
],
"answer": [
{
"evidence": [
"Imagination is another key component of human thinking. For the game Go [22, 23], the network using a reinforcement learning strategy has to be trained with billions of games in order to acquire a feeling (Q value estimated for each potential action) to move the chess. As human beings, after knowing the rule conveyed by language, we can quickly start a game with proper moves using a try-in-imagination strategy without requiring even a single practice. With imagination, people can change the answering contents (or even tell good-will lies) by considering or imagining the consequence of the next few output sentences. Machine equipped with the unique ability of imagination could easily select clever actions for multiple tasks without being trained heavily."
],
"extractive_spans": [],
"free_form_answer": "Ability to change the answering contents by considering the consequence of the next few output sentences.",
"highlighted_evidence": [
"With imagination, people can change the answering contents (or even tell good-will lies) by considering or imagining the consequence of the next few output sentences. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Modern autoencoder techniques could synthesize an unseen view for the desired viewpoint. Using car as an example [17], during training, the autoencoder learns the 3D characteristics of a car with a pair of images from two views of the same car together with the viewpoint of the output view. During testing, the autoencoder could predict the desired image from a single image of the car given the expected viewpoint. However, this architecture is task-specific, namely that the network can only make predictions on cars' unseen views. To include multiple tasks, we added an additional PFC layer that can receive task commands conveyed via language stream and object representation via the visual encoder pathway, and output the modulated images according to task commands and the desired text prediction associated with the images. In addition, by transmitting the output image from the decoder to the encoder, an imagination loop is formed, which enables the continual operation of a human-like thinking process involving both language and image."
],
"extractive_spans": [
" transmitting the output image from the decoder to the encoder, an imagination loop is formed, which enables the continual operation of a human-like thinking process involving both language and image"
],
"free_form_answer": "",
"highlighted_evidence": [
"Modern autoencoder techniques could synthesize an unseen view for the desired viewpoint. Using car as an example [17], during training, the autoencoder learns the 3D characteristics of a car with a pair of images from two views of the same car together with the viewpoint of the output view. During testing, the autoencoder could predict the desired image from a single image of the car given the expected viewpoint. However, this architecture is task-specific, namely that the network can only make predictions on cars' unseen views. To include multiple tasks, we added an additional PFC layer that can receive task commands conveyed via language stream and object representation via the visual encoder pathway, and output the modulated images according to task commands and the desired text prediction associated with the images. In addition, by transmitting the output image from the decoder to the encoder, an imagination loop is formed, which enables the continual operation of a human-like thinking process involving both language and image."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Human thinking is regarded as ‘mental ideas flow guided by language to achieve a goal’. For instance, after seeing heavy rain, you may say internally ‘holding an umbrella could avoid getting wet’, and then you will take an umbrella before leaving. In the process, we know that the visual input of ‘water drop’ is called rain, and can imagine ‘holding an umbrella’ could keep off the rain, and can even experience the feeling of being wet. This continual thinking capacity distinguishes us from the machine, even though the latter can also recognize images, process language, and sense rain-drops. Continual thinking requires the capacity to generate mental imagination guided by language, and extract language representations from a real or imagined scenario."
],
"extractive_spans": [
"Continual thinking requires the capacity to generate mental imagination guided by language, and extract language representations from a real or imagined scenario"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the process, we know that the visual input of ‘water drop’ is called rain, and can imagine ‘holding an umbrella’ could keep off the rain, and can even experience the feeling of being wet. This continual thinking capacity distinguishes us from the machine, even though the latter can also recognize images, process language, and sense rain-drops. Continual thinking requires the capacity to generate mental imagination guided by language, and extract language representations from a real or imagined scenario."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"How do the authors measure the extent to which LGI has learned the task?",
"Which 8 tasks has LGI learned?",
"In what was does an LSTM mimic the prefrontal cortex?",
"In what way does an LSTM mimic the intra parietal sulcus?",
"How do the authors define imagination, or imagined scenarios?"
],
"question_id": [
"693cdb9978749db04ba34d9c168e71534f00a226",
"71fd0efea1b441d86d9a75255815ba3efe09779b",
"fb9e333a4e5d5141fe8e97b24b8f7e5685afbf09",
"cb029240d4dedde74fcafad6a46c1cfc2621b934",
"11a8531699952f5a2286a4311f0fe80ed1befa1e",
"bcf222ad4bb537b01019ed354ea03cd6bf2c1f8e"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The LGI architecture. It contains three subsystems that are trained separated. In the vision subsystem, the encoder can transfer an input or imagined (or predicted) image into a population representation vector V4 at the AIT layer (mimicking the Anterior Temporal Lobe for high-level image representation), and the decoder can reconstruct a V′3 output from PFC to a predicted image, which can be fed into the encoder to form the imagination loop. In the language subsystem, a binarizer can transfer the input text symbols into binary representation vectors L0, and a texitizer can transfer the predicted vector L′0 from the PFC into predicted text symbols, which can also be fed into the language loop. There is an IPS layer implemented by an LSTM to extract quantity information L1 from the text vector L0. The PFC layer serves as working memory, that takes the concatenated input [L0,L1,V3,V4] from both language and vision subsystems, and output the predicted next frame representation that could be fed back into both subsystems to form an imagination loop. LIG can use the short cut imagination path (rendered in grey) to rapidly feel the predicted scenario without fully reconstructing the predicted images.",
"Figure 2: Training based on the next frame prediction (NFP). The LSTM-like PFC is trained by the NFP principle, where the goal of the PFC is to output the representation vectors (including both language and vision) of the next frame, as indicated by red arrows. The red dash arrow indicates that, at time T, the PFC of LGI curately generated the mentally manipulated digit instance, which required the understanding of the previous text language and observed images.",
"Figure 3: Mental manipulation of images based on syntaxes of ‘move left x’ and ‘move right x’, where x is a random number, ranging from 0 to 28. LGI has the capacity to correctly predict the next text symbols and image manipulation (with correct morphology, position, direction) at the proper time point. It can recognize the sentence with flexible text length and digit length.",
"Figure 4: LGI learns to classify digits with syntax ‘this is . . . ’. LGI understood the meaning of the command text and managed to extract digit identity according to the morphology of digit instance. Note that the classification is performed by the proposed textizer rather than softmax.",
"Figure 5: LGI learns to judge the digit size with syntaxes ’the size is big/small’ and ’the size is not small/big’. LGI could understand the text command, offer correct judgment on digit size, and properly adjust the answer when encountered negative adverb ‘not’.",
"Figure 6: LGI learns to generate a fictitious digit instance with syntax ‘give me a [number]’, and ‘mentally’ manipulate objects with syntaxes ‘enlarge’, ‘shrink’, and ‘rotate . . . ’ etc.",
"Figure 7: The language guided thinking process. LGI generated an instance of digit ‘9’ without any input image. Then the instance was ‘mentally’ rotated 180 degree, based on which LGI found that the digit identity was changed to 6. After that, LGI enlarged the instance and identified its proper size."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"6-Figure6-1.png",
"7-Figure7-1.png"
]
} | [
"In what was does an LSTM mimic the prefrontal cortex?",
"In what way does an LSTM mimic the intra parietal sulcus?",
"How do the authors define imagination, or imagined scenarios?"
] | [
[
"1905.07562-Introduction-3",
"1905.07562-Conclusion-0"
],
[
"1905.07562-Conclusion-0",
"1905.07562-Architecture-1"
],
[
"1905.07562-Introduction-0",
"1905.07562-Discussion-2",
"1905.07562-Related work-1"
]
] | [
"It combines language and vision streams similar to the human prefrontal cortex.",
"It mimics the number processing functionality of human Intra-Parietal Sulcus.",
"Ability to change the answering contents by considering the consequence of the next few output sentences."
] | 118 |
1911.12893 | GitHub Typo Corpus: A Large-Scale Multilingual Dataset of Misspellings and Grammatical Errors | The lack of large-scale datasets has been a major hindrance to the development of NLP tasks such as spelling correction and grammatical error correction (GEC). As a complementary new resource for these tasks, we present the GitHub Typo Corpus, a large-scale, multilingual dataset of misspellings and grammatical errors along with their corrections harvested from GitHub, a large and popular platform for hosting and sharing git repositories. The dataset, which we have made publicly available, contains more than 350k edits and 65M characters in more than 15 languages, making it the largest dataset of misspellings to date. We also describe our process for filtering true typo edits based on learned classifiers on a small annotated subset, and demonstrate that typo edits can be identified with F1 ~ 0.9 using a very simple classifier with only three features. The detailed analyses of the dataset show that existing spelling correctors merely achieve an F-measure of approx. 0.5, suggesting that the dataset serves as a new, rich source of spelling errors that complement existing datasets. | {
"paragraphs": [
[
"Spelling correction BIBREF0, BIBREF1, BIBREF2 and grammatical error correction (GEC) BIBREF3 are two fundamental tasks that have important implications for downstream NLP tasks and for education in general. In recent years, the use of statistical machine translation (SMT) and neural sequence-to-sequence (seq2seq) models has been becoming increasingly popular for solving these tasks. Such modern NLP models are usually data hungry and require a large amount of parallel training data consisting of sentences before and after the correction. However, only relatively small datasets are available for these tasks, compared to other NLP tasks such as machine translation. This is especially the case for spelling correction, for which only a small number of datasets consisting of individual misspelled words are available, including the Birkbeck spelling error corpus and a list of typos collected from Twitter.",
"Due to this lack of large-scale datasets, many research studies BIBREF4, BIBREF2, BIBREF5 resort to automatic generation of artificial errors (also called pseudo-errors). Although such methods are efficient and have seen some success, they do not guarantee that generated errors reflect the range and the distribution of true errors made by humans BIBREF6.",
"As one way to complement this lack of resources, Wikipedia has been utilized as a rich source of textual edits, including typos BIBREF7, BIBREF8, BIBREF9. However, the edits harvested from Wikipedia are often very noisy and diverse in their types, containing edits from typos to adding and modifying information. To make the matters worse, Wikipedia suffers from vandalism, where articles are edited in a malicious manner, which requires extensive detection and filtering.",
"In order to create a high-quality, large-scale dataset of misspelling and grammatical errors (collectively called typos in this paper), we leverage the data from GitHub, the largest platform for hosting and sharing repositories maintained by git, a popular version control system commonly used for software development. Changes made to git repositories (called commits, see Section 3 for the definition) are usually tagged with commit messages, making detection of typos a trivial task. Also, GitHub suffers less from vandalism, since commits in many repositories are code reviewed, a process where every change is manually reviewed by other team members before merged into the repository. This guarantees that the edits indeed fix existing spelling and/or grammatical issues.",
"This paper describes our process for building the GitHub Typo Corpus, a large-scale, multilingual dataset of misspellings and grammatical errors, along with their corrections. The process for building the dataset can be summarized as follows:",
"Extract eligible repositories and typo commits from GitHub based on the meta data of the repository and the commit message",
"Filter out edits that are not written in human language",
"Identify true typo edits (vs semantic edits) by using learned classifiers on a small annotated dataset",
"We demonstrate that a very simple logistic regression model with only three features can classify typos and non-typo edits correctly with $F1 \\sim 0.9$. This resulted in a dataset containing more than 350k edits and 64M characters in more than 15 languages. To the best of our knowledge, this is the largest multilingual dataset of misspellings to date. We made the dataset publicly available (https://github.com/mhagiwara/github-typo-corpus) along with the automatically assigned typo labels as well as the source code to extract typos. We also provide the detailed analyses of the dataset, where we demonstrate that the F measure of existing spell checkers merely reaches $\\sim 0.5$, arguing that the GitHub Typo Corpus provides a new, rich source of naturally-occurring misspellings and grammatical errors that complement existing datasets."
],
[
"As mentioned above, a closely related line of work is the use of Wikipedia edits for various tasks, including GEC. Grundkiewicz:2014 constructed the WikiEd Error Corpus, a dataset consisting of error edits harvested from the Wikipedia edit history and demonstrated that the newly-built resource was effective for improving the performance of GEC systems. Boyd:2018 built a German GEC system leveraging the WikiEd Error Corpus and showed that the use of the Wikipedia edit data led to improved performance. In both cases, the dataset required extensive filtering based on a set of heuristic rules or heavy linguistic analysis.",
"Spelling correction is itself an important sub-problem of grammatical error correction (GEC). Many GEC and essay scoring systems BIBREF10, BIBREF11, BIBREF12 assume that spelling errors in the input text are fixed before it is fed to the main model, by pre-processing them using open-source tools such as Enchant and LanguageTool. In many GEC corpora, spelling errors account for approximately 10% of total errors (Table TABREF10), meaning that improving the accuracy of spelling correction can have a non-negligible impact on the performance of GEC.",
"Datasets of real-world typos have applications in building models robust to spelling errors BIBREF16. We note that Mizumoto:2017 argue against the necessity of spell checking on learner English, which has little effect on the performance of PoS (part-of-speech) tagging and chunking."
],
[
"First, we define and clarify the terminology that we use throughout this paper. See Figure FIGREF3 for an illustration of the concepts and how they relate to each other.",
"Repository ... in git terms, a repository is a database of files whose versions are controlled under git. A single repository may contain multiple files and directories just like a computer file system.",
"Commit ... a commit is a collection of one or more changes made to a git repository at a time. Changes in a single commit can span over multiple files and multiple parts of a file.",
"Edit ... in this paper, an edit is a pair of lines to which changes are made in a commit (note the special usage here). The line before the change is called the source and the line after is the target. In other words, an edit is a pair of the source and the target. Note that a single edit may contain changes to multiple parts of the source (for example, multiple words that are not contiguous).",
"Typo ... finally, in this paper a typo refers to an edit where the target fixes some mechanical, spelling and/or grammatical errors in the source, while preserving the meaning between the two.",
"Our goal is to collect typos from GitHub and build a dataset that is high in both quantity and quality."
],
[
"This section describes the process for collecting a large amount of typos from GitHub, which consists two steps: 1) collecting target repositories that meet some criteria and 2) collecting commits and edits from them. See Figure FIGREF15 for the overview of the typo-collecting process."
],
[
"The first step for collecting typos is to collect as many eligible GitHub repositories as possible from which commits and edits are extracted. A repository must meet some criteria in order to be included in the corpus, such as size (it needs to big enough to contain at least some amount of typo edits), license (it has to be distributed under a permissive license to allow derived work), and quality (it has to demonstrate some signs of quality, such as the number of stars).",
"Although GitHub provides a set of APIs (application programming interfaces) that allow end-users to access its data in a programmatic manner, it doesn't allow flexible querying on the repository meta data necessary for our data collection purposes. Therefore, we turn to GH Archive, which collects all the GitHub event data and make them accessible through flexible APIs. Specifically, we collected every repository from GH Archive that:",
"Has at least one pull request or pull request review comment event between November 2017 and September 2019,",
"Has 50 or more starts,",
"Has a size between 1MB and 1GB, and",
"Has a permissive license.",
"Note the “and” in the list above—a repository needs to meet all the conditions mentioned above to be eligible. The first two criteria (pull request events and the number of starts) are a sign of a quality repository. As for the license, we allowed apache-2.0 (Apache License 2.0), mit (MIT License), bsd-3-clause (BSD 3-Clause License), bsd-2-clause (BSD 2-Clause License), cc0-1.0 (Creative Commons Zero v1.0 Universal), unlicense (Unlicense), cc-by-4.0 (Creative Commons Attribution 4.0), and bsl-1.0 (Boost Software License 1.0 (BSL-1.0). A repository's number of stars, size, and license are determined as of the event in the first condition.",
"This resulted in a total of 43,462 eligible repositories."
],
[
"The second step for collecting typos is to extract commits and edits from the eligible repositories. This step is more straightforward—for each eligible repository, we cloned it using the GitPython library and enumerated all the commits in the master branch. A commit is considered eligible if the commit message contains the string typo in it. For each eligible commit, we then take the diff between the commit and its parent, scan the result sequentially, and collect all the pairs of a deletion line and a subsequent insertion line as an edit, unless the commit contains more than 10 edits, which is a sign of a non-typo commit. See the first box in Figure FIGREF3 for an illustration. As a result, we collected a total of 335,488 commits and 685,377 edits. The final dataset (see the second box in Figure FIGREF3 for a sample) is formatted in JSONL (JSON per line), where each line corresponds to a single commit with its metadata (its repository, commit hash, commit message, as well as a list of edits) in JSON, a format easily parsable by any programming language."
],
[
"Not all the edits collected in the process described so far are related to typos in natural language text. First, edits may also be made to parts of a repository that are written in programming language versus human language. Second, not every edit in a commit described “typo” is necessarily a typo edit, because a developer may make a single commit comprised of multiple edits, some of which may not be typo-related.",
"We remove the first type of edits by using language detection, and detect (not remove) the second type of edits by building a supervised classifier. The following subsections detail the process. See Figure FIGREF15 (right) for an overview of the typo filtering process."
],
[
"Due to its nature, repositories on GitHub contain a large amount of code (in programming language) as well as natural language texts. We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages. Specifically, we ran the language detector against both the source and the target and discarded all the edits where either is determined as written in a non-human language. We also discarded an edit if the detected language doesn't match between the source and the target. This left us with a total of 203,270 commits and 353,055 edits, which are all included in the final dataset."
],
[
"In this second phase of filtering, we identify all non-typo edits that are not intended to fix mechanical, spelling, or grammatical errors, but to modify the intended meaning between the source and the target.",
"In order to investigate the characteristics of such edits empirically, we first extracted 200 edits for each one of the three largest languages in the GitHub Typo Corpus: English (eng), Simplified Chinese (cmn-hans), and Japanese (jpn). We then had fluent speakers of each language go over the list and annotate each edit with the following four edit categories:",
"Mechanical ... a mechanical edit fixes errors in punctuation and capitalization.",
"Spell ... a spell edit fixes misspellings in words. This also includes conversion errors in non-Latin languages (e.g., Chinese and Japanese).",
"Grammatical ... a grammatical edit fixes grammatical errors in the source.",
"Semantic ... a semantic edit changes the intended meaning between the source and the target.",
"See Figure FIGREF27 for some examples of different edit types on each language. If one edit contains more than one type of changes, the least superficial category is assigned. For example, if there are both spell and grammatical changes in a single edit, the “grammatical” category is assigned to the edit. We note that the first three (mechanical, spell, and grammatical edits, also called typos) are within the scope of the dataset we build, while the last one (semantic edits) is not. Thus, our goal is to identify the last type of edits as accurately as possible in a scalable manner. We will show the statistics of the annotated data in Section 6.",
"We note that the distinction between different categories, especially between spell and grammatical, is not always obvious. For example, even if one mistypes a word “what” to “want” resulting in an ungrammatical sentence, we wouldn't consider this as a grammatical edit but as a spell edit. We clarify the difference by focusing on the process where the error is introduced in the first place. Conceptually, if one assumes that the source is generated by introducing errors to the target through a noisy channel model BIBREF18, BIBREF19, a spell edit is something where noise is introduced to some implicit character-generating process, while a grammatical edit is the one which corrupts some implicit grammatical process (for example, production rules of a context-free grammar)."
],
[
"Finally, after annotating a small amount of samples for the three languages, we computed some basic statistics about each edit that may help in classifying typo edits from non-typo ones. Specifically, we computed three statistics:",
"Ratio of the target perplexity over the source calculated by a language model",
"Normalized edit distance between the source and the target",
"Binary variable indicating whether the edit purely consists of changes in numbers",
"The rationale behind the third feature is that we observed that purely numerical changes always end up being tagged as semantic edits.",
"The perplexity of a text ${\\mathbf {x}} = x_1 x_2, ..., x_L$ is defined by:",
"where $p(x)$ is determined by a trained language model. We hypothesize that perplexity captures the “fluency” of the input text to some degree, and by taking the ratio between the source and the target, the feature can represent the degree to which the fluency is improved before and after the edit.",
"As for the language model, we trained a character level Long Short Term Memory (LSTM) language model developed in BIBREF20 per language, which consists of a trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier. The LSTM hidden state and word embedding sizes are set to be 1000 and 200, respectively. We used 100,000 sentences from the W2C Web Corpus BIBREF21 for training (except for Chinese, where we used 28,000 sentences) and 1,000 sentences for validation for all the languages.",
"The normalized edit distance between the source $\\mathbf {x} = x_1 x_2, ..., x_{L_x}$ and the target $\\mathbf {y} = y_1 y_2, ..., y_{L_y}$ is defined by:",
"where $d({\\mathbf {x}}, {\\mathbf {y}})$ is the (unnormalized) edit distance between ${\\mathbf {x}}$ and ${\\mathbf {y}}$. This feature can capture the amount of the change made between the source and the target, based on our hypothesis that many typo edits only involve a small amount of changes.",
"See Figure FIGREF33 for an overview of the distributions of these computed statistics per category for English. We observed similar trends for other two languages (Chinese and Japanese), except for a slightly larger number of spell edits, mainly due to the non-Latin character conversion errors. We also confirmed that the difference of perplexities between the source and the target for typo edits (i.e., mechanical, spell, and grammatical edits) was statistically significant for all three languages (two-tailed t-test, $p < .01$). This means that these edits, on average, turn the source text into a more fluent text in the target."
],
[
"We then built a logistic regression classifier (with no regularization) per language using the annotated edits and their labels. The classifier has only three features mentioned above plus a bias term. We confirmed that, for every language, all the features are contributing to the prediction of typo edits controlling for other features in a statistically significant way $(p < .05)$. Table TABREF40 shows the performance of the trained classifier based on 10-fold cross validation on the annotated data. The results show that for all the languages mentioned here, the classifier successfully classifies typo edits with an F1-value of approx. 0.9. This means that the harvested edits are fairly clean in the first place (only one third is semantic edits versus others) and it is straightforward to distinguish the two using a simple classifier. In the GitHub Typo Corpus, we annotate every edit in those three languages with the predicted “typo-ness” score (the prediction probability produced from the logistic regression classifier) as well as a binary label indicating whether the edit is predicted as a typo, which may help the users of the dataset determine which edits to use for their purposes."
],
[
"In this section, we provide detailed quantitative and qualitative analyses of the GitHub Typo Corpus."
],
[
"Table TABREF41 shows the statistics of the GitHub Typo Corpus, broken down per language. The distribution of languages is heavily skewed towards English, although we observe the dataset includes a diverse set of other languages. There are 15 languages that have 100 or more edits in the dataset.",
"In addition to an obvious fact that a large fraction of the code on GitHub is written in English, one reason of the bias towards English may be due to our commit collection process, where we used an English keyword “typo” to harvest eligible commit. Although it is a norm on GitHub (and in software development in general) to write commit messages in English no matter what language you are working in, we may be able to collect a more diverse set of commits if we build models to filter through commit messages written in other languages, which is future work."
],
[
"In order to provide a more qualitative look into the dataset, we analyzed all the edits in the top three languages and extracted atomic edits. An atomic edit is defined as a sequence of contiguous characters that are inserted, deleted, or substituted between the source and the target. We extracted these atomic edits by aligning the characters between the source and the target by minimizing the edit distance, then by extracting contiguous edits that are insertion, deletion, or substitution.",
"As one can see from Figure FIGREF45, simple spelling edits such as inserting “s” and deleting “e” dominate the lists. In fact, many of the frequent atomic edits even in Chinese and Japanese are made against English words (see Figure FIGREF27 for examples—you notice many English words such as “GB-18030” and “Gemfile” in non-English text). You also notice a number of grammatical edits in Chinese (e.g., confusion between the possessive particle de and the adjectival particle de) and Japanese (e.g., omissions of case particles such as wo, no, and ni). This demonstrates that the dataset can serve as a rich source of not only spelling but also naturally-occurring grammatical errors."
],
[
"We conclude the analysis section by providing a comprehensive analysis on the types of spelling and grammatical edits, as well as the performance of existing spell checkers on the GitHub Typo Corpus. The first three columns of Table TABREF46 show a breakdown of edit types in the aforementioned set of annotated typo edits in English (Section SECREF26) analyzed by ERRANT BIBREF22, BIBREF23. This shows that the dataset contains diverse types of edits, including orthographic, punctuation, and spelling errors.",
"We then applied Aspell and Enchant, two commonly used spell checking libraries, and measured their performance against each one of the edit types. The results show that the performance of the spell checkers is fairly low ($F0.5 \\approx 0.5$) even for its main target category (SPELL), which suggests that the GitHub Typo Corpus contains many challenging typo edits that existing spell checkers may have a hard time dealing with, and the dataset may provide a rich, complementary source of spelling errors for developing better spell checkers and grammatical error correctors."
],
[
"This paper describes the process where we built the GitHub Typo Corpus, a large-scale multilingual dataset of misspellings and grammatical errors along with their corrections harvested from GitHub, the largest platform for publishing and sharing git repositories. The dataset contains more than 350k edits and 64M characters in more than 15 languages, making it the largest dataset of misspellings to date. We automatically identified typo edits (be it mechanical, spell, or grammatical) versus semantic ones by building a simple logistic regression classifier with only three features which achieved 0.9 F1-measure. We provided detailed qualitative and quantitative analyses of the datasets, demonstrating that the dataset serves as a rich source of spelling and grammatical errors, and existing spell checkers can only achieve an F-measure of $\\sim 0.5$.",
"We are planning on keep publishing new, extended versions of this dataset as new repositories and commits become available on GitHub. As mentioned before, collection of a more linguistically diverse set of commits and edits is also future work. We genuinely hope that this work can contribute to the development of the next generation of even more powerful spelling correction and grammatical error correction systems."
],
[
"The authors would like to thank Tomoya Mizumoto at RIKEN AIP/Future Corporation and Kentaro Inui at RIKEN AIP/Tohoku University for their useful comments and discussion on this project."
]
],
"section_name": [
"Introduction",
"Related Work",
"Definitions",
"Data Collection",
"Data Collection ::: Collecting Repositories",
"Data Collection ::: Collecting Commits and Edits",
"Data Filtering",
"Data Filtering ::: Language Detection",
"Data Filtering ::: Annotation of Edits",
"Data Filtering ::: Statistics of Annotated Edits",
"Data Filtering ::: Classification of Typo Edits",
"Analyses",
"Analyses ::: Statistics of the Dataset",
"Analyses ::: Distribution of Atomic Edits",
"Analyses ::: Evaluating Existing Spell Checker",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"0c3d7e78cb3a3263456efad9640904abe576a212",
"e588e77958950c7b0c24033865b361733de462b2",
"e71ec22f23e59926d9965b8ab20e38e970a4d09e"
],
"answer": [
{
"evidence": [
"We then built a logistic regression classifier (with no regularization) per language using the annotated edits and their labels. The classifier has only three features mentioned above plus a bias term. We confirmed that, for every language, all the features are contributing to the prediction of typo edits controlling for other features in a statistically significant way $(p < .05)$. Table TABREF40 shows the performance of the trained classifier based on 10-fold cross validation on the annotated data. The results show that for all the languages mentioned here, the classifier successfully classifies typo edits with an F1-value of approx. 0.9. This means that the harvested edits are fairly clean in the first place (only one third is semantic edits versus others) and it is straightforward to distinguish the two using a simple classifier. In the GitHub Typo Corpus, we annotate every edit in those three languages with the predicted “typo-ness” score (the prediction probability produced from the logistic regression classifier) as well as a binary label indicating whether the edit is predicted as a typo, which may help the users of the dataset determine which edits to use for their purposes."
],
"extractive_spans": [
"logistic regression classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"We then built a logistic regression classifier (with no regularization) per language using the annotated edits and their labels."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As for the language model, we trained a character level Long Short Term Memory (LSTM) language model developed in BIBREF20 per language, which consists of a trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier. The LSTM hidden state and word embedding sizes are set to be 1000 and 200, respectively. We used 100,000 sentences from the W2C Web Corpus BIBREF21 for training (except for Chinese, where we used 28,000 sentences) and 1,000 sentences for validation for all the languages.",
"We demonstrate that a very simple logistic regression model with only three features can classify typos and non-typo edits correctly with $F1 \\sim 0.9$. This resulted in a dataset containing more than 350k edits and 64M characters in more than 15 languages. To the best of our knowledge, this is the largest multilingual dataset of misspellings to date. We made the dataset publicly available (https://github.com/mhagiwara/github-typo-corpus) along with the automatically assigned typo labels as well as the source code to extract typos. We also provide the detailed analyses of the dataset, where we demonstrate that the F measure of existing spell checkers merely reaches $\\sim 0.5$, arguing that the GitHub Typo Corpus provides a new, rich source of naturally-occurring misspellings and grammatical errors that complement existing datasets."
],
"extractive_spans": [
"Long Short Term Memory (LSTM) language model",
"logistic regression model"
],
"free_form_answer": "",
"highlighted_evidence": [
"As for the language model, we trained a character level Long Short Term Memory (LSTM) language model developed in BIBREF20 per language, which consists of a trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier. ",
"We demonstrate that a very simple logistic regression model with only three features can classify typos and non-typo edits correctly with $F1 \\sim 0.9$."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As for the language model, we trained a character level Long Short Term Memory (LSTM) language model developed in BIBREF20 per language, which consists of a trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier. The LSTM hidden state and word embedding sizes are set to be 1000 and 200, respectively. We used 100,000 sentences from the W2C Web Corpus BIBREF21 for training (except for Chinese, where we used 28,000 sentences) and 1,000 sentences for validation for all the languages.",
"We then built a logistic regression classifier (with no regularization) per language using the annotated edits and their labels. The classifier has only three features mentioned above plus a bias term. We confirmed that, for every language, all the features are contributing to the prediction of typo edits controlling for other features in a statistically significant way $(p < .05)$. Table TABREF40 shows the performance of the trained classifier based on 10-fold cross validation on the annotated data. The results show that for all the languages mentioned here, the classifier successfully classifies typo edits with an F1-value of approx. 0.9. This means that the harvested edits are fairly clean in the first place (only one third is semantic edits versus others) and it is straightforward to distinguish the two using a simple classifier. In the GitHub Typo Corpus, we annotate every edit in those three languages with the predicted “typo-ness” score (the prediction probability produced from the logistic regression classifier) as well as a binary label indicating whether the edit is predicted as a typo, which may help the users of the dataset determine which edits to use for their purposes."
],
"extractive_spans": [
"logistic regression classifier",
"trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"As for the language model, we trained a character level Long Short Term Memory (LSTM) language model developed in BIBREF20 per language, which consists of a trainable embedding layer, three layers of a stacked recurrent neural network, and a softmax classifier.",
"We then built a logistic regression classifier (with no regularization) per language using the annotated edits and their labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ed6c584de7663dc34aa084aeb15fd0c5436b6921",
"4eda23be4a2cac3a9b7e055cf58be896b613e260"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 4: Distribution of counts, perplexity ratio, and normalized edit distance per category",
"See Figure FIGREF33 for an overview of the distributions of these computed statistics per category for English. We observed similar trends for other two languages (Chinese and Japanese), except for a slightly larger number of spell edits, mainly due to the non-Latin character conversion errors. We also confirmed that the difference of perplexities between the source and the target for typo edits (i.e., mechanical, spell, and grammatical edits) was statistically significant for all three languages (two-tailed t-test, $p < .01$). This means that these edits, on average, turn the source text into a more fluent text in the target."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Distribution of counts, perplexity ratio, and normalized edit distance per category",
"See Figure FIGREF33 for an overview of the distributions of these computed statistics per category for English. We observed similar trends for other two languages (Chinese and Japanese), except for a slightly larger number of spell edits, mainly due to the non-Latin character conversion errors. We also confirmed that the difference of perplexities between the source and the target for typo edits (i.e., mechanical, spell, and grammatical edits) was statistically significant for all three languages (two-tailed t-test, $p < .01$). This means that these edits, on average, turn the source text into a more fluent text in the target."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In addition to an obvious fact that a large fraction of the code on GitHub is written in English, one reason of the bias towards English may be due to our commit collection process, where we used an English keyword “typo” to harvest eligible commit. Although it is a norm on GitHub (and in software development in general) to write commit messages in English no matter what language you are working in, we may be able to collect a more diverse set of commits if we build models to filter through commit messages written in other languages, which is future work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to an obvious fact that a large fraction of the code on GitHub is written in English, one reason of the bias towards English may be due to our commit collection process, where we used an English keyword “typo” to harvest eligible commit."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3ff47e3b685313782382141dfaf192fc2adf2a99",
"6e7187758c0de141f88bdcf9edeb0d4ad886d6d3",
"d3c7cd11404ed6bc6ea35159c905f07e3eb9314a"
],
"answer": [
{
"evidence": [
"Due to its nature, repositories on GitHub contain a large amount of code (in programming language) as well as natural language texts. We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages. Specifically, we ran the language detector against both the source and the target and discarded all the edits where either is determined as written in a non-human language. We also discarded an edit if the detected language doesn't match between the source and the target. This left us with a total of 203,270 commits and 353,055 edits, which are all included in the final dataset."
],
"extractive_spans": [
"used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"Due to its nature, repositories on GitHub contain a large amount of code (in programming language) as well as natural language texts. We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages. Specifically, we ran the language detector against both the source and the target and discarded all the edits where either is determined as written in a non-human language. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Due to its nature, repositories on GitHub contain a large amount of code (in programming language) as well as natural language texts. We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages. Specifically, we ran the language detector against both the source and the target and discarded all the edits where either is determined as written in a non-human language. We also discarded an edit if the detected language doesn't match between the source and the target. This left us with a total of 203,270 commits and 353,055 edits, which are all included in the final dataset."
],
"extractive_spans": [
" We used NanigoNet, a language detector based on GCNNs"
],
"free_form_answer": "",
"highlighted_evidence": [
" We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Due to its nature, repositories on GitHub contain a large amount of code (in programming language) as well as natural language texts. We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages. Specifically, we ran the language detector against both the source and the target and discarded all the edits where either is determined as written in a non-human language. We also discarded an edit if the detected language doesn't match between the source and the target. This left us with a total of 203,270 commits and 353,055 edits, which are all included in the final dataset."
],
"extractive_spans": [
"NanigoNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used NanigoNet, a language detector based on GCNNs (Gated Convolutional Neural Networks) BIBREF17 that supports human languages as well as programming languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9fc06e0949ceba52e57cbfaeb39bcee037420fa5",
"886e47731257d03015a245bfe149b362d8cdd39f",
"9a52e3d4767998dd57c5486023316f756e2c8cb9"
],
"answer": [
{
"evidence": [
"Although GitHub provides a set of APIs (application programming interfaces) that allow end-users to access its data in a programmatic manner, it doesn't allow flexible querying on the repository meta data necessary for our data collection purposes. Therefore, we turn to GH Archive, which collects all the GitHub event data and make them accessible through flexible APIs. Specifically, we collected every repository from GH Archive that:",
"Has at least one pull request or pull request review comment event between November 2017 and September 2019,",
"Has 50 or more starts,",
"Has a size between 1MB and 1GB, and",
"Has a permissive license.",
"Note the “and” in the list above—a repository needs to meet all the conditions mentioned above to be eligible. The first two criteria (pull request events and the number of starts) are a sign of a quality repository. As for the license, we allowed apache-2.0 (Apache License 2.0), mit (MIT License), bsd-3-clause (BSD 3-Clause License), bsd-2-clause (BSD 2-Clause License), cc0-1.0 (Creative Commons Zero v1.0 Universal), unlicense (Unlicense), cc-by-4.0 (Creative Commons Attribution 4.0), and bsl-1.0 (Boost Software License 1.0 (BSL-1.0). A repository's number of stars, size, and license are determined as of the event in the first condition."
],
"extractive_spans": [
"Has at least one pull request or pull request review comment event between November 2017 and September 2019,\n\nHas 50 or more starts,\n\nHas a size between 1MB and 1GB, and\n\nHas a permissive license."
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we collected every repository from GH Archive that:\n\nHas at least one pull request or pull request review comment event between November 2017 and September 2019,\n\nHas 50 or more starts,\n\nHas a size between 1MB and 1GB, and\n\nHas a permissive license.\n\nNote the “and” in the list above—a repository needs to meet all the conditions mentioned above to be eligible. The first two criteria (pull request events and the number of starts) are a sign of a quality repository. As for the license, we allowed apache-2.0 (Apache License 2.0), mit (MIT License), bsd-3-clause (BSD 3-Clause License), bsd-2-clause (BSD 2-Clause License), cc0-1.0 (Creative Commons Zero v1.0 Universal), unlicense (Unlicense), cc-by-4.0 (Creative Commons Attribution 4.0), and bsl-1.0 (Boost Software License 1.0 (BSL-1.0). A repository's number of stars, size, and license are determined as of the event in the first condition."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first step for collecting typos is to collect as many eligible GitHub repositories as possible from which commits and edits are extracted. A repository must meet some criteria in order to be included in the corpus, such as size (it needs to big enough to contain at least some amount of typo edits), license (it has to be distributed under a permissive license to allow derived work), and quality (it has to demonstrate some signs of quality, such as the number of stars)."
],
"extractive_spans": [
"GitHub repositories"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first step for collecting typos is to collect as many eligible GitHub repositories as possible from which commits and edits are extracted. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although GitHub provides a set of APIs (application programming interfaces) that allow end-users to access its data in a programmatic manner, it doesn't allow flexible querying on the repository meta data necessary for our data collection purposes. Therefore, we turn to GH Archive, which collects all the GitHub event data and make them accessible through flexible APIs. Specifically, we collected every repository from GH Archive that:",
"Has at least one pull request or pull request review comment event between November 2017 and September 2019,",
"Has 50 or more starts,",
"Has a size between 1MB and 1GB, and",
"Has a permissive license."
],
"extractive_spans": [
"Has at least one pull request or pull request review comment event between November 2017 and September 2019,",
"50 or more starts",
"size between 1MB and 1GB",
"permissive license"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, we turn to GH Archive, which collects all the GitHub event data and make them accessible through flexible APIs. Specifically, we collected every repository from GH Archive that:\n\nHas at least one pull request or pull request review comment event between November 2017 and September 2019,\n\nHas 50 or more starts,\n\nHas a size between 1MB and 1GB, and\n\nHas a permissive license."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"18f02e8ce2858a693314944b64deb38694af8098",
"2e6a5d24ff62761a43a59bb21497afee1a48c81f",
"85a1d552a84cb994b3e32b68d9b45e5f0fa68a4a"
],
"answer": [
{
"evidence": [
"See Figure FIGREF27 for some examples of different edit types on each language. If one edit contains more than one type of changes, the least superficial category is assigned. For example, if there are both spell and grammatical changes in a single edit, the “grammatical” category is assigned to the edit. We note that the first three (mechanical, spell, and grammatical edits, also called typos) are within the scope of the dataset we build, while the last one (semantic edits) is not. Thus, our goal is to identify the last type of edits as accurately as possible in a scalable manner. We will show the statistics of the annotated data in Section 6."
],
"extractive_spans": [
"mechanical, spell, and grammatical edits"
],
"free_form_answer": "",
"highlighted_evidence": [
" We note that the first three (mechanical, spell, and grammatical edits, also called typos) are within the scope of the dataset we build, while the last one (semantic edits) is not."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Finally, after annotating a small amount of samples for the three languages, we computed some basic statistics about each edit that may help in classifying typo edits from non-typo ones. Specifically, we computed three statistics:",
"Ratio of the target perplexity over the source calculated by a language model",
"Normalized edit distance between the source and the target",
"Binary variable indicating whether the edit purely consists of changes in numbers"
],
"extractive_spans": [
"Ratio of the target perplexity over the source calculated by a language model",
"Normalized edit distance between the source and the target",
"Binary variable indicating whether the edit purely consists of changes in numbers"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, after annotating a small amount of samples for the three languages, we computed some basic statistics about each edit that may help in classifying typo edits from non-typo ones. Specifically, we computed three statistics:\n\nRatio of the target perplexity over the source calculated by a language model\n\nNormalized edit distance between the source and the target\n\nBinary variable indicating whether the edit purely consists of changes in numbers"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Finally, after annotating a small amount of samples for the three languages, we computed some basic statistics about each edit that may help in classifying typo edits from non-typo ones. Specifically, we computed three statistics:",
"Ratio of the target perplexity over the source calculated by a language model",
"Normalized edit distance between the source and the target",
"Binary variable indicating whether the edit purely consists of changes in numbers"
],
"extractive_spans": [
"Ratio of the target perplexity over the source calculated by a language model",
"Normalized edit distance between the source and the target",
"Binary variable indicating whether the edit purely consists of changes in numbers"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, after annotating a small amount of samples for the three languages, we computed some basic statistics about each edit that may help in classifying typo edits from non-typo ones. Specifically, we computed three statistics:\n\nRatio of the target perplexity over the source calculated by a language model\n\nNormalized edit distance between the source and the target\n\nBinary variable indicating whether the edit purely consists of changes in numbers"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1f65cd0b92ca2e9f2483e7868dc5f3a7ff98f545",
"2b29fda881cbc88198075590bdeac2356e72d6f5",
"2d5781ae79ab0ae69ad53fd1829d6f548218c990"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Statistics of the dataset (top 10 languages)"
],
"extractive_spans": [],
"free_form_answer": "the top 10 languages are English, simplified Chinese, Japanese, Russian, French, German, Portuguese, Spanish, Korean and Hindi",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Statistics of the dataset (top 10 languages)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF41 shows the statistics of the GitHub Typo Corpus, broken down per language. The distribution of languages is heavily skewed towards English, although we observe the dataset includes a diverse set of other languages. There are 15 languages that have 100 or more edits in the dataset.",
"FLOAT SELECTED: Table 3: Statistics of the dataset (top 10 languages)"
],
"extractive_spans": [],
"free_form_answer": "English, Chinese, Japanese, Russian, French, German, Portugese, Spanish, Korean, Hindi and Others",
"highlighted_evidence": [
"Table TABREF41 shows the statistics of the GitHub Typo Corpus, broken down per language. The distribution of languages is heavily skewed towards English, although we observe the dataset includes a diverse set of other languages. There are 15 languages that have 100 or more edits in the dataset.",
"FLOAT SELECTED: Table 3: Statistics of the dataset (top 10 languages)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF41 shows the statistics of the GitHub Typo Corpus, broken down per language. The distribution of languages is heavily skewed towards English, although we observe the dataset includes a diverse set of other languages. There are 15 languages that have 100 or more edits in the dataset."
],
"extractive_spans": [],
"free_form_answer": "English, Chinese (smpl.), Japanese, Russian, French, German, Portuguese, Spanish, Korean , Hindi",
"highlighted_evidence": [
"Table TABREF41 shows the statistics of the GitHub Typo Corpus, broken down per language. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which classifiers did they experiment with?",
"Is the distribution of the edits uniform across all languages?",
"How did they identify what language the text was?",
"Which repositories did they collect from?",
"Which three features do they use?",
"Which languages are covered in the corpus?"
],
"question_id": [
"af45ff2c4209f14235482329d0729864fb2bd4b0",
"d2451d32c5a11a0eb8356a5e9d94a9231b59f198",
"90dde59e1857a0d2b1ee4615ab017fee0741f29f",
"811b67460e65232b8f363dc3f329ffecdfcc4ab2",
"68aa460ad357b4228b16b31b2cbec986215813bf",
"4542b162a5be00206fd14570898a7925cb267599"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of the corpus and its related concepts. Example taken from the Diff page on Wikipedia",
"Table 1: Percentage of spelling errors in GEC corpora",
"Figure 2: Data collection and filtering process",
"Figure 3: Examples of different types of edits in top three languages",
"Table 2: The cross validation result of typo edit classifiers",
"Table 3: Statistics of the dataset (top 10 languages)",
"Figure 5: Most frequent atomic edits per language. Underscore _ corresponds to a whitespace and φ is an empty string.",
"Figure 4: Distribution of counts, perplexity ratio, and normalized edit distance per category",
"Table 4: Distribution of edit types and the performance of spell checkers on the GitHub Typo Corpus"
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure5-1.png",
"6-Figure4-1.png",
"7-Table4-1.png"
]
} | [
"Which languages are covered in the corpus?"
] | [
[
"1911.12893-Analyses ::: Statistics of the Dataset-0",
"1911.12893-6-Table3-1.png"
]
] | [
"English, Chinese (smpl.), Japanese, Russian, French, German, Portuguese, Spanish, Korean , Hindi"
] | 119 |
1910.07973 | Universal Text Representation from BERT: An Empirical Study | We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks. Sentence-level embeddings are evaluated against two state-of-the-art models on downstream and probing tasks from SentEval, while passage-level embeddings are evaluated on four question-answering (QA) datasets under a learning-to-rank problem setting. Embeddings from the pre-trained BERT model perform poorly in semantic similarity and sentence surface information probing tasks. Fine-tuning BERT on natural language inference data greatly improves the quality of the embeddings. Combining embeddings from different BERT layers can further boost performance. BERT embeddings outperform BM25 baseline significantly on factoid QA datasets at the passage level, but fail to perform better than BM25 on non-factoid datasets. For all QA datasets, there is a gap between embedding-based method and in-domain fine-tuned BERT (we report new state-of-the-art results on two datasets), which suggests deep interactions between question and answer pairs are critical for those hard tasks. | {
"paragraphs": [
[
"Universal text representations are important for many NLP tasks as modern deep learning models are becoming more and more data-hungry and computationally expensive. On one hand, most research and industry tasks face data sparsity problem due to the high cost of annotation. Universal text representations can mitigate this problem to a certain extent by performing implicit transfer learning among tasks. On the other hand, modern deep learning models with millions of parameters are expensive to train and host, while models using text representation as the building blocks can achieve similar performance with much fewer tunable parameters. The pre-computed text embeddings can also help decrease model latency dramatically at inference time.",
"Since the introduction of pre-trained word embeddings such as word2vec BIBREF0 and GloVe BIBREF1, a lot of efforts have been devoted to developing universal sentence embeddings. Initial attempts at learning sentence representation using unsupervised approaches did not yield satisfactory performance. Recent work BIBREF2 has shown that models trained in supervised fashion on datasets like Stanford Natural Language Inference (SNLI) corpus BIBREF3 can consistently outperform unsupervised methods like SkipThought vectors BIBREF4. More recently, Universal Sentence Encoder BIBREF5 equipped with the Transformer BIBREF6 as the encoder, co-trained on a large amount of unsupervised training data and SNLI corpus, has demonstrated surprisingly good performance with minimal amounts of supervised training data for a transfer task.",
"BERT BIBREF7, one of the latest models that leverage heavily on language model pre-training, has achieved state-of-the-art performance in many natural language understanding tasks ranging from sequence and sequence pair classification to question answering. The fact that pre-trained BERT can be easily fine-tuned with just one additional output layer to create a state-of-the-art model for a wide range of tasks suggests that BERT representations are potential universal text embeddings.",
"Passages that consist of multiple sentences are coherent units of natural languages that convey information at a pragmatic or discourse level. While there are many models for generating and evaluating sentence embeddings, there hasn't been a lot of work on passage level embedding generation and evaluation.",
"In this paper, we conducted an empirical study of layer-wise activations of BERT as general-purpose text embeddings. We want to understand to what extent does the BERT representation capture syntactic and semantic information. The sentence-level embeddings are evaluated on downstream and probing tasks using the SentEval toolkit BIBREF8, while the passage-level encodings are evaluated on four passage-level QA datasets (both factoid and non-factoid) under a learning-to-rank setting. Different methods of combining query embeddings with passage-level answer embeddings are examined."
],
[
"We use the SentEval toolkit to evaluate the quality of sentence representations from BERT activations. The evaluation encompasses a variety of downstream and probing tasks. Downstream tasks include text classification, natural language inference, paraphrase detection, and semantic similarity. Probing tasks use single sentence embedding as input, are designed to probe sentence-level linguistic phenomena, from superficial properties of sentences to syntactic information to semantic acceptability. For details about the tasks, please refer to BIBREF8 and BIBREF9. We compare the BERT embeddings against two state-of-the-art sentence embeddings, Universal Sentence Encoder BIBREF5, InferSent BIBREF2, and a baseline of averaging GloVe word embeddings.",
"Effect of Encoder Layer: We compare the performance of embeddings extracted from different encoder layers of a pre-trained BERT using bert-as-service BIBREF10. Since we are interested in the linguistic information encoded in the embeddings, we only add a logistic regression layer on top of the embeddings for each classification task. The results of using [CLS] token activations as embeddings are presented in Figure FIGREF1. The raw values are provided in the Appendix. In the heatmap, the raw values of metrics are normalized by the best performance of a particular task from all the models we evaluated including BERT. The tasks in the figure are grouped by task category. For example, all semantic similarity related tasks are placed at the top of the figure.",
"As can be seen from the figure, embeddings from top layers generally perform better than lower layers. However, for certain semantic probing tasks such as tense classification, subject, and object number classifications, middle layer embeddings perform the best. Intuitively, embeddings from top layer should be more biased towards the target of BERT pre-training tasks, while bottom layer embeddings should be close to the word embeddings. We observed a higher correlation in performance between bottom layer embeddings and GloVe embeddings than embeddings from other layers. Overall, pre-trained BERT embeddings perform well in text classification and syntactic probing tasks. The biggest limitation lies in the semantic similarity and sentence surface information probing tasks, where we observed a big gap between BERT and other state-of-the-art models.",
"Effect of Pooling Methods: We examined different methods of extracting BERT hidden state activations. The pooling methods we evaluated include: CLS-pooling (the hidden state corresponding to the [CLS] token), SEP-pooling (the hidden state corresponding to the [SEP] token), Mean-pooling (the average of the hidden state of the encoding layer on the time axis), and Max-pooling (the maximum of the hidden state of the encoding layer on the time axis). To eliminate the layer-wise effects, we averaged the performance of each pooling method over different layers. The results are summarized in Table TABREF2, where the score for each task category is calculated by averaging the normalized values for the tasks within each category. Although the activations of [CLS] token hidden states are often used in fine-tuning BERT for classification tasks, Mean-pooling of hidden states performs the best in all task categories among all the pooling methods.",
"Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix.",
"As we can see from the table, embeddings from pre-trained BERT are good at capturing sentence-level syntactic information and semantic information, but poor at semantic similarity tasks and surface information tasks. Our findings are consistent with BIBREF12 work on assessing BERT's syntactic abilities. Fine-tuning on natural language inference datasets improves the quality of sentence embedding, especially on semantic similarity tasks and entailment tasks. Combining embeddings from two layers can further boost the performance on sentence surface and syntactic information probing tasks. Experiments were also conducted by combining embeddings from multiple layers. However, there is no significant and consistent improvement over pooling just from two layers. Adding multi-layer perceptron (MLP) instead of logistic regression layer on top of the embeddings also provides no significant changes in performance, which suggests that most linguistic properties can be extracted with just a linear readout of the embeddings. Our best model is the combination of embeddings from the top and bottom layer of the BERT fine-tuned on SNLI dataset."
],
[
"In this section, we evaluate BERT embeddings at passage level on question-answering datasets under a learning-to-rank problem setting.",
"Datasets: We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. They cover both factoid and non-factoid QA and different average passage length. The statistics of the four datasets are provided in the Appendix. To generate passage-level question-answering data from Quasart-t and SearchQA, we used the retrieved passages for each question from OpenQA, and generated question-passage relevance label based on whether the ground truth answer is contained in the passage.",
"Experiment Setting: We use the same pooling methods as in the sentence embedding experiment to extract passage embeddings, and make sure that the passage length is within BERT's maximum sequence length. Different methods of combining query embeddings with answer passage embeddings were explored including: cosine similarity (no trainable parameter), bilinear function, concatenation, and $(u, v, u * v, |u - v|)$ where $u$ and $v$ are query embedding and answer embedding, respectively. A logistic regression layer or an MLP layer is added on top of the embeddings to output a ranking score. We apply the pairwise rank hinge loss $l(q, +a, -a; \\theta ) = max\\lbrace 0, - S(q, +a; \\theta )+S(q, -a; \\theta )\\rbrace $ to every tuple of $(query, +answer, -answer)$. Ranking metrics such as MRR (mean reciprocal rank), MAP (mean average precision), Precision@K and Recall@K are used to measure the performance. We compared BERT passage embeddings against the baseline of BM25, other state-of-the-art models, and a fine-tuned BERT on in-domain supervised data which serves as the upper bound. For in-domain BERT fine-tuning, we feed the hidden state of the [CLS] token from the top layer into a two-layer MLP which outputs a relevance score between the question and candidate answer passage. We fine-tune all BERT parameters except the word embedding layers.",
"Results: The comparison between BERT embeddings and other models is presented in Table TABREF5. Overall, in-domain fine-tuned BERT delivers the best performance. We report new state-of-the-art results on WikiPassageQA ($33\\%$ improvement in MAP) and InsuranceQA (version 1.0) ($3.6\\%$ improvement in P@1) by supervised fine-tuning BERT using pairwise rank hinge loss. When evaluated on non-factoid QA datasets, there is a big gap between BERT embeddings and the fully fine-tuned BERT, which suggests that deep interactions between questions and answers are critical to the task. However, the gap is much smaller for factoid QA datasets. Since non-factoid QA depends more on content matching rather than vocabulary matching, the results are kind of expected. Similar to BERT for sentence embeddings, mean-pooling and combining the top and bottom layer embeddings lead to better performance, and $(u, v, u * v, |u - v|)$ shows the strongest results among other interaction schemes. Different from sentence-level embeddings, fine-tuning BERT on SNLI doesn't lead to significant improvement, which suggests possible domain mismatch between SNLI and the QA datasets. MLP layer usually provided a 1-2 percent boost in performance compared to the logistic regression layer. For WikiPassageQA, BERT embeddings perform comparably as BM25 baseline. For InsuranceQA, BERT embeddings outperform a strong representation-based matching model DSSM BIBREF18, but still far behind the state-of-the-art interaction-based model SUBMULT+NN BIBREF17 and fully fine-tuned BERT. On factoid datasets (Quasar-t and SearchQA), BERT embeddings outperform BM25 baseline significantly."
],
[
"In this paper, we conducted an empirical investigation of BERT activations as universal text embeddings. We show that sentence embeddings from BERT perform strongly on SentEval tasks, and combining embeddings from the top and bottom layers of BERT fine-tuned on SNLI provides the best performance. At passage-level, we evaluated BERT embeddings on four QA datasets. Models based on BERT passage embeddings outperform BM25 baseline significantly on factoid QA datasets but fail to perform better than BM25 on non-factoid datasets. We observed a big gap between embedding-based models and in-domain the fully fine-tuned BERT on QA datasets. Future research is needed to better model the interactions between pairs of text embeddings."
]
],
"section_name": [
"Introduction",
"BERT Sentence Embedding",
"BERT Passage Embedding",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4d46ba098196a45a5af5d59be57c1bec5fb865c5",
"5cf4bb54c868fda47fadeb4e1fe7e6c284daa96e",
"6e3f738468f3b2136424d5ae43282fa40b213dca"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. "
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1808ffd5e2ce7d8d4d60330a1c171fa78dc363ae",
"cc2dfd8cab876f712abe5d0e215f4815cac53559",
"fea49dcca640ccb91684c399f0fc2e10d108d654"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"689ab9a0692618cbf89ecf87259528a39dc39977",
"6ed9b43f78ef8b5f960193786ad210e168d0d5f1",
"fe61a177a273bd1b685898335bbc863cfa5d495f"
],
"answer": [
{
"evidence": [
"Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix.",
"FLOAT SELECTED: Table 2: Comparison across models. PT stands for pre-trained BERT. MNLI and SNLI stand for BERT fine-tuned on MNLI, SNLI, representatively. Letters in parentheses represent BERT pooling layers. “t” means top layer, “b” means bottom layer. Mean-pooling is used for all BERT embeddings. Logistic regression layer is added on top of the embeddings."
],
"extractive_spans": [],
"free_form_answer": "Top and bottom layers",
"highlighted_evidence": [
"Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix.",
"FLOAT SELECTED: Table 2: Comparison across models. PT stands for pre-trained BERT. MNLI and SNLI stand for BERT fine-tuned on MNLI, SNLI, representatively. Letters in parentheses represent BERT pooling layers. “t” means top layer, “b” means bottom layer. Mean-pooling is used for all BERT embeddings. Logistic regression layer is added on top of the embeddings."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As we can see from the table, embeddings from pre-trained BERT are good at capturing sentence-level syntactic information and semantic information, but poor at semantic similarity tasks and surface information tasks. Our findings are consistent with BIBREF12 work on assessing BERT's syntactic abilities. Fine-tuning on natural language inference datasets improves the quality of sentence embedding, especially on semantic similarity tasks and entailment tasks. Combining embeddings from two layers can further boost the performance on sentence surface and syntactic information probing tasks. Experiments were also conducted by combining embeddings from multiple layers. However, there is no significant and consistent improvement over pooling just from two layers. Adding multi-layer perceptron (MLP) instead of logistic regression layer on top of the embeddings also provides no significant changes in performance, which suggests that most linguistic properties can be extracted with just a linear readout of the embeddings. Our best model is the combination of embeddings from the top and bottom layer of the BERT fine-tuned on SNLI dataset."
],
"extractive_spans": [
" the top and bottom layer of the BERT fine-tuned on SNLI dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our best model is the combination of embeddings from the top and bottom layer of the BERT fine-tuned on SNLI dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results: The comparison between BERT embeddings and other models is presented in Table TABREF5. Overall, in-domain fine-tuned BERT delivers the best performance. We report new state-of-the-art results on WikiPassageQA ($33\\%$ improvement in MAP) and InsuranceQA (version 1.0) ($3.6\\%$ improvement in P@1) by supervised fine-tuning BERT using pairwise rank hinge loss. When evaluated on non-factoid QA datasets, there is a big gap between BERT embeddings and the fully fine-tuned BERT, which suggests that deep interactions between questions and answers are critical to the task. However, the gap is much smaller for factoid QA datasets. Since non-factoid QA depends more on content matching rather than vocabulary matching, the results are kind of expected. Similar to BERT for sentence embeddings, mean-pooling and combining the top and bottom layer embeddings lead to better performance, and $(u, v, u * v, |u - v|)$ shows the strongest results among other interaction schemes. Different from sentence-level embeddings, fine-tuning BERT on SNLI doesn't lead to significant improvement, which suggests possible domain mismatch between SNLI and the QA datasets. MLP layer usually provided a 1-2 percent boost in performance compared to the logistic regression layer. For WikiPassageQA, BERT embeddings perform comparably as BM25 baseline. For InsuranceQA, BERT embeddings outperform a strong representation-based matching model DSSM BIBREF18, but still far behind the state-of-the-art interaction-based model SUBMULT+NN BIBREF17 and fully fine-tuned BERT. On factoid datasets (Quasar-t and SearchQA), BERT embeddings outperform BM25 baseline significantly."
],
"extractive_spans": [
"combining the top and bottom layer embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Similar to BERT for sentence embeddings, mean-pooling and combining the top and bottom layer embeddings lead to better performance, and $(u, v, u * v, |u - v|)$ shows the strongest results among other interaction schemes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"160ddf474627ae1c348b81414b89428c874180ab",
"6dbb86df6e76de08db144ad57f8b9a19ec1ed855",
"f7e92250c8ac68aac9d31c300116aeea766a4092"
],
"answer": [
{
"evidence": [
"Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix."
],
"extractive_spans": [
"MNLI BIBREF11",
"SNLI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix.",
"As we can see from the table, embeddings from pre-trained BERT are good at capturing sentence-level syntactic information and semantic information, but poor at semantic similarity tasks and surface information tasks. Our findings are consistent with BIBREF12 work on assessing BERT's syntactic abilities. Fine-tuning on natural language inference datasets improves the quality of sentence embedding, especially on semantic similarity tasks and entailment tasks. Combining embeddings from two layers can further boost the performance on sentence surface and syntactic information probing tasks. Experiments were also conducted by combining embeddings from multiple layers. However, there is no significant and consistent improvement over pooling just from two layers. Adding multi-layer perceptron (MLP) instead of logistic regression layer on top of the embeddings also provides no significant changes in performance, which suggests that most linguistic properties can be extracted with just a linear readout of the embeddings. Our best model is the combination of embeddings from the top and bottom layer of the BERT fine-tuned on SNLI dataset."
],
"extractive_spans": [
"MNLI",
"SNLI"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment.",
"Fine-tuning on natural language inference datasets improves the quality of sentence embedding, especially on semantic similarity tasks and entailment tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix."
],
"extractive_spans": [
"Two natural language inference datasets, MNLI BIBREF11 and SNLI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"99972b83b19c6967b1074a172816fd3857ee1645",
"c285ec044d021c721f62aced2b593580b0b9742a",
"f7c0029aca9c5e39e9f211538e9f1998c2e93caf"
],
"answer": [
{
"evidence": [
"Datasets: We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. They cover both factoid and non-factoid QA and different average passage length. The statistics of the four datasets are provided in the Appendix. To generate passage-level question-answering data from Quasart-t and SearchQA, we used the retrieved passages for each question from OpenQA, and generated question-passage relevance label based on whether the ground truth answer is contained in the passage."
],
"extractive_spans": [
"(1) WikiPassageQA BIBREF13",
"(2) InsuranceQA (version 1.0) BIBREF14",
"(3) Quasar-t BIBREF15",
"(4) SearchQA BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Datasets: We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. They cover both factoid and non-factoid QA and different average passage length. The statistics of the four datasets are provided in the Appendix. To generate passage-level question-answering data from Quasart-t and SearchQA, we used the retrieved passages for each question from OpenQA, and generated question-passage relevance label based on whether the ground truth answer is contained in the passage."
],
"extractive_spans": [
"WikiPassageQA",
"InsuranceQA",
"Quasar-t",
"SearchQA"
],
"free_form_answer": "",
"highlighted_evidence": [
"Datasets: We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. They cover both factoid and non-factoid QA and different average passage length."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Datasets: We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. They cover both factoid and non-factoid QA and different average passage length. The statistics of the four datasets are provided in the Appendix. To generate passage-level question-answering data from Quasart-t and SearchQA, we used the retrieved passages for each question from OpenQA, and generated question-passage relevance label based on whether the ground truth answer is contained in the passage."
],
"extractive_spans": [
"WikiPassageQA",
"InsuranceQA ",
"Quasar-t ",
"SearchQA"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experimented on four datasets: (1) WikiPassageQA BIBREF13, (2) InsuranceQA (version 1.0) BIBREF14, (3) Quasar-t BIBREF15, and (4) SearchQA BIBREF16. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"48294b2471ff037fb2556c4c8700afda0379fff9",
"dad2db74da65b14dfd031b3f4aa2fdf9f661bf46"
],
"answer": [
{
"evidence": [
"We use the SentEval toolkit to evaluate the quality of sentence representations from BERT activations. The evaluation encompasses a variety of downstream and probing tasks. Downstream tasks include text classification, natural language inference, paraphrase detection, and semantic similarity. Probing tasks use single sentence embedding as input, are designed to probe sentence-level linguistic phenomena, from superficial properties of sentences to syntactic information to semantic acceptability. For details about the tasks, please refer to BIBREF8 and BIBREF9. We compare the BERT embeddings against two state-of-the-art sentence embeddings, Universal Sentence Encoder BIBREF5, InferSent BIBREF2, and a baseline of averaging GloVe word embeddings."
],
"extractive_spans": [
"Downstream tasks include text classification, natural language inference, paraphrase detection, and semantic similarity",
"probe sentence-level linguistic phenomena"
],
"free_form_answer": "",
"highlighted_evidence": [
"The evaluation encompasses a variety of downstream and probing tasks. Downstream tasks include text classification, natural language inference, paraphrase detection, and semantic similarity. Probing tasks use single sentence embedding as input, are designed to probe sentence-level linguistic phenomena, from superficial properties of sentences to syntactic information to semantic acceptability."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What is the BM25 baseline?",
"Which BERT layers were combined to boost performance?",
"Which NLI data was used to improve the quality of the embeddings?",
"Which four QA datasets are examined?",
"Which two tasks from SentEval are the sentence embeddings evaluated against?"
],
"question_id": [
"a17fc7b96753f85aee1d2036e2627570f4b50c30",
"c6170bb09ba2a416f8fa9b542f0ab05a64dbf2e4",
"fe080c6393f126b55ae456b81133bfc8ecbe85c2",
"53a8c3cf22d6bf6477bc576a85a83d8447ee0484",
"3a33512d253005ac280ee9ca4f9dfa69aa38d48f",
"f7f2968feb28c2907266c892f051ae9f7d6286e6"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Sentence embedding performance of [CLS] token activation from different layers of BERT. Color value of 1 corresponds to the best performance on a given task. Numbers on the x-axis represent the pooling layer with -1 being the top encoder layer, -12 being the bottom layer.",
"Table 1: Comparison of pooling methods",
"Table 2: Comparison across models. PT stands for pre-trained BERT. MNLI and SNLI stand for BERT fine-tuned on MNLI, SNLI, representatively. Letters in parentheses represent BERT pooling layers. “t” means top layer, “b” means bottom layer. Mean-pooling is used for all BERT embeddings. Logistic regression layer is added on top of the embeddings.",
"Table 3: Results of BERT passage-level embeddings on question-answering datasets"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"Which BERT layers were combined to boost performance?"
] | [
[
"1910.07973-4-Table2-1.png",
"1910.07973-BERT Sentence Embedding-5",
"1910.07973-BERT Sentence Embedding-4",
"1910.07973-BERT Passage Embedding-3"
]
] | [
"Top and bottom layers"
] | 120 |
1603.08868 | A Readable Read: Automatic Assessment of Language Learning Materials based on Linguistic Complexity | Corpora and web texts can become a rich language learning resource if we have a means of assessing whether they are linguistically appropriate for learners at a given proficiency level. In this paper, we aim at addressing this issue by presenting the first approach for predicting linguistic complexity for Swedish second language learning material on a 5-point scale. After showing that the traditional Swedish readability measure, L\"asbarhetsindex (LIX), is not suitable for this task, we propose a supervised machine learning model, based on a range of linguistic features, that can reliably classify texts according to their difficulty level. Our model obtained an accuracy of 81.3% and an F-score of 0.8, which is comparable to the state of the art in English and is considerably higher than previously reported results for other languages. We further studied the utility of our features with single sentences instead of full texts since sentences are a common linguistic unit in language learning exercises. We trained a separate model on sentence-level data with five classes, which yielded 63.4% accuracy. Although this is lower than the document level performance, we achieved an adjacent accuracy of 92%. Furthermore, we found that using a combination of different features, compared to using lexical features alone, resulted in 7% improvement in classification accuracy at the sentence level, whereas at the document level, lexical features were more dominant. Our models are intended for use in a freely accessible web-based language learning platform for the automatic generation of exercises. | {
"paragraphs": [
[
"Linguistic information provided by Natural Language Processing (NLP) tools has good potential for turning the continuously growing amount of digital text into interactive and personalized language learning material. Our work aims at overcoming one of the fundamental obstacles in this domain of research, namely how to assess the linguistic complexity of texts and sentences from the perspective of second and foreign language (L2) learners.",
"There are a number of readability models relying on NLP tools to predict the difficulty (readability) level of a text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . The linguistic features explored so far for this task incorporate information, among others, from part-of-speech (POS) taggers and dependency parsers. Cognitively motivated features have also been proposed, for example, in the Coh-Metrix BIBREF2 . Although the majority of previous work focuses primarily on document-level analysis, a finer-grained, sentence-level readability has received increasing interest in recent years BIBREF6 , BIBREF7 , BIBREF8 .",
"The previously mentioned studies target mainly native language (L1) readers including people with low literacy levels or mild cognitive disabilities. Our focus, however, is on building a model for predicting the proficiency level of texts and sentences used in L2 teaching materials. This aspect has been explored for English BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , French BIBREF13 , Portuguese BIBREF14 and, without the use of NLP, for Dutch BIBREF15 .",
"Readability for the Swedish language has a rather long tradition. One of the most popular, easy-to-compute formulas is LIX (Läsbarthetsindex, `Readability index') proposed in BIBREF16 . This measure combines the average number of words per sentence in the text with the percentage of long words, i.e. tokens consisting of more than six characters. Besides traditional formulas, supervised machine learning approaches have also been tested. Swedish document-level readability with a native speaker focus is described in BIBREF4 and BIBREF17 . For L2 Swedish, only a binary sentence-level model exists BIBREF8 , but comprehensive and highly accurate document- and sentence-level models for multiple proficiency levels have not been developed before.",
"In this paper, we present a machine learning model trained on course books currently in use in L2 Swedish classrooms. Our goal was to predict linguistic complexity of material written by teachers and course book writers for learners, rather than assessing learner-produced texts. We adopted the scale from the Common European Framework of Reference for Languages (CEFR) BIBREF18 which contains guidelines for the creation of teaching material and the assessment of L2 proficiency. CEFR proposes six levels of language proficiency: A1 (beginner), A2 (elementary), B1 (intermediate), B2 (upper intermediate), C1 (advanced) and C2 (proficient). Since sentences are a common unit in language exercises, but remain less explored in the readability literature, we also investigate the applicability of our approach to sentences, performing a 5-way classification (levels A1-C1). Our document-level model achieves a state-of-the-art performance (F-score of 0.8), however, there is room for improvement in sentence-level predictions. We plan to make our results available through the online intelligent computer-assisted language learning platform Lärka, both as corpus-based exercises for teachers and learners of L2 Swedish and as web-services for researchers and developers.",
"In the following sections, we first describe our datasets (section SECREF2 ) and features (section SECREF3 ), then we present the details and the results of our experiments in section SECREF4 . Finally, section SECREF5 concludes our work and outlines further directions of research within this area."
],
[
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 . This corpus consists of twelve books (from four different publishers) whose usability and level have been confirmed by Swedish L2 teachers. The course books have been annotated both content-wise (e.g. exercises, lists) and linguistically (e.g. with POS and dependency tags) BIBREF19 . We collected a total of 867 texts (reading passages) from this corpus. We excluded texts that are primarily based on dialogues from the current experiments due to their specific linguistic structure, with the aim of scaling down differences connected to text genres rather than linguistic complexity. We plan to study the readability of dialogues and compare them to non-dialogue texts in the future.",
"Besides reading passages, i.e. texts, the COCTAILL corpus contains a number of sentences independent from each other, i.e. not forming a coherent text, in the form of lists of sentences and language examples. This latter category consists of sentences illustrating the use of specific grammatical patterns or lexical items. Collecting these sentences, we built a sentence-level dataset consisting of 1874 instances. The information encoded in the content-level annotation of COCTAILL (XML tags list, language_example and the attribute unit) enabled us to include only complete sentences and exclude sentences containing gaps and units larger or smaller than a sentence (e.g. texts, phrases, single words etc.). The CEFR level of both sentences and texts has been derived from the CEFR level of the lesson (chapter) they appeared in. In Table TABREF3 , columns 2-5 give an overview of the distribution of texts across levels and their mean length in sentences. The distribution of sentences per level is presented in the last two columns of Table TABREF3 . COCTAILL contained a somewhat more limited amount of B2 and C1 level sentences in the form of lists and language examples, possibly because learners handle larger linguistic units with more ease at higher proficiency levels."
],
[
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 ).",
"Length-based (Len): These features include sentence length in number of tokens (#1) and characters (#4), extra-long words (longer than thirteen characters) and the traditional Swedish readability formula, LIX (see section SECREF1 ). For the sentence-level analysis, instead of the ratio of number of tokens to the number of sentences in the text, we considered the number of tokens in one sentence.",
"Lexical (Lex): Similar to BIBREF8 , we used information from the Kelly list BIBREF21 , a lexical resource providing a CEFR level and frequencies per lemma based on a corpus of web texts. Thus, this word list is entirely independent from our dataset. Instead of percentages, we used incidence scores (IncSc) per 1000 words to reduce the influence of sentence length on feature values. The IncSc of a category was computed as 1000 divided by the number of tokens in the text or sentence multiplied by the count of the category in the sentence. We calculated the IncSc of words belonging to each CEFR level (#6 - #11). In features #12 and #13 we considered difficult all tokens whose level was above the CEFR level of the text or sentence. We computed also the IncSc of tokens not present in the Kelly list (#14), tokens for which the lemmatizer did not find a corresponding lemma form (# 15), as well as average log frequencies (#16).",
"Morphological (Morph): We included the variation (the ratio of a category to the ratio of lexical tokens - i.e. nouns, verbs, adjectives and adverbs) and the IncSc of all lexical categories together with the IncSc of punctuations, particles, sub- and conjunctions (#34, #51). Some additional features, using insights from L2 teaching material BIBREF20 , captured fine-grained inflectional information such as the IncSc of neuter gender nouns and the ratio of different verb forms to all verbs (#52 - #56). Instead of simple type-token ratio (TTR) we used a bilogarithmic and a square root TTR as in BIBREF3 . Moreover, nominal ratio BIBREF4 , the ratio of pronouns to prepositions BIBREF13 , and two lexical density features were also included: the ratio of lexical words to all non-lexical categories (#48) and to all tokens (#49). Relative structures (#57) consisted of relative adverbs, determiners, pronouns and possessives.",
"Syntactic (Synt): Some of these features were based on the length (depth) and the direction of dependency arcs (#17 - #21). We complemented this, among others, with the IncSc of relative clauses in clefts (#26), and the IncSc of pre-and postmodifiers (e.g. adjectives and prepositional phrases) BIBREF4 .",
"Semantic (Sem): Features based on information from SALDO BIBREF23 , a Swedish lexical-semantic resource. We used the average number of senses per token as in BIBREF8 and included also the average number of noun senses per nouns. Once reliable word-sense disambiguation methods become available for Swedish, additional features based on word senses could be taken into consideration here.",
"The complete set of 61 features is presented in Table TABREF6 . Throughout this paper we will refer to the machine learning models using this set of features, unless otherwise specified. Features for both document- and sentence-level analyses were extracted for each sentence, the values being averaged over all sentences in the text in the document-level experiments to ensure comparability."
],
[
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). For each of these, the default parameter settings have been used as implemented in WEKA.",
"We considered classification accuracy, F-score and Root Mean Squared Error (RMSE) as evaluation measures for our approach. We also included a confusion matrix, as we deal with a dataset that is unbalanced across CEFR levels. The scores were obtained by performing a ten-fold Cross-Validation (CV)."
],
[
"We trained document-level classification models, comparing the performance between different subgroups of features. We had two baselines: a majority classifier (Majority), with B2 as majority class, and the LIX readability score. Table TABREF9 shows the type of subgroup (Type), the number of features (Nr) and three evaluation metrics using logistic regression.",
"Not only was accuracy very low with LIX, but this measure also classified 91.6% of the instances as B2 level. Length-based, semantic and syntactic features in isolation showed similar or only slightly better performance than the baselines, therefore we excluded them from Table TABREF9 . Lexical features, however, had a strong discriminatory power without an increase in bias towards the majority classes. Using this subset of features only, we achieved approximately the same performance (0.8 F) as with the complete set of features, All (0.81 F). This suggests that lexical information alone can successfully distinguish the CEFR level of course book texts at the document level. Using the complete feature set we obtained 81% accuracy and 97% adjacent accuracy (when misclassifications to adjacent classes are considered correct). The same scores with lexical features (Lex) only were 80.3% (accuracy) and 98% (adjacent accuracy).",
"Accuracy scores using other learning algorithms were significantly lower (see Table TABREF10 ), therefore, we report only the results of the logistic regression classifier in the subsequent sections.",
"Instead of classification, some readability studies (e.g. BIBREF10 , BIBREF14 ) employed linear regression for this task. For a better comparability, we applied also a linear regression model to our data which yielded a correlation of 0.8 and an RMSE of 0.65.",
"To make sure that our system was not biased towards the majority classes B1 and B2, we inspected the confusion matrix (Table TABREF11 ) after classification using All. We can observe from Table TABREF11 that the system performs better at A1 and C1 levels, where confusion occurred only with adjacent classes. Similar to the findings in BIBREF13 for French, classes in the middle of the scale were harder to distinguish. Most misclassifications in our material occurred at A2 level (23%) followed by B1 and B2 level, (20% and 17% respectively).",
"To establish the external validity of our approach, we tested it on a subset of LäSBarT BIBREF4 , a corpus of Swedish easy-to-read (ETR) texts previously employed for Swedish L1 readability studies BIBREF4 , BIBREF17 . We used 18 fiction texts written for children between ages nine to twelve, half of which belonged to the ETR category and the rest were unsimplified. Our model generalized well to unseen data, it classified all ETR texts as B1 and all ordinary texts as C1 level, thus correctly identifying in all cases the relative difference in complexity between the documents of the two categories.",
"Although a direct comparison with other studies is difficult because of the target language, the nature of the datasets and the number of classes used, in terms of absolute numbers, our model achieves comparable performance with the state-of-the-art systems for English BIBREF9 , BIBREF12 . Other studies for non-English languages using CEFR levels include: BIBREF13 , reporting 49.1% accuracy for a French system distinguishing six classes; and BIBREF14 achieving 29.7% accuracy on a smaller Portuguese dataset with five levels."
],
[
"After building good classification models at document level, we explored the usability of our approach at the sentence level. Sentences are particularly useful in Computer-Assisted Language Learning (CALL) applications, among others, for generating sentence-based multiple choice exercises, e.g. BIBREF25 , or vocabulary examples BIBREF26 . Furthermore, multi-class readability classification of sentence-level material intended for second language learners has not been previously investigated in the literature.",
"With the same methodology (section SECREF7 ) and feature set (section SECREF3 ) used at the document level, we trained and tested classification models based on the sentence-level data (see section SECREF2 ). The results are shown in Table TABREF13 .",
"Although the majority baseline in the case of sentences was 7% higher than the one for texts (Table TABREF9 ), the classification accuracy for sentences using all features was only 63.4%. This is a considerable drop (-18%) in performance compared to the document level (81.3% accuracy). It is possible that the features did not capture differences between the sentences because the amount of context is more limited on the fine-grained level. It is interesting to note that, although there was no substantial performance difference between Lex and All at a document level, the model with all the features performed 7% better at sentence level.",
"Most misclassifications occurred, however, within a distance of one class only, thus the adjacent accuracy of the sentence-level model was still high, 92% (see Table TABREF14 ). Predictions were noticeably more accurate for classes A1, A2 and B1 which had a larger number of instances.",
"In the next step, we applied the sentence-level model on the document-level data to explore how homogeneous texts were in terms of the CEFR level of the sentences they contained. Figure FIGREF15 shows that texts at each CEFR level contain a substantial amount of sentences of the same level of the whole text, but they also include sentences classified as belonging to other CEFR levels.",
"Finally, as in the case of the document-level analysis, we tested our sentence-level model also on an independent dataset (SenRead), a small corpus of sentences with gold-standard CEFR annotation. This data was created during a user-based evaluation study BIBREF27 and it consists of 196 sentences from generic corpora, i.e. originally not L2 learner-focused corpora, rated as being suitable at B1 or being at a level higher than B1. We used this corpus along with the judgments of the three participating teachers. Since SenRead had only two categories - INLINEFORM0 and INLINEFORM1 , we combined the model's predictions into two classes - A1, A2, B1 were considered as INLINEFORM2 B1 and B2, C1 were considered as INLINEFORM3 B1. The majority baseline for the dataset was 65%, INLINEFORM4 B1 being the class with most instances. The model trained on COCTAILL sentences predicted with 73% accuracy teachers' judgments, an 8% improvement over the majority baseline. There was a considerable difference between the precision score of the two classes, which was 85.4% for INLINEFORM5 B1, and only 48.5% for INLINEFORM6 B1.",
"Previously published results on sentence-level data include BIBREF6 , who report 66% accuracy for a binary classification task for English and BIBREF7 who obtained an accuracy between 78.9% and 83.7% for Italian binary class data using different kinds of datasets. Neither of these studies, however, had a non-native speaker focus. BIBREF8 report 71% accuracy for Swedish binary sentence-level classification from an L2 point of view. Both the adjacent accuracy of our sentence-level model (92%) and the accuracy score obtained with that model on SenRead (73%) improve on that score. It is also worth mentioning that the labels in the dataset from BIBREF8 were based on the assumption that all sentences in a text belong to the same difficulty level which, being an approximation (as also Figure FIGREF15 shows), introduced some noise in that data.",
"Although more analysis would be needed to refine the sentence-level model, our current results indicate that a rich feature set that considers multiple linguistic dimensions may result in an improved performance. In the future, the dataset could be expanded with more gold-standard sentences, which may improve accuracy. Furthermore, an interesting direction to pursue would be to verify whether providing finer-grained readability judgments is a more challenging task also for human raters."
],
[
"We proposed an approach to assess the proficiency (CEFR) level of Swedish L2 course book texts based on a variety of features. Our document-level model, the first for L2 Swedish, achieved an F-score of 0.8, hence, it can reliably distinguish between proficiency levels. Compared to the wide-spread readability measure for Swedish, LIX, we achieved a substantial gain in terms of both accuracy and F-score (46% and 0.6 higher respectively). The accuracy of the sentence-level model remained lower than that of the text-level model, nevertheless, using the complete feature set the system performed 23% and 22% above the majority baseline and LIX respectively. Misclassifications of more than one level did not occur in more than 8% of sentences, thus, in terms of adjacent accuracy, our sentence-level model improved on previous results for L2 Swedish readability BIBREF8 .",
"Most notably, we have found that taking into consideration multiple linguistic dimensions when assessing linguistic complexity is especially useful for sentence-level analysis. In our experiments, using only word-frequency features was almost as predictive as a combination of all features for the document level, but the latter made more accurate predictions for sentences, resulting in a 7% difference in accuracy. Besides L2 course book materials, we tested both our document- and sentence-level models also on unseen data with promising results.",
"In the future, a more detailed investigation is needed to understand the performance drop between document and sentence level. Acquiring more sentence-level annotated data and exploring new features relying on lexical-semantic resources for Swedish would be interesting directions to pursue. Furthermore, we intend to test the utility of this approach in a real-world web application involving language learners and teachers."
]
],
"section_name": [
"Introduction",
"Datasets",
"Features",
"Experimental Setup",
"Document-Level Experiments",
"Sentence-Level Experiments",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"4a8f8274ed7e24f722581ce64759beb2e2653968",
"5bed2bd125c7bf4adb3fe4a6a78d7c7b525ef3b0",
"b959e087f716d8cea3dd3ecaeec38af5567bdad2"
],
"answer": [
{
"evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). For each of these, the default parameter settings have been used as implemented in WEKA."
],
"extractive_spans": [
"a multinomial logistic regression model with ridge estimator",
"a multilayer perceptron",
"a support vector machine learner",
"Sequential Minimal Optimization (SMO)",
"a decision tree (J48)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). For each of these, the default parameter settings have been used as implemented in WEKA."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). For each of these, the default parameter settings have been used as implemented in WEKA."
],
"extractive_spans": [
"(1) a multinomial logistic regression model with ridge estimator",
"(2) a multilayer perceptron",
"(3) a support vector machine learner, Sequential Minimal Optimization (SMO)",
"(4) a decision tree (J48)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48). For each of these, the default parameter settings have been used as implemented in WEKA."
],
"extractive_spans": [
"multinomial logistic regression model with ridge estimator",
"multilayer perceptron",
"support vector machine learner, Sequential Minimal Optimization",
"decision tree"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explored different classification algorithms for this task using the machine learning toolkit WEKA BIBREF24 . These included: (1) a multinomial logistic regression model with ridge estimator, (2) a multilayer perceptron, (3) a support vector machine learner, Sequential Minimal Optimization (SMO), and (4) a decision tree (J48)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"46cddcdccbbc176eea3827a6638f38e25e31ee58",
"56f99c2e87a7cfe26f727bfdf3d60582c7c9b314",
"e5ddb239cb2bef84807444f5af40e203f6dda8d2"
],
"answer": [
{
"evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 . This corpus consists of twelve books (from four different publishers) whose usability and level have been confirmed by Swedish L2 teachers. The course books have been annotated both content-wise (e.g. exercises, lists) and linguistically (e.g. with POS and dependency tags) BIBREF19 . We collected a total of 867 texts (reading passages) from this corpus. We excluded texts that are primarily based on dialogues from the current experiments due to their specific linguistic structure, with the aim of scaling down differences connected to text genres rather than linguistic complexity. We plan to study the readability of dialogues and compare them to non-dialogue texts in the future."
],
"extractive_spans": [
"subset of COCTAILL"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 . This corpus consists of twelve books (from four different publishers) whose usability and level have been confirmed by Swedish L2 teachers. The course books have been annotated both content-wise (e.g. exercises, lists) and linguistically (e.g. with POS and dependency tags) BIBREF19 . We collected a total of 867 texts (reading passages) from this corpus. We excluded texts that are primarily based on dialogues from the current experiments due to their specific linguistic structure, with the aim of scaling down differences connected to text genres rather than linguistic complexity. We plan to study the readability of dialogues and compare them to non-dialogue texts in the future."
],
"extractive_spans": [
"a subset of COCTAILL"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 . This corpus consists of twelve books (from four different publishers) whose usability and level have been confirmed by Swedish L2 teachers. The course books have been annotated both content-wise (e.g. exercises, lists) and linguistically (e.g. with POS and dependency tags) BIBREF19 . We collected a total of 867 texts (reading passages) from this corpus. We excluded texts that are primarily based on dialogues from the current experiments due to their specific linguistic structure, with the aim of scaling down differences connected to text genres rather than linguistic complexity. We plan to study the readability of dialogues and compare them to non-dialogue texts in the future."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 . This corpus consists of twelve books (from four different publishers) whose usability and level have been confirmed by Swedish L2 teachers. The course books have been annotated both content-wise (e.g. exercises, lists) and linguistically (e.g. with POS and dependency tags) BIBREF19 . We collected a total of 867 texts (reading passages) from this corpus. We excluded texts that are primarily based on dialogues from the current experiments due to their specific linguistic structure, with the aim of scaling down differences connected to text genres rather than linguistic complexity. We plan to study the readability of dialogues and compare them to non-dialogue texts in the future."
],
"extractive_spans": [
"a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is a subset of COCTAILL, a corpus of course books covering five CEFR levels (A1-C1) BIBREF19 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0824449d4f95070c86ea0b9068036e1899077435",
"1f98568d94c0de8c0256fda063b3de2fc6f56e81",
"4ca02e1aeb37d07e6bcca7b83998be38aa99e4c9"
],
"answer": [
{
"evidence": [
"Most notably, we have found that taking into consideration multiple linguistic dimensions when assessing linguistic complexity is especially useful for sentence-level analysis. In our experiments, using only word-frequency features was almost as predictive as a combination of all features for the document level, but the latter made more accurate predictions for sentences, resulting in a 7% difference in accuracy. Besides L2 course book materials, we tested both our document- and sentence-level models also on unseen data with promising results.",
"FLOAT SELECTED: Table 2. The complete feature set."
],
"extractive_spans": [],
"free_form_answer": "Using all the 61 features helped them improve the classification",
"highlighted_evidence": [
"In our experiments, using only word-frequency features was almost as predictive as a combination of all features for the document level, but the latter made more accurate predictions for sentences, resulting in a 7% difference in accuracy. ",
"FLOAT SELECTED: Table 2. The complete feature set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Most notably, we have found that taking into consideration multiple linguistic dimensions when assessing linguistic complexity is especially useful for sentence-level analysis. In our experiments, using only word-frequency features was almost as predictive as a combination of all features for the document level, but the latter made more accurate predictions for sentences, resulting in a 7% difference in accuracy. Besides L2 course book materials, we tested both our document- and sentence-level models also on unseen data with promising results."
],
"extractive_spans": [
"a combination of all features for the document level"
],
"free_form_answer": "",
"highlighted_evidence": [
"n our experiments, using only word-frequency features was almost as predictive as a combination of all features for the document level, but the latter made more accurate predictions for sentences, resulting in a 7% difference in accuracy. Besides L2 course book materials, we tested both our document- and sentence-level models also on unseen data with promising results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"extractive_spans": [
"length-based, lexical, morphological, syntactic and semantic features"
],
"free_form_answer": "",
"highlighted_evidence": [
"The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"00b6ddc6096bba05750a4f2cc8472a3c6d3cf041",
"7106e52847ff7e09dd9cec3442696971aef41732",
"ccede5fc55f6095a6209a3ccd3c9b5aee95cea96"
],
"answer": [
{
"evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"extractive_spans": [
"length-based",
"lexical",
"morphological",
"syntactic",
"semantic"
],
"free_form_answer": "",
"highlighted_evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 ).",
"FLOAT SELECTED: Table 2. The complete feature set."
],
"extractive_spans": [],
"free_form_answer": "Sentence length\nModal verbs to verbs\nAverage token length\nParticle IncSc\nExtra-long words\nSG pronoun IncSc\nNumber of characters\nPunctuation IncSc\nLIX\nSubjunction IncSc\nS-verb IncSc\nA1 lemma IncSc\nS-verbs to verbs\nA2 lemma IncSc\nAdjective IncSc\nB1 lemma IncSc\nAdjective variation\nB2 lemma IncSc\nAdverb IncSc\nC1 lemma IncSc\nAdverb variation\nC2 lemma IncSc\nNoun IncSc\nDifficult word IncSc\nNoun variation\nDifficult noun and verb IncSc\nVerb IncSc\nOut-of-Kelly IncSc\nVerb variation\nMissing lemma form IncSc\nNominal ratio\nAvg. Kelly log frequency\nNouns to verbs\nFunction word IncSc\nAverage dependency length\nLexical words to non-lexical words\nDependency arcs longer than\nLexical words to all tokens\nLongest dependency from root node\nNeuter gender noun IncSc\nRatio of right dependency arcs\nCon- and subjunction IncSc\nRatio of left dependency arcs\nPast participles to verbs\nModifier variation\nPresent participles to verbs\nPre-modifier IncSc\nPast verbs to verbs\nPost-modifier IncSc\nPresent verbs to verbs\nSubordinate IncSc\nSupine verbs to verbs\nRelative clause IncSc\nRelative structure IncSc\nPrepositional complement IncSc\nBilog type-token ratio\nSquare root type-token ratio\nAvg. nr. of senses per token\nPronouns to nouns\nNoun senses per noun\nPronouns to prepositions",
"highlighted_evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 ).",
"FLOAT SELECTED: Table 2. The complete feature set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We developed our features based on information both from previous literature BIBREF9 , BIBREF3 , BIBREF13 , BIBREF4 , BIBREF8 and a grammar book for Swedish L2 learners BIBREF20 . The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"extractive_spans": [
"lexical, morphological, syntactic and semantic features"
],
"free_form_answer": "",
"highlighted_evidence": [
"The set of features can be divided in the following five subgroups: length-based, lexical, morphological, syntactic and semantic features (Table TABREF6 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0f946cb631a94c6963379d72b87bf555b45964bc",
"41cc1fd8353d2e03018f5616e20852b17059b152"
],
"answer": [
{
"evidence": [
"Although a direct comparison with other studies is difficult because of the target language, the nature of the datasets and the number of classes used, in terms of absolute numbers, our model achieves comparable performance with the state-of-the-art systems for English BIBREF9 , BIBREF12 . Other studies for non-English languages using CEFR levels include: BIBREF13 , reporting 49.1% accuracy for a French system distinguishing six classes; and BIBREF14 achieving 29.7% accuracy on a smaller Portuguese dataset with five levels."
],
"extractive_spans": [
"BIBREF9 , BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although a direct comparison with other studies is difficult because of the target language, the nature of the datasets and the number of classes used, in terms of absolute numbers, our model achieves comparable performance with the state-of-the-art systems for English BIBREF9 , BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although a direct comparison with other studies is difficult because of the target language, the nature of the datasets and the number of classes used, in terms of absolute numbers, our model achieves comparable performance with the state-of-the-art systems for English BIBREF9 , BIBREF12 . Other studies for non-English languages using CEFR levels include: BIBREF13 , reporting 49.1% accuracy for a French system distinguishing six classes; and BIBREF14 achieving 29.7% accuracy on a smaller Portuguese dataset with five levels."
],
"extractive_spans": [
"BIBREF9",
"BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although a direct comparison with other studies is difficult because of the target language, the nature of the datasets and the number of classes used, in terms of absolute numbers, our model achieves comparable performance with the state-of-the-art systems for English BIBREF9 , BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what classifiers did they train?",
"what dataset did they use?",
"what combination of features helped improve the classification?",
"what linguistics features did they apply?",
"what is the state of the art in English?"
],
"question_id": [
"38289bd9592db4d3670b65a0fef1fe8a309fee61",
"cb7a00233502c4b7801d34bc95d6d22d79776ae8",
"35d2eae3a7c9bed54196334a09344591f9cbb5c8",
"a70656fc61bf526dd21db7d2ec697b29a5a9c24e",
"f381b0ef693243d67657f6c34bbce015f6b1fd07"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1. The distribution of items per CEFR level in the datasets.",
"Table 2. The complete feature set.",
"Table 3. Document-level classification results.",
"Table 4. Accuracy scores (in %) for other learning algorithms.",
"Table 5. Confusion matrix for feature set ALL at document level.",
"Table 6. Sentence-level classification results.",
"Table 7. Confusion matrix for feature set ALL at sentence level.",
"Fig. 1. Distribution of sentences per CEFR level in the document-level data."
],
"file": [
"5-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"11-Table6-1.png",
"12-Table7-1.png",
"12-Figure1-1.png"
]
} | [
"what combination of features helped improve the classification?",
"what linguistics features did they apply?"
] | [
[
"1603.08868-8-Table2-1.png",
"1603.08868-Conclusion and Future Work-1",
"1603.08868-Features-0"
],
[
"1603.08868-8-Table2-1.png",
"1603.08868-Features-0"
]
] | [
"Using all the 61 features helped them improve the classification",
"Sentence length\nModal verbs to verbs\nAverage token length\nParticle IncSc\nExtra-long words\nSG pronoun IncSc\nNumber of characters\nPunctuation IncSc\nLIX\nSubjunction IncSc\nS-verb IncSc\nA1 lemma IncSc\nS-verbs to verbs\nA2 lemma IncSc\nAdjective IncSc\nB1 lemma IncSc\nAdjective variation\nB2 lemma IncSc\nAdverb IncSc\nC1 lemma IncSc\nAdverb variation\nC2 lemma IncSc\nNoun IncSc\nDifficult word IncSc\nNoun variation\nDifficult noun and verb IncSc\nVerb IncSc\nOut-of-Kelly IncSc\nVerb variation\nMissing lemma form IncSc\nNominal ratio\nAvg. Kelly log frequency\nNouns to verbs\nFunction word IncSc\nAverage dependency length\nLexical words to non-lexical words\nDependency arcs longer than\nLexical words to all tokens\nLongest dependency from root node\nNeuter gender noun IncSc\nRatio of right dependency arcs\nCon- and subjunction IncSc\nRatio of left dependency arcs\nPast participles to verbs\nModifier variation\nPresent participles to verbs\nPre-modifier IncSc\nPast verbs to verbs\nPost-modifier IncSc\nPresent verbs to verbs\nSubordinate IncSc\nSupine verbs to verbs\nRelative clause IncSc\nRelative structure IncSc\nPrepositional complement IncSc\nBilog type-token ratio\nSquare root type-token ratio\nAvg. nr. of senses per token\nPronouns to nouns\nNoun senses per noun\nPronouns to prepositions"
] | 121 |
1910.01340 | TexTrolls: Identifying Russian Trolls on Twitter from a Textual Perspective | Newly emerging suspicious online users, usually called trolls, are one of the main sources of hateful, fake, and deceptive online messages. Some agendas utilize these harmful users to spread incitement tweets, and as a consequence, the audience gets deceived. The challenge in detecting such accounts is that they conceal their identities, which makes them disguised in social media and adds more difficulty to identifying them using just their social network information. Therefore, in this paper, we propose a text-based approach to detect online trolls such as those that were discovered during the US 2016 presidential elections. Our approach is mainly based on textual features, which utilize thematic information, and profiling features to identify the accounts from their way of writing tweets. We deduce the thematic information in an unsupervised way and show that coupling it with the textual features enhances the performance of the proposed model. In addition, we find that the proposed profiling features perform best compared to the textual features. | {
"paragraphs": [
[
"Recent years have seen a large increase in the amount of disinformation and fake news spread on social media. False information was used to spread fear and anger among people, which in turn, provoked crimes in some countries. The US in the recent years experienced many similar cases during the presidential elections, such as the one commonly known as “Pizzagate\" . Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. The desired goals behind these accounts are to spread fake and hateful news to further polarize the public opinion. Such attempts are not limited to Twitter, since Facebook announced in mid-2019 that they detected a similar attempt originating from UAE, Egypt and Saudi Arabia and targeting other countries such as Qatar, Palestine, Lebanon and Jordan. This attempt used Facebook pages, groups, and user accounts with fake identities to spread fake news supporting their ideological agendas. The automatic detection of such attempts is very challenging, since the true identity of these suspicious accounts is hidden by imitating the profiles of real persons from the targeted audience; in addition, sometimes they publish their suspicious idea in a vague way through their tweets' messages.",
"A previous work BIBREF0 showed that such suspicious accounts are not bots in a strict sense and they argue that they could be considered as “software-assisted human workers\". According to BIBREF1, the online suspicious accounts can be categorized into 3 main types: Robots, Cyborgs, and Human Spammers. We consider IRA accounts as another new emerging type called trolls, which is similar to Cyborgs except that the former focuses on targeting communities instead of individuals.",
"In this work, we identify online trolls in Twitter, namely IRA trolls, from a textual perspective. We study the effect of a set of text-based features and we propose a machine learning model to detect them. We aim to answer three research questions: RQ1. Does the thematic information improve the detection performance?, RQ2. Can we detect IRA trolls from only a textual perspective? and RQ3. How IRA campaign utilized the emotions to affect the public opinions?",
"The rest of the paper is structured as follows. In the following section, we present an overview on the literature work on IRA trolls. In Section SECREF3, we describe how the used dataset was compiled. Section SECREF4 describes our proposed features for our approach. The experiments, results, and analyses are presented in Section SECREF5. Finally, we draw some conclusions and discuss possible future work on IRA trolls."
],
[
"After the 2016 US elections, Twitter has detected a suspicious attempt by a large set of accounts to influence the results of the elections. Due to this event, an emerging research works about the Russian troll accounts started to appear BIBREF2, BIBREF3, BIBREF0, BIBREF4, BIBREF5.",
"The research works studied IRA trolls from several perspectives. The work in BIBREF4 studied the links' domains that were mentioned by IRA trolls and how much they overlap with other links used in tweets related to \"Brexit\". In addition, they compare \"Left\" and \"Right\" ideological trolls in terms of the number of re-tweets they received, number of followers, etc, and the online propaganda strategies they used. The authors in BIBREF2 analyzed IRA campaign in both Twitter and Facebook, and they focus on the evolution of IRA paid advertisements on Facebook before and after the US presidential elections from a thematic perspective.",
"The analysis work on IRA trolls not limited only to the tweets content, but it also considered the profile description, screen name, application client, geo-location, timezone, and number of links used per each media domain BIBREF3. There is a probability that Twitter has missed some IRA accounts that maybe were less active than the others. Based on this hypothesis, the work in BIBREF0 built a machine learning model based on profile, language distribution, and stop-words usage features to detect IRA trolls in a newly sampled data from Twitter. Other works tried to model IRA campaign not only by focusing on the trolls accounts, but also by examining who interacted with the trolls by sharing their contents BIBREF6. Similarly, the work BIBREF5 proposed a model that made use of the political ideologies of users, bot likelihood, and activity-related account metadata to predict users who spread the trolls’ contents."
],
[
"To model the identification process of the Russian trolls, we considered a large dataset of both regular users (legitimate accounts) and IRA troll accounts. Following we describe the dataset. In Table TABREF6 we summarizes its statistics."
],
[
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls. The original dataset contains $3,841$ accounts, but we use a lower number of accounts and tweets after filtering them. We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. Our final IRA accounts list contains 2,023 accounts."
],
[
"To contrast IRA behaviour, we sampled a large set of accounts to represent the ordinary behaviour of accounts from US. We collected a random sample of users that they post at least 5 tweets between 1st of August and 31 of December, 2016 (focusing on the US 2016 debates: first, second, third and vice president debates and the election day) by querying Twitter API hashtags related to the elections and its parties (e.g #trump, #clinton, #election, #debate, #vote, etc.). In addition, we selected the accounts that have location within US and use English as language of the Twitter interface. We focus on users during the presidential debates and elections dates because we suppose that the peak of trolls efforts concentrated during this period.",
"The final dataset is totally imbalanced (2% for IRA trolls and 98% for the regular users). This class imbalance situation represent a real scenario. From Table TABREF6, we can notice that the number of total tweets of the IRA trolls is similar to the one obtained from the regular users. This is due to the fact that IRA trolls were posting a lot of tweets before and during the elections in an attempt to try to make their messages reach the largest possible audience."
],
[
"In order to identify IRA trolls, we use a rich set of textual features. With this set of features we aim to model the tweets of the accounts from several perspectives."
],
[
"Previous works BIBREF7 have investigated IRA campaign efforts on Facebook, and they found that IRA pages have posted more than $\\sim $80K posts focused on division issues in US. Later on, the work in BIBREF2 has analyzed Facebook advertised posts by IRA and they specified the main themes that these advertisements discussed. Given the results of the previous works, we applied a topic modeling technique on our dataset to extract its main themes. We aim to detect IRA trolls by identifying their suspicious ideological changes across a set of themes.",
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns. In addition, we removed special characters (except HASH \"#\" sign for the hashtags) and lowercase the final tweet. To ensure the quality of the themes, we removed the hashtags we used in the collecting process where they may bias the modeling algorithm. We tested multiple number of themes and we chose seven of them. We manually observed the content of these themes to label them. The extracted themes are: Police shootings, Islam and War, Supporting Trump, Black People, Civil Rights, Attacking Hillary, and Crimes. In some themes, like Supporting Trump and Attacking Hillary, we found contradicted opinions, in favor and against the main themes, but we chose the final stance based on the most representative hashtags and words in each of them (see Figure FIGREF11). Also, the themes Police Shooting and Crimes are similar, but we found that some words such as: police, officers, cops, shooting, gun, shot, etc. are the most discriminative between these two themes. In addition, we found that the Crimes theme focuses more on raping crimes against children and women. Our resulted themes are generally consistent with the ones obtained from the Facebook advertised posts in BIBREF2, and this emphasizes that IRA efforts organized in a similar manner in both social media platforms.",
"Based on our thematic information, we model the users textual features w.r.t. each of these themes. In other words, we model a set of textual features independently for each of the former themes to capture the emotional, stance, and others changes in the users tweets.",
"For the theme-based features, we use the following features that we believe that they change based on the themes:",
"Emotions: Since the results of the previous works BIBREF2, BIBREF7 showed that IRA efforts engineered to seed discord among individuals in US, we use emotions features to detect their emotional attempts to manipulate the public opinions (e.g. fear spreading behavior). For that, we use the NRC emotions lexicon BIBREF9 that contains $\\sim $14K words labeled using the eight Plutchik's emotions.",
"Sentiment: We extract the sentiment of the tweets from NRC BIBREF9, positive and negative.",
"Bad & Sexual Cues: During the manual analysis of a sample from IRA tweets, we found that some users use bad slang word to mimic the language of a US citizen. Thus, we model the presence of such words using a list of bad and sexual words from BIBREF10.",
"Stance Cues: Stance detection has been studied in different contexts to detect the stance of a tweet reply with respect to a main tweet/thread BIBREF11. Using this feature, we aim to detect the stance of the users regarding the different topics we extracted. To model the stance we use a set of stance lexicons employed in previous works BIBREF12, BIBREF13. Concretely, we focus on the following categories: belief, denial, doubt, fake, knowledge, negation, question, and report.",
"Bias Cues: We rely on a set of lexicons to capture the bias in text. We model the presence of the words in one of the following cues categories: assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15. A previous work has used these bias cues to identify bias in suspicious news posts in Twitter BIBREF19.",
"LIWC: We use a set of linguistic categories from the LIWC linguistic dictionary BIBREF20. The used categories are: pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl.",
"Morality: Cues based on the morality foundation theory BIBREF21 where words labeled in one of a set of categories: care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation.",
"Given $V_i$ as the concatenation of the previous features vectors of a tweet$_i$, we represent each user's tweets by considering the average and standard deviation of her tweets' $V_{1,2,..N}$ in each theme $j$ independently and we concatenate them. Mathematically, a user $x$ final feature vector is defined as follows:",
"where given the jth theme, $N_j$ is the total number of tweets of the user, $V_{ij}$ is the ith tweet feature vector, $\\overline{V_j}$ is the mean of the tweets' feature vectors. With this representation we aim at capturing the \"Flip-Flop\" behavior of IRA trolls among the themes (see Section SECREF33)."
],
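To make the theme-extraction step above concrete, here is a minimal sketch of the described pipeline: keep only nouns and proper nouns, lowercase them, drop the collection-time hashtags, and fit a 7-topic LDA model whose top words are then inspected and labeled by hand. This is not the authors' code; gensim is an assumed implementation choice (the paper does not name one), spaCy is borrowed from the later profiling section for the POS filtering, and the tweet list and seed-hashtag set are placeholders.

```python
import spacy
from gensim import corpora, models

nlp = spacy.load("en_core_web_sm")
# hashtags used when collecting the regular-user sample; removed so they do not bias the topics
SEED_HASHTAGS = {"#trump", "#clinton", "#election", "#debate", "#vote"}

def preprocess(tweet):
    words = [w for w in tweet.split() if w.lower() not in SEED_HASHTAGS]
    doc = nlp(" ".join(words))
    # keep only nouns and proper nouns, lowercased
    return [tok.text.lower() for tok in doc if tok.pos_ in {"NOUN", "PROPN"}]

tweets = [
    "Police officers shot another unarmed man today",
    "The debate showed who really cares about this country",
]  # placeholder corpus; the real one has ~3.7M tweets
docs = [preprocess(t) for t in tweets]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=7, id2word=dictionary, random_state=0)
for topic_id, top_words in lda.show_topics(num_topics=7, num_words=8, formatted=False):
    print(topic_id, [w for w, _ in top_words])  # inspect top words and label the themes manually
```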
[
"As Twitter declared, although the IRA campaign was originated in Russia, it has been found that IRA trolls concealed their identity by tweeting in English. Furthermore, for any possibility of unmasking their identity, the majority of IRA trolls changed their location to other countries and the language of the Twitter interface they use. Thus, we propose the following features to identify these users using only their tweets text:",
"Native Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL). We extract the POS and the DEPREL information using spaCy, an off-the-shelf POS tagger. We clean the tweets from the special characters and maintained dots, commas, and first-letter capitalization of words. We use regular expressions to convert a sequence of dots to a single dot, and similarly for sequence of characters.",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Similar to the feature representation of the theme-based features, we represent each user's tweets by considering the average and standard deviation of her tweets' $V_{1,2,..N}$, given $V_i$ as the concatenation of the previous two features vectors of a tweet$_i$. A user $x$ final feature vector is defined as follows:",
"where $N$ is her total number of tweets, $V_i$ is the i$th$ tweet feature vector, $\\overline{V}$ is the mean of her tweets feature vectors."
],
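A rough sketch of the per-user representation described above: each tweet is turned into a feature vector (here, NLI-style counts of POS tags, dependency relations, and stop words plus a few stylistic counters), and a user is represented by concatenating the mean and standard deviation of her tweet vectors. spaCy is the tagger named in the paper, but the fixed tag vocabularies, the specific stylistic counters, and all identifiers below are illustrative assumptions rather than the authors' exact feature set.

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")
POS_TAGS = ["NOUN", "PROPN", "VERB", "ADJ", "ADV", "ADP", "PRON", "DET", "PART"]
DEPRELS = ["nsubj", "dobj", "prep", "pobj", "amod", "advmod", "aux", "det"]

def tweet_vector(text):
    doc = nlp(text)
    pos_counts = [sum(t.pos_ == p for t in doc) for p in POS_TAGS]    # bag of POS tags
    dep_counts = [sum(t.dep_ == d for t in doc) for d in DEPRELS]     # bag of dependency relations
    stop_count = [sum(t.is_stop for t in doc)]                        # stop-word usage
    stylistic = [
        len(text),                                             # tweet length
        sum(c.isupper() for c in text) / max(len(text), 1),    # uppercase ratio
        text.count("#"), text.count("@"), text.count("http"),  # hashtags, mentions, URLs
    ]
    return np.array(pos_counts + dep_counts + stop_count + stylistic, dtype=float)

def user_vector(tweets):
    V = np.vstack([tweet_vector(t) for t in tweets])
    # a user is the concatenation of the mean and standard deviation of her tweet vectors
    return np.concatenate([V.mean(axis=0), V.std(axis=0)])

print(user_vector(["RT this NOW!!! #MAGA", "the debate was a joke http://t.co/xyz"]).shape)
```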
[
"We report precision, recall and F1 score. Given the substantial class imbalance in the dataset, we use the macro weighted version of the F1 metric. We tested several classifiers and Logistic Regression showed the best F1$_{macro}$ value. We kept the default parameters values. We report results for 5-folds cross-validation."
],
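A minimal sketch of this evaluation setup with scikit-learn: Logistic Regression, 5-fold cross-validation, and macro-averaged F1 on an imbalanced label set. The stratified splitter, the raised max_iter, and the placeholder feature matrix and labels are assumptions made for the toy example, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X_users = rng.random((500, 40))              # placeholder per-user feature matrix
y_users = np.array([1] * 10 + [0] * 490)     # roughly 2% trolls vs 98% regular users

clf = LogisticRegression(max_iter=1000)      # the paper keeps default parameters; max_iter raised only for the toy data
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_users, y_users, cv=cv, scoring="f1_macro")
print("F1_macro: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```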
[
"In order to evaluate our feature set, we use Random Selection, Majority Class, and bag-of-words baselines. In the bag-of-words baseline, we aggregate all the tweets of a user into one document. A previous work BIBREF30 showed that IRA trolls were playing a hashtag game which is a popular word game played on Twitter, where users add a hashtag to their tweets and then answer an implied question BIBREF31. IRA trolls used this game in a similar way but focusing more on offending or attacking others; an example from IRA tweets: \"#OffendEveryoneIn4Words undocumented immigrants are ILLEGALS\". Thus, we use as a baseline Tweet2vec BIBREF32 which is a a character-based Bidirectional Gated Recurrent neural network reads tweets and predicts their hashtags. We aim to assess if the tweets hashtags can help identifying the IRA tweets. The model reads the tweets in a form of character one-hot encodings and uses them for training with their hashtags as labels. To train the model, we use our collected dataset which consists of $\\sim $3.7M tweets. To represent the tweets in this baseline, we use the decoded embedding produced by the model and we feed them to the Logistic Regression classifier.",
"IRA dataset provided by Twitter contains less information about the accounts details, and they limited to: profile description, account creation date, number of followers and followees, location, and account language. Therefore, as another baseline we use the number of followers and followees to assess their identification ability (we will mention them as Network Features in the rest of the paper)."
],
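The bag-of-words baseline could look roughly like the sketch below, assuming scikit-learn: all tweets of a user are joined into a single document, vectorised with word counts, and fed to the same Logistic Regression classifier. The user names, labels, and vectorizer settings are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

user_tweets = {  # placeholder users; each user's tweets become one document
    "user_a": ["the debate was great", "make america great again"],
    "user_b": ["cat vs trashcan", "look at this http://t.co/xyz"],
}
labels = [1, 0]  # hypothetical troll / regular labels
docs = [" ".join(tweets) for tweets in user_tweets.values()]
X_bow = CountVectorizer(lowercase=True).fit_transform(docs)
baseline = LogisticRegression().fit(X_bow, labels)
```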
[
"Table TABREF32 presents the classification results showing the performance of each feature set independently. Generally, we can see that the thematic information improves the performance of the proposed features clearly (RQ1), and with the largest amount in the Emotions features (see $-_{themes}$ and $+_{themes}$ columns). This result emphasizes the importance of the thematic information. Also, we see that the emotions performance increases with the largest amount considering F1$_{macro}$ value; this motivates us to analyze the emotions in IRA tweets (see the following section).",
"The result of the NLI feature in the table is interesting; we are able to detect IRA trolls from their writing style with a F1$_{macro}$ value of 0.91. Considering the results in Table TABREF32, we can notice that we are able to detect the IRA trolls effectively using only textual features (RQ2).",
"Finally, the baselines results show us that the Network features do not perform well. A previous work BIBREF3 showed that IRA trolls tend to follow a lot of users, and nudging other users to follow them (e.g. by writing \"follow me\" in their profile description) to fuse their identity (account information) with the regular users. Finally, similar to the Network features, the Tweet2vec baseline performs poorly. This indicates that, although IRA trolls used the hashtag game extensively in their tweets, the Tweet2vec baseline is not able to identify them."
],
[
"Given that the Emotions features boosted the F1$_{macro}$ with the highest value comparing to the other theme-based features, in Figure FIGREF34 we analyze IRA trolls from emotional perspective to answer RQ3. The analysis shows that the themes that were used to attack immigrants (Black People and Islam and War) have the fear emotion in their top two emotions. While on the other hand, a theme like Supporting Trump has a less amount of fear emotion, and the joy emotion among the top emotions.",
"Why do the thematic information help? The Flip-Flop behavior. As an example, let's considering the fear and joy emotions in Figure FIGREF34. We can notice that all the themes that used to nudge the division issues have a decreasing dashed line, where others such as Supporting Trump theme has an extremely increasing dashed line. Therefore, we manually analyzed the tweets of some IRA accounts and we found this observation clear, as an example from user $x$:",
"Islam and War: (A) @RickMad: Questions are a joke, a Muslim asks how SHE will be protected from Islamaphobia! Gmaffb! How will WE be protected from terrori…",
"Supporting Trump: (B) @realDonaldTrump: That was really exciting. Made all of my points. MAKE AMERICA GREAT AGAIN!",
"Figure FIGREF35 shows the flipping behaviour for user $x$ by extracting the mean value of the fear and joy emotions. The smaller difference between the fear and joy emotions in the Islam and War theme for this user is due to the ironic way of tweeting for the user (e.g. the beginning of tweet A: \"Questions are a joke\"). Even though, the fear emotion is still superior to the joy. We notice a similar pattern in some of the regular users, although much more evident among IRA trolls.",
"To understand more the NLI features performance, given their high performance comparing to the other features, we extract the top important tokens for each of the NLI feature subsets (see Figure FIGREF37). Some of the obtained results confirmed what was found previously. For instance, the authors in BIBREF19 found that Russians write English tweets with more prepositions comparing to native speakers of other languages (e.g. as, about, because in (c) Stop-words and RP in (a) POS in Figure FIGREF37). Further research must be conducted to investigate in depth the rest of the results.",
"Linguistic Analysis. We measure statistically significant differences in the cues markers of Morality, LIWC, Bias and Subjectivity, Stance, and Bad and Sexual words across IRA trolls and regular users. These findings presented in Table TABREF38 allows for a deeper understanding of IRA trolls.",
"False Positive Cases. The proposed features showed to be effective in the classification process. We are interested in understanding the causes of misclassifying some of IRA trolls. Therefore, we manually investigated the false positive tweets and we found that there are three main reasons: 1) Some trolls were tweeting in a questioning way by asking about general issues; we examined their tweets but we did not find a clear ideological orientation or a suspicious behaviour in their tweets. 2) Some accounts were sharing traditional social media posts (e.g. \"http://t.co/GGpZMvnEAj cat vs trashcan\"); the majority of the false positive IRA trolls are categorized under this reason. In addition, these posts were given a false theme name; the tweet in the previous example assigned to Attacking Hillary theme. 3) Lack of content. Some of the misclassified trolls mention only external links without a clear textual content. This kind of trolls needs a second step to investigate the content of the external links. Thus, we tried to read the content of these links but we found that the majority of them referred to deleted tweets. Probably this kind of accounts was used to \"raise the voice\" of other trolls, as well as, we argue that the three kinds of IRA trolls were used for \"likes boosting\"."
],
[
"In this work, we present a textual approach to detect social media trolls, namely IRA accounts. Due to the anonymity characteristic that social media provide to users, these kinds of suspicious behavioural accounts have started to appear. We built a new machine learning model based on theme-based and profiling features that in cross-validation evaluation achieved a F1$_{macro}$ value of 0.94. We applied a topic modeling algorithm to go behind the superficial textual information of the tweets. Our experiments showed that the extracted themes boosted the performance of the proposed model when coupled with other surface text features. In addition, we proposed NLI features to identify IRA trolls from their writing style, which showed to be very effective. Finally, for a better understanding we analyzed the IRA accounts from emotional and linguistic perspectives.",
"Through the manually checking of IRA accounts, we noticed that frequently irony was employed. As a future work, it would be interesting to identify these accounts by integrating an irony detection module."
]
],
"section_name": [
"Introduction",
"Related Work on IRA Trolls",
"Data",
"Data ::: Russian Trolls (IRA)",
"Data ::: Regular Accounts",
"Textual Representation",
"Textual Representation ::: Thematic Information",
"Textual Representation ::: Profiling IRA Accounts",
"Experiments and Analysis ::: Experimental Setup",
"Experiments and Analysis ::: Baselines",
"Experiments and Analysis ::: Results",
"Experiments and Analysis ::: Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"5689c87b4eb5bb03d72cf498e5dbeaabd6147bc6",
"c8907b56e6b675bf3df617853fecb09cab78a9c1",
"d0aac11e4f2a2b3634177cfd63678520909297ff"
],
"answer": [
{
"evidence": [
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls. The original dataset contains $3,841$ accounts, but we use a lower number of accounts and tweets after filtering them. We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. Our final IRA accounts list contains 2,023 accounts.",
"Recent years have seen a large increase in the amount of disinformation and fake news spread on social media. False information was used to spread fear and anger among people, which in turn, provoked crimes in some countries. The US in the recent years experienced many similar cases during the presidential elections, such as the one commonly known as “Pizzagate\" . Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. The desired goals behind these accounts are to spread fake and hateful news to further polarize the public opinion. Such attempts are not limited to Twitter, since Facebook announced in mid-2019 that they detected a similar attempt originating from UAE, Egypt and Saudi Arabia and targeting other countries such as Qatar, Palestine, Lebanon and Jordan. This attempt used Facebook pages, groups, and user accounts with fake identities to spread fake news supporting their ideological agendas. The automatic detection of such attempts is very challenging, since the true identity of these suspicious accounts is hidden by imitating the profiles of real persons from the targeted audience; in addition, sometimes they publish their suspicious idea in a vague way through their tweets' messages.",
"In this work, we identify online trolls in Twitter, namely IRA trolls, from a textual perspective. We study the effect of a set of text-based features and we propose a machine learning model to detect them. We aim to answer three research questions: RQ1. Does the thematic information improve the detection performance?, RQ2. Can we detect IRA trolls from only a textual perspective? and RQ3. How IRA campaign utilized the emotions to affect the public opinions?"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"\n",
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls. The original dataset contains $3,841$ accounts, but we use a lower number of accounts and tweets after filtering them. We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. Our final IRA accounts list contains 2,023 accounts.",
"Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. ",
"In this work, we identify online trolls in Twitter, namely IRA trolls, from a textual perspective."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls. The original dataset contains $3,841$ accounts, but we use a lower number of accounts and tweets after filtering them. We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. Our final IRA accounts list contains 2,023 accounts.",
"To contrast IRA behaviour, we sampled a large set of accounts to represent the ordinary behaviour of accounts from US. We collected a random sample of users that they post at least 5 tweets between 1st of August and 31 of December, 2016 (focusing on the US 2016 debates: first, second, third and vice president debates and the election day) by querying Twitter API hashtags related to the elections and its parties (e.g #trump, #clinton, #election, #debate, #vote, etc.). In addition, we selected the accounts that have location within US and use English as language of the Twitter interface. We focus on users during the presidential debates and elections dates because we suppose that the peak of trolls efforts concentrated during this period."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls.",
"We focus on accounts that use English as main language. ",
"Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. ",
"To contrast IRA behaviour, we sampled a large set of accounts to represent the ordinary behaviour of accounts from US.",
"In addition, we selected the accounts that have location within US and use English as language of the Twitter interface. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We used the IRA dataset that was released by Twitter after identifying the Russian trolls. The original dataset contains $3,841$ accounts, but we use a lower number of accounts and tweets after filtering them. We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them. Our final IRA accounts list contains 2,023 accounts."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We focus on accounts that use English as main language. In fact, our goal is to detect Russian accounts that mimic a regular US user. Then, we remove from these accounts non-English tweets, and maintain only tweets that were tweeted originally by them."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"07713a13d7883140d2da6d7212223867c8946112",
"a7a2446f98c7672b67faeeb03f3db57044d1a703",
"e98167a756134aee38f0ab2641b69337da868a5b"
],
"answer": [
{
"evidence": [
"In order to evaluate our feature set, we use Random Selection, Majority Class, and bag-of-words baselines. In the bag-of-words baseline, we aggregate all the tweets of a user into one document. A previous work BIBREF30 showed that IRA trolls were playing a hashtag game which is a popular word game played on Twitter, where users add a hashtag to their tweets and then answer an implied question BIBREF31. IRA trolls used this game in a similar way but focusing more on offending or attacking others; an example from IRA tweets: \"#OffendEveryoneIn4Words undocumented immigrants are ILLEGALS\". Thus, we use as a baseline Tweet2vec BIBREF32 which is a a character-based Bidirectional Gated Recurrent neural network reads tweets and predicts their hashtags. We aim to assess if the tweets hashtags can help identifying the IRA tweets. The model reads the tweets in a form of character one-hot encodings and uses them for training with their hashtags as labels. To train the model, we use our collected dataset which consists of $\\sim $3.7M tweets. To represent the tweets in this baseline, we use the decoded embedding produced by the model and we feed them to the Logistic Regression classifier."
],
"extractive_spans": [
"character-based Bidirectional Gated Recurrent neural network"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to evaluate our feature set, we use Random Selection, Majority Class, and bag-of-words baselines. In the bag-of-words baseline, we aggregate all the tweets of a user into one document. A previous work BIBREF30 showed that IRA trolls were playing a hashtag game which is a popular word game played on Twitter, where users add a hashtag to their tweets and then answer an implied question BIBREF31. IRA trolls used this game in a similar way but focusing more on offending or attacking others; an example from IRA tweets: \"#OffendEveryoneIn4Words undocumented immigrants are ILLEGALS\". Thus, we use as a baseline Tweet2vec BIBREF32 which is a a character-based Bidirectional Gated Recurrent neural network reads tweets and predicts their hashtags. We aim to assess if the tweets hashtags can help identifying the IRA tweets. The model reads the tweets in a form of character one-hot encodings and uses them for training with their hashtags as labels. To train the model, we use our collected dataset which consists of $\\sim $3.7M tweets. To represent the tweets in this baseline, we use the decoded embedding produced by the model and we feed them to the Logistic Regression classifier."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to evaluate our feature set, we use Random Selection, Majority Class, and bag-of-words baselines. In the bag-of-words baseline, we aggregate all the tweets of a user into one document. A previous work BIBREF30 showed that IRA trolls were playing a hashtag game which is a popular word game played on Twitter, where users add a hashtag to their tweets and then answer an implied question BIBREF31. IRA trolls used this game in a similar way but focusing more on offending or attacking others; an example from IRA tweets: \"#OffendEveryoneIn4Words undocumented immigrants are ILLEGALS\". Thus, we use as a baseline Tweet2vec BIBREF32 which is a a character-based Bidirectional Gated Recurrent neural network reads tweets and predicts their hashtags. We aim to assess if the tweets hashtags can help identifying the IRA tweets. The model reads the tweets in a form of character one-hot encodings and uses them for training with their hashtags as labels. To train the model, we use our collected dataset which consists of $\\sim $3.7M tweets. To represent the tweets in this baseline, we use the decoded embedding produced by the model and we feed them to the Logistic Regression classifier."
],
"extractive_spans": [
"Random Selection",
"Majority Class",
"bag-of-words",
"Tweet2vec BIBREF32"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to evaluate our feature set, we use Random Selection, Majority Class, and bag-of-words baselines. In the bag-of-words baseline, we aggregate all the tweets of a user into one document.",
"Thus, we use as a baseline Tweet2vec BIBREF32 which is a a character-based Bidirectional Gated Recurrent neural network reads tweets and predicts their hashtags."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We report precision, recall and F1 score. Given the substantial class imbalance in the dataset, we use the macro weighted version of the F1 metric. We tested several classifiers and Logistic Regression showed the best F1$_{macro}$ value. We kept the default parameters values. We report results for 5-folds cross-validation."
],
"extractive_spans": [
"Logistic Regression classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"We tested several classifiers and Logistic Regression showed the best F1$_{macro}$ value."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"62295e462b83895d0b298ef3b97f2709532f169d",
"ba6cd1923e16c378fcf4dc6f32d73df69a7e26eb",
"c29d6b91519eafb2a23191f07de85cb4f002090f"
],
"answer": [
{
"evidence": [
"Recent years have seen a large increase in the amount of disinformation and fake news spread on social media. False information was used to spread fear and anger among people, which in turn, provoked crimes in some countries. The US in the recent years experienced many similar cases during the presidential elections, such as the one commonly known as “Pizzagate\" . Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. The desired goals behind these accounts are to spread fake and hateful news to further polarize the public opinion. Such attempts are not limited to Twitter, since Facebook announced in mid-2019 that they detected a similar attempt originating from UAE, Egypt and Saudi Arabia and targeting other countries such as Qatar, Palestine, Lebanon and Jordan. This attempt used Facebook pages, groups, and user accounts with fake identities to spread fake news supporting their ideological agendas. The automatic detection of such attempts is very challenging, since the true identity of these suspicious accounts is hidden by imitating the profiles of real persons from the targeted audience; in addition, sometimes they publish their suspicious idea in a vague way through their tweets' messages.",
"Previous works BIBREF7 have investigated IRA campaign efforts on Facebook, and they found that IRA pages have posted more than $\\sim $80K posts focused on division issues in US. Later on, the work in BIBREF2 has analyzed Facebook advertised posts by IRA and they specified the main themes that these advertisements discussed. Given the results of the previous works, we applied a topic modeling technique on our dataset to extract its main themes. We aim to detect IRA trolls by identifying their suspicious ideological changes across a set of themes.",
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns. In addition, we removed special characters (except HASH \"#\" sign for the hashtags) and lowercase the final tweet. To ensure the quality of the themes, we removed the hashtags we used in the collecting process where they may bias the modeling algorithm. We tested multiple number of themes and we chose seven of them. We manually observed the content of these themes to label them. The extracted themes are: Police shootings, Islam and War, Supporting Trump, Black People, Civil Rights, Attacking Hillary, and Crimes. In some themes, like Supporting Trump and Attacking Hillary, we found contradicted opinions, in favor and against the main themes, but we chose the final stance based on the most representative hashtags and words in each of them (see Figure FIGREF11). Also, the themes Police Shooting and Crimes are similar, but we found that some words such as: police, officers, cops, shooting, gun, shot, etc. are the most discriminative between these two themes. In addition, we found that the Crimes theme focuses more on raping crimes against children and women. Our resulted themes are generally consistent with the ones obtained from the Facebook advertised posts in BIBREF2, and this emphasizes that IRA efforts organized in a similar manner in both social media platforms."
],
"extractive_spans": [
" Latent Dirichlet Allocation (LDA)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections.",
"We aim to detect IRA trolls by identifying their suspicious ideological changes across a set of themes.",
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns. In addition, we removed special characters (except HASH \"#\" sign for the hashtags) and lowercase the final tweet. To ensure the quality of the themes, we removed the hashtags we used in the collecting process where they may bias the modeling algorithm. We tested multiple number of themes and we chose seven of them. We manually observed the content of these themes to label them. The extracted themes are: Police shootings, Islam and War, Supporting Trump, Black People, Civil Rights, Attacking Hillary, and Crimes. In some themes, like Supporting Trump and Attacking Hillary, we found contradicted opinions, in favor and against the main themes, but we chose the final stance based on the most representative hashtags and words in each of them (see Figure FIGREF11). Also, the themes Police Shooting and Crimes are similar, but we found that some words such as: police, officers, cops, shooting, gun, shot, etc. are the most discriminative between these two themes. In addition, we found that the Crimes theme focuses more on raping crimes against children and women. Our resulted themes are generally consistent with the ones obtained from the Facebook advertised posts in BIBREF2, and this emphasizes that IRA efforts organized in a similar manner in both social media platforms."
],
"extractive_spans": [
"Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Previous works BIBREF7 have investigated IRA campaign efforts on Facebook, and they found that IRA pages have posted more than $\\sim $80K posts focused on division issues in US. Later on, the work in BIBREF2 has analyzed Facebook advertised posts by IRA and they specified the main themes that these advertisements discussed. Given the results of the previous works, we applied a topic modeling technique on our dataset to extract its main themes. We aim to detect IRA trolls by identifying their suspicious ideological changes across a set of themes.",
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns. In addition, we removed special characters (except HASH \"#\" sign for the hashtags) and lowercase the final tweet. To ensure the quality of the themes, we removed the hashtags we used in the collecting process where they may bias the modeling algorithm. We tested multiple number of themes and we chose seven of them. We manually observed the content of these themes to label them. The extracted themes are: Police shootings, Islam and War, Supporting Trump, Black People, Civil Rights, Attacking Hillary, and Crimes. In some themes, like Supporting Trump and Attacking Hillary, we found contradicted opinions, in favor and against the main themes, but we chose the final stance based on the most representative hashtags and words in each of them (see Figure FIGREF11). Also, the themes Police Shooting and Crimes are similar, but we found that some words such as: police, officers, cops, shooting, gun, shot, etc. are the most discriminative between these two themes. In addition, we found that the Crimes theme focuses more on raping crimes against children and women. Our resulted themes are generally consistent with the ones obtained from the Facebook advertised posts in BIBREF2, and this emphasizes that IRA efforts organized in a similar manner in both social media platforms."
],
"extractive_spans": [
"Latent Dirichlet Allocation (LDA) topic modeling"
],
"free_form_answer": "",
"highlighted_evidence": [
"Given the results of the previous works, we applied a topic modeling technique on our dataset to extract its main themes.",
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5aae27c325e07afe52bdecc24625c23a7b47f419",
"94cbc9c97360cbcf259aaf470e1998f97ac9e3df",
"a7620ee6469182b5580a99d63df98f61ddefd266"
],
"answer": [
{
"evidence": [
"Recent years have seen a large increase in the amount of disinformation and fake news spread on social media. False information was used to spread fear and anger among people, which in turn, provoked crimes in some countries. The US in the recent years experienced many similar cases during the presidential elections, such as the one commonly known as “Pizzagate\" . Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. The desired goals behind these accounts are to spread fake and hateful news to further polarize the public opinion. Such attempts are not limited to Twitter, since Facebook announced in mid-2019 that they detected a similar attempt originating from UAE, Egypt and Saudi Arabia and targeting other countries such as Qatar, Palestine, Lebanon and Jordan. This attempt used Facebook pages, groups, and user accounts with fake identities to spread fake news supporting their ideological agendas. The automatic detection of such attempts is very challenging, since the true identity of these suspicious accounts is hidden by imitating the profiles of real persons from the targeted audience; in addition, sometimes they publish their suspicious idea in a vague way through their tweets' messages.",
"As Twitter declared, although the IRA campaign was originated in Russia, it has been found that IRA trolls concealed their identity by tweeting in English. Furthermore, for any possibility of unmasking their identity, the majority of IRA trolls changed their location to other countries and the language of the Twitter interface they use. Thus, we propose the following features to identify these users using only their tweets text:",
"Native Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL). We extract the POS and the DEPREL information using spaCy, an off-the-shelf POS tagger. We clean the tweets from the special characters and maintained dots, commas, and first-letter capitalization of words. We use regular expressions to convert a sequence of dots to a single dot, and similarly for sequence of characters.",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length."
],
"extractive_spans": [
"Part-of-speech tags (POS)",
"syntactic dependency relations",
"count of special characters",
"consecutive characters and letters",
"URLs",
"hashtags",
"users' mentions",
"uppercase ratio",
"tweet length"
],
"free_form_answer": "",
"highlighted_evidence": [
"Later on, Twitter declared that they had detected a suspicious campaign originated in Russia by an organization named Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. ",
"As Twitter declared, although the IRA campaign was originated in Russia, it has been found that IRA trolls concealed their identity by tweeting in English. Furthermore, for any possibility of unmasking their identity, the majority of IRA trolls changed their location to other countries and the language of the Twitter interface they use. Thus, we propose the following features to identify these users using only their tweets text:",
"Native Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL).",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Native Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL). We extract the POS and the DEPREL information using spaCy, an off-the-shelf POS tagger. We clean the tweets from the special characters and maintained dots, commas, and first-letter capitalization of words. We use regular expressions to convert a sequence of dots to a single dot, and similarly for sequence of characters.",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Textual Representation ::: Profiling IRA Accounts"
],
"extractive_spans": [
"bag of stopwords",
"Part-of-speech tags",
"syntactic dependency relations",
"count of special characters",
"consecutive characters and letters",
"URLs",
"hashtags",
"users' mentions",
"uppercase ratio",
"tweet length"
],
"free_form_answer": "",
"highlighted_evidence": [
"Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL).",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Textual Representation ::: Profiling IRA Accounts"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As Twitter declared, although the IRA campaign was originated in Russia, it has been found that IRA trolls concealed their identity by tweeting in English. Furthermore, for any possibility of unmasking their identity, the majority of IRA trolls changed their location to other countries and the language of the Twitter interface they use. Thus, we propose the following features to identify these users using only their tweets text:",
"Native Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL). We extract the POS and the DEPREL information using spaCy, an off-the-shelf POS tagger. We clean the tweets from the special characters and maintained dots, commas, and first-letter capitalization of words. We use regular expressions to convert a sequence of dots to a single dot, and similarly for sequence of characters.",
"Stylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length."
],
"extractive_spans": [
"bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL)",
"count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions",
"uppercase ratio and the tweet length"
],
"free_form_answer": "",
"highlighted_evidence": [
"Thus, we propose the following features to identify these users using only their tweets text:\n\nNative Language Identification (NLI): This feature was inspired by earlier works on identifying native language of essays writers BIBREF22. We aim to detect IRA trolls by identifying their way of writing English tweets. As shown in BIBREF19, English tweets generated by non-English speakers have a different syntactic pattern . Thus, we use state-of-the-art NLI features to detect this unique pattern BIBREF23, BIBREF24, BIBREF25; the feature set consists of bag of stopwords, Part-of-speech tags (POS), and syntactic dependency relations (DEPREL). We extract the POS and the DEPREL information using spaCy, an off-the-shelf POS tagger. We clean the tweets from the special characters and maintained dots, commas, and first-letter capitalization of words. We use regular expressions to convert a sequence of dots to a single dot, and similarly for sequence of characters.\n\nStylistic: We extract a set of stylistic features following previous works in the authorship attribution domain BIBREF27, BIBREF28, BIBREF29, such as: the count of special characters, consecutive characters and letters, URLs, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a30f13295fc44db325697f0b280d8c7f27bfe853",
"a59e3cf7ffc254c79a78f9939a9e3741e76fcf7b"
],
"answer": [
{
"evidence": [
"Based on our thematic information, we model the users textual features w.r.t. each of these themes. In other words, we model a set of textual features independently for each of the former themes to capture the emotional, stance, and others changes in the users tweets.",
"For the theme-based features, we use the following features that we believe that they change based on the themes:",
"Emotions: Since the results of the previous works BIBREF2, BIBREF7 showed that IRA efforts engineered to seed discord among individuals in US, we use emotions features to detect their emotional attempts to manipulate the public opinions (e.g. fear spreading behavior). For that, we use the NRC emotions lexicon BIBREF9 that contains $\\sim $14K words labeled using the eight Plutchik's emotions.",
"Sentiment: We extract the sentiment of the tweets from NRC BIBREF9, positive and negative.",
"Bad & Sexual Cues: During the manual analysis of a sample from IRA tweets, we found that some users use bad slang word to mimic the language of a US citizen. Thus, we model the presence of such words using a list of bad and sexual words from BIBREF10.",
"Stance Cues: Stance detection has been studied in different contexts to detect the stance of a tweet reply with respect to a main tweet/thread BIBREF11. Using this feature, we aim to detect the stance of the users regarding the different topics we extracted. To model the stance we use a set of stance lexicons employed in previous works BIBREF12, BIBREF13. Concretely, we focus on the following categories: belief, denial, doubt, fake, knowledge, negation, question, and report.",
"Bias Cues: We rely on a set of lexicons to capture the bias in text. We model the presence of the words in one of the following cues categories: assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15. A previous work has used these bias cues to identify bias in suspicious news posts in Twitter BIBREF19.",
"LIWC: We use a set of linguistic categories from the LIWC linguistic dictionary BIBREF20. The used categories are: pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl.",
"Morality: Cues based on the morality foundation theory BIBREF21 where words labeled in one of a set of categories: care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation."
],
"extractive_spans": [
"eight Plutchik's emotions",
"positive and negative",
"list of bad and sexual words from BIBREF10",
"belief, denial, doubt, fake, knowledge, negation, question, and report",
"assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15",
"pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl",
"care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Based on our thematic information, we model the users textual features w.r.t. each of these themes.",
"For the theme-based features, we use the following features that we believe that they change based on the themes:\n\nEmotions: Since the results of the previous works BIBREF2, BIBREF7 showed that IRA efforts engineered to seed discord among individuals in US, we use emotions features to detect their emotional attempts to manipulate the public opinions (e.g. fear spreading behavior). For that, we use the NRC emotions lexicon BIBREF9 that contains $\\sim $14K words labeled using the eight Plutchik's emotions.\n\nSentiment: We extract the sentiment of the tweets from NRC BIBREF9, positive and negative.\n\nBad & Sexual Cues: During the manual analysis of a sample from IRA tweets, we found that some users use bad slang word to mimic the language of a US citizen. Thus, we model the presence of such words using a list of bad and sexual words from BIBREF10.\n\nStance Cues: Stance detection has been studied in different contexts to detect the stance of a tweet reply with respect to a main tweet/thread BIBREF11. Using this feature, we aim to detect the stance of the users regarding the different topics we extracted. To model the stance we use a set of stance lexicons employed in previous works BIBREF12, BIBREF13. Concretely, we focus on the following categories: belief, denial, doubt, fake, knowledge, negation, question, and report.\n\nBias Cues: We rely on a set of lexicons to capture the bias in text. We model the presence of the words in one of the following cues categories: assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15. A previous work has used these bias cues to identify bias in suspicious news posts in Twitter BIBREF19.\n\nLIWC: We use a set of linguistic categories from the LIWC linguistic dictionary BIBREF20. The used categories are: pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl.\n\nMorality: Cues based on the morality foundation theory BIBREF21 where words labeled in one of a set of categories: care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns. In addition, we removed special characters (except HASH \"#\" sign for the hashtags) and lowercase the final tweet. To ensure the quality of the themes, we removed the hashtags we used in the collecting process where they may bias the modeling algorithm. We tested multiple number of themes and we chose seven of them. We manually observed the content of these themes to label them. The extracted themes are: Police shootings, Islam and War, Supporting Trump, Black People, Civil Rights, Attacking Hillary, and Crimes. In some themes, like Supporting Trump and Attacking Hillary, we found contradicted opinions, in favor and against the main themes, but we chose the final stance based on the most representative hashtags and words in each of them (see Figure FIGREF11). Also, the themes Police Shooting and Crimes are similar, but we found that some words such as: police, officers, cops, shooting, gun, shot, etc. are the most discriminative between these two themes. In addition, we found that the Crimes theme focuses more on raping crimes against children and women. Our resulted themes are generally consistent with the ones obtained from the Facebook advertised posts in BIBREF2, and this emphasizes that IRA efforts organized in a similar manner in both social media platforms.",
"Based on our thematic information, we model the users textual features w.r.t. each of these themes. In other words, we model a set of textual features independently for each of the former themes to capture the emotional, stance, and others changes in the users tweets.",
"Emotions: Since the results of the previous works BIBREF2, BIBREF7 showed that IRA efforts engineered to seed discord among individuals in US, we use emotions features to detect their emotional attempts to manipulate the public opinions (e.g. fear spreading behavior). For that, we use the NRC emotions lexicon BIBREF9 that contains $\\sim $14K words labeled using the eight Plutchik's emotions.",
"Sentiment: We extract the sentiment of the tweets from NRC BIBREF9, positive and negative.",
"Bad & Sexual Cues: During the manual analysis of a sample from IRA tweets, we found that some users use bad slang word to mimic the language of a US citizen. Thus, we model the presence of such words using a list of bad and sexual words from BIBREF10.",
"Stance Cues: Stance detection has been studied in different contexts to detect the stance of a tweet reply with respect to a main tweet/thread BIBREF11. Using this feature, we aim to detect the stance of the users regarding the different topics we extracted. To model the stance we use a set of stance lexicons employed in previous works BIBREF12, BIBREF13. Concretely, we focus on the following categories: belief, denial, doubt, fake, knowledge, negation, question, and report.",
"Bias Cues: We rely on a set of lexicons to capture the bias in text. We model the presence of the words in one of the following cues categories: assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15. A previous work has used these bias cues to identify bias in suspicious news posts in Twitter BIBREF19.",
"LIWC: We use a set of linguistic categories from the LIWC linguistic dictionary BIBREF20. The used categories are: pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl.",
"Morality: Cues based on the morality foundation theory BIBREF21 where words labeled in one of a set of categories: care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation."
],
"extractive_spans": [],
"free_form_answer": "emotion features, bad and sexual language features, stance towards some topics, bias cues, linguistic features from LIWC and morality cues",
"highlighted_evidence": [
"Given our dataset, we applied Latent Dirichlet Allocation (LDA) topic modeling algorithm BIBREF8 on the tweets after a prepossessing step where we maintained only nouns and proper nouns.",
"Based on our thematic information, we model the users textual features w.r.t. each of these themes.",
"Emotions: Since the results of the previous works BIBREF2, BIBREF7 showed that IRA efforts engineered to seed discord among individuals in US, we use emotions features to detect their emotional attempts to manipulate the public opinions (e.g. fear spreading behavior). For that, we use the NRC emotions lexicon BIBREF9 that contains $\\sim $14K words labeled using the eight Plutchik's emotions.",
"Sentiment: We extract the sentiment of the tweets from NRC BIBREF9, positive and negative.",
"Bad & Sexual Cues: During the manual analysis of a sample from IRA tweets, we found that some users use bad slang word to mimic the language of a US citizen. Thus, we model the presence of such words using a list of bad and sexual words from BIBREF10.",
"Stance Cues: Stance detection has been studied in different contexts to detect the stance of a tweet reply with respect to a main tweet/thread BIBREF11. Using this feature, we aim to detect the stance of the users regarding the different topics we extracted. To model the stance we use a set of stance lexicons employed in previous works BIBREF12, BIBREF13. Concretely, we focus on the following categories: belief, denial, doubt, fake, knowledge, negation, question, and report.",
"Bias Cues: We rely on a set of lexicons to capture the bias in text. We model the presence of the words in one of the following cues categories: assertives verbs BIBREF14, bias BIBREF15, factive verbs BIBREF16, implicative verbs BIBREF17, hedges BIBREF18, report verbs BIBREF15. A previous work has used these bias cues to identify bias in suspicious news posts in Twitter BIBREF19.",
"LIWC: We use a set of linguistic categories from the LIWC linguistic dictionary BIBREF20. The used categories are: pronoun, anx, cogmech, insight, cause, discrep, tentat, certain, inhib, incl.\n\n",
"Morality: Cues based on the morality foundation theory BIBREF21 where words labeled in one of a set of categories: care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Are results reported only on English data?",
"What type of model were the features used in?",
"What unsupervised approach was used to deduce the thematic information?",
"What profile features are used?",
"What textual features are used?"
],
"question_id": [
"c176eb1ccaa0e50fb7512153f0716e60bf74aa53",
"e0b54906184a4ad87d127bed22194e62de38222b",
"1f8044487af39244d723582b8a68f94750eed2cc",
"595fe416a100bc7247444f25b11baca6e08d9291",
"1f011fa772ce802e74eda89f706cdb1aa2833686"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Description of the dataset.",
"Fig. 1: (a) Supporting Trump and (b) Attacking Hillary themes words clouds.",
"Table 2: Classification results. We report the results of each feature set independently.",
"Fig. 3: Flipping emotions between themes by user x (an IRA troll).",
"Fig. 4: The top 10 important tokens in each of the NLI features.",
"Table 3: Linguistic analysis of Morality, LIWC, Bias and Subjectivity, Stance, and Bad and Sexual cues shown as the percentage of averaged value of tweets with one or more cues across IRA trolls (X) and regular users (Y) in a shape of X(arrows)Y. We report only significant differences: p-value ≤ 0.001↑↑↑, ≤ 0.01↑↑, ≤ 0.05↑ estimated using the Mann-Whitney U test. The tweets average value is the mean value across the themes."
],
"file": [
"3-Table1-1.png",
"5-Figure1-1.png",
"9-Table2-1.png",
"10-Figure3-1.png",
"11-Figure4-1.png",
"12-Table3-1.png"
]
} | [
"What textual features are used?"
] | [
[
"1910.01340-Textual Representation ::: Thematic Information-2",
"1910.01340-Textual Representation ::: Thematic Information-4",
"1910.01340-Textual Representation ::: Thematic Information-3",
"1910.01340-Textual Representation ::: Thematic Information-6",
"1910.01340-Textual Representation ::: Thematic Information-1",
"1910.01340-Textual Representation ::: Thematic Information-7",
"1910.01340-Textual Representation ::: Thematic Information-10",
"1910.01340-Textual Representation ::: Thematic Information-9",
"1910.01340-Textual Representation ::: Thematic Information-5",
"1910.01340-Textual Representation ::: Thematic Information-8"
]
] | [
"emotion features, bad and sexual language features, stance towards some topics, bias cues, linguistic features from LIWC and morality cues"
] | 122 |
1709.09749 | KeyVec: Key-semantics Preserving Document Representations | Previous studies have demonstrated the empirical success of word embeddings in various applications. In this paper, we investigate the problem of learning distributed representations for text documents which many machine learning algorithms take as input for a number of NLP tasks. We propose a neural network model, KeyVec, which learns document representations with the goal of preserving key semantics of the input text. It enables the learned low-dimensional vectors to retain the topics and important information from the documents that will flow to downstream tasks. Our empirical evaluations show the superior quality of KeyVec representations in two different document understanding tasks. | {
"paragraphs": [
[
"In recent years, the use of word representations, such as word2vec BIBREF0 , BIBREF1 and GloVe BIBREF2 , has become a key “secret sauce” for the success of many natural language processing (NLP), information retrieval (IR) and machine learning (ML) tasks. The empirical success of word embeddings raises an interesting research question: Beyond words, can we learn fixed-length distributed representations for pieces of texts? The texts can be of variable-length, ranging from paragraphs to documents. Such document representations play a vital role in a large number of downstream NLP/IR/ML applications, such as text clustering, sentiment analysis, and document retrieval, which treat each piece of text as an instance. Learning a good representation that captures the semantics of each document is thus essential for the success of such applications.",
"In this paper, we introduce KeyVec, a neural network model that learns densely distributed representations for documents of variable-length. In order to capture semantics, the document representations are trained and optimized in a way to recover key information of the documents. In particular, given a document, the KeyVec model constructs a fixed-length vector to be able to predict both salient sentences and key words in the document. In this way, KeyVec conquers the problem of prior embedding models which treat every word and every sentence equally, failing to identify the key information that a document conveys. As a result, the vectorial representations generated by KeyVec can naturally capture the topics of the documents, and thus should yield good performance in downstream tasks.",
"We evaluate our KeyVec on two text understanding tasks: document retrieval and document clustering. As shown in the experimental section SECREF5 , KeyVec yields generic document representations that perform better than state-of-the-art embedding models."
],
[
" Le et al. proposed a Paragraph Vector model, which extends word2vec to vectorial representations for text paragraphs BIBREF3 , BIBREF4 . It projects both words and paragraphs into a single vector space by appending paragraph-specific vectors to typical word2vec. Different from our KeyVec, Paragraph Vector does not specifically model key information of a given piece of text, while capturing its sequential information. In addition, Paragraph Vector requires extra iterative inference to generate embeddings for unseen paragraphs, whereas our KeyVec embeds new documents simply via a single feed-forward run.",
"In another recent work BIBREF5 , Djuric et al. introduced a Hierarchical Document Vector (HDV) model to learn representations from a document stream. Our KeyVec differs from HDV in that we do not assume the existence of a document stream and HDV does not model sentences."
],
[
"Given a document INLINEFORM0 consisting of INLINEFORM1 sentences INLINEFORM2 , our KeyVec model aims to learn a fixed-length vectorial representation of INLINEFORM3 , denoted as INLINEFORM4 . Figure FIGREF1 illustrates an overview of the KeyVec model consisting of two cascaded neural network components: a Neural Reader and a Neural Encoder, as described below."
],
[
"The Neural Reader learns to understand the topics of every given input document with paying attention to the salient sentences. It computes a dense representation for each sentence in the given document, and derives its probability of being a salient sentence. The identified set of salient sentences, together with the derived probabilities, will be used by the Neural Encoder to generate a document-level embedding.",
"Since the Reader operates in embedding space, we first represent discrete words in each sentence by their word embeddings. The sentence encoder in Reader then derives sentence embeddings from the word representations to capture the semantics of each sentence. After that, a Recurrent Neural Network (RNN) is employed to derive document-level semantics by consolidating constituent sentence embeddings. Finally, we identify key sentences in every document by computing the probability of each sentence being salient.",
"Specifically, for the INLINEFORM0 -th sentence INLINEFORM1 with INLINEFORM2 words, Neural Reader maps each word INLINEFORM3 into a word embedding INLINEFORM4 . Pre-trained word embeddings like word2vec or GloVe may be used to initialize the embedding table. In our experiments, we use domain-specific word embeddings trained by word2vec on our corpus.",
"Given the set of word embeddings for each sentence, Neural Reader then derives sentence-level embeddings INLINEFORM0 using a sentence encoder INLINEFORM1 :",
" DISPLAYFORM0 ",
"where INLINEFORM0 is implemented by a Convolutional Neural Network (CNN) with a max-pooling operation, in a way similar to BIBREF6 . Note that other modeling choices, such as an RNN, are possible as well. We used a CNN here because of its simplicity and high efficiency when running on GPUs. The sentence encoder generates an embedding INLINEFORM1 of 150 dimensions for each sentence.",
"Given the embeddings of sentences INLINEFORM0 in a document INLINEFORM1 , Neural Reader computes the probability of each sentence INLINEFORM2 being a key sentence, denoted as INLINEFORM3 .",
"We employ a Long Short-Term Memory (LSTM) BIBREF7 to compose constituent sentence embeddings into a document representation. At the INLINEFORM0 -th time step, LSTM takes as input the current sentence embedding INLINEFORM1 , and computes a hidden state INLINEFORM2 . We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. For the INLINEFORM3 -th sentence, INLINEFORM4 is semantically richer than sentence embedding INLINEFORM5 , as INLINEFORM6 incorporates the context information from surrounding sentences to model the temporal interactions between sentences. The probability of sentence INLINEFORM7 being a key sentence then follows a logistic sigmoid of a linear function of INLINEFORM8 :",
" DISPLAYFORM0 ",
"where INLINEFORM0 is a trainable weight vector, and INLINEFORM1 is a trainable bias scalar."
],
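The Reader described above can be sketched in a few lines of PyTorch: a CNN sentence encoder with max-pooling, a bidirectional LSTM over the sentence embeddings, and a logistic sigmoid that scores each sentence. This is an illustrative reconstruction, not code released with the paper; the kernel size, hidden size, and module names are assumptions, and only the 150-dimensional sentence embeddings come from the text.

import torch
import torch.nn as nn

class NeuralReader(nn.Module):
    # Illustrative sketch of the Reader: CNN sentence encoder + BiLSTM + per-sentence saliency score.
    def __init__(self, vocab_size, word_dim=300, sent_dim=150, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)                       # word embeddings (word2vec init in the paper)
        self.conv = nn.Conv1d(word_dim, sent_dim, kernel_size=3, padding=1)   # sentence encoder (kernel size assumed)
        self.bilstm = nn.LSTM(sent_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden_dim, 1)                             # linear layer feeding the logistic sigmoid

    def forward(self, sentences):
        # sentences: LongTensor (num_sentences, max_words) holding the word ids of one document
        words = self.embed(sentences)                          # (S, W, word_dim)
        feats = torch.relu(self.conv(words.transpose(1, 2)))   # (S, sent_dim, W)
        sent_emb = feats.max(dim=2).values                     # max-pooling over words -> (S, sent_dim)
        h, _ = self.bilstm(sent_emb.unsqueeze(0))              # contextualised states, (1, S, 2*hidden_dim)
        p_key = torch.sigmoid(self.score(h)).squeeze(0).squeeze(-1)  # saliency probability per sentence, (S,)
        return sent_emb, h.squeeze(0), p_key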
[
"The Neural Encoder computes document-level embeddings based on the salient sentences identified by the Reader. In order to capture the topics of a document and the importance of its individual sentences, we perform a weighted pooling over the constituent sentences, with the weights specified by INLINEFORM0 , which gives the document-level embedding INLINEFORM1 through a INLINEFORM2 transformation:",
" DISPLAYFORM0 ",
"where INLINEFORM0 is a trainable weight matrix, and INLINEFORM1 is a trainable bias vector.",
"Weighted pooling functions are commonly used as the attention mechanism BIBREF8 in neural sequence learning tasks. The “share” each sentence contributes to the final embedding is proportional to its probability of being a salient sentence. As a result, INLINEFORM0 will be dominated by salient sentences with high INLINEFORM1 , which preserves the key information in a document, and thus allows long documents to be encoded and embedded semantically."
],
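A minimal sketch of the Encoder's weighted pooling step, assuming the saliency scores are normalised and used directly as pooling weights; whether the paper normalises them is not stated, so that detail is an assumption, as are the parameter names.

import torch

def neural_encoder(h, p_key, W, b):
    # h: (S, 2*hidden_dim) contextualised sentence states from the Reader
    # p_key: (S,) saliency probabilities; W, b: trainable projection parameters
    weights = p_key / p_key.sum()               # each sentence's "share" proportional to its saliency (normalisation assumed)
    pooled = (weights.unsqueeze(1) * h).sum(0)  # weighted pooling over constituent sentences
    return torch.tanh(W @ pooled + b)           # document-level embedding via a tanh transformation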
[
" In this section, we describe the learning process of the parameters of KeyVec. Similarly to most neural network models, KeyVec can be trained using Stochastic Gradient Descent (SGD), where the Neural Reader and Neural Encoder are jointly optimized. In particular, the parameters of Reader and Encoder are learned simultaneously by maximizing the joint likelihood of the two components:",
" DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 denotes the log likelihood functions of Reader and Encoder, respectively."
],
[
"To optimize Reader, we take a surrogate approach to heuristically generate a set of salient sentences from a document collection, which constitute a training dataset for learning the probabilities of salient sentences INLINEFORM0 parametrized by INLINEFORM1 . More specifically, given a training set INLINEFORM2 of documents (e.g., body-text of research papers) and their associated summaries (e.g., abstracts) INLINEFORM3 , where INLINEFORM4 is a gold summary of document INLINEFORM5 , we employ a state-of-the-art sentence similarity model, DSSM BIBREF9 , BIBREF10 , to find the set of top- INLINEFORM6 sentences INLINEFORM8 in INLINEFORM9 , such that the similarity between INLINEFORM10 and any sentence in the gold summary INLINEFORM11 is above a pre-defined threshold. Note that here we assume each training document is associated with a gold summary composed of sentences that might not come from INLINEFORM12 . We make this assumption only for the sake of generating the set of salient sentences INLINEFORM13 which is usually not readily available.",
"The log likelihood objective of the Neural Reader is then given by maximizing the probability of INLINEFORM0 being the set of key sentences, denoted as INLINEFORM1 :",
" DISPLAYFORM0 ",
"where INLINEFORM0 is the set of non-key sentences. Intuitively, this likelihood function gives the probability of each sentence in the generated key sentence set INLINEFORM1 being a key sentence, and the rest of sentences being non-key ones."
],
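The Reader objective above amounts to a binary cross-entropy over the heuristic key/non-key split; a rough sketch, with the key-sentence mask assumed to come from the DSSM-based matching step described in the text:

import torch.nn.functional as F

def reader_loss(p_key, key_mask):
    # p_key: (S,) predicted saliency probabilities
    # key_mask: (S,) float tensor, 1.0 for sentences in the generated key set, 0.0 otherwise
    # Maximising the Reader's log likelihood is equivalent to minimising this binary cross-entropy.
    return F.binary_cross_entropy(p_key, key_mask)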
[
"The final output of Encoder is a document embedding INLINEFORM0 , derived from LSTM's hidden states INLINEFORM1 of Reader. Given our goal of developing a general-purpose model for embedding documents, we would like INLINEFORM2 to be semantically rich to encode as much key information as possible. To this end, we impose an additional objective on Encoder: the final document embedding needs to be able to reproduce the key words in the document, as illustrated in Figure FIGREF1 .",
"In document INLINEFORM0 , the set of key words INLINEFORM1 is composed of top 30 words in INLINEFORM2 (i.e., the gold summary of INLINEFORM3 ) with the highest TF-IDF scores. Encoder's objective is then formalized by maximizing the probability of predicting the key words in INLINEFORM4 using the document embedding INLINEFORM5 :",
" DISPLAYFORM0 ",
"where INLINEFORM0 is implemented as a softmax function with output dimensionality being the size of the vocabulary.",
"Combining the objectives of Reader and Encoder yields the joint objective function in Eq ( EQREF9 ). By jointly optimizing the two objectives with SGD, the KeyVec model is capable of learning to identify salient sentences from input documents, and thus generating semantically rich document-level embeddings."
],
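The Encoder objective and the joint SGD training can be sketched as follows. The softmax over the vocabulary follows the description in the text; the linear projection layer, tensor shapes, and the plain unweighted sum of the two terms are assumptions rather than details given in the paper.

import torch.nn.functional as F

def encoder_loss(doc_emb, vocab_proj, keyword_ids):
    # doc_emb: (D,) document embedding; vocab_proj: nn.Linear(D, vocab_size); keyword_ids: (K,) ids of the key words
    logits = vocab_proj(doc_emb)                                          # scores over the vocabulary
    return F.cross_entropy(logits.expand(len(keyword_ids), -1), keyword_ids)  # predict each key word from doc_emb

def joint_loss(p_key, key_mask, doc_emb, vocab_proj, keyword_ids):
    # Joint objective L = L_read + L_enc, optimised together with SGD (equal weighting assumed)
    return F.binary_cross_entropy(p_key, key_mask) + encoder_loss(doc_emb, vocab_proj, keyword_ids)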
[
" To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
[
"The goal of the document retrieval task is to decide if a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. We created 70 search queries, each composed of the text in a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on cosine similarity between document embeddings of 100 dimensions for Wikipedia pages and academic papers. For each query, a good document-embedding model should lead to a list of academic papers in one of the 70 fields of study.",
"Table TABREF15 presents P@10, MAP and MRR results of our KeyVec model and competing embedding methods in academic paper retrieval. word2vec averaging generates an embedding for a document by averaging the word2vec vectors of its constituent words. In the experiment, we used two different versions of word2vec: one from public release, and the other one trained specifically on our own academic corpus (113 GB). From Table TABREF15 , we observe that as a document-embedding model, Paragraph Vector gave better retrieval results than word2vec averagings did. In contrast, our KeyVec outperforms all the competitors given its unique capability of capturing and embedding the key information of documents."
],
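The retrieval setup above reduces to ranking the paper pool by cosine similarity between a query embedding (a Wikipedia page) and the paper embeddings; a rough sketch, where the array names and the top-k cutoff are assumptions:

import numpy as np

def retrieve(query_emb, paper_embs, k=10):
    # query_emb: (d,) embedding of a Wikipedia page; paper_embs: (N, d) embeddings of the paper pool
    sims = paper_embs @ query_emb / (np.linalg.norm(paper_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12)
    return np.argsort(-sims)[:k]  # indices of the top-k papers by cosine similarity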
[
"In the document clustering task, we aim to cluster the academic papers by the venues in which they are published. There are a total of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation. Each academic paper is represented as a vector of 100 dimensions.",
"To compare embedding methods in academic paper clustering, we calculate F1, V-measure (a conditional entropy-based clustering measure BIBREF11 ), and ARI (Adjusted Rand index BIBREF12 ). As shown in Table TABREF18 , similarly to document retrieval, Paragraph Vector performed better than word2vec averagings in clustering documents, while our KeyVec consistently performed the best among all the compared methods."
],
[
"In this work, we present a neural network model, KeyVec, that learns continuous representations for text documents in which key semantic patterns are retained.",
"In the future, we plan to employ the Minimum Risk Training scheme to train Neural Reader directly on original summary, without needing to resort to a sentence similarity model."
]
],
"section_name": [
"Introduction",
"Related Work",
"KeyVec Model",
"Neural Reader",
"Neural Encoder",
"Model Learning",
"Reader's Objective: ℒ 𝚛𝚎𝚊𝚍 \\mathcal {L}_{\\tt read}",
"Encoder's Objective: ℒ 𝚎𝚗𝚌 \\mathcal {L}_{\\tt enc}",
"Experiments and Results",
"Document Retrieval",
"Document Clustering",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"1373332cd40df3ca59573044c9f33918b48db254",
"cf0d3e281fe63d02cdfe61bf6e451fa3b88f4546",
"e988ffba5a0c42e1be82ee663af9573beb32839e"
],
"answer": [
{
"evidence": [
"Table TABREF15 presents P@10, MAP and MRR results of our KeyVec model and competing embedding methods in academic paper retrieval. word2vec averaging generates an embedding for a document by averaging the word2vec vectors of its constituent words. In the experiment, we used two different versions of word2vec: one from public release, and the other one trained specifically on our own academic corpus (113 GB). From Table TABREF15 , we observe that as a document-embedding model, Paragraph Vector gave better retrieval results than word2vec averagings did. In contrast, our KeyVec outperforms all the competitors given its unique capability of capturing and embedding the key information of documents."
],
"extractive_spans": [
"word2vec averaging",
"Paragraph Vector"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF15 presents P@10, MAP and MRR results of our KeyVec model and competing embedding methods in academic paper retrieval. word2vec averaging generates an embedding for a document by averaging the word2vec vectors of its constituent words. In the experiment, we used two different versions of word2vec: one from public release, and the other one trained specifically on our own academic corpus (113 GB). From Table TABREF15 , we observe that as a document-embedding model, Paragraph Vector gave better retrieval results than word2vec averagings did."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To compare embedding methods in academic paper clustering, we calculate F1, V-measure (a conditional entropy-based clustering measure BIBREF11 ), and ARI (Adjusted Rand index BIBREF12 ). As shown in Table TABREF18 , similarly to document retrieval, Paragraph Vector performed better than word2vec averagings in clustering documents, while our KeyVec consistently performed the best among all the compared methods."
],
"extractive_spans": [
"Paragraph Vector",
"word2vec averagings"
],
"free_form_answer": "",
"highlighted_evidence": [
"To compare embedding methods in academic paper clustering, we calculate F1, V-measure (a conditional entropy-based clustering measure BIBREF11 ), and ARI (Adjusted Rand index BIBREF12 ). As shown in Table TABREF18 , similarly to document retrieval, Paragraph Vector performed better than word2vec averagings in clustering documents, while our KeyVec consistently performed the best among all the compared methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Evaluation of document retrieval with different embedding models"
],
"extractive_spans": [],
"free_form_answer": "Word2vec averaging (public release 300d), word2vec averaging (academic corpus), Paragraph Vector",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Evaluation of document retrieval with different embedding models"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"10ea6836c7e313b0ef86bcca54dfc439729d90fb",
"747f13baf31b8679235866a2d42a5819cb27c8e8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"22b3ad2efdc61e8edeec522b9522f7c274059afe",
"728f7ce583b2227b97a193e21bb5e65a76c02193",
"bdf0171ca5f5939c410899598db27de6f4b1ac7e"
],
"answer": [
{
"evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"extractive_spans": [
"document retrieval",
"document clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"extractive_spans": [
"document retrieval",
"document clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering.",
"Document Retrieval"
],
"extractive_spans": [
" we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering.\n\nDocument Retrieval"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"473972f8a121a7a1f3e1901a988edd5d339bb113",
"6ffdd9e584076feac0bc49c10661bfdaa638ffba",
"80a5946cfd19c1293be21bc719a266852af124e9"
],
"answer": [
{
"evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"extractive_spans": [
"document retrieval",
"document clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our KeyVec on two text understanding tasks: document retrieval and document clustering. As shown in the experimental section SECREF5 , KeyVec yields generic document representations that perform better than state-of-the-art embedding models."
],
"extractive_spans": [
" document retrieval and document clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our KeyVec on two text understanding tasks: document retrieval and document clustering. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"extractive_spans": [
" document retrieval",
"document clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as the representations for documents: document retrieval and document clustering."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7746ce25286cc80f024ab1953d0687edaf66a9bc",
"8515612833b43d7d397848366863158995ac8390",
"9a843dceb6013df39c72ea59b15c6b73495bace0"
],
"answer": [
{
"evidence": [
"The goal of the document retrieval task is to decide if a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. We created 70 search queries, each composed of the text in a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on cosine similarity between document embeddings of 100 dimensions for Wikipedia pages and academic papers. For each query, a good document-embedding model should lead to a list of academic papers in one of the 70 fields of study.",
"In the document clustering task, we aim to cluster the academic papers by the venues in which they are published. There are a total of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation. Each academic paper is represented as a vector of 100 dimensions."
],
"extractive_spans": [
"669 academic papers published by IEEE",
"850 academic papers"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. We created 70 search queries, each composed of the text in a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on cosine similarity between document embeddings of 100 dimensions for Wikipedia pages and academic papers. ",
"There are a total of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The goal of the document retrieval task is to decide if a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. We created 70 search queries, each composed of the text in a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on cosine similarity between document embeddings of 100 dimensions for Wikipedia pages and academic papers. For each query, a good document-embedding model should lead to a list of academic papers in one of the 70 fields of study."
],
"extractive_spans": [
"669 academic papers published by IEEE"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Document Retrieval",
"The goal of the document retrieval task is to decide if a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. We created 70 search queries, each composed of the text in a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on cosine similarity between document embeddings of 100 dimensions for Wikipedia pages and academic papers. For each query, a good document-embedding model should lead to a list of academic papers in one of the 70 fields of study.",
"In the document clustering task, we aim to cluster the academic papers by the venues in which they are published. There are a total of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation. Each academic paper is represented as a vector of 100 dimensions."
],
"extractive_spans": [],
"free_form_answer": "For the document retrieval task - the dataset of the document pool contained 669 academic papers published by IEEE. Fro the document clustering task - the dataset of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation.",
"highlighted_evidence": [
"Document Retrieval\nThe goal of the document retrieval task is to decide if a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which top- INLINEFORM0 relevant papers are retrieved. ",
"In the document clustering task, we aim to cluster the academic papers by the venues in which they are published. There are a total of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what other representations do they compare with?",
"how many layers are in the neural network?",
"what empirical evaluations performed?",
"which document understanding tasks did they evaluate on?",
"what dataset was used?"
],
"question_id": [
"181027f398a6b79b1ba44d8d41cc1aba0d6f5212",
"ab097db03652b8b38edddc074f23e2adf9278cba",
"5d4190403eb800bb17eec71e979788e11cf74e67",
"56d41e0fcc288c1e65806ae77097d685c83e22db",
"1237b6fcc64b43901415f3ded17cc210a54ab698"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: KEYVEC Model (best viewed in color)",
"Table 1: Evaluation of document retrieval with different embedding models",
"Table 2: Evaluation of document clustering with different embedding models"
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"what other representations do they compare with?",
"what dataset was used?"
] | [
[
"1709.09749-Document Retrieval-1",
"1709.09749-Document Clustering-1",
"1709.09749-4-Table1-1.png"
],
[
"1709.09749-Document Retrieval-0",
"1709.09749-Document Clustering-0"
]
] | [
"Word2vec averaging (public release 300d), word2vec averaging (academic corpus), Paragraph Vector",
"For the document retrieval task - the dataset of the document pool contained 669 academic papers published by IEEE. Fro the document clustering task - the dataset of 850 academic papers, and 186 associated venues which are used as ground-truth for evaluation."
] | 123 |
2003.09244 | Language Technology Programme for Icelandic 2019-2023 | In this paper, we describe a new national language technology programme for Icelandic. The programme, which spans a period of five years, aims at making Icelandic usable in communication and interactions in the digital world, by developing accessible, open-source language resources and software. The research and development work within the programme is carried out by a consortium of universities, institutions, and private companies, with a strong emphasis on cooperation between academia and industries. Five core projects will be the main content of the programme: language resources, speech recognition, speech synthesis, machine translation, and spell and grammar checking. We also describe other national language technology programmes and give an overview over the history of language technology in Iceland. | {
"paragraphs": [
[
"During the last decade, we have witnessed enormous advances in language technology (LT). Applications that allow users to interact with technology via spoken or written natural language are emerging in all areas, and access to language resources and open-source software libraries enables faster development for new domains and languages.",
"However, LT is highly language dependent and it takes considerable resources to develop LT for new languages. The recent LT development has focused on languages that have both a large number of speakers and huge amounts of digitized language resources, like English, German, Spanish, Japanese, etc. Other languages, that have few speakers and/or lack digitized language resources, run the risk of being left behind.",
"Icelandic is an example of a language with almost a negligible number of speakers, in terms of market size, since only about 350,000 people speak Icelandic as their native language. Icelandic is therefore seldom on the list of supported languages in LT software and applications.",
"The Icelandic Government decided in 2017 to fund a five-year programme for Icelandic LT, based on a report written by a group of LT experts BIBREF0. After more than two years of preparation, a consortium consisting of universities, institutions, associations, and private companies started the work on the programme on the 1st of October 2019. The goal of the programme is to ensure that Icelandic can be made available in LT applications, and thus will be usable in all areas of communication. Furthermore, that access to information and other language-based communication and interaction in Icelandic will be accessible to all, e.g. via speech synthesis or speech-to-text systems.",
"The focus of the programme will be on the development of text and speech-based language resources, on the development of core natural language processing (NLP) tools like tokenisers, taggers and parsers, and finally, to publish open-source software in the areas of speech recognition, speech synthesis, machine translation, and spell and grammar checking. All deliverables of the programme will be published under open licenses, to encourage use of resources and software in commercial products.",
"While the government-funded programme for the development of resources and infrastructure software builds the backbone of the Icelandic LT programme, another branch is a competitive fund for research and development. This Strategic Research and Development Programme for Language Technology is managed by the Icelandic Centre for Research, Rannís, which publishes calls for applications on a regular basis.",
"The third pillar of the programme is the revival of the joint Master's programme in LT at Reykjavik University (RU) and the University of Iceland (UI). The goal is further to increase the number of PhD students and to build strong knowledge centres for sustainable LT development in Iceland.",
"The budget estimation for the programme, including the competitive fund, education plan and infrastructure costs, is around 14 million euros. Additionally, around 3.6 million euros is expected to be the contribution of the industry through the competitive fund.",
"This paper is structured as follows: In Section SECREF2 we discuss national LT programmes that have been run in other European countries and helped developing the Icelandic project plan. Section SECREF3 gives an overview over the 20 years of LT development in Iceland. Section SECREF4 shows the organisation of the new programme, and in Section SECREF5 we describe the core projects that have been defined for it. Finally, a conclusion is presented in Section SECREF6."
],
[
"In recent years, there has been much international discussion on how the future of languages depends on them being usable in the digital world. This concern has led to a number of national LT programmes. We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
[
"The STEVIN programme was launched in 2004 to strengthen the position of Dutch in LT by building essential resources for the language. Its objectives were to raise awareness of LT in order to stimulate demand for LT products, to promote strategic research in the field and develop essential resources, and to organise the management, maintenance and distribution of language resources that have been developed BIBREF1. The programme was based on cooperation between government, academia and industry, both in Flanders and the Netherlands. It encompassed a range of projects from basic resources to applications for language users, and attention was paid to distribution, dissemination and valorisation of project results by means of the HLT Agency, which also had a role in clearing intellectual property rights (IPRs) and issuing licence agreements BIBREF2.",
"The general targets of the STEVIN programme were reached to a large extent. According to a report on the results of the programme BIBREF3, it resulted in a network with strong ties between academia and industry, beneficial for future utilisation of the STEVIN results. The evaluators of the programme qualified it as successful, but had recommendations for a future programme, if initiated. They suggested more interaction with other similar (inter)national R&D programmes, asserted that the complexity of IPR issues had been seriously underestimated and called for a better clarification of the role of open-source. The total cost of the STEVIN programme was over 10 million euros, of which well over 80% was spent on R&D projects."
],
[
"The Spanish LT programme Plan for Advancement of Language Technology started in 2016, and is scheduled to finish in 2020. Its aims are to develop infrastructure for LT in Spain, specifically for Spanish and the co-official languages, Basque, Catalan, Galician and Aranese. Furthermore, to promote the LT industry by boosting knowledge transfer between research and industry actors, and to improve the quality and capacity of public services by employing NLP and machine translation (MT) technology. Government should be the leading participant in LT with high-profile projects in healthcare, as well as in the judicial and educational systems, and in tourism BIBREF4.",
"The plan was to facilitate the development of tools and linguistic resources. Examples of tools are named entity recognisers, word-sense disambiguation, tools for computing semantic similarity and text classification, automatic summarisation and MT. Examples of linguistic resources to be developed in the programme are parallel corpora, lists of proper nouns, terminology lists and dictionaries.",
"The estimated total cost of the programme was 90 million euros. As the programme had just recently started when the Icelandic programme was being planned, we did not have any information on what went well and what could have been done better."
],
[
"Regarding LT, the Estonian situation is, in many ways, similar to that of Iceland: It has too few users for companies to see opportunities in embarking on development of (costly) LT, but on the other hand society is technologically advanced – people use, or want to be able to use, LT software. In Estonia, the general public wants Estonian to maintain its status, and like Icelandic, the language has a complex inflection system and very active word generation. The problems faced by Estonia are therefore not unlike those that Iceland faces.",
"In Estonia, three consecutive national programmes have been launched. The third national programme, Estonian Language Technology 2018–2027, is currently under way. While the Estonian Ministry of Education and Research has been responsible for the programmes, the universities in Tallinn and Tartu, together with the Institute of the Estonian Language, led the implementation.",
"The National Programme for Estonian Language Technology was launched in 2006. The first phase ran from 2006 to 2010. All results of this first phase, language resources and software prototypes, were released as public domain. All such resources and tools are preserved long term and available from the Center of Estonian Language Resources. 33 projects were funded, which included the creation of reusable language resources and development of essential linguistic software, as well as bringing the relevant infrastructure up to date BIBREF5. The programme managed to significantly improve upon existing Estonian language resources, both in size, annotation and standardisation. In creating software, most noticeable results were in speech technology. Reporting on the results of the programme BIBREF5 stress that the first phase of the programme created favourable conditions for LT development in Estonia. According to an evaluation of the success of the programme, at least 84% of the projects had satisfactory results. The total budged for this first phase was 3.4 million euros.",
"The second phase of the programme ran from 2011 to 2017 with a total budget of approx. 5.5 million euros. It focused on the implementation and integration of existing resources and software prototypes in public services. Project proposals were called for, funding several types of actions in an open competition. The main drawback of this method is that it does not fully cover the objectives, and LT support for Estonian is thus not systematically developed. Researchers were also often mostly interested in results using prototypes rather than stable applications. As most of the projects were instigated at public institutes, relation to IT business was weak. Furthermore, the programme does not deal explicitly with LT education. On the other hand, the state of LT in Estonia soon become relatively good compared to languages with similar number of speakers BIBREF6."
],
[
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work. This initiative resulted in a few projects which laid the ground for future work in the field. The most important of these were a 25 million token, balanced, tagged corpus, a full-form database of Icelandic inflections, a training model for PoS taggers, an improved speech synthesiser, and an isolated word speech recogniser BIBREF8.",
"After the LT Programme ended in 2004, researchers from three institutions, UI, RU, and the Árni Magnússon Institute for Icelandic Studies (AMI), joined forces in a consortium called the Icelandic Centre for Language Technology (ICLT), in order to follow up on the tasks of the Programme. In the following years, these researchers developed a few more tools and resources with support from The Icelandic Research Fund, notably a rule-based tagger, a shallow parser, a lemmatiser, and a historical treebank BIBREF9.",
"In 2011–2012, researchers from the ICLT also participated in two speech technology projects initiated by others: A new speech synthesiser for Icelandic which was developed by the Polish company Ivona, now a subsidiary of Amazon, for the Icelandic Association for the Visually Impaired, and a speech recogniser for Icelandic developed by Google BIBREF9.",
"Iceland was an active participant in the META-NORD project, a subproject of META-NET, from 2011 to 2013. Within that project, a number of language resources for Icelandic were collected, enhanced, and made available, both through META-SHARE and through a local website, málföng.is (málföng being a neologism for `language resources'). Among the main deliveries of META-NET were the Language White Papers BIBREF10. The paper on Icelandic BIBREF11 highlighted the alarming status of Icelandic LT. Icelandic was among four languages that received the lowest score, “support is weak or non-existent” in all four areas that were evaluated.",
"The White Paper received considerable attention in Icelandic media and its results were discussed in the Icelandic Parliament. In 2014, the Parliament unanimously accepted a resolution where the Minister of Education, Science and Culture was given mandate to appoint an expert group which should come up with a long-term LT plan for Icelandic. The group delivered its report to the Minister in December 2014. The result was that a small LT Fund was established in 2015.",
"During the last years, a strong centre for speech technology has been established at RU, where development in speech recognition and synthesis has been ongoing since 2011. Acoustic data for speech recognition was collected and curated at RU BIBREF12, BIBREF13, BIBREF14 and a baseline speech recognition system for Icelandic was developed BIBREF15. Specialised speech recognisers have also been developed at RU for the National University Hospital and Althingi BIBREF16, BIBREF17, BIBREF18. A work on a baseline speech synthesis system for Icelandic has also been carried out at RU BIBREF19, BIBREF20.",
"The AMI has built a 1.3-billion-word corpus, the Icelandic Gigaword Corpus (IGC) BIBREF21, partially funded by the Icelandic Infrastructure Fund. Further, a private company, Miðeind Ltd., has developed a context-free parser BIBREF22 partially funded by the LT Fund.",
"In October 2016, the Minister of Education, Science and Culture appointed a special LT steering group, consisting of representatives from the Ministry, from academia, and from the Confederation of Icelandic Enterprise (CIE). The steering group commissioned three LT experts to work out a detailed five-year Project Plan for Icelandic LT. The experts delivered their proposals, Language Technology for Icelandic 2018–2022 – Project Plan BIBREF0 to the Minister in June 2017."
],
[
"The Icelandic Government decided soon after the publication of the report Language Technology for Icelandic 2018–2022 – Project Plan to use the report as a base for a five-year government funded LT programme for Icelandic. The self-owned foundation Almannarómur, founded in 2014 to support the development of Icelandic LT, was to be prepared to take over a role as a Centre of Icelandic LT and to elaborate on how the programme could be organised and executed to meet the goals defined in the report.",
"The Icelandic Ministry of Education, Science and Culture signed an agreement with Almannarómur in August 2018, giving Almannarómur officially the function of organising the execution of the LT programme for Icelandic. Following a European Tender published in March 2019, Almannarómur decided to make an agreement with a consortium of universities, institutions, associations, and private companies (nine in total) in Iceland (listed in Table TABREF6) to perform the research and development part of the programme. This Consortium for Icelandic LT (Samstarf um íslenska máltækni – SÍM) is a joint effort of LT experts in Iceland from academia and industry. SÍM is not a legal entity but builds the cooperation on a consortium agreement signed by all members. During the preparation of the project, an expert panel of three experienced researchers from Denmark, the Netherlands, and Estonia was established to oversee the project planning and to evaluate deliverables at predefined milestones during the project.",
"SÍM has created teams across the member organisations, each taking charge of a core project and/or defined subtasks. This way the best use of resources is ensured, since the team building is not restricted to one organisation per project. One project manager coordinates the work and handles communication and reporting to Almannarómur and the expert panel.",
"Besides the role of the executive of the research and development programme itself, Almannarómur will conduct communication between the executing parties and the local industry, as well as foreign companies and institutions. Together with the executing parties, Almannarómur will also host conferences and events to promote the programme and bring together interested parties."
],
[
"In this section, we describe the five core projects that have been defined in the Icelandic LT programme."
],
[
"As mentioned above, a number of language resources have been made available at the repository málföng. Most of these are now also available at the CLARIN-IS website and will be integrated into the CLARIN Virtual Language Observatory. Below we give a brief and non-exhaustive overview of language resources for Icelandic which will be developed in the programme.",
"Tagged corpora. The IGC BIBREF21 contains 1.3 billion running words, tagged and lemmatised. It is much bigger than previous tagged corpora, most notably the Icelandic Frequency Dictionary (IFD; Pind et al., 1991), which was the first morphologically tagged corpus of Icelandic texts, containing half a million words tokens from various texts, and the Tagged Icelandic Corpus (MÍM; Helgadóttir et al,. 2012), a balanced corpus of texts from the first decade of the 21st century, containing around 25 million tokens. A gold standard tagged corpus was created from a subset of MÍM BIBREF23. Some revisions of the morphosyntactic tagset used for tagging Icelandic texts will be done in the programme, and the gold standard updated accordingly.",
"We will update the IGC with new data from more sources and continue collecting data from rights holders who have given their permission for using their material. A new version will be released each year during the five-year programme.",
"Treebanks. The largest of the syntactically parsed treebanks that exist is the Icelandic Parsed Historical Corpus (IcePaHC; Wallenberg et al., 2011; Rögnvaldsson et al., 2011, 2012), which contains one million words from the 12th to the 21st century. The scheme used for the syntactic annotation is based on the Penn Parsed Corpora of Historical English BIBREF24, BIBREF25. On the other hand, no Universal Dependencies (UD)-treebanks are available for Icelandic. Within the programme, a UD-treebank will by built, based on IcePaHC, and extended with new material.",
"Morphological database. The Database of Icelandic Morphology (DIM; Bjarnadóttir et al., 2019) contains inflectional paradigms of about 287,000 lemmas. A part of the database, DMII-Core, only includes data in a prescriptive context and is suited for language learners, creating teaching material and other prescriptive uses. It consists of the inflection of approx. 50,000 words. We will extend it by reviewing ambiguous inflection forms. We will define format for data publication as the core will be available for use by a third party. For the sake of simplifying the process of adding material to the database and its maintenance, we will take advantage of the lexicon acquisition tool described in Section SECREF16 and adapt it for DIM.",
"Hyphenation tool. Hyphenation from one language to another often seems rather idiosyncratic but within one and the same language, such as Icelandic, such rules are often reasonably clear. A list of more than 200,000 Icelandic words with permissible hyphenations is available in the language resources repository. It will be expanded based on words from the DIM. A new hyphenation tool, trained on the extended list, will be built in the programme. The tool makes a suggestion for correct hyphenation possibilities of words that are not found on the hyphenation list.",
"Icelandic wordnet. The Icelandic wordnet BIBREF26, which contains 200,000 phrasemes of various kinds and about 100,000 compounds, is not a traditional dictionary as it analyses internal connections semantically and syntactically within Icelandic vocabulary. We will define a more appropriate data format and convert the wordnet data to that format. In addition, we will work on improving the wordnet itself by filling in gaps in various categories."
],
[
"A wide variety of NLP tools are to be developed or improved upon within the programme. It is of vital importance to develop quality NLP tools, as many tools often form a pipeline that analyses data and delivers the results to tools used by end users, and, in the pipeline, errors can accumulate and perpetuate.",
"When the programme started, there were a few available tools for Icelandic. IceNLP BIBREF27 is a suite of NLP tools containing modules for tokenisation, PoS-tagging, lemmatising, parsing and named entity recognition. Greynir BIBREF22 is a full parser which also includes a tokeniser and recognises some types of named entities. Nefnir BIBREF28 is a lemmatiser which uses suffix substitution rules, derived from the Database of Icelandic Morphology BIBREF29, giving results that outperform IceNLP. ABLTagger BIBREF30 is a PoS tagger that outperforms other taggers that have been trained for tagging Icelandic texts.",
"Some of these tools give good results, but can be improved upon. For other tasks, new tools need to be built. As part of the release process care will be taken to ensure all resulting software are up to high quality standards, and well documented to facilitate use by third parties. Where applicable, RESTful APIs will also be set up to further promote the usage of the products.",
"Tokeniser. A basic step in NLP is to segment text into units, normally sentences and tokens. Since any errors made at this stage will cascade through the process, it is important that the tokeniser is as accurate as possible. A tokeniser for Icelandic needs to be able to correctly recognises abbreviations, time units, dates, etc. It must also be adjustable and able to run using different settings, since its output must be adaptable to different projects and different uses.",
"Previously, two tokenisers have been built for Icelandic, one is a part of IceNLP and the other a part of Greynir. As Greynir is still in active development, it will be used as a base for the LT project's development. In order to be able to test the tokenisers' accuracy, a test set that takes different tokeniser settings into account will be developed.",
"PoS tagger. Precise PoS-tagging is important in many LT projects because information on word class or morphological features is often needed in later stages of an NLP pipeline. Improved tagging accuracy, thus often results in an improvement in the overall quality of LT software.",
"A number of PoS-taggers have been developed for Icelandic, with the best results achieved by a recent bidirectional LSTM tagging model BIBREF30. While developing PoS taggers for Icelandic further using state-of-the-art methods, we will also study and try to estimate how much accuracy can theoretically be reached in tagging a variety of Icelandic text styles, using the tag set chosen for the LT programme (see Section SECREF7).",
"Lemmatiser. A new lemmatiser for Icelandic, Nefnir, has recently been published BIBREF28. It has been shown to be quite accurate, although a standardised test set is not available to compare it to other lemmatisers, like Lemmald BIBREF31. Its main weakness is in lemmatising unknown words, which is a hard problem for inflected languages. We will study if its accuracy can be improved in that regard.",
"Parser. Three parsers have previously been developed for Icelandic. IceNLP includes a shallow parser based on a cascade of finite-state transducers BIBREF32. Greynir, on the other hand, fully parses sentences according to a hand-crafted context-free grammar. A parsing pipeline for Icelandic based on the IcePaHC corpus and the Berkeley-parser has also been released BIBREF33. No Universal Dependencies (UD) parser is available for Icelandic and no UD treebank, but in a project that started in 2019, independent of the LT programme, IcePaHC BIBREF34 will be converted to a UD treebank.",
"The IceNLP and Greynir parsers will be evaluated and either one of them or both developed further. We will also adapt a UD-parser to Icelandic UD-grammar.",
"Named entity recogniser. Some work has been carried out on named entity recognition for Icelandic. IceNLP contains a rule-based module that has achieved 71-79% accuracy and a recent tool based on a bidirectional LSTM BIBREF35 obtained an F1 score of 81.3%. There is also a named entity recogniser for proper names in Greynir, but its accuracy has not yet been evaluated. Within the programme, different training methods will be experimented with and evaluated, and the most promising tools evaluated further.",
"Semantic analysis. A variety of different tasks involve semantic analysis, including word-sense disambiguation (WSD), anaphora resolution, identifying co-references, analysing semantic similarity between compound verbs and phrases, and more.",
"We will work on these four aspects of semantic analysis listed above. In recent years, not much work has been carried out in this field for Icelandic. This part of the LT programme will thus start with researching the current state-of-the-art and defining realistic goals.",
"Lexicon acquisition tool. When constructing and maintaining lexical databases, such as DIM, the Icelandic wordnet or other related resources, it is vital to be able to systematically add neologies and words that are missing from the datasets, especially those commonly used in the language. Within the LT programme a flexible lexicon acquisition tool will be developed. It will be able to identify and collect unknown words and word forms, together with statistics, through structured lexical acquisition from the Icelandic Gigaword Corpus, which is constantly being updated, and other data sources in the same format."
],
[
"The main aim of the automatic speech recognition (ASR) project is to gather all necessary language and software resources to implement and build standard speech recognition systems for Icelandic. The project should enable developers to either research, develop or implement ASR without having to gather language resources. To achieve this goal, the project is divided into data gathering, recipe development, and software implementation and research.",
"Data gathering. The data gathering part of the project encompasses a wide variety of speech and transcript resources. A continuation of the Málrómur project BIBREF14 has already been implemented using Mozilla Common Voice. Here the aim is to double the size of the existing data set, get a more even distribution of speakers across geographic locations and age groups, and gather data from second language speakers. Additionally, radio and television transcripts are being gathered on a large scale and prepared for publication for ASR development. Conversations, queries and lectures will also be transcribed and published, and large open historical data sets will be aligned and prepared for publication.",
"Recipe development. ASR recipes for Icelandic will be developed further using more language resources BIBREF15 and specific application areas such as conversations, question answering and voice commands will be given a special attention. ASR systems that focus on teenagers, children and second language speakers are also within the scope of the project. These recipes are then used to create resources for smart-phone and web-based integration of ASR for Icelandic.",
"Software implementation and research. The research areas are chosen so to enhance the language resource development for Icelandic. A punctuation system for Icelandic will be analysed and implemented. Compound words are common in Icelandic and the language also has a relatively rich inflection structure so it is important to address those features for language modeling. Pronunciation analysis, speaker diarization and speech analysis will also be addressed especially for Icelandic, and acoustic modelling for children and teenagers receive attention in the project."
],
[
". The text-to-speech project will produce language resources that enable voice building for Icelandic.",
"Unit selection. Eight voices for unit-selection TTS will be recorded, with the aim of attaining diversity in age and dialect, with an equal number of male and female voices. The reason why unit-selection is chosen is to increase the likelihood that the project will produce useful and viable voices that can be used in addition to the two unit-selection voices that already exist for Icelandic.",
"Statistical parametric speech synthesis. Forty voices for statistical parametric speech synthesis (SPSS) will be recorded during the project. The plan is to publish open-source unit-selection and SPSS recipes with all necessary language resources so that programmers and researchers can continue to develop voices for Icelandic.",
"Suitable TTS voices for web-reading and smartphones will be developed within an open-source paradigm. This will allow the industry to use the voices developed within the project.",
"Research. The targeted research part of the project will facilitate the recipe development and software implementation. Quality assessment systems will be set up, text normalization for Icelandic will be developed fully, and intonation analysis for Icelandic will be implemented and applied to TTS."
],
[
"The Spell and Grammar Checking project will develop and make freely available, under open-source licensing, important data sets and tools for further establishment of automated text correction systems for Icelandic. The project makes extensive use of other resources that have been developed independently, or will be developed within the larger framework of the current LT Programme for Icelandic, including the Database of Icelandic Morphology BIBREF29, the Greynir system BIBREF22, and the Icelandic Gigaword corpus BIBREF21. On the one hand, the project focuses on developing error corpora for Icelandic, and on the other, it focuses on creating a set of correction tools. Challenges associated with richly inflected languages continue to be a matter of central interest in this project, like previous work on Icelandic spelling correction BIBREF36.",
"Text correction data. The data construction aspect of the project will develop three error corpora that can be used for quantitative analysis of errors in written Icelandic text. The error corpora will also serve as a foundation for training data-driven training correction systems. One corpus will focus on the written language of Icelandic speakers who are not known to have unusual language properties. Another corpus will focus on speakers who are in the process of learning Icelandic as a second language, and a third one will include data from dyslexic speakers.",
"Software development. The software development tasks of the spell and grammar checking project will build a working open source correction system whose development is informed by the analysis of the data sets created within the project. The spell and grammar checker will be based on the foundation for processing Icelandic text provided by the Greynir system."
],
[
"The purpose of the MT project is to build open-source systems capable of translating between Icelandic and English, in both directions, is$\\rightarrow $en and en$\\rightarrow $is. The goal is that the translation quality will be good enough to be useful for translators in specific domains. A part of the MT project is indeed to define in which translation domain most value can be gained with the systems.",
"Very limited work on MT for Icelandic has been carried out since the turn of the century. A prototype of an open-source is$\\rightarrow $en rule-based MT system has been developed using the Apertium platform BIBREF37, but this system is not currently in public use.",
"The AMI has recently compiled an English-Icelandic parallel corpus, ParIce, the first parallel corpus built for the purposes of MT research and development for Icelandic BIBREF38. The primary goal of the compilation of ParIce was to build a corpus large enough and of good enough quality for training useful MT systems. ParIce currently consists of 39 million Icelandic words in 3.5 million segment pairs. The largest parts of ParIce consists of film and TV subtitles from the Opus corpus BIBREF39, and texts from the European Medicines Agency document portal, included in the Tilde MODEL corpus BIBREF40.",
"Google Translate supports translations between Icelandic and various languages and is currently used widely by Icelanders and foreigners for obtaining understandable translations of given texts (the task of assimilation). The problem with Google's system is, however, that neither the source code nor the training data is publicly available. Moreover, the system is a general translation engine, but not developed specifically for translating texts in a particular domain.",
"Our MT project in the new LT programme consists of the following sub-parts:",
"Parallel data. Icelandic's rich morphology and relatively free word order is likely to demand large amount of training data in order to develop MT systems that produce adequate and fluent translations. The ParIce corpus currently consists of only 3.5 million sentence pairs which is rather small in relation to parallel corpora in general. The goal of this phase is to create an aligned and filtered parallel corpus of translated documents from the European Economic Area (EEA) domain (e.g. regulations and directives). As of 2017, around 7,000 documents were available in Icelandic with corresponding documents in English. The aim is to pair all accessible documents in the course of the project.",
"Back-translation. In order to augment the training data, back-translated texts will be used. Monolingual Icelandic texts will be selected and translated to English with one of the baseline system (see below). By doing so, more training data can be obtained for the en$\\rightarrow $is direction. An important part of using back-translated texts during training is filtering out translations that may otherwise lead to poor quality of the augmented part.",
"Baseline system. In this part, three baseline MT systems will be developed. First, a statistical phrase-based MT system based on Moses BIBREF41, second, a bidirectional LSTM model using the neural translation system OpenNMT BIBREF42, and, third, a system based on an attention-based neural network BIBREF43 using Tensor2Tensor. All the three systems will be trained on ParIce, and the additional data from tasks 1 and 2 above. Eventually, the goal is to choose the best performing MT-system for further development of MT for Icelandic.",
"MT interface. An API and a web user interface for the three baseline systems, mentioned in item 3 above, will be developed to give interested parties access to the systems under development, and to establish a testing environment in which members of the public can submit their own text. Thus, results from the three systems can be compared directly, as well as to the translations produced by Google Translate. Moreover, in this part, a crowd-sourcing mechanism will be developed, i.e. a functionality to allow users to submit improved translations back to the system for inclusion in the training corpus.",
"Pre- and postprocessing. Preprocessing in MT is the task of changing the training corpus/source text in some manner for the purpose of making the translation task easier or mark particular words/phrases that should not be translated. Postprocessing is then the task of restoring the generated target language to its normal form. An example of pre- and postprocessing in our project is the handling of named entities (NEs). NEs are found and matched within source and target sentence pairs in the training corpus, and replaced by placeholders with information about case and singular/plural number. NE-to-placeholder substitution is implemented in the input and placeholder-to-NE substitution in the output pipelines of the translation system."
],
[
"We have described a five-year, national LT programme for Icelandic. The goal is to make Icelandic useable in communication and interactions in the digital world. Further, to establish graduate and post-graduate education in LT in Iceland to enable the building of strong knowledge centres in LT in the country.",
"After studying somewhat similar national programmes in other European countries, we have defined the most important factors that in our opinion will help lead to the success of the programme: First, we have defined core projects that comprise the most important language resources and software tools necessary for various LT applications. Second, all deliverables will be published under as open licenses as possible and all resources and software will be easily accessible. The deliverables will be packaged and published for use in commercial applications, where applicable. Third, from the beginning of the programme, we encourage innovation projects from academia and industry through a competitive R&D fund, and fourth, constant communication with users and industry through conferences, events and direct interaction will be maintained, with the aim of putting deliverables to use in products as soon as possible. The cooperation between academia and industry is also reflected in the consortium of universities, institutions, associations, and private companies, performing the R&D work for all core projects.",
"The described plan is tied in with 20 years of LT history in Iceland, and despite the steep path to getting where we are, we have every reason to be optimistic about the future of Icelandic LT."
]
],
"section_name": [
"Introduction",
"Other European LT Programmes",
"Other European LT Programmes ::: The Netherlands",
"Other European LT Programmes ::: Spain",
"Other European LT Programmes ::: Estonia",
"History of Icelandic LT",
"Organisation of the Icelandic LT Programme 2019–2023",
"Core Projects",
"Core Projects ::: Language Resources",
"Core Projects ::: NLP Tools",
"Core Projects ::: Automatic Speech Recognition (ASR)",
"Core Projects ::: Speech Synthesis (TTS)",
"Core Projects ::: Spell and Grammar Checking",
"Core Projects ::: Machine Translation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6cbd0f38ced1632c00a127a393a7056d4e8c90c4",
"90b6c81a3ef7d7b3aed2368a488dd7d5757b4b73",
"90c8b22e6fc01a5a41db73fef41c2686c1d9432c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT",
"The Icelandic Ministry of Education, Science and Culture signed an agreement with Almannarómur in August 2018, giving Almannarómur officially the function of organising the execution of the LT programme for Icelandic. Following a European Tender published in March 2019, Almannarómur decided to make an agreement with a consortium of universities, institutions, associations, and private companies (nine in total) in Iceland (listed in Table TABREF6) to perform the research and development part of the programme. This Consortium for Icelandic LT (Samstarf um íslenska máltækni – SÍM) is a joint effort of LT experts in Iceland from academia and industry. SÍM is not a legal entity but builds the cooperation on a consortium agreement signed by all members. During the preparation of the project, an expert panel of three experienced researchers from Denmark, the Netherlands, and Estonia was established to oversee the project planning and to evaluate deliverables at predefined milestones during the project."
],
"extractive_spans": [],
"free_form_answer": "Creditinfo, Grammatek, Mideind and Tiro",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT",
"Following a European Tender published in March 2019, Almannarómur decided to make an agreement with a consortium of universities, institutions, associations, and private companies (nine in total) in Iceland (listed in Table TABREF6) to perform the research and development part of the programme."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Icelandic Ministry of Education, Science and Culture signed an agreement with Almannarómur in August 2018, giving Almannarómur officially the function of organising the execution of the LT programme for Icelandic. Following a European Tender published in March 2019, Almannarómur decided to make an agreement with a consortium of universities, institutions, associations, and private companies (nine in total) in Iceland (listed in Table TABREF6) to perform the research and development part of the programme. This Consortium for Icelandic LT (Samstarf um íslenska máltækni – SÍM) is a joint effort of LT experts in Iceland from academia and industry. SÍM is not a legal entity but builds the cooperation on a consortium agreement signed by all members. During the preparation of the project, an expert panel of three experienced researchers from Denmark, the Netherlands, and Estonia was established to oversee the project planning and to evaluate deliverables at predefined milestones during the project.",
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT"
],
"extractive_spans": [],
"free_form_answer": "The Árni Magnússon Instit. for Icelandic Studies, Reykjavik University (RU), University of Iceland (UI), RÚV, Creditinfo, The Association of the Visually Impaired, Grammatek, Miðeind. Tiro",
"highlighted_evidence": [
"Following a European Tender published in March 2019, Almannarómur decided to make an agreement with a consortium of universities, institutions, associations, and private companies (nine in total) in Iceland (listed in Table TABREF6) to perform the research and development part of the programme. ",
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT"
],
"extractive_spans": [],
"free_form_answer": "Crediyinfo, Grammatek, \nMideind,\nTiro",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Members of the SÍM consortium for Icelandic LT"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"66b0f741e4fbd06f8bdda25c5e017238657a100f",
"77dffa78c2abea2e7ede80d7b656e6f49f64dffb",
"84a0aeaf2d358e241fa70a1a9145847cb324d248"
],
"answer": [
{
"evidence": [
"As mentioned above, a number of language resources have been made available at the repository málföng. Most of these are now also available at the CLARIN-IS website and will be integrated into the CLARIN Virtual Language Observatory. Below we give a brief and non-exhaustive overview of language resources for Icelandic which will be developed in the programme.",
"We will update the IGC with new data from more sources and continue collecting data from rights holders who have given their permission for using their material. A new version will be released each year during the five-year programme.",
"Treebanks. The largest of the syntactically parsed treebanks that exist is the Icelandic Parsed Historical Corpus (IcePaHC; Wallenberg et al., 2011; Rögnvaldsson et al., 2011, 2012), which contains one million words from the 12th to the 21st century. The scheme used for the syntactic annotation is based on the Penn Parsed Corpora of Historical English BIBREF24, BIBREF25. On the other hand, no Universal Dependencies (UD)-treebanks are available for Icelandic. Within the programme, a UD-treebank will by built, based on IcePaHC, and extended with new material.",
"Morphological database. The Database of Icelandic Morphology (DIM; Bjarnadóttir et al., 2019) contains inflectional paradigms of about 287,000 lemmas. A part of the database, DMII-Core, only includes data in a prescriptive context and is suited for language learners, creating teaching material and other prescriptive uses. It consists of the inflection of approx. 50,000 words. We will extend it by reviewing ambiguous inflection forms. We will define format for data publication as the core will be available for use by a third party. For the sake of simplifying the process of adding material to the database and its maintenance, we will take advantage of the lexicon acquisition tool described in Section SECREF16 and adapt it for DIM."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Below we give a brief and non-exhaustive overview of language resources for Icelandic which will be developed in the programme.",
"We will update the IGC with new data from more sources and continue collecting data from rights holders who have given their permission for using their material.",
"Within the programme, a UD-treebank will by built, based on IcePaHC, and extended with new material.",
"We will extend it by reviewing ambiguous inflection forms."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The focus of the programme will be on the development of text and speech-based language resources, on the development of core natural language processing (NLP) tools like tokenisers, taggers and parsers, and finally, to publish open-source software in the areas of speech recognition, speech synthesis, machine translation, and spell and grammar checking. All deliverables of the programme will be published under open licenses, to encourage use of resources and software in commercial products."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The focus of the programme will be on the development of text and speech-based language resources, on the development of core natural language processing (NLP) tools like tokenisers, taggers and parsers, and finally, to publish open-source software in the areas of speech recognition, speech synthesis, machine translation, and spell and grammar checking."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"As mentioned above, a number of language resources have been made available at the repository málföng. Most of these are now also available at the CLARIN-IS website and will be integrated into the CLARIN Virtual Language Observatory. Below we give a brief and non-exhaustive overview of language resources for Icelandic which will be developed in the programme.",
"After studying somewhat similar national programmes in other European countries, we have defined the most important factors that in our opinion will help lead to the success of the programme: First, we have defined core projects that comprise the most important language resources and software tools necessary for various LT applications. Second, all deliverables will be published under as open licenses as possible and all resources and software will be easily accessible. The deliverables will be packaged and published for use in commercial applications, where applicable. Third, from the beginning of the programme, we encourage innovation projects from academia and industry through a competitive R&D fund, and fourth, constant communication with users and industry through conferences, events and direct interaction will be maintained, with the aim of putting deliverables to use in products as soon as possible. The cooperation between academia and industry is also reflected in the consortium of universities, institutions, associations, and private companies, performing the R&D work for all core projects."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Below we give a brief and non-exhaustive overview of language resources for Icelandic which will be developed in the programme.",
"Second, all deliverables will be published under as open licenses as possible and all resources and software will be easily accessible. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"4103e2c2d674ddb6e42b94f6553448f923323392",
"ce2ed1b8d2524776003c099fbcc63ef040693f14"
],
"answer": [
{
"evidence": [
"Previously, two tokenisers have been built for Icelandic, one is a part of IceNLP and the other a part of Greynir. As Greynir is still in active development, it will be used as a base for the LT project's development. In order to be able to test the tokenisers' accuracy, a test set that takes different tokeniser settings into account will be developed.",
"Software development. The software development tasks of the spell and grammar checking project will build a working open source correction system whose development is informed by the analysis of the data sets created within the project. The spell and grammar checker will be based on the foundation for processing Icelandic text provided by the Greynir system.",
"Software implementation and research. The research areas are chosen so to enhance the language resource development for Icelandic. A punctuation system for Icelandic will be analysed and implemented. Compound words are common in Icelandic and the language also has a relatively rich inflection structure so it is important to address those features for language modeling. Pronunciation analysis, speaker diarization and speech analysis will also be addressed especially for Icelandic, and acoustic modelling for children and teenagers receive attention in the project."
],
"extractive_spans": [],
"free_form_answer": "A lot of new software will be developed in all areas of the programme, some will be extensions of already available Greynir software.",
"highlighted_evidence": [
"As Greynir is still in active development, it will be used as a base for the LT project's development.",
"The spell and grammar checker will be based on the foundation for processing Icelandic text provided by the Greynir system.",
"Software implementation and research. The research areas are chosen so to enhance the language resource development for Icelandic. A punctuation system for Icelandic will be analysed and implemented."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"When the programme started, there were a few available tools for Icelandic. IceNLP BIBREF27 is a suite of NLP tools containing modules for tokenisation, PoS-tagging, lemmatising, parsing and named entity recognition. Greynir BIBREF22 is a full parser which also includes a tokeniser and recognises some types of named entities. Nefnir BIBREF28 is a lemmatiser which uses suffix substitution rules, derived from the Database of Icelandic Morphology BIBREF29, giving results that outperform IceNLP. ABLTagger BIBREF30 is a PoS tagger that outperforms other taggers that have been trained for tagging Icelandic texts.",
"Previously, two tokenisers have been built for Icelandic, one is a part of IceNLP and the other a part of Greynir. As Greynir is still in active development, it will be used as a base for the LT project's development. In order to be able to test the tokenisers' accuracy, a test set that takes different tokeniser settings into account will be developed.",
"The IceNLP and Greynir parsers will be evaluated and either one of them or both developed further. We will also adapt a UD-parser to Icelandic UD-grammar.",
"Lexicon acquisition tool. When constructing and maintaining lexical databases, such as DIM, the Icelandic wordnet or other related resources, it is vital to be able to systematically add neologies and words that are missing from the datasets, especially those commonly used in the language. Within the LT programme a flexible lexicon acquisition tool will be developed. It will be able to identify and collect unknown words and word forms, together with statistics, through structured lexical acquisition from the Icelandic Gigaword Corpus, which is constantly being updated, and other data sources in the same format.",
"Software implementation and research. The research areas are chosen so to enhance the language resource development for Icelandic. A punctuation system for Icelandic will be analysed and implemented. Compound words are common in Icelandic and the language also has a relatively rich inflection structure so it is important to address those features for language modeling. Pronunciation analysis, speaker diarization and speech analysis will also be addressed especially for Icelandic, and acoustic modelling for children and teenagers receive attention in the project.",
"Software development. The software development tasks of the spell and grammar checking project will build a working open source correction system whose development is informed by the analysis of the data sets created within the project. The spell and grammar checker will be based on the foundation for processing Icelandic text provided by the Greynir system.",
"Baseline system. In this part, three baseline MT systems will be developed. First, a statistical phrase-based MT system based on Moses BIBREF41, second, a bidirectional LSTM model using the neural translation system OpenNMT BIBREF42, and, third, a system based on an attention-based neural network BIBREF43 using Tensor2Tensor. All the three systems will be trained on ParIce, and the additional data from tasks 1 and 2 above. Eventually, the goal is to choose the best performing MT-system for further development of MT for Icelandic.",
"MT interface. An API and a web user interface for the three baseline systems, mentioned in item 3 above, will be developed to give interested parties access to the systems under development, and to establish a testing environment in which members of the public can submit their own text. Thus, results from the three systems can be compared directly, as well as to the translations produced by Google Translate. Moreover, in this part, a crowd-sourcing mechanism will be developed, i.e. a functionality to allow users to submit improved translations back to the system for inclusion in the training corpus."
],
"extractive_spans": [
"IceNLP",
"Greynir ",
"Nefnir ",
"ABLTagger",
"a flexible lexicon acquisition tool",
"A punctuation system for Icelandic ",
" open source correction system",
"a statistical phrase-based MT system ",
" a bidirectional LSTM model using the neural translation system OpenNMT",
"a system based on an attention-based neural network",
"An API and a web user interface"
],
"free_form_answer": "",
"highlighted_evidence": [
"IceNLP BIBREF27 is a suite of NLP tools containing modules for tokenisation, PoS-tagging, lemmatising, parsing and named entity recognition. Greynir BIBREF22 is a full parser which also includes a tokeniser and recognises some types of named entities. Nefnir BIBREF28 is a lemmatiser which uses suffix substitution rules, derived from the Database of Icelandic Morphology BIBREF29, giving results that outperform IceNLP. ABLTagger BIBREF30 is a PoS tagger that outperforms other taggers that have been trained for tagging Icelandic texts.",
" As Greynir is still in active development, it will be used as a base for the LT project's development. ",
"The IceNLP and Greynir parsers will be evaluated and either one of them or both developed further. ",
"Within the LT programme a flexible lexicon acquisition tool will be developed.",
"A punctuation system for Icelandic will be analysed and implemented. ",
"Software development. The software development tasks of the spell and grammar checking project will build a working open source correction system whose development is informed by the analysis of the data sets created within the project. The spell and grammar checker will be based on the foundation for processing Icelandic text provided by the Greynir system.",
"In this part, three baseline MT systems will be developed. First, a statistical phrase-based MT system based on Moses BIBREF41, second, a bidirectional LSTM model using the neural translation system OpenNMT BIBREF42, and, third, a system based on an attention-based neural network BIBREF43 using Tensor2Tensor. ",
"MT interface. An API and a web user interface for the three baseline systems, mentioned in item 3 above, will be developed to give interested parties access to the systems under development, and to establish a testing environment in which members of the public can submit their own text. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"8984663cd3f1a13d1728fc5c343c73f3236f2a6d",
"94add5d15b51cdc01461f42219d9283f1de0c852",
"f194100f440d2541fc1d619ad305d52a0427d0d9"
],
"answer": [
{
"evidence": [
"In recent years, there has been much international discussion on how the future of languages depends on them being usable in the digital world. This concern has led to a number of national LT programmes. We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"extractive_spans": [
"STEVIN programme in the Netherlands",
"Plan for the Advancement of Language Technology in Spain",
"Estonian LT programmes"
],
"free_form_answer": "",
"highlighted_evidence": [
"We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In recent years, there has been much international discussion on how the future of languages depends on them being usable in the digital world. This concern has led to a number of national LT programmes. We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"extractive_spans": [
"STEVIN programme in the Netherlands",
" Plan for the Advancement of Language Technology in Spain",
"Estonian LT programmes"
],
"free_form_answer": "",
"highlighted_evidence": [
"We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In recent years, there has been much international discussion on how the future of languages depends on them being usable in the digital world. This concern has led to a number of national LT programmes. We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"extractive_spans": [
"Netherlands",
"Spain",
"Estonian"
],
"free_form_answer": "",
"highlighted_evidence": [
"We studied three of these national programmes: the STEVIN programme in the Netherlands which ran between 2004 and 2011, the Plan for the Advancement of Language Technology in Spain, and, in particular, the Estonian LT programmes that have been running since 2006."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"8a0d105a9fa5d016b130bd5e2be1dd46c0affb97",
"9f96c6fbf16806ded186c2b37f05db6d003c9b16",
"f32658bf67d9f2f27584be22f67192c94e656487"
],
"answer": [
{
"evidence": [
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work. This initiative resulted in a few projects which laid the ground for future work in the field. The most important of these were a 25 million token, balanced, tagged corpus, a full-form database of Icelandic inflections, a training model for PoS taggers, an improved speech synthesiser, and an isolated word speech recogniser BIBREF8."
],
"extractive_spans": [],
"free_form_answer": "Around year 2000",
"highlighted_evidence": [
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work. This initiative resulted in a few projects which laid the ground for future work in the field. The most important of these were a 25 million token, balanced, tagged corpus, a full-form database of Icelandic inflections, a training model for PoS taggers, an improved speech synthesiser, and an isolated word speech recogniser BIBREF8."
],
"extractive_spans": [
"in the year 2000"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work. This initiative resulted in a few projects which laid the ground for future work in the field. The most important of these were a 25 million token, balanced, tagged corpus, a full-form database of Icelandic inflections, a training model for PoS taggers, an improved speech synthesiser, and an isolated word speech recogniser BIBREF8."
],
"extractive_spans": [
"in the year 2000",
"couple of LT resources and products were developed in the years leading up to that"
],
"free_form_answer": "",
"highlighted_evidence": [
"The history of Icelandic LT is usually considered to have begun around the turn of the century, even though a couple of LT resources and products were developed in the years leading up to that. Following the report of an expert group appointed by the Minister of Education, Science and Culture BIBREF7, the Icelandic Government launched a special LT Programme in the year 2000, with the aim of supporting institutions and companies to create basic resources for Icelandic LT work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What private companies are members of consortium?",
"Does programme plans gathering and open sourcing some large dataset for Icelandic language?",
"What concrete software is planned to be developed by the end of the programme?",
"What other national language technology programs are described in the paper?",
"When did language technology start in Iceland?"
],
"question_id": [
"31cba86bc45970337ba035ecf36d8954a9a5206a",
"3a25f82512d56d9e1ffba72f977f515ae3ba3cca",
"b59f3a58939f7ac007d3263a459c56ebefc4b49a",
"b4b7333805cb6fdde44907747887a971422dc298",
"871f7661f5a3da366b0b5feaa36f54fd3dedae8e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Members of the SÍM consortium for Icelandic LT"
],
"file": [
"4-Table1-1.png"
]
} | [
"What private companies are members of consortium?",
"What concrete software is planned to be developed by the end of the programme?",
"When did language technology start in Iceland?"
] | [
[
"2003.09244-Organisation of the Icelandic LT Programme 2019–2023-1",
"2003.09244-4-Table1-1.png"
],
[
"2003.09244-Core Projects ::: NLP Tools-1",
"2003.09244-Core Projects ::: Machine Translation-8",
"2003.09244-Core Projects ::: NLP Tools-13",
"2003.09244-Core Projects ::: Spell and Grammar Checking-2",
"2003.09244-Core Projects ::: Automatic Speech Recognition (ASR)-3",
"2003.09244-Core Projects ::: Machine Translation-7",
"2003.09244-Core Projects ::: NLP Tools-9",
"2003.09244-Core Projects ::: NLP Tools-4"
],
[
"2003.09244-History of Icelandic LT-0"
]
] | [
"Crediyinfo, Grammatek, \nMideind,\nTiro",
"A lot of new software will be developed in all areas of the programme, some will be extensions of already available Greynir software.",
"Around year 2000"
] | 124 |
1908.07491 | Controversy in Context | With the growing interest in social applications of Natural Language Processing and Computational Argumentation, a natural question is how controversial a given concept is. Prior works relied on Wikipedia's metadata and on content analysis of the articles pertaining to a concept in question. Here we show that the immediate textual context of a concept is strongly indicative of this property, and, using simple and language-independent machine-learning tools, we leverage this observation to achieve state-of-the-art results in controversiality prediction. In addition, we analyze and make available a new dataset of concepts labeled for controversiality. It is significantly larger than existing datasets, and grades concepts on a 0-10 scale, rather than treating controversiality as a binary label. | {
"paragraphs": [
[
"Indicating that a web page is controversial, or disputed - for example, in a search result - facilitates an educated consumption of the information therein, suggesting the content may not represent the “full picture”. Here, we consider the problem of estimating the level of controversiality associated with a given Wikipedia concept (title). We demonstrate that the textual contexts in which the concept is referenced can be leveraged to facilitate this.",
"The definition of which concepts are controversial is controversial by itself; an accurate definition of this elusive notion attracted the attention of researchers from various fields, see for example some recent attempts in BIBREF0, BIBREF1, BIBREF2.",
"Most people would agree, for example, that Global warming is a controversial concept, whereas Summer is not. However, the concept Pollution may be seen as neutral by some, yet controversial by others, who associate it with environmental debates. In other words, different people may have different opinions, potentially driven by different contexts salient in their mind. Yet, as reported in the sequel, an appreciable level of agreement can be reached, even without explicit context.",
"Focusing here on Wikipedia concepts, we adopt as an initial “ground truth” the titles listed on the Wikipedia list of controversial issues, which is curated based on so-called “edit wars”. We then manually annotate a set of Wikipedia titles which are locked for editing, and evaluate our system on this much larger and more challenging dataset.",
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
[
"Analysis of controversy in Wikipedia, online news and social media has attracted considerable attention in recent years. Exploiting the collaborative structure of Wikipedia, estimators of the level of controversy in a Wikipedia article were developed based on the edit-history of the article BIBREF0, BIBREF3. Along these lines, BIBREF4 detect controversy based on mutual reverts, bi-polarity in the collaboration network, and mutually-reinforced scores for editors and articles. Similarly, BIBREF1 classify whether a Wikipedia page is controversial through the combined evaluation of the topically neighboring set of pages.",
"Content analysis of controversial Wikipedia articles has been used to evaluate the level of controversy of other documents (e.g., web pages) by mapping them to related Wikipedia articles BIBREF5. BIBREF6 further build a language model, which enhances predictions made by existing classifiers, by inferring word probabilities from Wikipedia articles prominent in Wikipedia controversy features (mainly signals in edit history as discussed above) and from articles retrieved by manually selected query terms, believed to indicate controversy.",
"BIBREF7 detect controversy in news items by inspecting terms with excessive frequency in contexts containing sentiment words, and BIBREF8 study controversy in user comments of news articles using lexicons. Finally, BIBREF9 suggest that controversy is not a universal but rather a community-related concept, and, therefore, should be studied in context.",
"Here we measure a concept's controversiality from the explicit sentence-level context in which it is mentioned. In this, our approach is reminiscent of BIBREF10, who suggest a similar approach to detect abstract concepts."
],
[
"We consider three datasets, two of which are a contribution of this work.",
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I.",
"In all datasets, to obtain the sentence-level context of the concepts (positive and negative), we randomly select two equal-sized sets of Wikipedia sentences, that explicitly reference these concepts – i.e., that contain a hyperlink to the article titled by the concept. Importantly, in each sentence we mask the words that reference the concept – i.e., the surface form of the hyperlink leading to the concept – by a fixed, singular token, thus focusing solely on the context within which the concepts are mentioned."
],
[
"We employ three estimation schemes based on the textual contexts of concepts. The first relies on the context via pre-trained word embeddings of the concepts, which, in turn, are derived from the concepts' distributional properties in large samples of free texts. The other two schemes directly access the sentence-level contexts of the concepts.",
"Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows. Given a concept $c$, we extract all labeled concepts within a given radius $r$ (cosine similarity $0.3$). In one variant, $c$'s controversiality score is taken to be the fraction of controversial concepts among them. In another variant, labeled concepts are weighted by their cosine similarity to $c$.",
"Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts. The controversiality score of a concept $c$ for its occurrence in a sentence $s$, is taken as the posterior probability (according to the NB model) of $s$ to contain a controversial concept, given the words of $s$ excluding $c$, and taking a prior of $0.5$ for controversiality (as is the case in the datasets). The controversiality score of $c$ is then defined as the average score over all sentences referencing $c$.",
"Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained. The network receives as input a concept and a referring sentence, and outputs a score. The controversiality score of a concept is defined, as above, to be the average of these scores."
],
[
"We first examined the estimators in $k$-fold cross-validation scheme on the datasets I and II with $k=10$: the set of positive (controversial) concepts was split into 10 equal size sets, and the corresponding sentences were split accordingly. Each set was matched with similarly sized sets of negative (non-controversial) concepts and corresponding sentences. For each fold, a model was generated from the training sentences and used to score the test concepts. Scores were converted into a binary classification, as described in SECREF3, and accuracy was computed accordingly. Finally, the accuracy over the $k$ folds was averaged."
],
[
"In a preliminary task, we looked for words which may designate sentences associated with controversial concepts. To this end, we ranked the words appearing in positive sentences according to their information gain for this task. The top of the list comprises the following: that, sexual, people, movement, religious, issues, rights.",
"The Wikipedia list of controversial issues specifies categories for the listed concepts, like Politics and economics, Religion, History, and Sexuality (some concepts are associated with two or more categories). While some top-ranked words - that, people, issues - do seem to directly indicate controversiality BIBREF12, BIBREF13, others seem to have more to do with the category they belong to. Although these categories may indeed indicate controversiality, we consider this as an indirect or implicit indication, since it is more related to the controversial theme than to controversiality per-se.",
"To control for this effect, we performed a second experiment where we set the concepts from one category as the test set, and used the others for training (concepts associated with the excluded category are left out, regardless of whether they are also associated with one of the training categories). We did this for 5 categories: History, Politics and economics, Religion, Science, and Sexuality. This way, thematic relatedness observed in the training set should have little or no effect on correctly estimating the level of controversy associated of concepts in the test set, and may even “mislead” the estimator. We note that previous work on controversiality does not seem to address this issue, probably because the meta-data used is less sensitive to it."
],
[
"Table TABREF14 compares the accuracy reported on Dataset I for the methods suggested in BIBREF1, BIBREF4 with the accuracy obtained by our methods, as well as the latter on Dataset II, using 10-fold cross-validation in all cases. Table TABREF14 reports accuracy results of the more stringent analysis described in section SECREF13.",
"BIBREF4 review several controversy classifiers. The most accurate one, the Structure classifier, builds, among others, collaboration networks by considering high-level behavior of editors both in their individual forms, and their pairwise interactions. A collaboration profile containing these individual and pairwise features is built for each two interacting editors and is classified based on the agreement or disagreement relation between them. This intensive computation renders that classifier impractical. Table TABREF14 therefore also includes the most accurate classifier BIBREF4 consider practical.",
"As seen in Table TABREF14, for the usual 10-fold analysis the simple classifiers suggested here are on par with the best and more complex classifier reported in BIBREF4. Moreover, in the leave-one-category-out setting (Table TABREF14), accuracy indeed drops, but only marginally. We also observe the superiority of classifiers that directly access the context (NB and RNN) over classifiers that access it via word embedding (NN).",
"Table TABREF14 presents results obtained when models trained on Dataset I are applied to Dataset III. For this experiment we also included a BERT network BIBREF14 fine tuned on Dataset I. The Pearson correlation between the scores obtained via manual annotation and the scores generated by our automatic estimators suggests a rather strong linear relationship between the two. Accuracy was computed as for previous datasets, by taking here as positive examples the concepts receiving 6 or more positive votes, and as negative a random sample of 670 concepts out of the 1182 concepts receiving no positive vote."
],
[
"We demonstrated that the sentence–level context in which a concept appears is indicative of its controversiality. This follows BIBREF10, who show this for concept abstractness and suggest to explore further properties identifiable in this way. Importantly, we observed that this method may pick up signals which are not directly related to the property of interest. For example, since many controversial concepts have to do with religion, part of what this method may learn is thematic relatedness to religion. However, when controlling for this effect, the drop in accuracy is fairly small.",
"The major advantages of our estimation scheme are its simplicity and reliance on abundantly accessible features. At the same time, its accuracy is similar to state-of-the-art classifiers, which depend on complex meta-data, and rely on sophisticated - in some cases impractical - algorithmic techniques. Because the features herein are so simple, our estimators are convertible to any corpus, in any language, even of moderate size.",
"Recently, IBM introduced Project Debater BIBREF15, an AI system that debates humans on controversial topics. Training and evaluating such a system undoubtedly requires an extensive supply of such topics, which can be enabled by the automatic extraction methods suggested here as well as the new datasets."
],
[
"We are grateful to Shiri Dori-Hacohen and Hoda Sepehri Rad for sharing their data with us and giving us permission to use it."
]
],
"section_name": [
"Introduction",
"Related work",
"Estimating a concept's controversiality level ::: Datasets",
"Estimating a concept's controversiality level ::: Controversiality Estimators",
"Estimating a concept's controversiality level ::: Validation ::: Random @!START@$k$@!END@-fold",
"Estimating a concept's controversiality level ::: Validation ::: Leave one category out",
"Results",
"Conclusions",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"53c914bd4056c4ef4539d048ced41f2e40722c38",
"5bcbc663a4cbbf57a2c43d708bf0a1ab71ab2c36",
"6f3db40fb4f6c5a66f0727f3a9b4d023a06b065e"
],
"answer": [
{
"evidence": [
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To estimate the level of controversy associated with a Wikipedia concept, we propose to simply examine the words in the sentences in which the concept is referenced. Because a concept can often be found in multiple contexts, the estimation can be seen as reflecting the “general opinion” about it in the corpus. This contrasts previous works, which consider this a binary problem, and employ a complex combination of features extracted from Wikipedia's article contents and inter-references, and more extensively – from the rich edit history thereof."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"20ffc3d95856f8af453bb423773c48a4abc08d4c",
"7634819bec2ee5ef349e2bd9607e09c2821a3fcd",
"b31e3312b5b2fc83aeb56496c35ce188add4910d"
],
"answer": [
{
"evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial."
],
"extractive_spans": [
"480 concepts previously analyzed in BIBREF1, BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial."
],
"extractive_spans": [],
"free_form_answer": "Dataset I created and analyzed in BIBREF1, BIBREF4",
"highlighted_evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017).",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0b144510ef1c462e7877d178591bd769dec000d1",
"4c801c090fa7c693df909efc791fb80d76c34f7f",
"81f01234477455c20bacd5ebd3f0f41be3ce1c77"
],
"answer": [
{
"evidence": [
"We consider three datasets, two of which are a contribution of this work.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [
"608 controversial Wikipedia concepts",
"3561 concepts"
],
"free_form_answer": "",
"highlighted_evidence": [
"We consider three datasets, two of which are a contribution of this work.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy."
],
"extractive_spans": [],
"free_form_answer": "About 1216 in dataset II, 3561 in dataset III.",
"highlighted_evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "Dataset I - 480 concepts, 240 controversial examples, and 240 not-controversial examples.\nDataset II - 608 controversial concepts\nDataset III - 3561 controversial concepts",
"highlighted_evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. ",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. ",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"9048ef92974812e8d56f2e5646a8faa3772d2577",
"90668968ed6c411b2e596a07e3a44767a4365764",
"f1280392d6828ff108d1dcac9227e17517664986"
],
"answer": [
{
"evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"They were then crowd-annotated, with 10 or more annotators per concept."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"107cf459cf81c7bb69fcd2f64a1c7d8d966879ee",
"bfebe4ffea719f5522ae76edd287554ecd22188c",
"de23c42be3a336e0dd1ad6ab722834d4d9e8a788"
],
"answer": [
{
"evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. After that, annotations were normalized to controversiality scores on an integer scale of 0 - 10",
"highlighted_evidence": [
"The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "10 or more annotators marked whether a topic was controversial or not. The score was then normalized on an integer scale of 0-10.",
"highlighted_evidence": [
"They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [
"As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia.",
"For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random",
"The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10."
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"56f1b88b0b4d5f6261a823e0342d2fc9a8fa2e7f",
"5eca83c05f3a46d698fca54f0d20a0c13846c4df",
"bde45bbc400f94438eb8c3f619518cb7527346ae"
],
"answer": [
{
"evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [
"Wikipedia list of controversial issues",
"concepts whose Wikipedia pages are under edit protection"
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017).",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives. Over this dataset, we compare the methodology suggested here to those reported by BIBREF1, BIBREF4. As the latter report overall accuracy of their binary prediction, we convert our controversiality estimates to a binary classification by classifying the higher-scored half as controversial, and the lower half as non-controversial.",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [
"Wikipedia "
],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset I consists of 480 concepts previously analyzed in BIBREF1, BIBREF4. 240 are positive examples, titles from the Wikipedia list of controversial issues, and 240 are negative examples chosen at random and exclusive of the positives",
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017).",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. This leaves 608 controversial Wikipedia concepts. For negative examples, we follow BIBREF1, BIBREF4 and select a like number of concepts at random. Here too, since each concept only has a binary label, we convert our estimation into a binary classification, and report accuracy.",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial. They were then crowd-annotated, with 10 or more annotators per concept. The annotation instructions were: “Given a topic and its description on Wikipedia, mark if this is a topic that people are likely to argue about.”. Average pairwise kappa agreement on this task was 0.532. Annotations were normalized to controversiality scores on an integer scale of 0 - 10. We used this dataset for testing the models trained on Dataset I."
],
"extractive_spans": [],
"free_form_answer": "The topics from Wikipedia list of controversial issues that appear more than 50 times in Wikipedia, topics with their Wikipedia pages under edit protection.",
"highlighted_evidence": [
"Dataset II is based on a more recent version of the Wikipedia list of controversial issues (May 2017). As positive examples we take, from this list, all concepts which appear more than 50 times in Wikipedia. ",
"Dataset III is extracted from 3561 concepts whose Wikipedia pages are under edit protection, assuming that many of them are likely to be controversial."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"87e7e686dd42e6fea04e8725ea2933cab5f014cc",
"edf4662a048c96ae46d4010323cf5deb6d7951eb"
],
"answer": [
{
"evidence": [
"Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows. Given a concept $c$, we extract all labeled concepts within a given radius $r$ (cosine similarity $0.3$). In one variant, $c$'s controversiality score is taken to be the fraction of controversial concepts among them. In another variant, labeled concepts are weighted by their cosine similarity to $c$.",
"Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts. The controversiality score of a concept $c$ for its occurrence in a sentence $s$, is taken as the posterior probability (according to the NB model) of $s$ to contain a controversial concept, given the words of $s$ excluding $c$, and taking a prior of $0.5$ for controversiality (as is the case in the datasets). The controversiality score of $c$ is then defined as the average score over all sentences referencing $c$.",
"Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained. The network receives as input a concept and a referring sentence, and outputs a score. The controversiality score of a concept is defined, as above, to be the average of these scores."
],
"extractive_spans": [
"Nearest neighbors (NN) Estimator",
"Naive Bayes (NB) Estimator",
"Recurrent neural network (RNN)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows.",
"Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts.",
"Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF14 compares the accuracy reported on Dataset I for the methods suggested in BIBREF1, BIBREF4 with the accuracy obtained by our methods, as well as the latter on Dataset II, using 10-fold cross-validation in all cases. Table TABREF14 reports accuracy results of the more stringent analysis described in section SECREF13.",
"FLOAT SELECTED: Table 1: Accuracy obtained by controversiality classifiers with 10-fold cross validation."
],
"extractive_spans": [],
"free_form_answer": "Classifiers by Rad and Barbosa (2012) and by Dori-Hacohen et al. (2016).",
"highlighted_evidence": [
"Table TABREF14 compares the accuracy reported on Dataset I for the methods suggested in BIBREF1, BIBREF4 with the accuracy obtained by our methods, as well as the latter on Dataset II, using 10-fold cross-validation in all cases. ",
"FLOAT SELECTED: Table 1: Accuracy obtained by controversiality classifiers with 10-fold cross validation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"ad839550a87bd69fb3aa71b5240b1a09166e21e4",
"b1425cba33b0b11269d46b1fc2794755bdba754e"
],
"answer": [
{
"evidence": [
"Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows. Given a concept $c$, we extract all labeled concepts within a given radius $r$ (cosine similarity $0.3$). In one variant, $c$'s controversiality score is taken to be the fraction of controversial concepts among them. In another variant, labeled concepts are weighted by their cosine similarity to $c$.",
"Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts. The controversiality score of a concept $c$ for its occurrence in a sentence $s$, is taken as the posterior probability (according to the NB model) of $s$ to contain a controversial concept, given the words of $s$ excluding $c$, and taking a prior of $0.5$ for controversiality (as is the case in the datasets). The controversiality score of $c$ is then defined as the average score over all sentences referencing $c$.",
"Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained. The network receives as input a concept and a referring sentence, and outputs a score. The controversiality score of a concept is defined, as above, to be the average of these scores."
],
"extractive_spans": [
"nearest-neighbor estimator",
"Naive Bayes model",
"bidirectional RNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows. ",
"Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts.",
"Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"is this the first dataset with a grading scaling rather than binary?",
"what are the existing datasets for this task?",
"what is the size of the introduced dataset?",
"did they crowdsource annotations?",
"how was labeling done?",
"where does their dataset come from?",
"what are the baselines?",
"what tools did they use?"
],
"question_id": [
"acac0606aab83cae5d13047863c7af542d58e54c",
"2ee4ecf98ef7d02c9e4d103968098fe35f067bbb",
"82f8843b59668567bba09fc8f93963ca7d1fe107",
"376e8ed6e039e07c892c77b7525778178d56acb7",
"4de6bcddd46726bf58326304b0490fdb9e7e86ec",
"e831ce6c406bf5d1c493162732e1b320abb71b6f",
"634a071b13eb7139e77872ecfdc135a2eb2f89da",
"8861138891669a45de3955c802c55a37be717977"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Accuracy obtained by controversiality classifiers with 10-fold cross validation.",
"Table 2: Accuracy obtained by controversiality classifiers using leave-one-category-out cross validation.",
"Table 3: Pearson Correlation and Accuracy obtained by using the models from Dataset I on Dataset III."
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"what are the existing datasets for this task?",
"what is the size of the introduced dataset?",
"how was labeling done?",
"where does their dataset come from?",
"what are the baselines?"
] | [
[
"1908.07491-Estimating a concept's controversiality level ::: Datasets-2",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-1",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-3"
],
[
"1908.07491-Estimating a concept's controversiality level ::: Datasets-2",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-1",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-0",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-3"
],
[
"1908.07491-Estimating a concept's controversiality level ::: Datasets-2",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-3"
],
[
"1908.07491-Estimating a concept's controversiality level ::: Datasets-2",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-1",
"1908.07491-Estimating a concept's controversiality level ::: Datasets-3"
],
[
"1908.07491-4-Table1-1.png",
"1908.07491-Results-0",
"1908.07491-Estimating a concept's controversiality level ::: Controversiality Estimators-1",
"1908.07491-Estimating a concept's controversiality level ::: Controversiality Estimators-2",
"1908.07491-Estimating a concept's controversiality level ::: Controversiality Estimators-3"
]
] | [
"Dataset I created and analyzed in BIBREF1, BIBREF4",
"Dataset I - 480 concepts, 240 controversial examples, and 240 not-controversial examples.\nDataset II - 608 controversial concepts\nDataset III - 3561 controversial concepts",
"10 or more annotators marked whether a topic was controversial or not. The score was then normalized on an integer scale of 0-10.",
"The topics from Wikipedia list of controversial issues that appear more than 50 times in Wikipedia, topics with their Wikipedia pages under edit protection.",
"Classifiers by Rad and Barbosa (2012) and by Dori-Hacohen et al. (2016)."
] | 126 |
1805.11850 | Neural Joking Machine : Humorous image captioning | What is an effective expression that draws laughter from human beings? In the present paper, in order to consider this question from an academic standpoint, we generate an image caption that draws a"laugh"by a computer. A system that outputs funny captions based on the image caption proposed in the computer vision field is constructed. Moreover, we also propose the Funny Score, which flexibly gives weights according to an evaluation database. The Funny Score more effectively brings out"laughter"to optimize a model. In addition, we build a self-collected BoketeDB, which contains a theme (image) and funny caption (text) posted on"Bokete", which is an image Ogiri website. In an experiment, we use BoketeDB to verify the effectiveness of the proposed method by comparing the results obtained using the proposed method and those obtained using MS COCO Pre-trained CNN+LSTM, which is the baseline and idiot created by humans. We refer to the proposed method, which uses the BoketeDB pre-trained model, as the Neural Joking Machine (NJM). | {
"paragraphs": [
[
"Laughter is a special, higher-order function that only humans possess. In the analysis of laughter, as Wikipedia says, “Laughter is thought to be a shift of composition (schema)\", and laughter frequently occurs when there is a change from a composition of receiver. However, the viewpoint of laughter differs greatly depending on the position of the receiver. Therefore, the quantitative measurement of laughter is very difficult. Image Ogiri on web services such as \"Bokete\" BIBREF0 have recently appeared, where users post funny captions for thematic images and the captions are evaluated in an SNS-like environment. Users compete to obtain the greatest number of “stars”. Although quantification of laughter is considered to be a very difficult task, the correspondence between evaluations and images on Bokete allows us to treat laughter quantitatively. Image captioning is an active topic in computer vision, and we believe that humorous image captioning can be realized. The main contributions of the present paper are as follows:",
"BoketeDB",
"In the experimental section, we compare the proposed method based on Funny Score and BoketeDB pre-trained parameters with a baseline provided by MS COCO Pre-trained CNN+LSTM. We also compare the results of the NJM with funny captions provided by humans. In an evaluation by humans, the results provided by the proposed method were ranked lower than those provided by humans (22.59% vs. 67.99%) but were ranked higher than the baseline (9.41%). Finally, we show the generated funny captions for several images."
],
[
"Through the great research progress with deep neural networks (DNNs), the combination of a convolutional neural network and a recurrent neural network (CNN+RNN) is a successful model for both feature extraction and sequential processing BIBREF1 . Although there is no clear division, a CNN is often used for image processing, whereas an RNN is used for text processing. Moreover, these two domains are integrated. One successful application is image caption generation with CNN+LSTM (CNN+Long-Short Term Memory) BIBREF2 . This technique enables text to be automatically generated from an image input. However, we believe that image captions require human intuition and emotion. In the present paper, we help to guide an image caption has funny expression. In the following, we introduce related research on humorous image caption generation.",
"Wang et al. proposed an automatic “meme\" generation technique BIBREF3 . A meme is a funny image that often includes humorous text. Wang et al. statistically analyzed the correlation between memes and comments in order to automatically generate a meme by modeling probabilistic dependencies, such as those of images and text.",
"Chandrasekaran et al. conducted a humor enhancement of an image BIBREF4 by constructing an analyzer to quantify “visual humor” in an image input. They also constructed datasets including interesting (3,200) and non-interesting (3,200) human-labeled images to evaluate visual humor. The “funniness” of an image can be trained by defining five stages."
],
[
"We effectively train a funny caption generator by using the proposed Funny Score by weight evaluation. We adopt CNN+LSTM as a baseline, but we have been exploring an effective scoring function and database construction. We refer to the proposed method as the Neural Joking Machine (NJM), which is combined with the BoketeDB pre-trained model, as described in Section SECREF4 ."
],
[
"The flow of the proposed method is shown in Figure FIGREF2 . Basically, we adopted the CNN+LSTM model used in Show and Tell, but the CNN is replaced by ResNet-152 as an image feature extraction method. In the next subsection, we describe in detail how to calculate a loss function with a Funny Score. The function appropriately evaluates the number of stars and its “funniness”."
],
[
"The Bokete Ogiri website uses the number of stars to evaluate the degree of funniness of a caption. The user evaluates the “funniness” of a posted caption and assigns one to three stars to the caption. Therefore, funnier captions tend to be assigned a lot of stars. We focus on the number of stars in order to propose an effective training method, in which the Funny Score enables us to evaluate the funniness of a caption. Based on the results of our pre-experiment, a Funny Score of 100 stars is treated as a threshold. In other words, the Funny Score outputs a loss value INLINEFORM0 when #star is less than 100. In contrast, the Funny Score returns INLINEFORM1 when #star is over 100. The loss value INLINEFORM2 is calculated with the LSTM as an average of each mini-batch."
],
[
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one.",
"Comparison with MS COCO BIBREF5 . MS COCO contains a correspondence for each of 160,000 images to one of five types of captions. In comparison with MS COCO, BoketeDB has approximately half the number of the images and 124% the number of captions."
],
[
"We conducted evaluations to confirm the effectiveness of the proposed method. We describe the experimental method in Section SECREF11 , and the experimental results are presented in Section SECREF12 ."
],
[
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions."
],
[
"In this subsection, we present the experimental results along with a discussion. Table TABREF10 shows the experimental results of the questionnaire. A total of 16 personal questionnaires were completed. Table TABREF10 shows the percentages of captions of each rank for each method of caption generation considered herein. Captions generated by humans were ranked “funniest” 67.99% of the time, followed by the NJM at 22.59%. The baseline captions, STAIR caption, were ranked “funniest” 9.41% of the time. These results suggest that captions generated by the NJM are less funny than those generated by humans. However, the NJM is ranked much higher than STAIR caption."
],
[
"We are currently posting funny captions generated by the NJM to the Bokete Ogiri website in order to evaluate the proposed method. Here, we compare the proposed method with STAIR captions. As reported by Bokete users, the funny captions generated by STAIR caption averaged 1.71 stars, whereas the NJM averaged 3.23 stars. Thus, the NJM is funnier than the baseline STAIR caption according to Bokete users. We believe that this difference is the result of using (i) Funny Score to effectively train the generator regarding funny captions and (ii) the self-collected BoketeDB, which is a large-scale database for funny captions."
],
[
"Finally, we present the visual results in Figure FIGREF14 , which includes examples of funny captions obtained using NJM. Although the original caption is in Japanese, we also translated the captions into English. Enjoy!"
],
[
"In the present paper, we proposed a method by which to generate captions that draw laughter. We built the BoketeDB, which contains pairs comprising a theme (image) and a corresponding funny caption (text) posted on the Bokete Ogiri website. We effectively trained a funny caption generator with the proposed Funny Score by weight evaluation. Although we adopted CNN+LSTM as a baseline, we have been exploring an effective scoring function and database construction. The experiments of the present study suggested that the NJM was much funnier than the baseline STAIR caption."
]
],
"section_name": [
"Introduction",
"Related Research",
"Proposed Method",
"CNN+LSTM",
"Funny Score",
"BoketeDB",
"Experiment",
"Experimental contents",
"Questionnaire Results",
"Posting to Bokete",
"Visual results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0f3e939701493561142fa56ae5a407c69fa4176e",
"7724b0747279716cc9dee52eb2602011a3909950",
"c2e04ff84377b77ba7d7a3f67af865566b659621"
],
"answer": [
{
"evidence": [
"In this subsection, we present the experimental results along with a discussion. Table TABREF10 shows the experimental results of the questionnaire. A total of 16 personal questionnaires were completed. Table TABREF10 shows the percentages of captions of each rank for each method of caption generation considered herein. Captions generated by humans were ranked “funniest” 67.99% of the time, followed by the NJM at 22.59%. The baseline captions, STAIR caption, were ranked “funniest” 9.41% of the time. These results suggest that captions generated by the NJM are less funny than those generated by humans. However, the NJM is ranked much higher than STAIR caption.",
"We are currently posting funny captions generated by the NJM to the Bokete Ogiri website in order to evaluate the proposed method. Here, we compare the proposed method with STAIR captions. As reported by Bokete users, the funny captions generated by STAIR caption averaged 1.71 stars, whereas the NJM averaged 3.23 stars. Thus, the NJM is funnier than the baseline STAIR caption according to Bokete users. We believe that this difference is the result of using (i) Funny Score to effectively train the generator regarding funny captions and (ii) the self-collected BoketeDB, which is a large-scale database for funny captions.",
"We effectively train a funny caption generator by using the proposed Funny Score by weight evaluation. We adopt CNN+LSTM as a baseline, but we have been exploring an effective scoring function and database construction. We refer to the proposed method as the Neural Joking Machine (NJM), which is combined with the BoketeDB pre-trained model, as described in Section SECREF4 .",
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions."
],
"extractive_spans": [],
"free_form_answer": "NJM vas selected as the funniest caption among the three options 22.59% of the times, and NJM captions posted to Bokete averaged 3.23 stars",
"highlighted_evidence": [
"In this subsection, we present the experimental results along with a discussion. Table TABREF10 shows the experimental results of the questionnaire. A total of 16 personal questionnaires were completed. Table TABREF10 shows the percentages of captions of each rank for each method of caption generation considered herein. Captions generated by humans were ranked “funniest” 67.99% of the time, followed by the NJM at 22.59%. The baseline captions, STAIR caption, were ranked “funniest” 9.41% of the time. These results suggest that captions generated by the NJM are less funny than those generated by humans. However, the NJM is ranked much higher than STAIR caption.",
"We are currently posting funny captions generated by the NJM to the Bokete Ogiri website in order to evaluate the proposed method. Here, we compare the proposed method with STAIR captions. As reported by Bokete users, the funny captions generated by STAIR caption averaged 1.71 stars, whereas the NJM averaged 3.23 stars.",
"We effectively train a funny caption generator by using the proposed Funny Score by weight evaluation. We adopt CNN+LSTM as a baseline, but we have been exploring an effective scoring function and database construction. We refer to the proposed method as the Neural Joking Machine (NJM), which is combined with the BoketeDB pre-trained model, as described in Section SECREF4 .",
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1. Comparison of the output results: The “Human” row indicates captions provided by human users and was ranked highest on the Bokete website. The “NJM” row indicates the results of applying the proposed model based of Funny Score and BoketeDB. The “STAIR caption” row indicates the results provided by Japanese translation of MS COCO."
],
"extractive_spans": [],
"free_form_answer": "It obtained a score of 22.59%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. Comparison of the output results: The “Human” row indicates captions provided by human users and was ranked highest on the Bokete website. The “NJM” row indicates the results of applying the proposed model based of Funny Score and BoketeDB. The “STAIR caption” row indicates the results provided by Japanese translation of MS COCO."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this subsection, we present the experimental results along with a discussion. Table TABREF10 shows the experimental results of the questionnaire. A total of 16 personal questionnaires were completed. Table TABREF10 shows the percentages of captions of each rank for each method of caption generation considered herein. Captions generated by humans were ranked “funniest” 67.99% of the time, followed by the NJM at 22.59%. The baseline captions, STAIR caption, were ranked “funniest” 9.41% of the time. These results suggest that captions generated by the NJM are less funny than those generated by humans. However, the NJM is ranked much higher than STAIR caption."
],
"extractive_spans": [],
"free_form_answer": "Captions generated by NJM were ranked \"funniest\" 22.59% of the time.",
"highlighted_evidence": [
"Captions generated by humans were ranked “funniest” 67.99% of the time, followed by the NJM at 22.59%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0c0c9d2c5fc4c144d6fbc1f0e875219f6407ed06",
"0e46e2a2f1177666a9579aee6d73cff0a48d71a0",
"9ba24b8216f1e9cad0783dcfb099135aeb3700cb"
],
"answer": [
{
"evidence": [
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions."
],
"extractive_spans": [],
"free_form_answer": "The captions are ranked by humans in order of \"funniness\".",
"highlighted_evidence": [
"We use a questionnaire as the evaluation method.",
"The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions."
],
"extractive_spans": [
"a questionnaire"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions.",
"We are currently posting funny captions generated by the NJM to the Bokete Ogiri website in order to evaluate the proposed method. Here, we compare the proposed method with STAIR captions. As reported by Bokete users, the funny captions generated by STAIR caption averaged 1.71 stars, whereas the NJM averaged 3.23 stars. Thus, the NJM is funnier than the baseline STAIR caption according to Bokete users. We believe that this difference is the result of using (i) Funny Score to effectively train the generator regarding funny captions and (ii) the self-collected BoketeDB, which is a large-scale database for funny captions."
],
"extractive_spans": [],
"free_form_answer": "With a questionnaire asking subjects to rank methods according to its \"funniness\". Also, by posting the captions to Bokete to evaluate them by received stars",
"highlighted_evidence": [
"Here, we describe the experimental method used to validate the effectiveness of the NJM. We compare the proposed method with two other methods of generating funny captions: 1) human generated captions, which are highly ranked on Bokete (indicated by “Human\" in Table TABREF10 ), and 2) Japanese image caption generation using CNN+LSTM pre-trained by STAIR caption BIBREF7 . Based on the captions provided by MS COCO, the STAIR caption is translated from English to Japanese (indicated by “STAIR caption” in Table TABREF10 ). We use a questionnaire as the evaluation method. We selected a total of 30 themes from the Bokete Ogiri website that included “people”, “two or more people”, “animals”, “landscape”, “inorganics”, and “illustrations”. The questionnaire asks respondents to rank the captions provided by humans, the NJM, and STAIR caption in order of “funniness”. The questionnaire does not reveal the origins of the captions.",
"We are currently posting funny captions generated by the NJM to the Bokete Ogiri website in order to evaluate the proposed method. Here, we compare the proposed method with STAIR captions. As reported by Bokete users, the funny captions generated by STAIR caption averaged 1.71 stars, whereas the NJM averaged 3.23 stars."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"3aede2ae132eb6a675aae77f40615d9be971b917",
"945e9e9105758091b1d69ef3dfd7bf4bb3f47137",
"dffef7b42d03bd327036165d7d48e2631035cc69"
],
"answer": [
{
"evidence": [
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one."
],
"extractive_spans": [
"999,571 funny captions for 70,981 images"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one."
],
"extractive_spans": [
" 999,571 funny captions for 70,981 images"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have downloaded pairs of images and funny captions in order to construct a Bokete Database (BoketeDB). As of March 2018, 60M funny captions and 3.4M images have been posted on the Bokete Ogiri website. In the present study, we consider 999,571 funny captions for 70,981 images. A number of pair between image and funny caption is posted in temporal order on the Ogiri website Bokete. We collected images and funny captions to make corresponding image and caption pairs. Thus, we obtained a database for generating funny captions like an image caption one."
],
"extractive_spans": [],
"free_form_answer": "999571 captions for 70981 images.",
"highlighted_evidence": [
" In the present study, we consider 999,571 funny captions for 70,981 images."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f5ad0941473be2367b5c12714496aabfb96924c8",
"f77d318eeb37dfbc3123433b29b36c597631b361"
],
"answer": [
{
"evidence": [
"The Bokete Ogiri website uses the number of stars to evaluate the degree of funniness of a caption. The user evaluates the “funniness” of a posted caption and assigns one to three stars to the caption. Therefore, funnier captions tend to be assigned a lot of stars. We focus on the number of stars in order to propose an effective training method, in which the Funny Score enables us to evaluate the funniness of a caption. Based on the results of our pre-experiment, a Funny Score of 100 stars is treated as a threshold. In other words, the Funny Score outputs a loss value INLINEFORM0 when #star is less than 100. In contrast, the Funny Score returns INLINEFORM1 when #star is over 100. The loss value INLINEFORM2 is calculated with the LSTM as an average of each mini-batch."
],
"extractive_spans": [],
"free_form_answer": "Based on the number of stars users assign funny captions, an LSTM calculates the loss value L as an average of each mini-batch and returns L when the number of stars is less than 100, otherwise L-1.0",
"highlighted_evidence": [
"The Bokete Ogiri website uses the number of stars to evaluate the degree of funniness of a caption. The user evaluates the “funniness” of a posted caption and assigns one to three stars to the caption. Therefore, funnier captions tend to be assigned a lot of stars. We focus on the number of stars in order to propose an effective training method, in which the Funny Score enables us to evaluate the funniness of a caption. Based on the results of our pre-experiment, a Funny Score of 100 stars is treated as a threshold. In other words, the Funny Score outputs a loss value INLINEFORM0 when #star is less than 100. In contrast, the Funny Score returns INLINEFORM1 when #star is over 100. The loss value INLINEFORM2 is calculated with the LSTM as an average of each mini-batch."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Bokete Ogiri website uses the number of stars to evaluate the degree of funniness of a caption. The user evaluates the “funniness” of a posted caption and assigns one to three stars to the caption. Therefore, funnier captions tend to be assigned a lot of stars. We focus on the number of stars in order to propose an effective training method, in which the Funny Score enables us to evaluate the funniness of a caption. Based on the results of our pre-experiment, a Funny Score of 100 stars is treated as a threshold. In other words, the Funny Score outputs a loss value INLINEFORM0 when #star is less than 100. In contrast, the Funny Score returns INLINEFORM1 when #star is over 100. The loss value INLINEFORM2 is calculated with the LSTM as an average of each mini-batch."
],
"extractive_spans": [],
"free_form_answer": "The funny score is L if the caption has fewer than 100 stars and 1-L if the caption has 100 or more stars, where L is the average loss value calculated with the LSTM on the mini-batch.",
"highlighted_evidence": [
"The user evaluates the “funniness” of a posted caption and assigns one to three stars to the caption.",
"In other words, the Funny Score outputs a loss value INLINEFORM0 when #star is less than 100. In contrast, the Funny Score returns INLINEFORM1 when #star is over 100. The loss value INLINEFORM2 is calculated with the LSTM as an average of each mini-batch."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What is the performance of NJM?",
"How are the results evaluated?",
"How big is the self-collected corpus?",
"How is the funny score calculated?"
],
"question_id": [
"267d70d9f3339c56831ea150d2213643fbc5129b",
"477da8d997ff87400c6aad19dcc74f8998bc89c3",
"4485e32052741972877375667901f61e602ec4de",
"df4895c6ae426006e75c511a304d56d37c42a1c7"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1. Examples of funny captions generated by NJM from an image input.",
"Figure 2. Proposed CNN+LSTM architecture for funny caption generation.",
"Table 1. Comparison of the output results: The “Human” row indicates captions provided by human users and was ranked highest on the Bokete website. The “NJM” row indicates the results of applying the proposed model based of Funny Score and BoketeDB. The “STAIR caption” row indicates the results provided by Japanese translation of MS COCO.",
"Figure 3. Visual results obtain using the proposed NJM."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Figure3-1.png"
]
} | [
"What is the performance of NJM?",
"How are the results evaluated?",
"How big is the self-collected corpus?",
"How is the funny score calculated?"
] | [
[
"1805.11850-Questionnaire Results-0",
"1805.11850-Posting to Bokete-0",
"1805.11850-3-Table1-1.png",
"1805.11850-Experimental contents-0",
"1805.11850-Proposed Method-0"
],
[
"1805.11850-Posting to Bokete-0",
"1805.11850-Experimental contents-0"
],
[
"1805.11850-BoketeDB-0"
],
[
"1805.11850-Funny Score-0"
]
] | [
"Captions generated by NJM were ranked \"funniest\" 22.59% of the time.",
"With a questionnaire asking subjects to rank methods according to its \"funniness\". Also, by posting the captions to Bokete to evaluate them by received stars",
"999571 captions for 70981 images.",
"The funny score is L if the caption has fewer than 100 stars and 1-L if the caption has 100 or more stars, where L is the average loss value calculated with the LSTM on the mini-batch."
] | 127 |
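The Funny Score described in the record above applies the mini-batch average LSTM loss L differently depending on the number of Bokete stars: captions below the 100-star threshold keep the loss L, while captions at or above it receive a shifted value (the paper's placeholder formula is read as L - 1.0 by one annotator and as 1 - L by another). The short Python sketch below follows the L - 1.0 reading; the function name and signature are illustrative, not the authors' code.

```python
def funny_score_loss(batch_avg_loss, star_count, threshold=100):
    """Illustrative sketch of the Funny Score loss described above.

    batch_avg_loss: mini-batch average LSTM loss L (a plain float here).
    star_count: number of Bokete stars attached to the caption(s).
    Returns L unchanged below the star threshold, and L - 1.0 at or above it
    (the alternative reading quoted in the record is 1 - L).
    """
    return batch_avg_loss - 1.0 if star_count >= threshold else batch_avg_loss

# Hypothetical usage during training:
# loss = funny_score_loss(avg_lstm_loss, caption_stars)
```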
1710.06923 | Adapting general-purpose speech recognition engine output for domain-specific natural language question answering | Speech-based natural language question-answering interfaces to enterprise systems are gaining a lot of attention. General-purpose speech engines can be integrated with NLP systems to provide such interfaces. Usually, general-purpose speech engines are trained on large `general' corpus. However, when such engines are used for specific domains, they may not recognize domain-specific words well, and may produce erroneous output. Further, the accent and the environmental conditions in which the speaker speaks a sentence may induce the speech engine to inaccurately recognize certain words. The subsequent natural language question-answering does not produce the requisite results as the question does not accurately represent what the speaker intended. Thus, the speech engine's output may need to be adapted for a domain before further natural language processing is carried out. We present two mechanisms for such an adaptation, one based on evolutionary development and the other based on machine learning, and show how we can repair the speech-output to make the subsequent natural language question-answering better. | {
"paragraphs": [
[
"Speech-enabled natural-language question-answering interfaces to enterprise application systems, such as Incident-logging systems, Customer-support systems, Marketing-opportunities systems, Sales data systems etc., are designed to allow end-users to speak-out the problems/questions that they encounter and get automatic responses. The process of converting human spoken speech into text is performed by an Automatic Speech Recognition (ASR) engine. While functional examples of ASR with enterprise systems can be seen in day-to-day use, most of these work under constraints of a limited domain, and/or use of additional domain-specific cues to enhance the speech-to-text conversion process. Prior speech-and-natural language interfaces for such purposes have been rather restricted to either Interactive Voice Recognition (IVR) technology, or have focused on building a very specialized speech engine with domain specific terminology that recognizes key-words in that domain through an extensively customized language model, and trigger specific tasks in the enterprise application system. This makes the interface extremely specialized, rather cumbersome and non-adaptable for other domains. Further, every time a new enterprise application requires a speech and natural language interface, one has to redevelop the entire interface again.",
"An alternative to domain-specific speech recognition engines has been to re-purpose general-purpose speech recognition engines, such as Google Speech API, IBM Watson Speech to text API which can be used across domains with natural language question answering systems. Such general-purpose automatic speech engines (gp-ASR) are deep trained on very large general corpus using deep neural network (DNN) techniques. The deep learnt acoustic and language models enhance the performance of a ASR. However, this comes with its own limitations. For freely spoken natural language sentences, the typical recognition accuracy achievable even for state-of-the-art speech recognition systems have been observed to be about 60% to 90% in real-world environments BIBREF0 . The recognition is worse if we consider factors such as domain-specific words, environmental noise, variations in accent, poor ability to express on the part of the user, or inadequate speech and language resources from the domain to train such speech recognition systems. The subsequent natural language processing, such as that in a question answering system, of such erroneously and partially recognized text becomes rather problematic, as the domain terms may be inaccurately recognized or linguistic errors may creep into the sentence. It is, hence, important to improve the accuracy of the ASR output text.",
"In this paper, we focus on the issues of using a readily available gp-ASR and adapting its output for domain-specific natural language question answering BIBREF1 . We present two mechanisms for adaptation, namely",
"We present the results of these two adaptation and gauge the usefulness of each mechanism. The rest of the paper is organized as follows, in Section SECREF2 we briefly describe the work done in this area which motivates our contribution. The main contribution of our work is captured in Section SECREF3 and we show the performance of our approach through experiments in Section SECREF4 . We conclude in Section SECREF5 ."
],
[
"Most work on ASR error detection and correction has focused on using confidence measures, generally called the log-likelihood score, provided by the speech recognition engine; the text with lower confidence is assumed to be incorrect and subjected to correction. Such confidence based methods are useful only when we have access to the internals of a speech recognition engine built for a specific domain. As mentioned earlier, use of domain-specific engine requires one to rebuild the interface every time the domain is updated, or a new domain is introduced. As mentioned earlier, our focus is to avoid rebuilding the interface each time the domain changes by using an existing ASR. As such our method is specifically a post-ASR system. A post-ASR system provides greater flexibility in terms of absorbing domain variations and adapting the output of ASR in ways that are not possible during training a domain-specific ASR system BIBREF2 .",
"Note that an erroneous ASR output text will lead to an equally (or more) erroneous interpretation by the natural language question-answering system, resulting in a poor performance of the overall QA system",
"Machine learning classifiers have been used in the past for the purpose of combining features to calculate a confidence score for error detection. Non-linguistic and syntactic knowledge for detection of errors in ASR output, using a support vector machine to combine non-linguistic features was proposed in BIBREF3 and Naive Bayes classifier to combine confidence scores at a word and utterance level, and differential scores of the alternative hypotheses was used in BIBREF4 Both BIBREF3 and BIBREF4 rely on the availability of confidence scores output by the ASR engine. A syllable-based noisy channel model combined with higher level semantic knowledge for post recognition error correction, independent of the internal confidence measures of the ASR engine is described in BIBREF5 . In BIBREF6 the authors propose a method to correct errors in spoken dialogue systems. They consider several contexts to correct the speech recognition output including learning a threshold during training to decide when the correction must be carried out in the context of a dialogue system. They however use the confidence scores associated with the output text to do the correction or not. The correction is carried using syntactic-semantic and lexical models to decide whether a recognition result is correct.",
"In BIBREF7 the authors proposes a method to detect and correct ASR output based on Microsoft N-Gram dataset. They use a context-sensitive error correction algorithm for selecting the best candidate for correction using the Microsoft N-Gram dataset which contains real-world data and word sequences extracted from the web which can mimic a comprehensive dictionary of words having a large and all-inclusive vocabulary.",
"In BIBREF8 the authors assume the availability of pronunciation primitive characters as the output of the ASR engine and then use domain-specific named entities to establish the context, leading to the correction of the speech recognition output. The patent BIBREF9 proposes a manual correction of the ASR output transcripts by providing visual display suggesting the correctness of the text output by ASR. Similarly, BIBREF10 propose a re-ranking and classification strategy based on logistic regression model to estimate the probability for choosing word alternates to display to the user in their framework of a tap-to-correct interface.",
"Our proposed machine learning based system is along the lines of BIBREF5 but with differences: (a) while they use a single feature (syllable count) for training, we propose the use of multiple features for training the Naive Bayes classifier, and (b) we do not perform any manual alignment between the ASR and reference text – this is done using an edit distance based technique for sentence alignment. Except for BIBREF5 all reported work in this area make use of features from the internals of the ASR engine for ASR text output error detection.",
"We assume the use of a gp-ASR in the rest of the paper. Though we use examples of natural language sentences in the form of queries or questions, it should be noted that the description is applicable to any conversational natural language sentence."
],
[
"In this paper we focus on question answering interfaces to enterprise systems, though our discussion is valid for any kind of natural language processing sentences that are not necessarily a query. For example, suppose we have a retail-sales management system domain, then end-users would be able to query the system through spoken natural language questions ( INLINEFORM0 ) such as INLINEFORM1 ",
"A perfect ASR would take INLINEFORM0 as the input and produce ( INLINEFORM1 ), namely, INLINEFORM2 ",
"We consider the situation where a ASR takes such a sentence ( INLINEFORM0 ) spoken by a person as input, and outputs an inaccurately recognized text ( INLINEFORM1 ) sentence. In our experiments, when the above question was spoken by a person and processed by a popular ASR engine such as Google Speech API, the output text sentence was ( INLINEFORM2 ) INLINEFORM3 ",
"Namely INLINEFORM0 ",
" It should be noted that an inaccurate output by the ASR engine maybe the result of various factors such as background noise, accent of the person speaking the sentence, the speed at which he or she is speaking the sentence, domain-specific words that are not part of popular vocabulary etc. The subsequent natural language question answering system cannot answer the above output sentence from its retail sales data. Thus the question we tackle here is – how do we adapt or repair the sentence ( INLINEFORM0 ) back to the original sentence ( INLINEFORM1 ) as intended by the speaker. Namely INLINEFORM2 ",
" We present two mechanisms for adaptation or repair of the ASR output, namely INLINEFORM0 , in this paper: (a) an evolutionary development based artificial development mechanism, and (b) a machine-learning mechanism."
],
[
"In the machine learning based mechanism of adaptation, we assume the availability of example pairs of INLINEFORM0 namely (ASR output, the actual transcription of the spoken sentence) for training. We further assume that such a machine-learnt model can help repair an unseen ASR output to its intended correct sentence. We address the following hypothesis",
"Using the information from past recorded errors and the corresponding correction, can we learn how to repair (and thus adapt to a new domain) the text after ASR?",
"Note that this is equivalent to, albiet loosely, learning the error model of a specific ASR. Since we have a small training set, we have used the Naive Bayes classifier that is known to perform well for small datasets with high bias and low variance. We have used the NLTK BIBREF11 Naive Bayes classifier in all our experiments.",
"Let INLINEFORM0 be the erroneous text (which is the ASR output), INLINEFORM1 the corresponding reference text (which is the textual representation of the spoken sentence) and INLINEFORM2 a feature extractor, such that DISPLAYFORM0 ",
"where DISPLAYFORM0 ",
"is a set of INLINEFORM0 features extracted from INLINEFORM1 . Suppose there are several pairs say ( INLINEFORM2 , INLINEFORM3 ) for INLINEFORM4 . Then we can derive INLINEFORM5 for each INLINEFORM6 using ( EQREF7 ). The probability that INLINEFORM7 belongs to the class INLINEFORM8 can be derived through the feature set INLINEFORM9 as follows. INLINEFORM10 ",
"where INLINEFORM0 is the apriori probability of the class INLINEFORM1 and INLINEFORM2 is the probability of occurrence of the features INLINEFORM3 in the class INLINEFORM4 , and INLINEFORM5 is the overall probability of the occurrence of the feature set INLINEFORM6 . Making naive assumption of independence in the features INLINEFORM7 we get DISPLAYFORM0 ",
"In our experiments, the domain specific reference text INLINEFORM0 was spoken by several people and the spoken speech was passed through a general purpose speech recognition engine (ASR) that produced a (possibly) erroneous hypothesis INLINEFORM1 . Each pair of reference and the ASR output (i.e. hypothesis) was then word aligned using edit distance, and the mismatching pairs of words were extracted as INLINEFORM2 pairs. For example, if we have the following spoken sentence: INLINEFORM3 ",
"and the corresponding true transcription INLINEFORM0 ",
"One of the corresponding ASR output INLINEFORM0 was INLINEFORM1 ",
"In this case the INLINEFORM0 pairs are (dear, beer) and (have, has). As another example consider that INLINEFORM1 was spoken but INLINEFORM2 was recognized by the ASR. INLINEFORM3 INLINEFORM4 ",
"Clearly, in this case the INLINEFORM0 pair is (than twenty, jewelry).",
"Let us assume two features, namely, INLINEFORM0 in ( EQREF7 ) is of dimension INLINEFORM1 . Let the two features be INLINEFORM2 . Then, for the INLINEFORM3 pair (than twenty, jewelry) we have INLINEFORM4 ",
"since the number of words in than twenty is 2 and than twenty contains 3 syllables. INLINEFORM0 in this case would be the probability that the number of words in the input are two ( INLINEFORM1 ) when the correction is jewelry. A third example is: INLINEFORM2 INLINEFORM3 ",
"Note that in this case the INLINEFORM0 pair is (peak sales, pixel).",
"Calculating thus the values of INLINEFORM0 for all reference corrections, INLINEFORM1 for all feature values, INLINEFORM2 for all the INLINEFORM3 features in INLINEFORM4 , we are in a position to calculate the RHS of ( EQREF9 ). When this trained classifier is given an erroneous text, features are extracted from this text and the repair works by replacing the erroneous word by a correction that maximizes ( EQREF9 ), INLINEFORM5 ",
"Namely, the INLINEFORM0 for which INLINEFORM1 is maximum."
],
[
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
[
"We downloaded this survey data and hand crafted a total of 293 textual questions BIBREF13 which could answer the survey data. A set of 6 people (L2 English) generated 50 queries each with the only constraint that these queries should be able to answer the survey data. In all a set of 300 queries were crafted of which duplicate queries were removed to leave 293 queries in all. Of these, we chose 250 queries randomly and distributed among 5 Indian speakers, who were asked to read aloud the queries into a custom-built audio data collecting application. So, in all we had access to 250 audio queries spoken by 5 different Indian speakers; each speaking 50 queries.",
"Each of these 250 audio utterances were passed through 4 different ASR engines, namely, Google ASR (Ga), Kaldi with US acoustic models (Ku), Kaldi with Indian Acoustic models (Ki) and PocketSphinx ASR (Ps). In particular, that audio utterances were in wave format (.wav) with a sampling rate of 8 kHz and 16 bit. In case of Google ASR (Ga), each utterance was first converted into .flac format using the utility sound exchange (sox) commonly available on Unix machines. The .flac audio files were sent to the cloud based Google ASR (Ga) one by one in a batch mode and the text string returned by Ga was stored. In all 7 utterances did not get any text output, presumably Ga was unable to recognize the utterance. For all the other 243 utterances a text output was received.",
"In case of the other ASR engines, namely, Kaldi with US acoustic models (Ku), Kaldi with Indian Acoustic models (Ki) and PocketSphinx ASR (Ps) we first took the queries corresponding to the 250 utterances and built a statistical language model (SLM) and a lexicon using the scripts that are available with PocketSphinx BIBREF14 and Kaldi BIBREF15 . This language model and lexicon was used with the acoustic model that were readily available with Kaldi and Ps. In case of Ku we used the American English acoustic models, while in case of Ki we used the Indian English acoustic model. In case of Ps we used the Voxforge acoustic models BIBREF16 . Each utterance was passed through Kaldi ASR for two different acoustic models to get INLINEFORM0 corresponding to Ku and Ki. Similarly all the 250 audio utterance were passed through the Ps ASR to get the corresponding INLINEFORM1 for Ps. A sample utterance and the output of the four engines is shown in Figure FIGREF12 .",
"Figure FIGREF11 and Table TABREF14 capture the performance of the different speech recognition engines. The performance of the ASR engines varied, with Ki performing the best with 127 of the 250 utterances being correctly recognized while Ps returned only 44 correctly recognized utterances (see Table TABREF14 , Column 4 named \"Correct\") of 250 utterances. The accuracy of the ASR varied widely. For instance, in case of Ps there were as many as 97 instances of the 206 erroneously recognized utterances which had an accuracy of less than 70%.",
"Note that the accuracy is computed as the number of deletions, insertions, substitutions that are required to convert the ASR output to the textual reference (namely, INLINEFORM0 ) and is a common metric used in speech literature BIBREF17 .",
"For all our analysis, we used only those utterances that had an accuracy 70% but less that INLINEFORM0 , namely, 486 instances (see Table TABREF14 , Figure FIGREF13 ). An example showing the same utterance being recognized by four different ASR engines is shown in Figure FIGREF12 . Note that we used INLINEFORM1 corresponding to Ga, Ki and Ku in our analysis (accuracy INLINEFORM2 ) and not INLINEFORM3 corresponding to Ps which has an accuracy of INLINEFORM4 only. This is based on our observation that any ASR output that is lower that INLINEFORM5 accurate is so erroneous that it is not possible to adapt and steer it towards the expected output.",
"The ASR output ( INLINEFORM0 ) are then given as input in the Evo-Devo and Machine Learning mechanism of adaptation."
],
[
"We ran our Evo-Devo mechanism with the 486 ASR sentences (see Table TABREF14 ) and measured the accuracy after each repair. On an average we have achieved about 5 to 10% improvements in the accuracy of the sentences. Fine-tuning the repair and fitness functions, namely Equation (), would probably yield much better performance accuracies. However, experimental results confirm that the proposed Evo-Devo mechanism is an approach that is able to adapt INLINEFORM0 to get closer to INLINEFORM1 . We present a snapshot of the experiments with Google ASR (Ga) and calculate accuracy with respect to the user spoken question as shown in Table TABREF16 .",
"Table TABREF16 clearly demonstrates the promise of the evo-devo mechanism for adaptation/repair. In our experiments we observed that the adaptation/repair of sub-parts in ASR-output ( INLINEFORM0 ) that most probably referred to domain terms occurred well and were easily repaired, thus contributing to increase in accuracy. For non-domain-specific linguistic terms the method requires one to build very good linguistic repair rules, without which the method could lead to a decrease in accuracy. One may need to fine-tune the repair, match and fitness functions for linguistic terms. However, we find the abstraction of evo-devo mechanism is very apt to use."
],
[
"In the machine learning technique of adaptation, we considers INLINEFORM0 pairs as the predominant entity and tests the accuracy of classification of errors.",
"In our experiment, we used a total of 570 misrecognition errors (for example, (dear, beer) and (have, has) derived from INLINEFORM0 or (than twenty, jewelry) derived from INLINEFORM1 ) in the 486 sentences. We performed 10-fold cross validation, each fold containing 513 INLINEFORM2 pairs for training and 57 pairs for testing, Note that we assume the erroneous words in the ASR output being marked by a human oracle, in the training as well as the testing set. Suppose the following example ( INLINEFORM3 ) occurs in the training set: INLINEFORM4 INLINEFORM5 ",
"The classifier is given the pair INLINEFORM0 (latest stills), cumulative sales} to the classifier. And if the following example occurs in the testing set ( INLINEFORM1 ), INLINEFORM2 INLINEFORM3 ",
"the trained model or the classifier is provided INLINEFORM0 (wine) and successful repair would mean it correctly labels (adapts) it to remain the. The features used for classification were ( INLINEFORM1 in Equation ( EQREF8 ))",
"",
"The combination of features INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 namely, (bag of consonants, bag of vowels, left context, number of words, right context) gave the best results with INLINEFORM5 % improvement in accuracy in classification over 10-fold validation.",
"The experimental results for both evo-devo and machine learning based approaches demonstrate that these techniques can be used to correct the erroneous output of ASR. This is what we set out to establish in this paper."
],
[
"General-purpose ASR engines when used for enterprise domains may output erroneous text, especially when encountering domain-specific terms. One may have to adapt/repair the ASR output in order to do further natural language processing such as question-answering. We have presented two mechanisms for adaptation/repair of ASR-output with respect to a domain. The Evo-Devo mechanism provides a bio-inspired abstraction to help structure the adaptation and repair process. This is one of the main contribution of this paper. The machine learning mechanism provides a means of adaptation and repair by examining the feature-space of the ASR output. The results of the experiments show that both these mechanisms are promising and may need further development."
],
[
"Nikhil, Chirag, Aditya have contributed in conducting some of the experiments. We acknowledge their contribution."
]
],
"section_name": [
"Introduction",
"Related Work",
"Errors in ASR output",
"Machine Learning mechanism of adaptation",
"Experiments and results",
"Data Preparation",
" Evo-Devo based experiments",
"Machine Learning experiments",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"372ddfdd8ccb8b7e011f0f7058ae260cf04a5c20",
"40d0ddb0ff2bceceac53f0cf64baf5a7ef12d637",
"60588b72f86f7ae4eeeef80f0f22bf8f246e26ed",
"956e0eeff2061a51e42f4cdc003b9b7dfaf880c8"
],
"answer": [
{
"evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"extractive_spans": [
"Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013"
],
"free_form_answer": "",
"highlighted_evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We downloaded this survey data and hand crafted a total of 293 textual questions BIBREF13 which could answer the survey data. A set of 6 people (L2 English) generated 50 queries each with the only constraint that these queries should be able to answer the survey data. In all a set of 300 queries were crafted of which duplicate queries were removed to leave 293 queries in all. Of these, we chose 250 queries randomly and distributed among 5 Indian speakers, who were asked to read aloud the queries into a custom-built audio data collecting application. So, in all we had access to 250 audio queries spoken by 5 different Indian speakers; each speaking 50 queries."
],
"extractive_spans": [
" survey data and hand crafted a total of 293 textual questions BIBREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"We downloaded this survey data and hand crafted a total of 293 textual questions BIBREF13 which could answer the survey data. A set of 6 people (L2 English) generated 50 queries each with the only constraint that these queries should be able to answer the survey data. In all a set of 300 queries were crafted of which duplicate queries were removed to leave 293 queries in all. Of these, we chose 250 queries randomly and distributed among 5 Indian speakers, who were asked to read aloud the queries into a custom-built audio data collecting application. So, in all we had access to 250 audio queries spoken by 5 different Indian speakers; each speaking 50 queries."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"extractive_spans": [
"U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013"
],
"free_form_answer": "",
"highlighted_evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"extractive_spans": [
"Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"We present the results of our experiments with both the Evo-Devo and the Machine Learning mechanisms described earlier using the U.S. Census Bureau conducted Annual Retail Trade Survey of U.S. Retail and Food Services Firms for the period of 1992 to 2013 BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"12bfba8a9d25dc1ca3cda585104505d5e737c708",
"c2a7e19223dd9c637cc34ba015bbd919bf499208",
"e25b6f63f80ae6a0187c7cbfd33b8980ed940a44"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"FLOAT SELECTED: Table 2 ASR engines and their output %accuracy"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2 ASR engines and their output %accuracy"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"58f3a0d4da795208a496138913b8887b872a8005",
"e20bba251f1f60e72bfafd3d1a0c2a5d6d405de3"
],
"answer": [
{
"evidence": [
"We ran our Evo-Devo mechanism with the 486 ASR sentences (see Table TABREF14 ) and measured the accuracy after each repair. On an average we have achieved about 5 to 10% improvements in the accuracy of the sentences. Fine-tuning the repair and fitness functions, namely Equation (), would probably yield much better performance accuracies. However, experimental results confirm that the proposed Evo-Devo mechanism is an approach that is able to adapt INLINEFORM0 to get closer to INLINEFORM1 . We present a snapshot of the experiments with Google ASR (Ga) and calculate accuracy with respect to the user spoken question as shown in Table TABREF16 .",
"In the machine learning technique of adaptation, we considers INLINEFORM0 pairs as the predominant entity and tests the accuracy of classification of errors.",
"The combination of features INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 namely, (bag of consonants, bag of vowels, left context, number of words, right context) gave the best results with INLINEFORM5 % improvement in accuracy in classification over 10-fold validation."
],
"extractive_spans": [],
"free_form_answer": "Machine learning approach",
"highlighted_evidence": [
"We ran our Evo-Devo mechanism with the 486 ASR sentences (see Table TABREF14 ) and measured the accuracy after each repair. On an average we have achieved about 5 to 10% improvements in the accuracy of the sentences.",
"In the machine learning technique of adaptation, we considers INLINEFORM0 pairs as the predominant entity and tests the accuracy of classification of errors.",
"The combination of features INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 namely, (bag of consonants, bag of vowels, left context, number of words, right context) gave the best results with INLINEFORM5 % improvement in accuracy in classification over 10-fold validation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3e2c9a3aadbfe17ca9cc801fe11352213e6e2e7c",
"633adcd58d77df361fd2aa4a578a8346cfc7fffa"
],
"answer": [
{
"evidence": [
"General-purpose ASR engines when used for enterprise domains may output erroneous text, especially when encountering domain-specific terms. One may have to adapt/repair the ASR output in order to do further natural language processing such as question-answering. We have presented two mechanisms for adaptation/repair of ASR-output with respect to a domain. The Evo-Devo mechanism provides a bio-inspired abstraction to help structure the adaptation and repair process. This is one of the main contribution of this paper. The machine learning mechanism provides a means of adaptation and repair by examining the feature-space of the ASR output. The results of the experiments show that both these mechanisms are promising and may need further development."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We have presented two mechanisms for adaptation/repair of ASR-output with respect to a domain. The Evo-Devo mechanism provides a bio-inspired abstraction to help structure the adaptation and repair process."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which dataset do they use?",
"Do they compare their proposed domain adaptation methods to some existing methods?",
"Which of their proposed domain adaptation methods proves best overall?",
"Do they use evolutionary-based optimization algorithms as one of their domain adaptation approaches?"
],
"question_id": [
"00e4c9aa87411dfc5455fc92f10e5c9266e7b95e",
"54b0d2df6ee27aaacdaf7f9c76c897b27e534667",
"b9a3836cff16af7454c7a8b0e5ff90206d0db1f5",
"99554d0c76fbaef90bce972700fa4c315f961c31"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1 Fitness Function.",
"Table 1 Ontology Structure.",
"Fig. 2 T ′ accuracy (y-axis) for the 250 utterance (x-axis) for Ga, Ki, Ku and Ps.",
"Table 2 ASR engines and their output %accuracy",
"Fig. 4 All utterances that have and T ′ accuracy (y-axis) ≥ 70 and < 100 used in all our experiments.",
"Table 3 Evo-Devo experiments with Google ASR (Ga)."
],
"file": [
"7-Figure1-1.png",
"10-Table1-1.png",
"14-Figure2-1.png",
"15-Table2-1.png",
"16-Figure4-1.png",
"17-Table3-1.png"
]
} | [
"Which of their proposed domain adaptation methods proves best overall?"
] | [
[
"1710.06923-Machine Learning experiments-0",
"1710.06923- Evo-Devo based experiments-0",
"1710.06923-Machine Learning experiments-5"
]
] | [
"Machine learning approach"
] | 128 |
1811.04791 | Multilingual and Unsupervised Subword Modeling for Zero-Resource Languages | Unsupervised subword modeling aims to learn low-level representations of speech audio in "zero-resource" settings: that is, without using transcriptions or other resources from the target language (such as text corpora or pronunciation dictionaries). A good representation should capture phonetic content and abstract away from other types of variability, such as speaker differences and channel noise. Previous work in this area has primarily focused on learning from target language data only, and has been evaluated only intrinsically. Here we directly compare multiple methods, including some that use only target language speech data and some that use transcribed speech from other (non-target) languages, and we evaluate using two intrinsic measures as well as on a downstream unsupervised word segmentation and clustering task. We find that combining two existing target-language-only methods yields better features than either method alone. Nevertheless, even better results are obtained by extracting target language bottleneck features using a model trained on other languages. Cross-lingual training using just one other language is enough to provide this benefit, but multilingual training helps even more. In addition to these results, which hold across both intrinsic measures and the extrinsic task, we discuss the qualitative differences between the different types of learned features. | {
"paragraphs": [
[
"Recent years have seen increasing interest in “zero-resource” speech technology: systems developed for a target language without using transcribed data or other hand-curated resources from that language. Such systems could potentially be applied to tasks such as endangered language documentation or query-by-example search for languages without a written form. One challenge for these systems, highlighted by the zrsc shared tasks of 2015 BIBREF0 and 2017 BIBREF1 , is to improve subword modeling, i.e., to extract or learn speech features from the target language audio. Good features should be more effective at discriminating between linguistic units, e.g. words or subwords, while abstracting away from factors such as speaker identity and channel noise.",
"The ZRSCs were motivated largely by questions in artificial intelligence and human perceptual learning, and focused on approaches where no transcribed data from any language is used. Yet from an engineering perspective it also makes sense to explore how training data from higher-resource languages can be used to improve speech features in a zero-resource language.",
"This paper explores several methods for improving subword modeling in zero-resource languages, either with or without the use of labeled data from other languages. Although the individual methods are not new, our work provides a much more thorough empirical evaluation of these methods compared to the existing literature. We experiment with each method both alone and in combinations not tried before, and provide results across a range of target languages, evaluation measures, and tasks.",
"We start by evaluating two methods for feature extraction that are trained using (untranscribed) target language data only: traditional vtln and the more recently proposed cae BIBREF2 . The cae learns to abstract away from signal noise and variability by training on pairs of speech segments extracted using an utd system—i.e., pairs that are likely to be instances of the same word or phrase. We confirm previous work showing that cae features outperform MFCCs on a word discriminability task, although we also show that this benefit is not consistently better than that of simply applying vtln. More interestingly, however, we find that applying vtln to the input of the cae system improves the learned features considerably, leading to better performance than either method alone. These improvements indicate that cae and vtln abstract over different aspects of the signal, and suggest that vtln might also be a useful preprocessing step in other recent neural-network-based unsupervised feature-learning methods.",
"Next, we explore how multilingual annotated data can be used to improve feature extraction for a zero-resource target language. We train multilingual bnfs on between one and ten languages from the GlobalPhone collection and evaluate on six other languages (simulating different zero-resource targets). We show that training on more languages consistently improves performance on word discrimination, and that the improvement is not simply due to more training data: an equivalent amount of data from one language fails to give the same benefit. In fact, we observe the largest gain in performance when adding the second training language, which is already better than adding three times as much data from the same language. Moreover, when compared to our best results from training unsupervised on target language data only, we find that bnfs trained on just a single other language already outperform the target-language-only training, with multilingual bnfs doing better by a wide margin.",
"Although multilingual training outperforms unsupervised target-language training, it could still be possible to improve on the multilingual bnfs by target-language fine-tuning. To test this hypothesis, we tried fine-tuning the multilingual bnfs to the target language by using them as input to the cae. When trained with utd word pairs, we found no benefit to this fine-tuning. However, training with manually labeled word pairs did yield benefits, suggesting that this type of supervision can help fine-tune the bnfs if the word pairs are sufficiently high-quality.",
"The results above were presented as part of an earlier conference version of this paper BIBREF3 . Here, we expand upon that work in several ways. First, we include new results on the corpora and evaluation measures used in the zrsc, to allow more direct comparisons with other work. In doing so, we also provide the first set of results on identical systems evaluated using both the same-different and ABX evaluation measures. This permits the two measures themselves to be better compared. Finally, we provide both a qualitative analysis of the differences between the different features we extract, and a quantitative evaluation on the downstream target-language task of unsupervised full-coverage speech segmentation and clustering using the system of BIBREF4 . This is the first time that multilingual features are used in such a system, which performs a complete segmentation of input speech into hypothesized words. As in our intrinsic evaluations, we find that the multilingual bnfs consistently outperform the best unsupervised cae features, which in turn outperform or do similarly to MFCCs."
],
[
"We start by investigating how unlabeled data from the target language alone can be used for unsupervised subword modeling. Below we first review related work and provide a brief introduction to the cae and vtln methods. We then describe our experiments directly comparing these methods, both alone and in combination."
],
[
"Various approaches have been applied to the problem of unsupervised subword modeling. Some methods work in a strictly bottom-up fashion, for example by extracting posteriorgrams from a (finite or infinite) Gaussian mixture model trained on the unlabeled data BIBREF5 , BIBREF6 , BIBREF7 , or by using neural networks to learn representations using autoencoding BIBREF8 , BIBREF9 , BIBREF10 or other loss functions BIBREF11 . Other methods incorporate weak top-down supervision by first extracting pairs of similar word- or phrase-like units using unsupervised term detection, and using these to constrain the representation learning. Examples include the cae BIBREF2 and ABNet BIBREF12 . Both aim to learn representations that make similar pairs even more similar; the ABNet additionally tries to make different pairs more different.",
"In this work we use the cae in our experiments on unsupervised representation learning, since it performed well in the 2015 ZRSC, achieved some of the best-reported results on the same-different task (which we also consider), and has readily available code. As noted above, the cae attempts to normalize out non-linguistic factors such as speaker, channel, gender, etc., by using top-down information from pairs of similar speech segments. Extracting cae features requires three steps, as illustrated in Figure FIGREF6 . First, an utd system is applied to the target language to extract pairs of speech segments that are likely to be instances of the same word or phrase. Each pair is then aligned at the frame level using dtw, and pairs of aligned frames are presented as the input INLINEFORM0 and target output INLINEFORM1 of a dnn. After training, a middle layer INLINEFORM2 is used as the learned feature representation.",
"The cae and other unsupervised methods described above implicitly aim to abstract away from speaker variability, and indeed they succeed to some extent in doing so BIBREF4 . Nevertheless, they provide less explicit speaker adaptation than standard methods used in supervised ASR, such as fMLLR BIBREF13 , LHUC BIBREF14 or i-vectors BIBREF15 . Explicit speaker adaptation seems to have attracted little attention until recently BIBREF16 in the zero-resource community, perhaps because most of the standard methods assume transcribed data is available.",
"Nevertheless, recent work suggests that at least some of these methods may be applied effectively even in an unsupervised setting. In particular, Heck at al. BIBREF17 , BIBREF18 won the zrsc 2017 using a typical asr pipeline with speaker adaptive fMLLR and other feature transforms. They adapted these methods to the unsupervised setting by first obtaining phone-like units with the dpgmm, an unsupervised clustering technique, and then using the cluster assignments as unsupervised phone labels during asr training.",
"In this work we instead consider a very simple feature-space adaptation method, vtln, which normalizes a speaker's speech by warping the frequency-axis of the spectra. vtln models are trained using maximum likelihood estimation under a given acoustic model—here, an unsupervised gmm. Warp factors can then be extracted for both the training data and for unseen data.",
"Although VTLN has recently been used by a few zero-resource speech systems BIBREF7 , BIBREF17 , BIBREF18 , its impact in these systems is unclear because there is no comparison to a baseline without vtln. BIBREF19 did precisely such a comparison and showed that applying vtln to the input of their unsupervised feature learning method improved its results in a phoneme discrimination task, especially in the cross-speaker case. However, we don't know whether other feature learning methods are similarly benefited by vtln, nor even how vtln on its own performs in comparison to more recent methods. Thus, our first set of experiments is designed to answer these questions by evaluating the benefits of using vtln and cae learning, both on their own and in combination.",
"There is considerable evidence that bnfs extracted using a multilingually trained dnn can improve ASR for target languages with just a few hours of transcribed data BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 . However, there has been little work so far exploring supervised multilingual bnfs for target languages with no transcribed data at all. BIBREF32 , BIBREF23 trained monolingual BNF extractors and showed that applying them cross-lingually improves word discrimination in a zero-resource setting. BIBREF33 , BIBREF19 trained a multilingual dnn to extract BNFs for a zero-resource task, but the dnn itself was trained on untranscribed speech: an unsupervised clustering method was applied to each language to obtain phone-like units, and the dnn was trained on these unsupervised phone labels.",
"We know of only two previous studies of supervised multilingual BNFs for zero-resource speech tasks. In the first BIBREF25 , the authors trained bnfs on either Mandarin, Spanish or both, and used the trained dnns to extract features from English (simulating a zero-resource language). On a query-by-example task, they showed that bnfs always performed better than MFCCs, and that bilingual bnfs performed as well or better than monolingual ones. Further improvements were achieved by applying weak supervision in the target language using a cae trained on English word pairs. However, the authors did not experiment with more than two training languages, and only evaluated on English.",
"In the second study BIBREF34 , the authors built multilingual systems using either seven or ten high-resource languages, and evaluated on the three “development” and two “surprise” languages of the zrsc 2017. However, they included transcribed training data from four out of the five evaluation languages, so only one language's results (Wolof) were truly zero-resource.",
"Our experiments therefore aim to evaluate on a wider range of target languages, and to explore the effects of both the amount of labeled data, and the number of languages from which it is obtained."
],
[
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets.",
"For baseline features, we use Kaldi BIBREF21 to extract MFCCs+ INLINEFORM0 + INLINEFORM1 and PLPs+ INLINEFORM2 + INLINEFORM3 with a window size of 25 ms and a shift of 10 ms, and we apply per-speaker cmn. We also evaluated MFCCs and PLPs with vtln. The acoustic model used to extract the warp factors was a diagonal-covariance gmm with 1024 components. A single GMM was trained unsupervised on each language's training data.",
"To train the cae, we obtained utd pairs using a freely available utd system BIBREF22 and extracted 36k word pairs for each target language. Published results with this system use PLP features as input, and indeed our preliminary experiments confirmed that MFCCs did not work as well. We therefore report results using only PLP or PLP+VTLN features as input to utd. Following BIBREF23 , BIBREF2 , we train the cae model by first pre-training an autoencoder with eight 100-dimensional layers and a final layer of size 39 layer-wise on the entire training data for 5 epochs with a learning rate of INLINEFORM0 . We then fine-tune the network with same-word pairs as weak supervision for 60 epochs with a learning rate of INLINEFORM1 . Frame pairs are presented to the cae using either MFCC, MFCC+VTLN, or BNF representation, depending on the experiment (preliminary experiments indicated that PLPs performed worse than MFCCs, so MFCCs are used as the stronger baseline). Features are extracted from the final hidden layer of the cae as shown in Figure FIGREF6 .",
"To provide an upper bound on cae performance, we also report results using gold standard same-word pairs for cae training. As in BIBREF2 , BIBREF24 , BIBREF25 , we force-align the target language data and extract all the same-word pairs that are at least 5 characters and 0.5 seconds long (between 89k and 102k pairs for each language).",
"We picked another 10 languages (different from the target languages described in Section SECREF7 ) with a combined 198.3 hours of speech from the GlobalPhone corpus. We consider these as high-resource languages, for which transcriptions are available to train a supervised asr system. The languages and dataset sizes are listed in Table TABREF16 . We also use the English wsj corpus BIBREF35 which is comparable to the GlobalPhone corpus. It contains a total of 81 hours of speech, which we either use in its entirety or from which we use a 15 hour subset; this allows us to compare the effect of increasing the amount of data for one language with training on similar amounts of data but from different languages.",
"Supervised models trained on these high-resource languages are evaluated on the same set of zero-resource languages as in Section SECREF2 . Transcriptions of the latter are still never used during training.",
"For initial monolingual training of asr systems for the high-resource languages, we follow the Kaldi recipes for the GlobalPhone and WSJ corpora and train a sgmm system for each language to get initial context-dependent state alignments; these states serve as targets for dnn training.",
"For multilingual training, we closely follow the existing Kaldi recipe for the Babel corpus. We train a tdnn BIBREF36 with block softmax BIBREF37 , i.e. all hidden layers are shared between languages, but there is a separate output layer for each language. For each training instance only the error at the corresponding language's output layer is used to update the weights. This architecture is illustrated in Figure FIGREF17 . The tdnn has six 625-dimensional hidden layers followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalization. Each language then has its own 625-dimensional affine and a softmax layer. The inputs to the network are 40-dimensional MFCCs with all cepstral coefficients to which we append i-vectors for speaker adaptation. The network is trained with stochastic gradient descent for 2 epochs with an initial learning rate of INLINEFORM0 and a final learning rate of INLINEFORM1 .",
"In preliminary experiments we trained a separate i-vector extractor for each different sized subset of training languages. However, results were similar to training on the pooled set of all 10 high-resource languages, so for expedience we used the 100-dimensional i-vectors from this pooled training for all reported experiments. The i-vectors for the zero-resource languages are obtained from the same extractor. This allows us to also apply speaker adaptation in the zero-resource scenario. Including i-vectors yielded a small performance gain over not doing so; we also tried applying vtln to the MFCCs for tdnn training, but found no additional benefit."
],
[
"All experiments in this section are evaluated using the same-different task BIBREF26 , which tests whether a given speech representation can correctly classify two speech segments as having the same word type or not. For each word pair in a pre-defined set INLINEFORM0 the dtw cost between the acoustic feature vectors under a given representation is computed. Two segments are then considered a match if the cost is below a threshold. Precision and recall at a given threshold INLINEFORM1 are defined as INLINEFORM2 ",
"where INLINEFORM0 is the number of sw, swdp or all discovered matches at that threshold and INLINEFORM1 is the number of actual swdp pairs in INLINEFORM2 . We can compute a precision-recall curve by varying INLINEFORM3 . The final evaluation metric is the ap or the area under that curve. We generate evaluation sets of word pairs for the GlobalPhone development and test sets from all words that are at least 5 characters and 0.5 seconds long, except that we now also include different-word pairs.",
"Previous work BIBREF26 , BIBREF2 calculated recall with all sw pairs for easier computation because their test sets included a negligible number of swsp pairs. In our case the smaller number of speakers in the GlobalPhone corpora results in up to 60% of sw pairs being from the same speaker. We therefore always explicitly compute the recall only for swdp pairs to focus the evaluation of features on their speaker invariance."
],
[
"Table TABREF13 shows AP results on all target languages for cae features learned using raw features as input (as in previous work) and for cae features learned using vtln-adapted features as input to either the utd system, the cae, or both. Baselines are raw MFCCs, or MFCCs with VTLN. MFCCs with VTLN have not previously been compared to more recent unsupervised subword modeling methods, but as our results show, they are a much stronger baseline than MFCCs alone. Indeed, they are nearly as good as cae features (as trained in previous work). However, we obtain much better results by applying vtln to both the cae and utd input features (MFCCs and PLPs, respectively). Individually these changes each result in substantial improvements that are consistent across all 6 languages, and applying VTLN at both stages helps further. Indeed, applying vtln is beneficial even when using gold pairs as cae input, although to a lesser degree.",
"So, although previous studies have indicated that cAE training and VTLN are helpful individually, our experiments provide further evidence and quantification of those results. In addition, we have shown that combining the two methods leads to further improvements, suggesting that cae training and vtln abstract over different aspects of the speech signal and should be used together. The large gains we found with VTLN, and the fact that it was part of the winning system in the 2017 ZRSC, suggest that it is also likely to help in combination with other unsupervised subword modeling methods.",
"As a sanity check we include word error rates (WER) for the asr systems trained on the high-resource languages. Table TABREF20 compares the WER of the monolingual sgmm systems that provide the targets for tdnn training to the WER of the final model trained on all 10 high-resource languages. The multilingual model shows small but consistent improvements for all languages except Vietnamese. Ultimately though, we are not so much interested in the performance on typical asr tasks, but in whether bnfs from this model also generalize to zero-resource applications on unseen languages.",
"Figure FIGREF21 shows ap on the same-different task of multilingual bnfs trained from scratch on an increasing number of languages in two randomly chosen orders. We provide two baselines for comparison, drawn from our results in Table TABREF13 . Firstly, our best cae features trained with utd pairs (row 4, Table TABREF13 ) are a reference for a fully unsupervised system. Secondly, the best cae features trained with gold standard pairs (row 6, Table TABREF13 ) give an upper bound on the cae performance.",
"In all 6 languages, even bnfs from a monolingual tdnn already considerably outperform the cae trained with utd pairs. Adding another language usually leads to an increase in ap, with the bnfs trained on 8–10 high-resource languages performing the best, also always beating the gold cae. The biggest performance gain is obtained from adding a second training language—further increases are mostly smaller. The order of languages has only a small effect, although for example adding other Slavic languages is generally associated with an increase in ap on Croatian. This suggests that it may be beneficial to train on languages related to the zero-resource language if possible, but further experiments need to be conducted to quantify this effect.",
"To determine whether these gains come from the diversity of training languages or just the larger amount of training data, we trained models on the 15 hour subset and the full 81 hours of the English wsj corpus, which corresponds to the amount of data of four GlobalPhone languages. More data does help to some degree, as Figure FIGREF21 shows. But, except for Mandarin, training on just two languages (46 hours) already works better."
],
[
"Next we investigate how labeled data from high-resource languages can be used to obtain improved features on a target zero-resource language for which no labeled data is available."
],
[
"In the previous experiments, we used data from GlobalPhone, which provides corpora collected and formatted similarly for a wide range of languages. However, GlobalPhone is not freely available and no previous zero-resource studies have used these corpora, so in this section we also provide results on the zrsc 2015 BIBREF0 data sets, which have been widely used in other work. The target languages are English (from the Buckeye corpus BIBREF38 ) and Xitsonga (NCHLT corpus BIBREF39 ). Table TABREF8 includes the corpus statistics. These corpora are not split into train/dev/test; since training is unsupervised, the system is simply trained directly on the unlabeled test set (which could also be done in deployment). Importantly, no hyperparameter tuning is done on the Buckeye or Xitsonga data, so these results still provide a useful test of generalization. Notably, the Buckeye English corpus contains conversational speech and is therefore different in style from the rest of our data.",
"For training the cae on the Buckeye English and Xitsonga corpora, we use the same sets of utd pairs as in BIBREF23 , which were discovered from fdlp features. We evaluate using both the same-different measures from above, as well as the ABX phone discriminability task BIBREF40 used in the zrsc and other recent work BIBREF0 , BIBREF1 . The ABX task evaluates phoneme discriminability using minimal pairs: sequences of three phonemes where the central phoneme differs between the two sequences INLINEFORM0 and INLINEFORM1 in the pair, such as b ih n and b eh n. Feature representations are then evaluated on how well they can identify a third triplet INLINEFORM2 as having the same phoneme sequence as either INLINEFORM3 or INLINEFORM4 . See BIBREF0 , BIBREF1 for details on how the scores are computed and averaged over speakers and phonemes to obtain the final ABX error rate. One usually distinguishes between the within-speaker error rate where all three triplets belong to the same speaker, and the cross-speaker error rate where INLINEFORM5 and INLINEFORM6 are from the same and INLINEFORM7 from a different speaker.",
"The ABX evaluation includes all such minimal pair phoneme triplets of the evaluation corpus. These pairs therefore rarely correspond to full words, making it a somewhat abstract task whose results may be difficult to interpret when summarizing it as a single final metric. ABX can however be very suitable for more fine-grained analysis of speech phenomena by including only specific phonetic contrasts in the evaluation BIBREF41 . In contrast, the same-different task always compares whole words and directly evaluates how good feature representations are at telling whether two utterances are the same word or not. Thus it has an immediate link to applications like spoken term detection and it allows easier error analysis. It is also faster to prepare the same-different evaluation set and run the evaluation. We wish to verify that the ABX and same-different measures correlate well, to better compare studies that use only one of them and to allow choosing the task that is more appropriate for the situation at hand.",
"Table TABREF22 shows results on the Xitsonga and Buckeye English corpora. Here we compare ABX error rates computed with the zrsc 2015 BIBREF0 evaluation scripts with ap on the same-different task. To the best of our knowledge, this is the first time such a comparison has been made. The results on both tasks correlate well, especially when looking at the ABX cross-speaker error rate because the same-different evaluation as described in Section SECREF11 also focuses on cross-speaker pairs. As might be expected vtln only improves cross-speaker, but not within-speaker ABX error rates.",
"For comparison we also include ABX results of the official zrsc 2015 topline BIBREF0 , which are posteriorgrams obtained from a supervised speech recognition system, the current state-of-the-art system BIBREF18 which even outperforms the topline for English, and the system of BIBREF42 which is the most recent form of the ABNet BIBREF12 , an architecture that is similar to our cae.",
"These systems score better than all of our features, but are not directly comparable for several reasons. Firstly, it is unclear how these systems were optimized, since there was no separate development set in zrsc 2015. Secondly, our features are all 39-dimensional to be directly comparable with MFCCs, whereas the other two systems have higher dimensionality (and indeed the winning system from zrsc 2017 was even greater, with more than 1000 dimensions BIBREF17 ). Such higher dimensional features may be useful in some circumstances, but lower dimensional features are often more efficient to work with and we don't know whether the competing systems would work as well with fewer dimensions.",
"The bnfs are in any case competitive with the higher dimensional features, and have the advantage that they can be built using standard Kaldi scripts and do not require any training on the target language, so can easily be deployed to new languages. The competitive result of BIBREF42 also shows that in general a system trained on word pairs discovered from a utd system can perform very well."
],
[
"So far we have shown that multilingual bnfs work better than any of the features trained using only the target language data. However, in principle it could be possible to use the target language data to fine tune the bnfs in an unsupervised fashion, improving performance further. We explored this possibility by simply training a cae using bnfs as input rather than PLPs. That is, we trained the cae with the same word pairs as before, but replaced VTLN-adapted MFCCs with the 10-lingual bnfs as input features, without any other changes in the training procedure. Table TABREF23 (penultimate row) shows that the cae trained with utd pairs is able to slightly improve on the bnfs in some cases, but this is not consistent across all languages and for Croatian the cae features are much worse. On the other hand, when trained using gold standard pairs (final row), the resulting cae features are consistently better than the input bnfs. This indicates that bnfs can in principle be improved by target-language fine-tuning, but the top-down supervision needs to be of higher quality than the current UTD system provides.",
"This observation leads to a further question: could we improve the UTD pairs themselves by using our improved features (either bnfs or cae features) as input to the UTD system? If the output is a better set of UTD pairs than the original set, these could potentially be used to further improve the features, and perhaps the process could be iterated. As far as we know, no previously published work has combined unsupervised subword modeling with a utd system. However, after considerable efforts to make this work we found that the ZRTools utd system seems to be too finely tuned towards features that resemble PLPs to get good results from our new features.",
"To understand why the features that help with word and phone discrimination are a problem for the UTD system, we examined the similarity plots for several pairs of utterances. Figures FIGREF24 and FIGREF29 show that cae features and bnfs look quite different from PLPs. Dark areas indicate acoustic similarity and diagonal line segments therefore point to phonetically similar sequences. In Figure FIGREF24 both utterances contain the words estados unidos, but shorter and more faint lines can also be seen for rough matches like the last two syllables of servicio and visas. The ZRTools utd toolkit identifies these diagonal lines with fast computer vision techniques BIBREF22 and then runs a segmental-dtw algorithm only in the candidate regions for efficient discovery of matches.",
"PLPs are designed to contain fine-grained acoustic information about the speech signal and can therefore vary a lot throughout the duration of a phoneme. The diagonal lines in Figure FIGREF24 (a) are therefore very thin and there is a lot of spurious noise that does not necessarily correspond to phonetically similar units. This pattern is similar for VTLN-adapted PLPs in (b), but with less noise.",
"On the other hand, cae features and bnfs are trained to ignore such local variation within phonemes. This results in significantly different appearance of frame-wise cosine similarity plots of two utterances. The trained features remain more constant throughout the duration of a phoneme, resulting in wider diagonal lines in the similarity plots. Especially cae features are very good at learning phoneme-level information, indicated by the large rectangular blocks in Figure FIGREF24 (c) where phonemes of the two utterances match or are very similar. We also found the boundaries of these blocks to align well with actual phoneme boundaries provided by forced alignment. This is despite the cae not having any information about phoneme identities or boundaries during training.",
"While ZRTools still finds the diagonal line segments in cae features and bnfs where matches are likely to occur, the segmental dtw algorithm that then searches for exact matches finds too many of them because the lines are much wider and similarity values overall higher than for PLPs. For example Figure FIGREF29 shows a typical example of phonetically similar, but incorrect matches that are only discovered in cae features and bnfs. Although it might be possible to eventually identify a set of dtw parameters that can work with these types of features, it could be more productive to consider different approaches for features that are relatively stable within phones."
],
[
"Our experiment with the UTD system was disappointing, suggesting that although cae features and bnfs improve intrinsic discriminability measures, they may not work with some downstream zero-resource tools. However, ZRTools is a single example. To further investigate the downstream effects of the learned features, we now consider the task of full-coverage speech segmentation and clustering. The aim here is to tokenize the entire speech input into hypothesized categories, potentially corresponding to words, and to do so without any form of supervision—essentially a form of unsupervised speech recognition. Such systems could prove useful from a speech technology perspective in low-resource settings, and could be useful in studying how human infants acquire language from unlabeled speech input.",
"Here we specifically investigate whether our BNFs improve the Bayesian embedded segmental Gaussian mixture model (BES-GMM), first proposed in BIBREF43 . This approach relies on a mapping where potential word segments (of arbitrary length) are embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, builds a whole-word acoustic model in this acoustic embedding space, while jointly performing segmentation. Several acoustic word embedding methods have been considered, but here we use the very simple approach also used in BIBREF4 : any segment is uniformly downsampled so that it is represented by the same fixed number of frame-level features, which are then flattened to obtain the fixed-dimensional embedding BIBREF44 ."
],
[
"We retrained the cae and BNF models to return 13-dimensional features with all other parameters unchanged to be consistent with the experiments of BIBREF4 and for computational reasons. We also did not tune any hyperparameters of the BES-GMM for our new input features. Nonetheless, our baseline cae results do not exactly correspond to the ones in BIBREF4 because for example the MFCC input features have been extracted with a different toolkit and we used a slightly different training procedure.",
"We use several metrics to compare the resulting segmented word tokens to ground truth forced alignments of the data. By mapping every discovered word token to the ground truth word with which it overlaps most, average cluster purity can be calculated as the total proportion of correctly mapped tokens in all clusters. More than one cluster may be mapped to the same ground truth word type. In a similar way, we can calculate unsupervised word error rate (WER), which uses the same cluster-to-word mapping but also takes insertions and deletions into account. Here we consider two ways to perform the cluster mapping: many-to-one, where more than one cluster can be assigned the same word label (as in purity), or one-to-one, where at most one cluster is mapped to a ground truth word type (accomplished in a greedy fashion). We also compute the gender and speaker purity of the clusters, where we want to see clusters that are as diverse as possible on these measures, i.e., low purity. To explicitly evaluate how accurate the model performs segmentation, we compare the proposed word boundary positions to those from forced alignments of the data (falling within a single true phoneme from the boundary). We calculate boundary precision and recall, and report the resulting word boundary F-scores. We also calculate word token F-score, which requires that both boundaries from a ground truth word token be correctly predicted."
],
[
"Table TABREF36 compares MFCCs, cae features (with and without vtln) and bnfs as input to the system of BIBREF4 . It shows that both vtln and bnfs help on all metrics, with improvements ranging from small to more substantial and bnfs clearly giving the most benefit. The effects of vtln are mostly confined to reducing both gender and speaker purity of the identified clusters (which is desirable) while maintaining the performance on other metrics. This means that the learned representations have become more invariant to variation in speaker and gender, which is exactly what vtln aims to do. However, this appears to be insufficient to also help other metrics, aligning with the experiments in BIBREF4 that indicate that improvements on the other metrics are hard to obtain.",
"On the other hand, bnfs result in better performance across all metrics. While some of these improvements are small, they are very consistent across all metrics. This shows that the bnfs are also useful for down-stream tasks in zero-resource settings. It especially demonstrates that such bnfs which are trained on high-resource languages without seeing any target language speech at all are a strong alternative to fully unsupervised features for practical scenarios or could in turn be used to improve unsupervised systems trained on the target language speech data."
],
[
"bnfs cae utd",
"In this work we investigated different representations obtained using data from the target language alone (i.e., fully unsupervised) and from multilingual supervised systems trained on labeled data from non-target languages. We found that the cae, a recent neural approach to unsupervised subword modeling, learns complementary information to the more traditional approach of vtln. This suggests that vtln should also be considered by other researchers using neural approaches. On the other hand, our best results were achieved using multilingual bnfs. These results are competitive with state-of-the-art features learned from target language data only BIBREF17 , BIBREF18 , but have the advantage of a much smaller dimensionality. In addition, it is easy to control the dimensionality of the bnfs, unlike in the nonparametric models of BIBREF17 , BIBREF18 , and this allowed us to use them in the downstream task of word segmentation and clustering. We observed consistent improvements from bnfs across all metrics in this downstream task, and other work demonstrates that these features are also useful for downstream keyword spotting in settings with very small amounts of labeled data BIBREF45 . We also showed that it is theoretically possible to further improve bnfs with language-specific fine-tuning, and we hope to explore models that can do this more reliably than the cae in the future.",
"Finally, our qualitative analysis showed that both cae features and bnfs tend to vary much less over time than traditional PLPs, supporting the idea that they are better at capturing phonetic information rather than small variations in the acoustics. Although this property helps explain the better performance on intrinsic measures and the segmentation task, it harms performance for utd, where the system seems heavily tuned towards PLPs. Therefore, our work also points to the need for term discovery systems that are more robust to different types of input features."
],
[
"The research was funded in part by a James S. McDonnell Foundation Scholar Award."
]
],
"section_name": [
"Introduction",
"Unsupervised Training, Target Language Only",
"Background and Motivation",
"Experimental Setup",
"Evaluation",
"Results and Discussion",
"Supervision from High-Resource Languages",
"Evaluation using ZRSC Data and Measures",
"Can We Improve the Multilingual BNFs?",
"Segmentation and Clustering",
"Experimental Setup and Evaluation",
"Results",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"55aebf6056e90c3566f3ef837d89f166bb4683af",
"694efdbe7afcc1af36072005965902bb74334ffc",
"9c2b0864e7294d8fccf3880524e27fa228f13d04"
],
"answer": [
{
"evidence": [
"Next, we explore how multilingual annotated data can be used to improve feature extraction for a zero-resource target language. We train multilingual bnfs on between one and ten languages from the GlobalPhone collection and evaluate on six other languages (simulating different zero-resource targets). We show that training on more languages consistently improves performance on word discrimination, and that the improvement is not simply due to more training data: an equivalent amount of data from one language fails to give the same benefit. In fact, we observe the largest gain in performance when adding the second training language, which is already better than adding three times as much data from the same language. Moreover, when compared to our best results from training unsupervised on target language data only, we find that bnfs trained on just a single other language already outperform the target-language-only training, with multilingual bnfs doing better by a wide margin."
],
"extractive_spans": [
"ten languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"Next, we explore how multilingual annotated data can be used to improve feature extraction for a zero-resource target language. We train multilingual bnfs on between one and ten languages from the GlobalPhone collection and evaluate on six other languages (simulating different zero-resource targets). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We picked another 10 languages (different from the target languages described in Section SECREF7 ) with a combined 198.3 hours of speech from the GlobalPhone corpus. We consider these as high-resource languages, for which transcriptions are available to train a supervised asr system. The languages and dataset sizes are listed in Table TABREF16 . We also use the English wsj corpus BIBREF35 which is comparable to the GlobalPhone corpus. It contains a total of 81 hours of speech, which we either use in its entirety or from which we use a 15 hour subset; this allows us to compare the effect of increasing the amount of data for one language with training on similar amounts of data but from different languages.",
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets."
],
"extractive_spans": [],
"free_form_answer": "16",
"highlighted_evidence": [
"We picked another 10 languages (different from the target languages described in Section SECREF7 ) with a combined 198.3 hours of speech from the GlobalPhone corpus. We consider these as high-resource languages, for which transcriptions are available to train a supervised asr system. ",
"We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets."
],
"extractive_spans": [
"6"
],
"free_form_answer": "",
"highlighted_evidence": [
"We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1697ca81f2df1d1e9c68a08c59f29dd8c8a1f204",
"4c2632dec786e4dbe4964b4ee107429c09921202"
],
"answer": [
{
"evidence": [
"For multilingual training, we closely follow the existing Kaldi recipe for the Babel corpus. We train a tdnn BIBREF36 with block softmax BIBREF37 , i.e. all hidden layers are shared between languages, but there is a separate output layer for each language. For each training instance only the error at the corresponding language's output layer is used to update the weights. This architecture is illustrated in Figure FIGREF17 . The tdnn has six 625-dimensional hidden layers followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalization. Each language then has its own 625-dimensional affine and a softmax layer. The inputs to the network are 40-dimensional MFCCs with all cepstral coefficients to which we append i-vectors for speaker adaptation. The network is trained with stochastic gradient descent for 2 epochs with an initial learning rate of INLINEFORM0 and a final learning rate of INLINEFORM1 ."
],
"extractive_spans": [
"train a tdnn BIBREF36 with block softmax",
"tdnn has six 625-dimensional hidden layers followed by a 39-dimensional bottleneck layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train a tdnn BIBREF36 with block softmax BIBREF37 , i.e. all hidden layers are shared between languages, but there is a separate output layer for each language. For each training instance only the error at the corresponding language's output layer is used to update the weights. This architecture is illustrated in Figure FIGREF17 . The tdnn has six 625-dimensional hidden layers followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalization. Each language then has its own 625-dimensional affine and a softmax layer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work we use the cae in our experiments on unsupervised representation learning, since it performed well in the 2015 ZRSC, achieved some of the best-reported results on the same-different task (which we also consider), and has readily available code. As noted above, the cae attempts to normalize out non-linguistic factors such as speaker, channel, gender, etc., by using top-down information from pairs of similar speech segments. Extracting cae features requires three steps, as illustrated in Figure FIGREF6 . First, an utd system is applied to the target language to extract pairs of speech segments that are likely to be instances of the same word or phrase. Each pair is then aligned at the frame level using dtw, and pairs of aligned frames are presented as the input INLINEFORM0 and target output INLINEFORM1 of a dnn. After training, a middle layer INLINEFORM2 is used as the learned feature representation.",
"FLOAT SELECTED: Fig. 1. Correspondence autoencoder training procedure (see section II-A)."
],
"extractive_spans": [
"Extracting cae features requires three steps, as illustrated in Figure FIGREF6 . First, an utd system is applied to the target language to extract pairs of speech segments that are likely to be instances of the same word or phrase"
],
"free_form_answer": "",
"highlighted_evidence": [
"Extracting cae features requires three steps, as illustrated in Figure FIGREF6 . First, an utd system is applied to the target language to extract pairs of speech segments that are likely to be instances of the same word or phrase. Each pair is then aligned at the frame level using dtw, and pairs of aligned frames are presented as the input INLINEFORM0 and target output INLINEFORM1 of a dnn. After training, a middle layer INLINEFORM2 is used as the learned feature representation.",
"FLOAT SELECTED: Fig. 1. Correspondence autoencoder training procedure (see section II-A)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"661224a54cc027eeac12577cc9aca5ce50a1246c",
"a01993e2b1961f151e7c822332e1e1161ea7c9fd",
"e0393fa778e59b8e8c23c84e669f3291fb95d71c"
],
"answer": [
{
"evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets."
],
"extractive_spans": [
"GlobalPhone corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets.",
"FLOAT SELECTED: TABLE I ZERO-RESOURCE LANGUAGES, DATASET SIZES IN HOURS."
],
"extractive_spans": [],
"free_form_answer": "GlobalPhone\nCroatian\nHausa\nMandarin\nSpanish\nSwedish\nTurkish\nZRSC\nBuckeye\nXitsonga",
"highlighted_evidence": [
" The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets.",
"FLOAT SELECTED: TABLE I ZERO-RESOURCE LANGUAGES, DATASET SIZES IN HOURS."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 . We chose 6 languages from different language families as zero-resource languages on which we evaluate the new feature representations. That means our models do not have any access to the transcriptions of the training data, although transcriptions still need to be available to run the evaluation. The selected languages and dataset sizes are shown in Table TABREF8 . Each GlobalPhone language has recordings from around 100 speakers, with 80% of these in the training sets and no speaker overlap between training, development, and test sets.",
"We picked another 10 languages (different from the target languages described in Section SECREF7 ) with a combined 198.3 hours of speech from the GlobalPhone corpus. We consider these as high-resource languages, for which transcriptions are available to train a supervised asr system. The languages and dataset sizes are listed in Table TABREF16 . We also use the English wsj corpus BIBREF35 which is comparable to the GlobalPhone corpus. It contains a total of 81 hours of speech, which we either use in its entirety or from which we use a 15 hour subset; this allows us to compare the effect of increasing the amount of data for one language with training on similar amounts of data but from different languages.",
"In the previous experiments, we used data from GlobalPhone, which provides corpora collected and formatted similarly for a wide range of languages. However, GlobalPhone is not freely available and no previous zero-resource studies have used these corpora, so in this section we also provide results on the zrsc 2015 BIBREF0 data sets, which have been widely used in other work. The target languages are English (from the Buckeye corpus BIBREF38 ) and Xitsonga (NCHLT corpus BIBREF39 ). Table TABREF8 includes the corpus statistics. These corpora are not split into train/dev/test; since training is unsupervised, the system is simply trained directly on the unlabeled test set (which could also be done in deployment). Importantly, no hyperparameter tuning is done on the Buckeye or Xitsonga data, so these results still provide a useful test of generalization. Notably, the Buckeye English corpus contains conversational speech and is therefore different in style from the rest of our data."
],
"extractive_spans": [
"GlobalPhone corpus",
"English wsj corpus",
"Buckeye corpus",
"NCHLT corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the GlobalPhone corpus of speech read from news articles BIBREF20 .",
"We also use the English wsj corpus BIBREF35 which is comparable to the GlobalPhone corpus",
"However, GlobalPhone is not freely available and no previous zero-resource studies have used these corpora, so in this section we also provide results on the zrsc 2015 BIBREF0 data sets, which have been widely used in other work. ",
"The target languages are English (from the Buckeye corpus BIBREF38 ) and Xitsonga (NCHLT corpus BIBREF39 ). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"727b357df0e4b966d75e9a36c7cc9b2059e4e241",
"a0d3cd3f3bf72a70b440d610440c571672865662",
"a4339e6c9d842d4525b3b822e5def36c54f24dc8"
],
"answer": [
{
"evidence": [
"The results above were presented as part of an earlier conference version of this paper BIBREF3 . Here, we expand upon that work in several ways. First, we include new results on the corpora and evaluation measures used in the zrsc, to allow more direct comparisons with other work. In doing so, we also provide the first set of results on identical systems evaluated using both the same-different and ABX evaluation measures. This permits the two measures themselves to be better compared. Finally, we provide both a qualitative analysis of the differences between the different features we extract, and a quantitative evaluation on the downstream target-language task of unsupervised full-coverage speech segmentation and clustering using the system of BIBREF4 . This is the first time that multilingual features are used in such a system, which performs a complete segmentation of input speech into hypothesized words. As in our intrinsic evaluations, we find that the multilingual bnfs consistently outperform the best unsupervised cae features, which in turn outperform or do similarly to MFCCs.",
"In this work we use the cae in our experiments on unsupervised representation learning, since it performed well in the 2015 ZRSC, achieved some of the best-reported results on the same-different task (which we also consider), and has readily available code. As noted above, the cae attempts to normalize out non-linguistic factors such as speaker, channel, gender, etc., by using top-down information from pairs of similar speech segments. Extracting cae features requires three steps, as illustrated in Figure FIGREF6 . First, an utd system is applied to the target language to extract pairs of speech segments that are likely to be instances of the same word or phrase. Each pair is then aligned at the frame level using dtw, and pairs of aligned frames are presented as the input INLINEFORM0 and target output INLINEFORM1 of a dnn. After training, a middle layer INLINEFORM2 is used as the learned feature representation."
],
"extractive_spans": [
"same-different",
"ABX evaluation measures"
],
"free_form_answer": "",
"highlighted_evidence": [
" In doing so, we also provide the first set of results on identical systems evaluated using both the same-different and ABX evaluation measures. ",
"In this work we use the cae in our experiments on unsupervised representation learning, since it performed well in the 2015 ZRSC, achieved some of the best-reported results on the same-different task (which we also consider), and has readily available code. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The results above were presented as part of an earlier conference version of this paper BIBREF3 . Here, we expand upon that work in several ways. First, we include new results on the corpora and evaluation measures used in the zrsc, to allow more direct comparisons with other work. In doing so, we also provide the first set of results on identical systems evaluated using both the same-different and ABX evaluation measures. This permits the two measures themselves to be better compared. Finally, we provide both a qualitative analysis of the differences between the different features we extract, and a quantitative evaluation on the downstream target-language task of unsupervised full-coverage speech segmentation and clustering using the system of BIBREF4 . This is the first time that multilingual features are used in such a system, which performs a complete segmentation of input speech into hypothesized words. As in our intrinsic evaluations, we find that the multilingual bnfs consistently outperform the best unsupervised cae features, which in turn outperform or do similarly to MFCCs."
],
"extractive_spans": [
"same-different",
"ABX "
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we include new results on the corpora and evaluation measures used in the zrsc, to allow more direct comparisons with other work. In doing so, we also provide the first set of results on identical systems evaluated using both the same-different and ABX evaluation measures. This permits the two measures themselves to be better compared. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"All experiments in this section are evaluated using the same-different task BIBREF26 , which tests whether a given speech representation can correctly classify two speech segments as having the same word type or not. For each word pair in a pre-defined set INLINEFORM0 the dtw cost between the acoustic feature vectors under a given representation is computed. Two segments are then considered a match if the cost is below a threshold. Precision and recall at a given threshold INLINEFORM1 are defined as INLINEFORM2",
"where INLINEFORM0 is the number of sw, swdp or all discovered matches at that threshold and INLINEFORM1 is the number of actual swdp pairs in INLINEFORM2 . We can compute a precision-recall curve by varying INLINEFORM3 . The final evaluation metric is the ap or the area under that curve. We generate evaluation sets of word pairs for the GlobalPhone development and test sets from all words that are at least 5 characters and 0.5 seconds long, except that we now also include different-word pairs."
],
"extractive_spans": [
"Precision and recall at a given threshold"
],
"free_form_answer": "",
"highlighted_evidence": [
"Precision and recall at a given threshold INLINEFORM1 are defined as INLINEFORM2\n\nwhere INLINEFORM0 is the number of sw, swdp or all discovered matches at that threshold and INLINEFORM1 is the number of actual swdp pairs in INLINEFORM2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"With how many languages do they experiment in the multilingual setup?",
"How do they extract target language bottleneck features?",
"Which dataset do they use?",
"Which intrisic measures do they use do evaluate obtained representations?"
],
"question_id": [
"5370a0062aae7fa4e700ae47aa143be5c5fc6b22",
"9a52a33d0ae5491c07f125454aea9a41b9babb82",
"8c46a26f9b0b41c656b5b55142d491600663defa",
"e5f8d2fc1332e982a54ee4b4c1f7f55e900d0b86"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Correspondence autoencoder training procedure (see section II-A).",
"TABLE I ZERO-RESOURCE LANGUAGES, DATASET SIZES IN HOURS.",
"TABLE III HIGH-RESOURCE LANGUAGES, DATASET SIZES IN HOURS.",
"TABLE II AVERAGE PRECISION SCORES ON THE SAME-DIFFERENT TASK (DEV SETS), SHOWING THE EFFECTS OF APPLYING VTLN TO THE INPUT FEATURES FOR THE UTD AND/OR CAE SYSTEMS. CAE INPUT IS EITHER MFCC OR MFCC+VTLN. TOPLINE RESULTS (ROWS 5-6) TRAIN CAE ON GOLD STANDARD PAIRS, RATHER THAN UTD OUTPUT. BASELINE RESULTS (FINAL ROWS) DIRECTLY EVALUATE ACOUSTIC FEATURES WITHOUT UTD/CAE TRAINING. BEST UNSUPERVISED RESULT IN BOLD.",
"Fig. 2. Multilingual ASR training architecture. All layers are shared between languages except for the language-specific output layers at the top.",
"TABLE IV WORD ERROR RATES OF MONOLINGUAL SGMM AND 10-LINGUAL TDNN ASR SYSTEM EVALUATED ON THE DEVELOPMENT SETS.",
"Fig. 3. Same-different task evaluation on the development sets for BNFs trained on different amounts of data. We compare training on up to 10 different languages with additional data in one language (English). For multilingual training, languages were added in two different orders: FR-PT-DE-TH-PL-KO-CS-BG-RU-VI (BNFs 1) and RU-CZ-VI-PL-KO-TH-BG-PT-DE-FR (BNFs 2). Each datapoint shows the result of adding an additional language. As baselines we include the best unsupervised cAE and the cAE trained on gold standard pairs from rows 4 and 6 of Table II.",
"TABLE V COMPARISON OF AP ON THE SAME-DIFFERENT TASK (HIGHER IS BETTER) AND ABX CROSS-/WITHIN-SPEAKER ERROR RATES (LOWER IS BETTER) FOR THE BUCKEYE ENGLISH AND XITSONGA CORPORA.",
"TABLE VI AP ON THE SAME-DIFFERENT TASK WHEN TRAINING CAE ON THE 10-LINGUAL BNFS FROM ABOVE (CAE-BNF) WITH UTD AND GOLD STANDARD WORD PAIRS (TEST SET RESULTS). BASELINES ARE MFCC+VTLN AND THE CAE MODELS FROM ROWS 4 AND 6 OF TABLE II THAT USE MFCC+VTLN AS INPUT FEATURES. BEST RESULT WITHOUT TARGET LANGUAGE SUPERVISION IN BOLD.",
"Fig. 4. Frame-wise cosine similarity matrices for two Spanish utterances from different speakers, comparing different feature representations. Dark regions correspond to high cosine similarity and values below 0.4 are clipped. Red rectangles mark matches discovered by the UTD system and include their DTW similarity scores. In this case the match is not found with PLPs as input features.",
"Fig. 5. Frame-wise cosine similarity matrices for two Spanish utterances from different speakers, comparing different feature representations. Dark regions correspond to high cosine similarity and values below 0.4 are clipped. Red rectangles mark matches discovered by the UTD system and include their DTW similarity scores. The discovered matches are incorrect—although phonetically similar—and found only for cAE features and BNFs.",
"TABLE VII SEGMENTATION AND CLUSTERING RESULTS (LOWER SCORES ARE BETTER, EXCEPT FOR TOKEN AND BOUNDARY F-SCORE AND CLUSTER PURITY)."
],
"file": [
"2-Figure1-1.png",
"3-TableI-1.png",
"4-TableIII-1.png",
"4-TableII-1.png",
"5-Figure2-1.png",
"5-TableIV-1.png",
"6-Figure3-1.png",
"7-TableV-1.png",
"7-TableVI-1.png",
"8-Figure4-1.png",
"9-Figure5-1.png",
"10-TableVII-1.png"
]
} | [
"With how many languages do they experiment in the multilingual setup?",
"Which dataset do they use?"
] | [
[
"1811.04791-Introduction-4",
"1811.04791-Experimental Setup-4",
"1811.04791-Experimental Setup-0"
],
[
"1811.04791-3-TableI-1.png",
"1811.04791-Evaluation using ZRSC Data and Measures-0",
"1811.04791-Experimental Setup-0",
"1811.04791-Experimental Setup-4"
]
] | [
"16",
"GlobalPhone\nCroatian\nHausa\nMandarin\nSpanish\nSwedish\nTurkish\nZRSC\nBuckeye\nXitsonga"
] | 129 |
1906.01749 | Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | Automatic generation of summaries from multiple news articles is a valuable tool as the number of online publications grows rapidly. Single document summarization (SDS) systems have benefited from advances in neural encoder-decoder model thanks to the availability of large datasets. However, multi-document summarization (MDS) of news articles has been limited to datasets of a couple of hundred examples. In this paper, we introduce Multi-News, the first large-scale MDS news dataset. Additionally, we propose an end-to-end model which incorporates a traditional extractive summarization model with a standard SDS model and achieves competitive results on MDS datasets. We benchmark several methods on Multi-News and release our data and code in hope that this work will promote advances in summarization in the multi-document setting. | {
"paragraphs": [
[
"Summarization is a central problem in Natural Language Processing with increasing applications as the desire to receive content in a concise and easily-understood format increases. Recent advances in neural methods for text summarization have largely been applied in the setting of single-document news summarization and headline generation BIBREF0 , BIBREF1 , BIBREF2 . These works take advantage of large datasets such as the Gigaword Corpus BIBREF3 , the CNN/Daily Mail (CNNDM) dataset BIBREF4 , the New York Times dataset BIBREF5 and the Newsroom corpus BIBREF6 , which contain on the order of hundreds of thousands to millions of article-summary pairs. However, multi-document summarization (MDS), which aims to output summaries from document clusters on the same topic, has largely been performed on datasets with less than 100 document clusters such as the DUC 2004 BIBREF7 and TAC 2011 BIBREF8 datasets, and has benefited less from advances in deep learning methods.",
"Multi-document summarization of news events offers the challenge of outputting a well-organized summary which covers an event comprehensively while simultaneously avoiding redundancy. The input documents may differ in focus and point of view for an event. We present an example of multiple input news documents and their summary in Figure TABREF2 . The three source documents discuss the same event and contain overlaps in content: the fact that Meng Wanzhou was arrested is stated explicitly in Source 1 and 3 and indirectly in Source 2. However, some sources contain information not mentioned in the others which should be included in the summary: Source 3 states that (Wanzhou) is being sought for extradition by the US while only Source 2 mentioned the attitude of the Chinese side.",
"Recent work in tackling this problem with neural models has attempted to exploit the graph structure among discourse relations in text clusters BIBREF9 or through an auxiliary text classification task BIBREF10 . Additionally, a couple of recent papers have attempted to adapt neural encoder decoder models trained on single document summarization datasets to MDS BIBREF11 , BIBREF12 , BIBREF13 .",
"However, data sparsity has largely been the bottleneck of the development of neural MDS systems. The creation of large-scale multi-document summarization dataset for training has been restricted due to the sparsity and cost of human-written summaries. liu18wikisum trains abstractive sequence-to-sequence models on a large corpus of Wikipedia text with citations and search engine results as input documents. However, no analogous dataset exists in the news domain. To bridge the gap, we introduce Multi-News, the first large-scale MDS news dataset, which contains 56,216 articles-summary pairs. We also propose a hierarchical model for neural abstractive multi-document summarization, which consists of a pointer-generator network BIBREF1 and an additional Maximal Marginal Relevance (MMR) BIBREF14 module that calculates sentence ranking scores based on relevancy and redundancy. We integrate sentence-level MMR scores into the pointer-generator model to adapt the attention weights on a word-level. Our model performs competitively on both our Multi-News dataset and the DUC 2004 dataset on ROUGE scores. We additionally perform human evaluation on several system outputs.",
"Our contributions are as follows: We introduce the first large-scale multi-document summarization datasets in the news domain. We propose an end-to-end method to incorporate MMR into pointer-generator networks. Finally, we benchmark various methods on our dataset to lay the foundations for future work on large-scale MDS."
],
[
"Traditional non-neural approaches to multi-document summarization have been both extractive BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 as well as abstractive BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Recently, neural methods have shown great promise in text summarization, although largely in the single-document setting, with both extractive BIBREF23 , BIBREF24 , BIBREF25 and abstractive methods BIBREF26 , BIBREF27 , BIBREF1 , BIBREF28 , BIBREF29 , BIBREF30 , BIBREF2 ",
"In addition to the multi-document methods described above which address data sparsity, recent work has attempted unsupervised and weakly supervised methods in non-news domains BIBREF31 , BIBREF32 . The methods most related to this work are SDS adapted for MDS data. zhang18mds adopts a hierarchical encoding framework trained on SDS data to MDS data by adding an additional document-level encoding. baumel18mds incorporates query relevance into standard sequence-to-sequence models. lebanoff18mds adapts encoder-decoder models trained on single-document datasets to the MDS case by introducing an external MMR module which does not require training on the MDS dataset. In our work, we incorporate the MMR module directly into our model, learning weights for the similarity functions simultaneously with the rest of the model."
],
[
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. We will release stable Wayback-archived links, and scripts to reproduce the dataset from these links. Our dataset is notably the first large-scale dataset for MDS on news articles. Our dataset also comes from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset BIBREF6 covers only 38 news sources). A total of 20 editors contribute to 85% of the total summaries on newser.com. Thus we believe that this dataset allows for the summarization of diverse source documents and summaries."
],
[
"The number of collected Wayback links for summaries and their corresponding cited articles totals over 250,000. We only include examples with between 2 and 10 source documents per summary, as our goal is MDS, and the number of examples with more than 10 sources was minimal. The number of source articles per summary present, after downloading and processing the text to obtain the original article text, varies across the dataset, as shown in Table TABREF4 . We believe this setting reflects real-world situations; often for a new or specialized event there may be only a few news articles. Nonetheless, we would like to summarize these events in addition to others with greater news coverage.",
"We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets. Table TABREF5 compares Multi-News to other news datasets used in experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. Additionally, we compare to the single-document CNNDM dataset, as this has been recently used in work which adapts SDS to MDS BIBREF11 . The number of examples in our Multi-News dataset is two orders of magnitude larger than previous MDS news data. The total number of words in the concatenated inputs is shorter than other MDS datasets, as those consist of 10 input documents, but larger than SDS datasets, as expected. Our summaries are notably longer than in other works, about 260 words on average. While compressing information into a shorter text is the goal of summarization, our dataset tests the ability of abstractive models to generate fluent text concise in meaning while also coherent in the entirety of its generally longer output, which we consider an interesting challenge."
],
[
"We report the percentage of n-grams in the gold summaries which do not appear in the input documents as a measure of how abstractive our summaries are in Table TABREF6 . As the table shows, the smaller MDS datasets tend to be more abstractive, but Multi-News is comparable and similar to the abstractiveness of SDS datasets. Grusky:18 additionally define three measures of the extractive nature of a dataset, which we use here for a comparison. We extend these notions to the multi-document setting by concatenating the source documents and treating them as a single input. Extractive fragment coverage is the percentage of words in the summary that are from the source article, measuring the extent to which a summary is derivative of a text: DISPLAYFORM0 ",
"where A is the article, S the summary, and INLINEFORM0 the set of all token sequences identified as extractive in a greedy manner; if there is a sequence of source tokens that is a prefix of the remainder of the summary, that is marked as extractive. Similarly, density is defined as the average length of the extractive fragment to which each summary word belongs: DISPLAYFORM0 ",
"Finally, compression ratio is defined as the word ratio between the articles and its summaries: DISPLAYFORM0 ",
"These numbers are plotted using kernel density estimation in Figure FIGREF11 . As explained above, our summaries are larger on average, which corresponds to a lower compression rate. The variability along the x-axis (fragment coverage), suggests variability in the percentage of copied words, with the DUC data varying the most. In terms of y-axis (fragment density), our dataset shows variability in the average length of copied sequence, suggesting varying styles of word sequence arrangement. Our dataset exhibits extractive characteristics similar to the CNNDM dataset."
],
[
"As discussed above, large scale datasets for multi-document news summarization are lacking. There have been several attempts to create MDS datasets in other domains. zopf18mds introduce a multi-lingual MDS dataset based on English and German Wikipedia articles as summaries to create a set of about 7,000 examples. liu18wikisum use Wikipedia as well, creating a dataset of over two million examples. That paper uses Wikipedia references as input documents but largely relies on Google search to increase topic coverage. We, however, are focused on the news domain, and the source articles in our dataset are specifically cited by the corresponding summaries. Related work has also focused on opinion summarization in the multi-document setting; angelidis18opinions introduces a dataset of 600 Amazon product reviews."
],
[
"We introduce several common methods for summarization."
],
[
"The pointer-generator network BIBREF1 is a commonly-used encoder-decoder summarization model with attention BIBREF33 which combines copying words from source documents and outputting words from a vocabulary. The encoder converts each token INLINEFORM0 in the document into the hidden state INLINEFORM1 . At each decoding step INLINEFORM2 , the decoder has a hidden state INLINEFORM3 . An attention distribution INLINEFORM4 is calculated as in BIBREF33 and is used to get the context vector INLINEFORM5 , which is a weighted sum of the encoder hidden states, representing the semantic meaning of the related document content for this decoding time step:",
" DISPLAYFORM0 ",
" The context vector INLINEFORM0 and the decoder hidden state INLINEFORM1 are then passed to two linear layers to produce the vocabulary distribution INLINEFORM2 . For each word, there is also a copy probability INLINEFORM3 . It is the sum of the attention weights over all the word occurrences:",
" DISPLAYFORM0 ",
" The pointer-generator network has a soft switch INLINEFORM0 , which indicates whether to generate a word from vocabulary by sampling from INLINEFORM1 , or to copy a word from the source sequence by sampling from the copy probability INLINEFORM2 .",
" DISPLAYFORM0 ",
"where INLINEFORM0 is the decoder input. The final probability distribution is a weighted sum of the vocabulary distribution and copy probability:",
"P(w) = pgenPvocab(w) + (1-pgen)Pcopy(w)"
],
[
"The Transformer model replaces recurrent layers with self-attention in an encoder-decoder framework and has achieved state-of-the-art results in machine translation BIBREF34 and language modeling BIBREF35 , BIBREF36 . The Transformer has also been successfully applied to SDS BIBREF2 . More specifically, for each word during encoding, the multi-head self-attention sub-layer allows the encoder to directly attend to all other words in a sentence in one step. Decoding contains the typical encoder-decoder attention mechanisms as well as self-attention to all previous generated output. The Transformer motivates the elimination of recurrence to allow more direct interaction among words in a sequence."
],
[
"Maximal Marginal Relevance (MMR) is an approach for combining query-relevance with information-novelty in the context of summarization BIBREF14 . MMR produces a ranked list of the candidate sentences based on the relevance and redundancy to the query, which can be used to extract sentences. The score is calculated as follows:",
"MMR=*argmax D i RS [ Sim 1 (D i ,Q)-(1-) D j S Sim2 (D i ,D j ) ] where INLINEFORM0 is the collection of all candidate sentences, INLINEFORM1 is the query, INLINEFORM2 is the set of sentences that have been selected, and INLINEFORM3 is set of the un-selected ones. In general, each time we want to select a sentence, we have a ranking score for all the candidates that considers relevance and redundancy. A recent work BIBREF11 applied MMR for multi-document summarization by creating an external module and a supervised regression model for sentence importance. Our proposed method, however, incorporates MMR with the pointer-generator network in an end-to-end manner that learns parameters for similarity and redundancy."
],
[
"In this section, we provide the details of our Hierarchical MMR-Attention Pointer-generator (Hi-MAP) model for multi-document neural abstractive summarization. We expand the existing pointer-generator network model into a hierarchical network, which allows us to calculate sentence-level MMR scores. Our model consists of a pointer-generator network and an integrated MMR module, as shown in Figure FIGREF19 ."
],
[
"To expand our model into a hierarchical one, we compute sentence representations on both the encoder and decoder. The input is a collection of sentences INLINEFORM0 from all the source documents, where a given sentence INLINEFORM1 is made up of input word tokens. Word tokens from the whole document are treated as a single sequential input to a Bi-LSTM encoder as in the original encoder of the pointer-generator network from see2017ptrgen (see bottom of Figure FIGREF19 ). For each time step, the output of an input word token INLINEFORM2 is INLINEFORM3 (we use superscript INLINEFORM4 to indicate word-level LSTM cells, INLINEFORM5 for sentence-level).",
"To obtain a representation for each sentence INLINEFORM0 , we take the encoder output of the last token for that sentence. If that token has an index of INLINEFORM1 in the whole document INLINEFORM2 , then the sentence representation is marked as INLINEFORM3 . The word-level sentence embeddings of the document INLINEFORM4 will be a sequence which is fed into a sentence-level LSTM network. Thus, for each input sentence INLINEFORM5 , we obtain an output hidden state INLINEFORM6 . We then get the final sentence-level embeddings INLINEFORM7 (we omit the subscript for sentences INLINEFORM8 ). To obtain a summary representation, we simply treat the current decoded summary as a single sentence and take the output of the last step of the decoder: INLINEFORM9 . We plan to investigate alternative methods for input and output sentence embeddings, such as separate LSTMs for each sentence, in future work."
],
[
"Now, we have all the sentence-level representation from both the articles and summary, and then we apply MMR to compute a ranking on the candidate sentences INLINEFORM0 . Intuitively, incorporating MMR will help determine salient sentences from the input at the current decoding step based on relevancy and redundancy.",
"We follow Section 4.3 to compute MMR scores. Here, however, our query document is represented by the summary vector INLINEFORM0 , and we want to rank the candidates in INLINEFORM1 . The MMR score for an input sentence INLINEFORM2 is then defined as:",
"MMR i = Sim 1 (hs i ,ssum)-(1-) sj D, j i Sim2 (hs i ,hs j ) We then add a softmax function to normalize all the MMR scores of these candidates as a probability distribution. MMR i = ( MMR i )i( MMR i ) Now we define the similarity function between each candidate sentence INLINEFORM0 and summary sentence INLINEFORM1 to be: DISPLAYFORM0 ",
"where INLINEFORM0 is a learned parameter used to transform INLINEFORM1 and INLINEFORM2 into a common feature space.",
"For the second term of Equation SECREF21 , instead of choosing the maximum score from all candidates except for INLINEFORM0 , which is intended to find the candidate most similar to INLINEFORM1 , we choose to apply a self-attention model on INLINEFORM2 and all the other candidates INLINEFORM3 . We then choose the largest weight as the final score:",
" DISPLAYFORM0 ",
" Note that INLINEFORM0 is also a trainable parameter. Eventually, the MMR score from Equation SECREF21 becomes:",
" MMR i = Sim 1 (hsi,ssum)-(1-) scorei"
],
[
"After we calculate INLINEFORM0 for each sentence representation INLINEFORM1 , we use these scores to update the word-level attention weights for the pointer-generator model shown by the blue arrows in Figure FIGREF19 . Since INLINEFORM2 is a sentence weight for INLINEFORM3 , each token in the sentence will have the same value of INLINEFORM4 . The new attention for each input token from Equation EQREF14 becomes: DISPLAYFORM0 "
],
[
"In this section we describe additional methods we compare with and present our assumptions and experimental process."
],
[
"First We concatenate the first sentence of each article in a document cluster as the system summary. For our dataset, First- INLINEFORM0 means the first INLINEFORM1 sentences from each source article will be concatenated as the summary. Due to the difference in gold summary length, we only use First-1 for DUC, as others would exceed the average summary length.",
"LexRank Initially proposed by BIBREF16 , LexRank is a graph-based method for computing relative importance in extractive summarization.",
"TextRank Introduced by BIBREF17 , TextRank is a graph-based ranking model. Sentence importance scores are computed based on eigenvector centrality within a global graph from the corpus.",
"MMR In addition to incorporating MMR in our pointer generator network, we use this original method as an extractive summarization baseline. When testing on DUC data, we set these extractive methods to give an output of 100 tokens and 300 tokens for Multi-News data."
],
[
"PG-Original, PG-MMR These are the original pointer-generator network models reported by BIBREF11 .",
"PG-BRNN The PG-BRNN model is a pointer-generator implementation from OpenNMT. As in the original paper BIBREF1 , we use a 1-layer bi-LSTM as encoder, with 128-dimensional word-embeddings and 256-dimensional hidden states for each direction. The decoder is a 512-dimensional single-layer LSTM. We include this for reference in addition to PG-Original, as our Hi-MAP code builds upon this implementation.",
"CopyTransformer Instead of using an LSTM, the CopyTransformer model used in Gehrmann:18 uses a 4-layer Transformer of 512 dimensions for encoder and decoder. One of the attention heads is chosen randomly as the copy distribution. This model and the PG-BRNN are run without the bottom-up masked attention for inference from Gehrmann:18 as we did not find a large improvement when reproducing the model on this data."
],
[
"Following the setting from BIBREF11 , we report ROUGE BIBREF37 scores, which measure the overlap of unigrams (R-1), bigrams (R-2) and skip bigrams with a max distance of four words (R-SU). For the neural abstractive models, we truncate input articles to 500 tokens in the following way: for each example with INLINEFORM0 source input documents, we take the first 500 INLINEFORM1 tokens from each source document. As some source documents may be shorter, we iteratively determine the number of tokens to take from each document until the 500 token quota is reached. Having determined the number of tokens per source document to use, we concatenate the truncated source documents into a single mega-document. This effectively reduces MDS to SDS on longer documents, a commonly-used assumption for recent neural MDS papers BIBREF10 , BIBREF38 , BIBREF11 . We chose 500 as our truncation size as related MDS work did not find significant improvement when increasing input length from 500 to 1000 tokens BIBREF38 . We simply introduce a special token between source documents to aid our models in detecting document-to-document relationships and leave direct modeling of this relationship, as well as modeling longer input sequences, to future work. We hope that the dataset we introduce will promote such work. For our Hi-MAP model, we applied a 1-layer bidirectional LSTM network, with the hidden state dimension 256 in each direction. The sentence representation dimension is also 256. We set the INLINEFORM2 to calculate the MMR value in Equation SECREF21 .",
"As our focus was on deep methods for MDS, we only tested several non-neural baselines. However, other classical methods deserve more attention, for which we refer the reader to Hong14 and leave the implementation of these methods on Multi-News for future work.",
""
],
[
"In Table TABREF30 and Table TABREF31 we report ROUGE scores on DUC 2004 and Multi-News datasets respectively. We use DUC 2004, as results on this dataset are reported in lebanoff18mds, although this dataset is not the focus of this work. For results on DUC 2004, models were trained on the CNNDM dataset, as in lebanoff18mds. PG-BRNN and CopyTransformer models, which were pretrained by OpenNMT on CNNDM, were applied to DUC without additional training, analogous to PG-Original. We also experimented with training on Multi-News and testing on DUC data, but we did not see significant improvements. We attribute the generally low performance of pointer-generator, CopyTransformer and Hi-MAP to domain differences between DUC and CNNDM as well as DUC and Multi-News. These domain differences are evident in the statistics and extractive metrics discussed in Section 3.",
"Additionally, for both DUC and Multi-News testing, we experimented with using the output of 500 tokens from extractive methods (LexRank, TextRank and MMR) as input to the abstractive model. However, this did not improve results. We believe this is because our truncated input mirrors the First-3 baseline, which outperforms these three extractive methods and thus may provide more information as input to the abstractive model.",
"Our model outperforms PG-MMR when trained and tested on the Multi-News dataset. We see much-improved model performances when trained and tested on in-domain Multi-News data. The Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU. Also, we notice a drop in performance between PG-original, and PG-MMR (which takes the pre-trained PG-original and applies MMR on top of the model). Our PG-MMR results correspond to PG-MMR w Cosine reported in lebanoff18mds. We trained their sentence regression model on Multi-News data and leave the investigation of transferring regression models from SDS to Multi-News for future work.",
"In addition to automatic evaluation, we performed human evaluation to compare the summaries produced. We used Best-Worst Scaling BIBREF39 , BIBREF40 , which has shown to be more reliable than rating scales BIBREF41 and has been used to evaluate summaries BIBREF42 , BIBREF32 . Annotators were presented with the same input that the systems saw at testing time; input documents were truncated, and we separated input documents by visible spaces in our annotator interface. We chose three native English speakers as annotators. They were presented with input documents, and summaries generated by two out of four systems, and were asked to determine which summary was better and which was worse in terms of informativeness (is the meaning in the input text preserved in the summary?), fluency (is the summary written in well-formed and grammatical English?) and non-redundancy (does the summary avoid repeating information?). We randomly selected 50 documents from the Multi-News test set and compared all possible combinations of two out of four systems. We chose to compare PG-MMR, CopyTransformer, Hi-MAP and gold summaries. The order of summaries was randomized per example.",
"The results of our pairwise human-annotated comparison are shown in Table TABREF32 . Human-written summaries were easily marked as better than other systems, which, while expected, shows that there is much room for improvement in producing readable, informative summaries. We performed pairwise comparison of the models over the three metrics combined, using a one-way ANOVA with Tukey HSD tests and INLINEFORM0 value of 0.05. Overall, statistically significant differences were found between human summaries score and all other systems, CopyTransformer and the other two models, and our Hi-MAP model compared to PG-MMR. Our Hi-MAP model performs comparably to PG-MMR on informativeness and fluency but much better in terms of non-redundancy. We believe that the incorporation of learned parameters for similarity and redundancy reduces redundancy in our output summaries. In future work, we would like to incorporate MMR into Transformer models to benefit from their fluent summaries."
],
[
" In this paper we introduce Multi-News, the first large-scale multi-document news summarization dataset. We hope that this dataset will promote work in multi-document summarization similar to the progress seen in the single-document case. Additionally, we introduce an end-to-end model which incorporates MMR into a pointer-generator network, which performs competitively compared to previous multi-document summarization models. We also benchmark methods on our dataset. In the future we plan to explore interactions among documents beyond concatenation and experiment with summarizing longer input documents."
]
],
"section_name": [
"Introduction",
"Related Work",
"Multi-News Dataset",
"Statistics and Analysis",
"Diversity",
"Other Datasets",
"Preliminaries",
"Pointer-generator Network",
"Transformer",
"MMR",
"Hi-MAP Model",
"Sentence representations",
"MMR-Attention",
"MMR-attention Pointer-generator",
"Experiments",
"Baseline and Extractive Methods",
"Neural Abstractive Methods",
"Experimental Setting",
"Analysis and Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4a5d7d8954da15052e999e4a20a2d51ac786be0a",
"8cda5e7474317f9139323c2ed9e5964b8eaf7275",
"9e61cf619bf8c813b532cf9d25589eafcef5f0c6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"aa1f745b65332757b0fb0b0df007464d33422c26",
"dfe9a2325d92410aeb9c4d28b10ebd3d4e27ea96"
],
"answer": [
{
"evidence": [
"Our model outperforms PG-MMR when trained and tested on the Multi-News dataset. We see much-improved model performances when trained and tested on in-domain Multi-News data. The Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU. Also, we notice a drop in performance between PG-original, and PG-MMR (which takes the pre-trained PG-original and applies MMR on top of the model). Our PG-MMR results correspond to PG-MMR w Cosine reported in lebanoff18mds. We trained their sentence regression model on Multi-News data and leave the investigation of transferring regression models from SDS to Multi-News for future work."
],
"extractive_spans": [
"Our model outperforms PG-MMR when trained and tested on the Multi-News dataset",
"Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model outperforms PG-MMR when trained and tested on the Multi-News dataset.",
"The Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: ROUGE scores for models trained and tested on the Multi-News dataset."
],
"extractive_spans": [],
"free_form_answer": "Their model ranked 2nd on R-1 metric and ranked 1st on R-2 and R-SU metrics",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: ROUGE scores for models trained and tested on the Multi-News dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1cff300495ad4c5fc7552f9c3bca04456c67e7c1",
"25f84af4ea997a28a3674b363d4ae27281080efa",
"9e58fe62c98a67a1d8fdff55bf6ef46a5b5f36d1"
],
"answer": [
{
"evidence": [
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. We will release stable Wayback-archived links, and scripts to reproduce the dataset from these links. Our dataset is notably the first large-scale dataset for MDS on news articles. Our dataset also comes from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset BIBREF6 covers only 38 news sources). A total of 20 editors contribute to 85% of the total summaries on newser.com. Thus we believe that this dataset allows for the summarization of diverse source documents and summaries."
],
"extractive_spans": [],
"free_form_answer": "1500 news sites",
"highlighted_evidence": [
"Our dataset also comes from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset BIBREF6 covers only 38 news sources)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. We will release stable Wayback-archived links, and scripts to reproduce the dataset from these links. Our dataset is notably the first large-scale dataset for MDS on news articles. Our dataset also comes from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset BIBREF6 covers only 38 news sources). A total of 20 editors contribute to 85% of the total summaries on newser.com. Thus we believe that this dataset allows for the summarization of diverse source documents and summaries."
],
"extractive_spans": [],
"free_form_answer": "From a diverse set of news sources on site newser.com",
"highlighted_evidence": [
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. ",
"Our dataset also comes from a diverse set of news sources;"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. We will release stable Wayback-archived links, and scripts to reproduce the dataset from these links. Our dataset is notably the first large-scale dataset for MDS on news articles. Our dataset also comes from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset BIBREF6 covers only 38 news sources). A total of 20 editors contribute to 85% of the total summaries on newser.com. Thus we believe that this dataset allows for the summarization of diverse source documents and summaries."
],
"extractive_spans": [
"newser.com"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"368bf15403f48472c30f445bdf13274ff1d49f69",
"855fc4c5cb2723eb51259ac136ebb05c2727b714",
"c2c05cf90180eb2d4784514cfbacd75d15d122a7"
],
"answer": [
{
"evidence": [
"We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets. Table TABREF5 compares Multi-News to other news datasets used in experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. Additionally, we compare to the single-document CNNDM dataset, as this has been recently used in work which adapts SDS to MDS BIBREF11 . The number of examples in our Multi-News dataset is two orders of magnitude larger than previous MDS news data. The total number of words in the concatenated inputs is shorter than other MDS datasets, as those consist of 10 input documents, but larger than SDS datasets, as expected. Our summaries are notably longer than in other works, about 260 words on average. While compressing information into a shorter text is the goal of summarization, our dataset tests the ability of abstractive models to generate fluent text concise in meaning while also coherent in the entirety of its generally longer output, which we consider an interesting challenge."
],
"extractive_spans": [],
"free_form_answer": "56216",
"highlighted_evidence": [
"We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"However, data sparsity has largely been the bottleneck of the development of neural MDS systems. The creation of large-scale multi-document summarization dataset for training has been restricted due to the sparsity and cost of human-written summaries. liu18wikisum trains abstractive sequence-to-sequence models on a large corpus of Wikipedia text with citations and search engine results as input documents. However, no analogous dataset exists in the news domain. To bridge the gap, we introduce Multi-News, the first large-scale MDS news dataset, which contains 56,216 articles-summary pairs. We also propose a hierarchical model for neural abstractive multi-document summarization, which consists of a pointer-generator network BIBREF1 and an additional Maximal Marginal Relevance (MMR) BIBREF14 module that calculates sentence ranking scores based on relevancy and redundancy. We integrate sentence-level MMR scores into the pointer-generator model to adapt the attention weights on a word-level. Our model performs competitively on both our Multi-News dataset and the DUC 2004 dataset on ROUGE scores. We additionally perform human evaluation on several system outputs."
],
"extractive_spans": [
"56,216"
],
"free_form_answer": "",
"highlighted_evidence": [
"To bridge the gap, we introduce Multi-News, the first large-scale MDS news dataset, which contains 56,216 articles-summary pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets. Table TABREF5 compares Multi-News to other news datasets used in experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. Additionally, we compare to the single-document CNNDM dataset, as this has been recently used in work which adapts SDS to MDS BIBREF11 . The number of examples in our Multi-News dataset is two orders of magnitude larger than previous MDS news data. The total number of words in the concatenated inputs is shorter than other MDS datasets, as those consist of 10 input documents, but larger than SDS datasets, as expected. Our summaries are notably longer than in other works, about 260 words on average. While compressing information into a shorter text is the goal of summarization, our dataset tests the ability of abstractive models to generate fluent text concise in meaning while also coherent in the entirety of its generally longer output, which we consider an interesting challenge."
],
"extractive_spans": [],
"free_form_answer": "56216 ",
"highlighted_evidence": [
"We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they use pretrained embeddings in their model?",
"What results are obtained by their model?",
"What sources do the news come from?",
"What is the size of Multi-news dataset?"
],
"question_id": [
"19578949108ef72603afe538059ee55b4dee0751",
"44435fbd4087fa711835d267036b6a1f82336a22",
"86656aae3c27c6ea108f5600dd09ab7e421d876a",
"22488c8628b6db5fd708b6471c31a8eac31f66df"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: An example from our multi-document summarization dataset showing the input documents and their summary. Content found in the summary is colorcoded.",
"Table 2: The number of source articles per example, by frequency, in our dataset.",
"Table 3: Comparison of our Multi-News dataset to other MDS datasets as well as an SDS dataset used as training data for MDS (CNNDM). Training, validation and testing size splits (article(s) to summary) are provided when applicable. Statistics for multi-document inputs are calculated on the concatenation of all input sources.",
"Table 4: Percentage of n-grams in summaries which do not appear in the input documents , a measure of the abstractiveness, in relevant datasets.",
"Figure 1: Density estimation of extractive diversity scores as explained in Section 3.2. Large variability along the y-axis suggests variation in the average length of source sequences present in the summary, while the x axis shows variability in the average length of the extractive fragments to which summary words belong.",
"Figure 2: Our Hierarchical MMR-Attention Pointergenerator (Hi-MAP) model incorporates sentence-level representations and hidden-state-based MMR on top of a standard pointer-generator network.",
"Table 5: ROUGE scores on the DUC 2004 dataset for models trained on CNNDM data, as in Lebanoff et al. (2018).3",
"Table 6: ROUGE scores for models trained and tested on the Multi-News dataset.",
"Table 7: Number of times a system was chosen as best in pairwise comparisons according to informativeness, fluency and non-redundancy."
],
"file": [
"1-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png"
]
} | [
"What results are obtained by their model?",
"What sources do the news come from?",
"What is the size of Multi-news dataset?"
] | [
[
"1906.01749-7-Table6-1.png",
"1906.01749-Analysis and Discussion-2"
],
[
"1906.01749-Multi-News Dataset-0"
],
[
"1906.01749-Introduction-3",
"1906.01749-Statistics and Analysis-1"
]
] | [
"Their model ranked 2nd on R-1 metric and ranked 1st on R-2 and R-SU metrics",
"From a diverse set of news sources on site newser.com",
"56216 "
] | 130 |
2004.02334 | Neural Machine Translation with Imbalanced Classes | We cast neural machine translation (NMT) as a classification task in an autoregressive setting and analyze the limitations of both classification and autoregression components. Classifiers are known to perform better with balanced class distributions during training. Since the Zipfian nature of languages causes imbalanced classes, we explore the effect of class imbalance on NMT. We analyze the effect of vocabulary sizes on NMT performance and reveal an explanation for 'why' certain vocabulary sizes are better than others. | {
"paragraphs": [
[
"NLP tasks such as sentiment analysis BIBREF0, BIBREF1, spam detection, etc., are modeled as classification tasks where instances are independently classified. Tasks such as part-of-speech tagging BIBREF2, and named entity recognition BIBREF3 are some examples for sequence tagging in which tokens are classified into tags within the context of sequences. Similarly, we can cast neural machine translation (NMT), an example of a natural language generation (NLG) task, as a form of classification task where tokens are classified within an autoregressor (see Section SECREF2) .",
"Since the parameters of ML classification models are estimated from training data, certain biases in the training data affect the final performance of model. Among those biases, class imbalance is a topic of our interest. Class imbalance is said to exist when one or more classes are not of approximately equal frequency in data. The effect of class imbalance has been extensively studied in several domains where classifiers are used (see Section SECREF32). With neural networks, the imbalanced learning is mostly targeted to computer vision tasks; NLP tasks are underexplored BIBREF4. Word types in natural language models follow a Zipfian distribution, i.e. in any natural language corpus, we observe that a few types are extremely frequent and the vast number of others lie on the long tail of infrequency. The Zipfian distribution thus causes two problems to the classifier based NLG systems:",
"Open-ended Vocabulary: Treating each word type in the vocabulary as a class of ML classifier does not cover the entire vocabulary, because the vocabulary is open-ended and classifiers model a finite set of classes only.",
"Imbalanced Classes: There are a few extremely frequent types and many infrequent types, causing an extreme imbalance. Such an imbalance, in other domains where classifiers are used, has been known to cause undesired biases and severe degradation in the performance BIBREF4.",
"Subwords obtained through e.g. byte pair encoding (BPE) BIBREF5 addresses the open-ended vocabulary problem by using only a finite set of subwords. Due to the benefit and simplicity of BPE, it is rightfully part of the majority of current NMT models. However, the choice of vocabulary size used for BPE is a hyperparameter whose effect is not well understood. In practice, BPE vocabulary choice is either arbitrary or chosen from several trial-and-errors.",
"Regarding the problem of imbalanced classes, steedman-2008-last states that “the machine learning techniques that we rely on are actually very bad at inducing systems for which the crucial information is in rare events”. However, to the best of our knowledge, this problem has not yet been directly addressed in the NLG setting.",
"In this work, we attempt to find answers to these questions: `What value of BPE vocabulary size is best for NMT?', and more crucially an explanation for `Why that value?'. As we will see, the answers and explanations for those are an immediate consequence of a broader question, namely `What is the impact of Zipfian imbalance on classifier-based NLG?'",
"The contributions of this paper are as follows: We offer a simplified view of NMT architectures by re-envisioning them as two high-level components: a classifier and an autoregressor (Section SECREF2). For the best performance of the classifier, we argue that the balanced class distribution is desired, and describe a method to measure class imbalance in a Zipfian distribution (Section SECREF6). For the best performance of the autoregressor, we argue that it is desired to have shorter sequences (Section SECREF7). In Section SECREF8, we describe how BPE vocabulary relates with the desired settings for both classifier and autoregressor. Our experimental setup is described in Section SECREF3, followed by the analysis of results in Section SECREF4 that offers an explanation with evidence for why some vocabulary sizes are better than others. Section SECREF5 uncovers the impact of class imbalance, particularly the discrimination on classes based on their frequency. Section SECREF6 provides an overview of the related work, followed by a conclusion in Section SECREF7."
],
[
"Machine translation is commonly defined as the task of transforming sequences from the form $x = x_1 x_2 x_3 ... x_m$ to $y = y_1 y_2 y_3 ... y_n$, where $x$ is from source language $X$ and $y$ is from target language $Y$ respectively. NMT accomplishes the translation objective using artificial neural networks.",
"There are many variations of NMT architectures with a varied range of differences (Section SECREF30), however, all share the common objective of maximizing ${ \\prod _{t=1}^{n} P(y_t | y_{<t}, x_{1:m})}$ for pairs $(x_{1:m}, y_{1:n})$ sampled from a parallel dataset. NMT architectures are commonly viewed as a pair of encoder-decoder networks. We instead re-envision the NMT architecture as two higher level components: an autoregressor ($R$) and a token classifier ($C$), as shown in Figure FIGREF4.",
"Autoregressor $R$, BIBREF6 being the main component of the NMT model, has many implementations based on various neural network architectures: RNNs such as LSTM and GRU, CNN, and Transformer (Section SECREF30). For any given time step $t$, $R$ transforms the input context consisting of $y_{<t}, x_{1:m}$ into a hidden state vector as $h_t = R(y_{<t}, x_{1:m})$.",
"Classifier $C$ is the same across all architectures. It maps $h_t$ to a probability distribution $P(y_j | h_t) \\forall y_j \\in V_Y$, where $V_Y$ is the vocabulary of $Y$. Intuitively, $C$ scores $h_t$ against an embedding of every class type, then transforms those arbitrarily ranged scores into a probability distribution using the SoftMax normalizer. In machine learning, input to classifiers such as $C$ is generally described as features that are either hand-engineered or automatically extracted using neural networks. In this high-level view of NMT architecture, $R$ is a neural network that serves as an automatic feature extractor for $C$."
],
[
"Untreated, class imbalance leads to bias based on class frequencies. Specifically, classification learning algorithms focus on frequent classes while paying relatively less importance to infrequent classes. Frequency-based bias leads to a poor recall of infrequent classes.",
"When a model is used in a domain mismatch scenario, i.e. where a test set's distribution does not match the training set's distribution, model performance generally degrades. It is not surprising that frequency-biased classifiers show particular degradation in domain mismatch scenarios, as types that were infrequent in the training distribution and were ignored by learning algorithm may appear with high frequency in the newer domain. koehn2017sixchallenges showed empirical evidence of poor generalization of NMT to out-of-domain datasets.",
"In other classification tasks, where each instance is classified independently, methods such as up-sampling the infrequent classes and down-sampling frequent classes are used. In NMT, since the classification is done within the context of sequences, it is possible to accomplish the objective of balancing by altering the lengths of sequences. This phenomenon of achieving balance by altering the sequence lengths is indirectly achieved by, e.g., BPE subword segmentation BIBREF5.",
"Quantification of Zipfian Imbalance: The class imbalance of an observed distribution of training classes is quantified as Divergence ($D$) from a balanced (uniform) distribution. Divergence is measured using a simplified version of Earth Mover Distance, in which the total cost for moving a probability mass between any two bins (analogous to class types) is the sum of the total mass moved. Since any mass moved out of one bin is moved into another, we divide the total per-bin mass moves in half to avoid double counting. Therefore, the imbalance measure $D$ on $K$ class distributions where $p_i$ is the observed probability of class $i$ in the training data is computed as:",
"The range of D is $0 \\le D \\le 1$, and we argue that a lower value of $D$ a desired setting for $C$."
],
[
"Every autoregressive model is an approximation, some maybe better than others, but no model is a perfect one. Therefore, there is a non-zero probability of an error at each time step. The total error accumulated along the sequence grows in proportion to the length of the sequence. These accumulated errors alter the prediction of subsequent tokens in the sequence. Even though beam search attempts to mitigate this, it does not completely resolve it. These challenges with respect to long sentences and beam size are examined by koehn2017sixchallenges. If sequence encoders such as BPE subwords can reduce the steps in the sequences, this indirectly reduces the errors in language generation by imperfectly approximated autoregressors.",
"We summarize sequence lengths using Mean Sequence Length, $\\mu $, computed trivially as the arithmetic mean of the lengths of target language sequences after encoding them:",
"We argue that a smaller $\\mu $ is a desired setting for $R$."
],
[
"BPE vocabulary is learned using a greedy and iterative algorithm BIBREF5. The BPE learning algorithm starts with characters as its initial vocabulary. In each iteration, it greedily selects a pair of the most frequent types (either characters or subwords) that co-occur, and replaces them with a newly created compound type. During segmentation, BPE splitting is performed left-to-right with greedily selecting the longest matched code in the vocabulary. These operations have an effect on both $D$ and $\\mu $.",
"Effect of BPE on $\\mu $: BPE segmentation in comparison to word segmentation, expands rare words into two or more subwords, thus increases the sequence length. In comparison to character segmentation, BPE groups frequent characters as subwords thus reduces the length. BPE vocabulary size is more general that the words and characters are special cases that are attained at the two extremes BIBREF7. It can be used to create sequences that are long as character sequences (undesired for $R$), or short as word sequences (desired for $R$).",
"Effect of BPE on $D$: Whether viewed as a merging of frequent subwords into a relatively less frequent compound, or splitting of rare words into relatively frequent subwords, it alters the class distribution by moving the probability mass of classes. Hence, by altering class distribution, it also alters $D$.",
"Figure FIGREF9 shows the relation between the BPE vocabulary size on both $D$ and $\\mu $. A smaller vocabulary of BPE, after merging a few extremely frequent pairs, has smallest $D$ which is a desired setting for $C$, but at the same point $\\mu $ is large and undesired for $R$. When BPE vocabulary is set to a large one, the effect is reversed i.e. $D$ is large and unfavorable to $C$ while $\\mu $ small and favorable to $R$. As seen with evidence in Section SECREF4, there exists optimal vocabulary size of BPE that achieve the best setting for both $C$ and $R$. Hence, BPE vocabulary size is not arbitrary since it can be tuned to reduce $D$ while keeping $\\mu $ short enough as well.",
"For a comparison, word and character segmentation have no influence on $\\mu $. However, the trim size of word and character vocabulary has an effect on class imbalance $D$ and Out-of-Vocabulary (OOV) tokens and is presented in Figures FIGREF9 and FIGREF9, respectively. The summary of word, character, and BPE with respect to $D$ and $\\mu $ is presented in Table TABREF10."
],
[
"We perform NMT experiments using the base Transformer architecture BIBREF8. A common practice, as seen in vaswani2017attention's experimental setup, is to learn BPE vocabulary jointly for the source and target languages, which facilitates three-way weight sharing between the encoder's input, the decoder's input, and the decoder's output embeddings (classifier's class embeddings) BIBREF9. To facilitate fine-grained analysis of source and target vocabulary sizes and their effect on class imbalance, our models separately learn source and target vocabularies; weight sharing between the encoder's and decoder's embeddings is thus not possible. For the target language, however, we share weights between the decoder's input embeddings and the classifier's class embeddings."
],
[
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
[
"Our Transformer NMT model has 6 layers in each of the encoder and decoder, 8 attention heads, 512 hidden vector units, and feed forward intermediate size of 2048. We use label smoothing at 0.1. We use the Adam optimizer BIBREF10 with a controlled learning rate that warms up for 8,000 steps followed by the decay rate recommended for training Transformer models. All models are trained for 100,000 optimizer steps. Mini-batch size per step is no more than 4,200 tokens. We group mini-batches into sentences of similar lengths to reduce padding tokens per batch BIBREF8. We trim sequences longer than 512 time steps. The average training time per experiment is 10Hrs on Nvidia 1080Ti GPUs. For inference (i.e decoding the test sets), we use checkpoint averaging of the last 5 states each, saved at 1000 optimizer steps apart, and a beam size of 4."
],
[
"We use character, word, and BPE subword encoding with various vocabulary sizes to analyze the effect of $D$ and $\\mu $. Each experiment is run twice and we report the mean of BLEU scores in Table TABREF15. The BLEU scores were computed using SacreBLEU BIBREF11 . All results are in Table TABREF15. We observe the following:",
"Experiments #1 and #2 use a word vocabulary, while #3 and #4 use a BPE vocabulary. The results show that with BPE, increasing the vocabulary size at this range reduces BLEU. Experiment #3 with a vocabulary as large as $64k$ BPE types even fails to reach the comparable Word model's (#1) BLEU score, which raises the need for a systematic understanding of `Why BPE model reduced BLEU when vocabulary increased from $32k$ to $64k$?'. With increase in BPE vocabulary, $\\mu $ is reduced which is favorable to $R$. An explanation is that the $D$ increased which is unfavorable to $C$. For Word models, there is an effect of OOVs along with $D$, and it is beyond the scope of this work.",
"Experiments #3, #4, #5, #6 show that with BPE, decreasing the vocabulary indeed improves BLEU. Hence the larger BPE vocabulary such as $32k$ and $64k$ are not the best choice.",
"Experiments #7, #8, #9 and #10 with comparison to #6 showed that reducing vocabulary too much also negatively affects BLEU. Though Experiment #9 with $1k$ target vocabulary has the lowest $D$ favoring the $C$, in comparison to others, the BLEU is still lower than the others. An explanation for this reduction is that $\\mu $ is higher and unfavorable to $R$. Hence a strictly smaller vocabulary is not the best choice either.",
"By comparing #6 with #11, we see that, both have the same target vocabulary of $8k$, hence the same $D$ and $\\mu $, however, the source vocabulary differs from $8k$ to $32k$. Even though #11 had more imbalanced source types than #6, it has no adverse effect on BLEU. Therefore, imbalance on source vocabulary is not meaningful since source types are not the classes of $C$. Increasing the source vocabulary and hence rows in embeddings matrix is a simple way of increasing parameters of NMT model without hurting the BLEU.",
"Experiments #6 and #12 have differences in BLEU that is more significant than the previous pair (#6, #11). Here, both have the same $8k$ as source vocabulary, but the target differs from $8k$ to $32k$ which lead to noticeable differences in $D$ and $\\mu $. Even though #12 has more parameters in the target embeddings matrix, and smaller $\\mu $ than #6, the BLEU is noticeably lower. An explanation we offer is that the $32k$ target types became classes and raised the class imbalance $D$, leading to a reduction in the performance of $C$. This argument holds on both the directions of De-En and En-De. Thus, the class imbalance problem exists in NMT."
],
[
"In a typical classification setting with imbalanced classes, the classifier learns an undesired bias based on frequencies. Specifically, a biased classifier overclassifies frequent classes, leading to over recall but poor precision of frequent words, and underclassifies rare classes, leading to poor recall of rare words. An improvement in balancing the class distribution, therefore, debiases in this regard, leading to improvement in the precision of frequent classes as well as recall of infrequent classes. BLEU focuses only on the precision of classes; except for adding a global brevity penalty, it is ignorant to the poor recall of infrequent classes. Therefore, the numbers reported in Table TABREF15 capture only a part of the improvement from balanced classes. In this section we perform a detailed analysis of the impact of class balancing by considering both precision and recall of classes. We accomplish this in two stages: First, we define a method to measure the bias of the model for classes based on their frequencies. Second, we track the bias in relation to vocabulary size and class imbalance on all our experiments."
],
[
"We measure frequency bias using the Pearson correlation coefficient, $\\rho $, between class rank and class performance, where for performance measures we use precision and recall. We rank classes based on descending order of frequencies in the training data encoded with the same encoding schemes used for reported NMT experiments. With this setup, the class with rank 1, say $F_1$, is the one with the highest frequency, rank 2 is the next highest, and so on. More generally, $F_k$ is an index in the class rank list which has an inverse relation to class frequencies.",
"We define precision $P$ for a class similar to the unigram precision in BLEU and extend its definition to the unigram recall $R$. For the sake of clarity, consider a test dataset $T$ of $N$ pairs of parallel sentences, $(x^{(i)}, y^{(i)})$ where $x$ and $y$ are source and reference sequences respectively. We use single reference $y^{(i)}$ translations for this analysis. For each $x^{(i)}$, let $h^{(i)}$ be the translation hypothesis from an MT model.",
"Let the indicator $\\mathbb {1}_k^{a}$ have value 1 iff type $c_k$ exists in sequence $a$, where $a$ can be either hypothesis $h^{(i)}$ or reference $y^{(i)}$. The function $count(c_k, a)$ counts the times token $c_k$ exists in sequence $a$; $match(c_k, y^{(i)}, h^{(i)})$ returns the times $c_k$ is matched between hypothesis and reference, given by $min\\lbrace count(c_k, y^{(i)}), count(c_k, h^{(i)})\\rbrace $",
"Let $P_k^{(i)}$ and $R_k^{(i)}$ be precision and recall of $c_k$ on a specific record $i \\in T$, given by:",
"Let $P_k$, $R_k$ be the expected precision and recall for $c_k$ over the whole $T$, given by:",
"The Pearson correlation coefficients between $F_k$ vs. $P_k$, and $F_k$ vs. $R_k$ are reported in Table TABREF15 as $\\rho _{F, P}$ and $\\rho _{F, R}$ respectively."
],
[
"A classifier that does not discriminate classes based on their frequencies is the one that exhibits no correlation between class rank vs precision and class rank vs recall. However, in the top rows of Table TABREF15 where larger vocabularies such as $64k$ are used, we make two observations:",
"$\\rho _{F, P}$ is strong and positive. This is an indication that frequent classes have relatively less precision than infrequent classes. If the rank increases (i.e frequency is decreases), precision increases in relation to it, leading to $\\rho _{F, P} > 0$.",
"$\\rho _{F, R}$ is strong and negative. This is an indication that frequent classes have relatively higher recall than infrequent classes. If the rank increases, recall decreases in relation to it, leading to $\\rho _{F, R} < 0$.",
"Figure FIGREF26, as a visualization of Table TABREF15, shows a trend that the correlation (i.e. frequency bias) is lower with smaller vocabulary sizes. However, there still exists some correlation in $\\rho _{F, R}$ since the class imbalance, $D > 0$."
],
[
"We categorize the related work into the subsections as following:"
],
[
"Several variations of NMT models have been proposed and refined: sutskever2014seq2seq, cho2014learning introduced recurrent neural network (RNN) based encoder-decoder models for sequence-to-sequence translation learning. bahdanau2014nmtattn introduced the attention mechanism and luong2015effectiveAttn proposed several variations that became essential components of many future models. RNN modules, either LSTM BIBREF12 or GRU BIBREF13, were the popular choice for composing encoder and decoder of NMT. The encoder used bidirectional information, but the decoder was unidirectional, typically left-to-right, to facilitate autoregressive generation. gehring2017CNNMT showed used convolutional neural network (CNN) architecture that outperformed RNN models. vaswani2017attention proposed another alternative called Transformer whose main components are feed-forward and attention networks. There are only a few models that perform non-autoregressive NMT BIBREF14, BIBREF15. These are focused on improving the speed of inference and the generation quality is currently sub-par compared to autoregressive models. These non-autoregressive models can also be viewed as a token classifier with a different kind of feature extractor whose strengths and limitations are yet to be theoretically understood. Analyzing the non-autoregressive component, especially its performance with longer sequences, is beyond the scope of this work (however, an interesting direction)."
],
[
"sennrich-etal-2016-bpe introduced byte pair encoding (BPE) as a simplified way for solving OOV words without using back-off models. They noted that BPE improved the translation of not only the OOV words, but also some of rare in-vocabulary words. In their work, the vocabulary size was arbitrary, and large as $60k$ and $100k$.",
"morishita-etal-2018-improving viewed BPE more generally in the sense that both character and word vocabularies as two special cases of BPE vocabulary. Their analysis was different than ours in a way that they viewed BPE with varied vocabulary sizes as hierarchical features which were used in addition to a fixed BPE vocabulary size of $16k$ on the target language. DBLP:journals/corr/abs-1810-08641 offer an efficient way to search BPE vocabulary size for NMT. kudo-2018-subword used BPE segmentation as a regularization by introducing sampling based randomness to the BPE segmentation. For the best of our knowledge, no previous work exists that analyzed BPE's effect on class imbalance or answered `why certain BPE vocabularies are better than others?'."
],
[
"The class imbalance problem has been extensively studied in classical ML BIBREF16. In the medical domain Maciej2008MedicalImbalance found that classifier performance deteriorates with even modest imbalance in the training data. Untreated class imbalance has been known to deteriorate the performance of image segmentation, and Sudre2017GeneralizedDice have investigated the sensitivity of various loss functions. Johnson2019SurveyImbalance surveyed imbalance learning with neural networks and reported that the effort is mostly targeted to computer vision tasks. buda-etal-2018-imbalance-cnn provided a definition and quantification method for two types of class imbalance: step imbalance and linear imbalance. Since natural languages are Zipfian, where the class imbalance is neither single stepped nor linear, we defined a divergence measure in Section SECREF6 to quantify it."
],
[
"Envisioning NMT models as a token classifier with an autoregressor helped in analysing the weaknesses of each component independently. The class imbalance was found to cause bias in the token classifier. We showed that BPE vocabulary size is not arbitrary, and it can be tuned to address the class imbalance and sequence lengths appropriately. Our analysis provided an explanation why BPE encoding is more effective compared to word and character models for sequence generation.",
"Even though BPE encoding indirectly reduces the class imbalance compared to words and characters, it does not completely eliminate it. The class distributions after applying BPE contain sufficient imbalance for biasing the classes, and affecting the recall of rare classes. Hence more work is needed in directly addressing the Zipfian imbalance."
],
[
"This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116, and by research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, Air Force Laboratory, DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
]
],
"section_name": [
"Introduction",
"Classifier based NLG",
"Classifier based NLG ::: Balanced Classes for Token Classifier",
"Classifier based NLG ::: Shorter Sequences for Autoregressor",
"Classifier based NLG ::: Choosing the Vocabulary Size Systematically",
"Experimental Setup",
"Experimental Setup ::: Dataset",
"Experimental Setup ::: Hyperparameters",
"Analysis",
"Measuring Classifier Bias due to Imbalance",
"Measuring Classifier Bias due to Imbalance ::: Class Frequency Bias Measurement",
"Measuring Classifier Bias due to Imbalance ::: Analysis of Class Frequency Bias",
"Related Work",
"Related Work ::: NMT architectures",
"Related Work ::: Bye Pair Encoding subwords",
"Related Work ::: Class Imbalance",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"29303e3a232b1698d4e2708b09bcd65f41102960",
"bd8f4b53e39941117c17e4e6f5bd097363da5e4a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Source BPE vocabulary size is 32000; target BPE vocabulary size is 8000.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "BPE 32k, 32k",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"12d9cca736366713d2493c952f2961f36d3368a4",
"f100cff72bf68f12a33a8187b6fdfaea9d80c5d9",
"f61ef8ccf0f385b92112c48d37f24bc5e3c57fb3"
],
"answer": [
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"German (De) and English (En)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"German",
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"German (De) and English (En) languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"acaec6afa5f1413a1ec3418634b0bba71925a857",
"bc118d920d2b18e893077b912b498fa3f9c477e5",
"e4bf1ba8d0fab7813fdc4481df6bb63f9245da1c"
],
"answer": [
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9 parallel data set",
"NewsTest2013",
"NewsTest2014"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9 parallel data set",
"NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages.",
"We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9",
"NewsTest2013",
"NewsTest2014"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages.",
"We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"117fdd303f13104c922412880767a411dc9763c8",
"22065cfbaaca974403dee71db0f636cb9911776b",
"995941d5b9e6ebc92e0178087437f2a11e066c72"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word 64k, 64k; Word 32k, 32k; BPE 64k, 64k; BPE 16k, 16k; BPE 8k, 8k; BPE 4k, 4k; BPE 2k, 2k; BPE 1k, 1k; Chars De:176; En:172; BPE 32k, 8k; BPE 8k, 32k",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word 64k, Word 32k, BPE 64k, BPE 32k, BPE 16k, BPE 8k, BPE 4k, BPE 2k, BPE 1k.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word vocabulary sizes: 32000, 64000; BPE vocabulary sizes: 1000, 2000, 4000, 8000, 16000, 32000, 64000; Chars vocabulary sizes: 172, 176.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0ff13cdde38f57b61b51c5f1d838b09c244f0c29",
"b1157151618c129d510bd4ca1d731878e35e65fd"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Source BPE vocabulary size is 32000; target BPE vocabulary size is 8000.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0942c3234da1393292020d8c6b58dd697c09fe14",
"63d330bc113b597a52710e6df1b6e15402840993",
"d239b45ec7170275896dfb6b2deb660c6ae727e3"
],
"answer": [
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9",
"NewsTest2013 ",
"NewsTest2014"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9",
"NewsTest2013",
"NewsTest2014"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages.",
"We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages. We use 1.8M sentences of this corpus and build models in English to German and vice versa. To segment initial words (i.e. before any subword processing) we use the Moses word tokenizer and detokenizer. We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"extractive_spans": [
"Europarl v9 parallel data set",
"NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the publicly available Europarl v9 parallel data set for training German (De) and English (En) languages.",
"We evaluate with the NewsTest2013 and NewsTest2014 datasets from the WMT 2014 news translation track."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"77b73b63da62663722d8a1354b6d6ce280e0c14f",
"9cfe5199967d798f7d81a790f7a8f2b1ea13888f",
"c3fb6bb4e58970b7bf9ced8233c2620b4f25ac03"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word vocabulary sizes: 32000, 64000; BPE vocabulary sizes: 1000, 2000, 4000, 8000, 16000, 32000, 64000; Chars vocabulary sizes: 172, 176.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word 64k, 64k; Word 32k, 32k; BPE 64k, 64k; BPE 16k, 16k; BPE 8k, 8k; BPE 4k, 4k; BPE 2k, 2k; BPE 1k, 1k; Chars De:176; En:172; BPE 32k, 8k; BPE 8k, 32k",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"extractive_spans": [],
"free_form_answer": "Word 64k, Word 32k, BPE 64k, BPE 32k, BPE 16k, BPE 8k, BPE 4k, BPE 2k, BPE 1k.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"question": [
"Which vocabulary size was the better performer?",
"Which languages are explored?",
"What datasets are used in the paper?",
"What vocabulary sizes are explored?",
"What vocabulary size was the best performer?",
"What datasets do they look at?",
"Which vocab sizes did they analyze?"
],
"question_id": [
"1f2952cd1dc0c891232fa678b6c219f6b4d31958",
"23fe8431058f2a7b7588745766fc715f271aad07",
"e5b2eb6a49c163872054333f8670dd3f9563046a",
"73760a45b23b2ec0cab181f82953fb296bb6cd19",
"ec990c16896793a819766bc3168c02556ef69971",
"11c4071d9d7efeede84f47892b1fa0c6a93667eb",
"9aa751aebf6a449d95fb04ceec71688f2ed2cea2"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The NMT model re-envisioned as a token classifier with an autoregressive feature extractor.",
"Table 1: A comparison of sequence encoding schemes with respect to Vocabulary Size (V ), Class Imbalance (D), and Mean Sequence Length (µ). The row titled Desired describes an ideal encoding scheme for C and R. BPE Subword scheme has Variable values indicating that it can be tuned towards Desired values.",
"Figure 2: Effect of BPE vocabulary size on mean sequence length µ and class imbalance D.",
"Figure 3: Effect of word vocabulary size on OOV tokens and imbalance D. At any specified trim size on the horizontal axis, all the OOV words are mapped to UNK type.",
"Figure 4: The relation between character vocabulary size with OOV tokens and imbalance D. At any specified trim size on the horizontal axis, all the OOV characters are mapped to UNK type.",
"Table 2: BLEU scores from German-English (De-En) and English-German (En-De) experiments on NewsTest2013 (NT13) and NewsTest2014 (NT14) along with their vocabulary sizes, class imbalance D, and mean sequence length µ.",
"Table 3: Correlation analysis of class rank vs precision and class rank vs recall on De-En NMT experiments. All numbers are Pearson correlation coefficients. Rows are in descending order by target vocabulary size (i.e number of classes), and retain the same IDs as in Table 2 for cross-referencing. Correlation indicates the undesired bias on classes based on class frequency. A smaller BPE vocabulary undoes some of the correlations due to the reduction in class imbalance.",
"Figure 5: Correlation analysis shows that reducing the class imbalance removes the bias on classes based on their frequency. This is an explanation of why a smaller target vocabulary achieves better performance in NMT."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure5-1.png"
]
} | [
"Which vocabulary size was the better performer?",
"What vocabulary sizes are explored?",
"What vocabulary size was the best performer?",
"Which vocab sizes did they analyze?"
] | [
[
"2004.02334-6-Table2-1.png"
],
[
"2004.02334-6-Table2-1.png"
],
[
"2004.02334-6-Table2-1.png"
],
[
"2004.02334-6-Table2-1.png"
]
] | [
"BPE 32k, 32k",
"Word vocabulary sizes: 32000, 64000; BPE vocabulary sizes: 1000, 2000, 4000, 8000, 16000, 32000, 64000; Chars vocabulary sizes: 172, 176.",
"Source BPE vocabulary size is 32000; target BPE vocabulary size is 8000.",
"Word 64k, Word 32k, BPE 64k, BPE 32k, BPE 16k, BPE 8k, BPE 4k, BPE 2k, BPE 1k."
] | 131 |
1908.11046 | Remedying BiLSTM-CNN Deficiency in Modeling Cross-Context for NER. | Recent researches prevalently used BiLSTM-CNN as a core module for NER in a sequence-labeling setup. This paper formally shows the limitation of BiLSTM-CNN encoders in modeling cross-context patterns for each word, i.e., patterns crossing past and future for a specific time step. Two types of cross-structures are used to remedy the problem: A BiLSTM variant with cross-link between layers; a multi-head self-attention mechanism. These cross-structures bring consistent improvements across a wide range of NER domains for a core system using BiLSTM-CNN without additional gazetteers, POS taggers, language-modeling, or multi-task supervision. The model surpasses comparable previous models on OntoNotes 5.0 and WNUT 2017 by 1.4% and 4.6%, especially improving emerging, complex, confusing, and multi-token entity mentions, showing the importance of remedying the core module of NER. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is a core task for information extraction. Originally a structured prediction task, NER has since been formulated as a task of sequential token labeling. BiLSTM-CNN uses a CNN to encode each word and then uses bi-directional LSTMs to encode past and future context respectively at each time step. With state-of-the-art empirical results, most regard it as a robust core module for sequence-labeling NER BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4.",
"However, each direction of BiLSTM only sees and encodes half of a sequence at each time step. For each token, the forward LSTM only encodes past context; the backward LSTM only encodes future context. For computing sentence representations for tasks such as sentence classification and machine translation, this is not a problem, as only the rightmost hidden state of the forward LSTM and only the leftmost hidden state of the backward LSTM are used, and each of the endpoint hidden states sees and encodes the whole sentence. For computing sentence representations for sequence-labeling tasks such as NER, however, this becomes a limitation, as each token uses its own midpoint hidden states, which do not model the patterns that happen to cross past and future at this specific time step.",
"This paper explores two types of cross-structures to help cope with the problem: Cross-BiLSTM-CNN and Att-BiLSTM-CNN. Previous studies have tried to stack multiple LSTMs for sequence-labeling NER BIBREF1. As they follow the trend of stacking forward and backward LSTMs independently, the Baseline-BiLSTM-CNN is only able to learn higher-level representations of past or future per se. Instead, Cross-BiLSTM-CNN, which interleaves every layer of the two directions, models cross-context in an additive manner by learning higher-level representations of the whole context of each token. On the other hand, Att-BiLSTM-CNN models cross-context in a multiplicative manner by capturing the interaction between past and future with a dot-product self-attentive mechanism BIBREF5, BIBREF6.",
"Section SECREF3 formulates the three Baseline, Cross, and Att-BiLSTM-CNN models. The section gives a concrete proof that patterns forming an XOR cannot be modeled by Baseline-BiLSTM-CNN used in all previous work. Cross-BiLSTM-CNN and Att-BiLSTM-CNN are shown to have additive and multiplicative cross-structures respectively to deal with the problem. Section SECREF4 evaluates the approaches on two challenging NER datasets spanning a wide range of domains with complex, noisy, and emerging entities. The cross-structures bring consistent improvements over the prevalently used Baseline-BiLSTM-CNN without additional gazetteers, POS taggers, language-modeling, or multi-task supervision. The improved core module surpasses comparable previous models on OntoNotes 5.0 and WNUT 2017 by 1.4% and 4.6% respectively. Experiments reveal that emerging, complex, confusing, and multi-token entity mentions benefitted much from the cross-structures, and the in-depth entity-chunking analysis finds that the prevalently used Baseline-BiLSTM-CNN is flawed for real-world NER."
],
[
"Many have attempted tackling the NER task with LSTM-based sequence encoders BIBREF7, BIBREF0, BIBREF1, BIBREF8. Among these, the most sophisticated, state-of-the-art is the BiLSTM-CNN proposed by BIBREF1. They stack multiple layers of LSTM cells per direction and also use a CNN to compute character-level word vectors alongside pre-trained word vectors. This paper largely follows their work in constructing the Baseline-BiLSTM-CNN, including the selection of raw features, the CNN, and the multi-layer BiLSTM. A subtle difference is that they send the output of each direction through separate affine-softmax classifiers and then sum their probabilities, while this paper sum the scores from affine layers before computing softmax once. While not changing the modeling capacity regarded in this paper, the baseline model does perform better than their formulation.",
"The modeling of global contexts for sequence-labeling NER has been accomplished using traditional models with extensive feature engineering and conditional random fields (CRF). BIBREF9 build the Illinois NER tagger with feature-based perceptrons. In their analysis, the usefulness of Viterbi decoding is minimal and conflicts their handcrafted global features. On the other hand, recent researches on LSTM or CNN-based sequence encoders report empirical improvements brought by CRF BIBREF7, BIBREF0, BIBREF8, BIBREF10, as it discourages illegal predictions by explicitly modeling class transition probabilities. However, transition probabilities are independent of input sentences. In contrast, the cross-structures studied in this work provide for the direct capture of global patterns and extraction of better features to improve class observation likelihoods.",
"Thought to lighten the burden of compressing all relevant information into a single hidden state, using attention mechanisms on top of LSTMs have shown empirical success for sequence encoders BIBREF5, BIBREF6 and decoders BIBREF11. Self-attention has also been used below encoders to compute word vectors conditioned on context BIBREF12. This work further formally analyzes the deficiency of BiLSTM encoders for sequence labeling and shows that using self-attention on top is actually providing one type of cross structures that capture interactions between past and future context.",
"Besides using additional gazetteers or POS taggers BIBREF13, BIBREF2, BIBREF14, there is a recent trend to use additional large-scale language-modeling corpora BIBREF3 or additional multi-task supervision BIBREF4 to further improve NER performance beyond bare-bone models. However, they all rely on a core BiLSTM sentence encoder with the same limitation studied and remedied in Section SECREF3. So they would indeed benefit from the improvements presented in this paper."
],
[
"All models in the experiments use the same set of raw features: character embedding, character type, word embedding, and word capitalization.",
"For character embedding, 25d vectors are trained from scratch, and 4d one-hot character-type features indicate whether a character is uppercase, lowercase, digit, or punctuation BIBREF1. Word token lengths are unified to 20 by truncation and padding. The resulting 20-by-(25+4) feature map of each token is applied to a character-trigram CNN with 20 kernels per length 1 to 3 and max-over-time pooling to compute a 60d character-based word vector BIBREF15, BIBREF1, BIBREF0.",
"For word embedding, either pre-trained 300d GloVe vectors BIBREF16 or 400d Twitter vectors BIBREF17 are used without further tuning. Also, 4d one-hot word capitalization features indicate whether a word is uppercase, upper-initial, lowercase, or mixed-caps BIBREF18, BIBREF1.",
"Throughout this paper, $X$ denotes the $n$-by-$d_x$ matrix of sequence features, where $n$ is the sentence length and $d_x$ is either 364 (with GloVe) or 464 (with Twitter)."
],
[
"On top of an input feature sequence, BiLSTM is used to capture the future and the past for each time step. Following BIBREF1, 4 distinct LSTM cells – two in each direction – are stacked to capture higher level representations:",
"where $\\overrightarrow{LSTM}_i, \\overleftarrow{LSTM}_i$ denote applying LSTM cell $i$ in forward, backward order, $\\overrightarrow{H}, \\overleftarrow{H}$ denote the resulting feature matrices of the stacked application, and $||$ denotes row-wise concatenation. In all the experiments, 100d LSTM cells are used, so $H \\in R^{n\\times d_h}$ and $d_h=200$.",
"Finally, suppose there are $d_p$ token classes, the probability of each of which is given by the composition of affine and softmax transformations:",
"where $H_t$ is the $t^{th}$ row of $H$, $W_p\\in R^{d_h\\times d_p}$, $b\\in R^{d_p}$ are a trainable weight matrix and bias, and $s_{ti}$ and $s_{tj}$ are the $i$-th and $j$-th elements of $s_t$.",
"Following BIBREF1, the 5 chunk labels O, S, B, I, E denote if a word token is Outside any entity mentions, the Sole token of a mention, the Beginning token of a multi-token mention, In the middle of a multi-token mention, or the Ending token of a multi-token mention. Hence when there are $P$ types of named entities, the actual number of token classes $d_p=P\\times 4+1$ for sequence labeling NER."
],
[
"Consider the following four phrases that form an XOR:",
"Key and Peele (work-of-art)",
"You and I (work-of-art)",
"Key and I",
"You and Peele",
"The first two phrases are respectively a show title and a song title. The other two are not entities as a whole, where the last one actually occurs in an interview with Keegan-Michael Key. Suppose each phrase is the sequence given to Baseline-BiLSTM-CNN for sequence tagging, then the 2nd token \"and\" should be tagged as work-of-art:I in the first two cases and as O in the last two cases.",
"Firstly, note that the score vector at each time step is simply the sum of contributions coming from forward and backward directions plus a bias.",
"where $\\overrightarrow{W}_p,\\overleftarrow{W}_p$ denotes the top-half and bottom-half of $W_p$.",
"Suppose the index of work-of-art:I and O are i, j respectively. Then, to predict each \"and\" correctly, it must hold that",
"where superscripts denote the phrase number.",
"Now, the catch is that phrase 1 and phrase 3 have exactly the same past context for \"and\". Hence the same $\\overrightarrow{H}_2$ and the same $\\overrightarrow{s}_2$, i.e., $\\overrightarrow{s}^1_2=\\overrightarrow{s}^3_2$. Similarly, $\\overrightarrow{s}^2_2=\\overrightarrow{s}^4_2$, $\\overleftarrow{s}^1_2=\\overleftarrow{s}^4_2$, and $\\overleftarrow{s}^2_2=\\overleftarrow{s}^3_2$. Rewriting the constraints with these equalities gives",
"Finally, summing the first two inequalities and the last two inequalities gives two contradicting constraints that cannot be satisfied. In other words, even if an oracle is given to training the model, Baseline-BiLSTM-CNN can only tag at most 3 out of 4 \"and\" correctly. No matter how many LSTM cells are stacked for each direction, the formulation in previous studies simply does not have enough modeling capacity to capture cross-context patterns for sequence labeling NER."
],
[
"Motivated by the limitation of the conventional Baseline-BiLSTM-CNN for sequence labeling, this paper proposes the use of Cross-BiLSTM-CNN by changing the deep structure in Section SECREF2 to",
"As the forward and backward hidden states are interleaved between stacked LSTM layers, Cross-BiLSTM-CNN models cross-context patterns by computing representations of the whole sequence in a feed-forward, additive manner.",
"Specifically, for the XOR cases introduced in Section SECREF3, although phrase 1 and phrase 3 still have the same past context for \"and\" and hence the first layer $\\overrightarrow{LSTM}_1$ can only extract the same low-level hidden features $\\overrightarrow{H}^1_2$, the second layer $\\overrightarrow{LSTM}_2$ considers the whole context $\\overrightarrow{H}^1||\\overleftarrow{H}^3$ and thus have the ability to extract different high-level hidden features $\\overrightarrow{H}^2_2$ for the two phrases.",
"As the higher-level LSTMs of Cross-BiLSTM-CNN have interleaved input from forward and backward hidden states down below, their weight parameters double the size of the first-level LSTMs. Nevertheless, the cross formulation provides the modeling capacity absent in previous studies with how many more LSTM layers."
],
[
"Another way to capture the interaction between past and future context per time step is to add a token-level self-attentive mechanism on top of the same BiLSTM formulation introduced in Section SECREF2. Given the hidden features $H$ of a whole sequence, the model projects each hidden state to different subspaces, depending on whether it is used as the query vector to consult other hidden states for each word token, the key vector to compute its dot-similarities with incoming queries, or the value vector to be weighted and actually convey information to the querying token. As different aspects of a task can call for different attention, multiple attention heads running in parallel are used BIBREF19.",
"Formally, let $m$ be the number of attention heads and $d_c$ be the subspace dimension. For each head $i\\in \\lbrace 1..m\\rbrace $, the attention weight matrix and context matrix are computed by",
"where $W^{qi},W^{ki},W^{vi}\\in R^{d_h\\times d_c}$ are trainable projection matrices and $\\sigma $ performs softmax along the second dimension. Each row of the resulting $\\alpha ^1,\\alpha ^2,\\ldots ,\\alpha ^m\\in R^{n\\times n}$ contains the attention weights of a token to its context, and each row of $C^1,C^2,\\ldots ,C^m\\in R^{n\\times d_c}$ is its context vector.",
"For Att-BiLSTM-CNN, the hidden vector and context vectors of each token are considered together for classification:",
"where $C^i_t$ is the $t$-th row of $C^i$, and $W_c\\in R^{(d_h+md_c)\\times d_p}$ is a trainable weight matrix. In all the experiments, $m=5$ and $d_c=\\frac{d_h}{5}$, so $W_c\\in R^{2d_h\\times d_p}$.",
"While the BiLSTM formulation stays the same as Baseline-BiLSTM-CNN, the computation of attention weights $\\alpha ^i$ and context features $C^i$ models the cross interaction between past and future. To see this, the computation of attention scores can be rewritten as follows.",
"With the un-shifted covariance matrix of the projected $\\overrightarrow{H}\\ ||\\ \\overleftarrow{H}$, Att-BiLSTM-CNN correlates past and future context for each token in a dot-product, multiplicative manner.",
"One advantage of the multi-head self-attentive mechanism is that it only needs to be computed once per sequence, and the matrix computations are highly parallelizable, resulting in little computation time overhead. Moreover, in Section SECREF4, the attention weights provide a better understanding of how the model learns to tackle sequence-labeling NER."
],
[
"OntoNotes 5.0 Fine-Grained NER – a million-token corpus with diverse sources of newswires, web, broadcast news, broadcast conversations, magazines, and telephone conversations BIBREF20, BIBREF21. Some are transcriptions of talk shows, and some are translations from Chinese or Arabic. The dataset contains 18 fine-grained entity types, including hard ones such as law, event, and work-of-art. All the diversities and noisiness require that models are robust across broad domains and able to capture a multitude of linguistic patterns for complex entities.",
"WNUT 2017 Emerging NER – a dataset providing maximally diverse, noisy, and drifting user-generated text BIBREF22. The training set consists of previously annotated tweets – social media text with non-standard spellings, abbreviations, and unreliable capitalization BIBREF23; the development set consists of newly sampled YouTube comments; the test set includes text newly drawn from Twitter, Reddit, and StackExchange. Besides drawing new samples from diverse topics across different sources, the shared task also filtered out text containing surface forms of entities seen in the training set. The resulting dataset requires models to generalize to emerging contexts and entities instead of relying on familiar surface cues."
],
[
"All experiments for Baseline-, Cross-, and Att-BiLSTM-CNN used the same model parameters given in Section SECREF3. The training minimized per-token cross-entropy loss with the Nadam optimizer BIBREF24 with uniform learning rate 0.001, batch size 32, and 35% dropout. Each training lasted 400 epochs when using GloVe embedding (OntoNotes), and 1600 epochs when using Twitter embedding (WNUT). The development set of each dataset was used to select the best epoch to restore model weights for testing. Following previous work on NER, model performances were evaluated with strict mention F1 score. Training of each model on each dataset repeated 6 times to report the mean score and standard deviation.",
"Besides comparing to the Baseline implemented in this paper, results also compared against previously reported results of BiLSTM-CNN BIBREF1, CRF-BiLSTM(-BiLSTM) BIBREF10, BIBREF25, and CRF-IDCNN BIBREF10 on the two datasets. Among them, IDCNN was a CNN-based sentence encoder, which should not have the XOR limitation raised in this paper. Only fair comparisons against models without using additional resources were made. However, the models that used those additional resources (Secion SECREF2) actually all used a BiLSTM sentence encoder with the XOR limitation, so they could indeed integrate with and benefit from the cross-structures."
],
[
"Table TABREF14 shows overall results on the two datasets spanning broad domains of newswires, broadcast, telephone, and social media. The models proposed in this paper significantly surpassed previous comparable models by 1.4% on OntoNotes and 4.6% on WNUT. Compared to the re-implemented Baseline-BiLSTM-CNN, the cross-structures brought 0.7% and 2.2% improvements on OntoNotes and WNUT. More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
[
"Table TABREF16 shows significant results per entity type compared to Baseline ($>$3% absolute F1 differences for either Cross or Att). It could be seen that harder entity types generally benefitted more from the cross-structures. For example, work-of-art/creative-work entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media. Such mentions require models to learn a deep, generalized understanding of their context to accurately identify their boundaries and disambiguate their types. Both cross-structures were more capable in dealing with such hard entities (2.1%/5.6%/3.2%/2.0%) than the prevalently used, problematic Baseline.",
"Moreover, disambiguating fine-grained entity types is also a challenging task. For example, entities of language and NORP often take the same surface forms. Figure FIGREF19 shows an example containing \"Dutch\" and \"English\". While \"English\" was much more frequently used as a language and was identified correctly, the \"Dutch\" mention was tricky for Baseline. The attention heat map (Figure FIGREF24) further tells the story that Att has relied on its attention head to make context-aware decisions. Overall, both cross-structures were much better at disambiguating these fine-grained types (4.1%/0.8%/3.3%/3.4%)."
],
[
"Table TABREF17 shows results among different entity lengths. It could be seen that cross-structures were much better at dealing with multi-token mentions (1.8%/2.3%/8.7%/2.6%) compared to the prevalently used, problematic Baseline.",
"In fact, identifying correct mention boundaries for multi-token mentions poses a unique challenge for sequence-labeling models – all tokens in a mention must be tagged with correct sequential labels to form one correct prediction. Although models often rely on strong hints from a token itself or a single side of the context, however, in general, cross-context modeling is required. For example, a token should be tagged as Inside if and only if it immediately follows a Begin or an I and is immediately followed by an I or an End.",
"Figure FIGREF19 shows a sentence with multiple entity mentions. Among them, \"the White house\" is a triple-token facility mention with unreliable capitalization, resulting in an emerging surface form. Without usual strong hints given by a seen surface form, Baseline predicted a false single-token mention \"White\". In contrast, Att utilized its multiple attention heads (Figure FIGREF24, FIGREF24, FIGREF24) to consider the preceding and succeeding tokens for each token and correctly tagged the three tokens as facility:B, facility:I, facility:E."
],
[
"Entity-chunking is a subtask of NER concerned with locating entity mentions and their boundaries without disambiguating their types. For sequence-labeling models, this means correct O, S, B, I, E tagging for each token. In addition to showing that cross-structures achieved superior performance on multi-token entity mentions (Section SECREF18), an ablation study focused on the chunking tags was performed to better understand how it was achieved.",
"Table TABREF22 shows the entity-chunking ablation results on OntoNotes 5.0 development set. Both Att and Baseline models were taken without re-training for this subtask. The $HC^{all}$ column lists the performance of Att-BiLSTM-CNN on each chunking tag. Other columns list the performance compared to $HC^{all}$. Columns $H$ to $C^5$ are when the full model is deprived of all other information in testing time by forcefully zeroing all vectors except the one specified by the column header. The figures shown in the table are per-token recalls for each chunking tag, which tells if a part of the model is responsible for signaling the whole model to predict that tag. Colors mark relatively high and low values of interest.",
"Firstly, Att appeared to designate the task of scoring I to the attention mechanism: When context vectors $C^{all}$ were left alone, the recall for I tokens only dropped a little (-3.80); When token hidden states $H$ were left alone, the recall for I tokens seriously degraded (-28.18). When $H$ and $C^{all}$ work together, the full Att model was then better at predicting multi-token entity mentions than Baseline.",
"Then, breaking context vectors to each attention head reveals that they have worked in cooperation: $C^2$, $C^3$ focused more on scoring E (-36.45, -39.19) than I (-60.56, -50.19), while $C^4$ focused more on scoring B (-12.21) than I (-57.19). It was when information from all these heads were combined was Att able to better identify a token as being Inside a multi-token mention than Baseline.",
"Finally, the quantitative ablation analysis of chunking tags in this Section and the qualitative case-study attention visualizations in Section SECREF18 explains each other: $C^2$ and especially $C^3$ tended to focus on looking for immediate preceding mention tokens (the diagonal shifted left in Figure FIGREF24, FIGREF24), enabling them to signal for End and Inside; $C^4$ tended to focus on looking for immediate succeeding mention tokens (the diagonal shifted right in Figure FIGREF24), enabling it to signal for Begin and Inside. In fact, without context vectors, instead of BIE, Att would tag \"the White house\" as BSE and extract the same false mention of \"White\" as the OSO of Baseline.",
"Lacking the ability to model cross-context patterns, Baseline inadvertently learned to retract to predict single-token entities (0.13 vs. -0.63, -0.41, -0.38) when an easy hint from a familiar surface form is not available. This indicates a major flaw in BiLSTM-CNNs prevalently used for real-world NER today."
],
[
"This paper has formally analyzed and remedied the deficiency of the prevalently used BiLSTM-CNN in modeling cross-context for NER. A concrete proof of its inability to capture XOR patterns has been given. Additive and multiplicative cross-structures have shown to be crucial in modeling cross-context, significantly enhancing recognition of emerging, complex, confusing, and multi-token entity mentions. Against comparable previous models, 1.4% and 4.6% overall improvements on OntoNotes 5.0 and WNUT 2017 have been achieved, showing the importance of remedying the core module of NER."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model ::: CNN and Word Features",
"Model ::: Baseline-BiLSTM-CNN",
"Model ::: Baseline-BiLSTM-CNN ::: XOR Limitation",
"Model ::: Cross-BiLSTM-CNN",
"Model ::: Att-BiLSTM-CNN",
"Experiments ::: Datasets",
"Experiments ::: Implementation and Baselines",
"Experiments ::: Overall Results",
"Experiments ::: Complex and Confusing Entity Mentions",
"Experiments ::: Multi-Token Entity Mentions",
"Experiments ::: Entity-Chunking",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"867acabff0c1dc2b88ec76b77d4b3ea56ee8f956",
"bf71fe6914eeaf14089adc7cee74c12a9f76ea85",
"fe52b23ca74196d5c9f8c5519237a45d61cc9c08"
],
"answer": [
{
"evidence": [
"Table TABREF14 shows overall results on the two datasets spanning broad domains of newswires, broadcast, telephone, and social media. The models proposed in this paper significantly surpassed previous comparable models by 1.4% on OntoNotes and 4.6% on WNUT. Compared to the re-implemented Baseline-BiLSTM-CNN, the cross-structures brought 0.7% and 2.2% improvements on OntoNotes and WNUT. More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"extractive_spans": [
"suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities"
],
"free_form_answer": "",
"highlighted_evidence": [
" More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"WNUT 2017 Emerging NER – a dataset providing maximally diverse, noisy, and drifting user-generated text BIBREF22. The training set consists of previously annotated tweets – social media text with non-standard spellings, abbreviations, and unreliable capitalization BIBREF23; the development set consists of newly sampled YouTube comments; the test set includes text newly drawn from Twitter, Reddit, and StackExchange. Besides drawing new samples from diverse topics across different sources, the shared task also filtered out text containing surface forms of entities seen in the training set. The resulting dataset requires models to generalize to emerging contexts and entities instead of relying on familiar surface cues.",
"Table TABREF14 shows overall results on the two datasets spanning broad domains of newswires, broadcast, telephone, and social media. The models proposed in this paper significantly surpassed previous comparable models by 1.4% on OntoNotes and 4.6% on WNUT. Compared to the re-implemented Baseline-BiLSTM-CNN, the cross-structures brought 0.7% and 2.2% improvements on OntoNotes and WNUT. More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"extractive_spans": [],
"free_form_answer": "The WNUT 2017 dataset had entities already seen in the training set filtered out while the OntoNotes dataset did not. Cross-context patterns thus provided more significant information for NER in WNUT 2017 because the possibility of memorizing entity forms was removed.",
"highlighted_evidence": [
"Besides drawing new samples from diverse topics across different sources, the shared task also filtered out text containing surface forms of entities seen in the training set. ",
"More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"WNUT 2017 Emerging NER – a dataset providing maximally diverse, noisy, and drifting user-generated text BIBREF22. The training set consists of previously annotated tweets – social media text with non-standard spellings, abbreviations, and unreliable capitalization BIBREF23; the development set consists of newly sampled YouTube comments; the test set includes text newly drawn from Twitter, Reddit, and StackExchange. Besides drawing new samples from diverse topics across different sources, the shared task also filtered out text containing surface forms of entities seen in the training set. The resulting dataset requires models to generalize to emerging contexts and entities instead of relying on familiar surface cues.",
"Table TABREF14 shows overall results on the two datasets spanning broad domains of newswires, broadcast, telephone, and social media. The models proposed in this paper significantly surpassed previous comparable models by 1.4% on OntoNotes and 4.6% on WNUT. Compared to the re-implemented Baseline-BiLSTM-CNN, the cross-structures brought 0.7% and 2.2% improvements on OntoNotes and WNUT. More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"extractive_spans": [],
"free_form_answer": "Ontonotes is less noisy than Wnut 2017",
"highlighted_evidence": [
"WNUT 2017 Emerging NER – a dataset providing maximally diverse, noisy, and drifting user-generated text BIBREF22.",
"The resulting dataset requires models to generalize to emerging contexts and entities instead of relying on familiar surface cues.",
"More substantial improvements were achieved for WNUT 2017 emerging NER, suggesting that cross-context patterns were even more crucial for emerging contexts and entities than familiar entities, which might often be memorized by their surface forms."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"005bcaedd05d389eceeaeb93099963dc0b75f068",
"daf29b1f82dffaa029e3a898ff5e91920d960365"
],
"answer": [
{
"evidence": [
"Table TABREF16 shows significant results per entity type compared to Baseline ($>$3% absolute F1 differences for either Cross or Att). It could be seen that harder entity types generally benefitted more from the cross-structures. For example, work-of-art/creative-work entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media. Such mentions require models to learn a deep, generalized understanding of their context to accurately identify their boundaries and disambiguate their types. Both cross-structures were more capable in dealing with such hard entities (2.1%/5.6%/3.2%/2.0%) than the prevalently used, problematic Baseline.",
"Moreover, disambiguating fine-grained entity types is also a challenging task. For example, entities of language and NORP often take the same surface forms. Figure FIGREF19 shows an example containing \"Dutch\" and \"English\". While \"English\" was much more frequently used as a language and was identified correctly, the \"Dutch\" mention was tricky for Baseline. The attention heat map (Figure FIGREF24) further tells the story that Att has relied on its attention head to make context-aware decisions. Overall, both cross-structures were much better at disambiguating these fine-grained types (4.1%/0.8%/3.3%/3.4%)."
],
"extractive_spans": [],
"free_form_answer": "Complexity is defined by examples of a singular named entity (e.g. work-of-art and creative-work entities) being represented by multiple surface forms. Mapping all of these forms to a single NE requires a complex understanding of the variations, some of which are genre-specific. Confusability is defined by examples when it becomes more difficult to disambiguate named entities that share the same surface form, such as the \"language\" versus \"NORP\" distinction represented by the surface forms Dutch and English.",
"highlighted_evidence": [
"For example, work-of-art/creative-work entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media. ",
"Such mentions require models to learn a deep, generalized understanding of their context to accurately identify their boundaries and disambiguate their types. ",
"Moreover, disambiguating fine-grained entity types is also a challenging task.",
"For example, entities of language and NORP often take the same surface forms. ",
"the disambiguation task becomes harder "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF16 shows significant results per entity type compared to Baseline ($>$3% absolute F1 differences for either Cross or Att). It could be seen that harder entity types generally benefitted more from the cross-structures. For example, work-of-art/creative-work entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media. Such mentions require models to learn a deep, generalized understanding of their context to accurately identify their boundaries and disambiguate their types. Both cross-structures were more capable in dealing with such hard entities (2.1%/5.6%/3.2%/2.0%) than the prevalently used, problematic Baseline.",
"Moreover, disambiguating fine-grained entity types is also a challenging task. For example, entities of language and NORP often take the same surface forms. Figure FIGREF19 shows an example containing \"Dutch\" and \"English\". While \"English\" was much more frequently used as a language and was identified correctly, the \"Dutch\" mention was tricky for Baseline. The attention heat map (Figure FIGREF24) further tells the story that Att has relied on its attention head to make context-aware decisions. Overall, both cross-structures were much better at disambiguating these fine-grained types (4.1%/0.8%/3.3%/3.4%)."
],
"extractive_spans": [
"disambiguating fine-grained entity types",
"entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, work-of-art/creative-work entities could in principle take any surface forms – unseen, the same as a person name, abbreviated, or written with unreliable capitalizations on social media.",
"Moreover, disambiguating fine-grained entity types is also a challenging task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"50515cbe7132369a033f4b3afb43a4bc4f0008e5",
"520875a44600fe56cc61c11350ae3469123654a1",
"65c32c0c4e4faf2c3ccb578c75339bf7883c2c6b"
],
"answer": [
{
"evidence": [
"This paper explores two types of cross-structures to help cope with the problem: Cross-BiLSTM-CNN and Att-BiLSTM-CNN. Previous studies have tried to stack multiple LSTMs for sequence-labeling NER BIBREF1. As they follow the trend of stacking forward and backward LSTMs independently, the Baseline-BiLSTM-CNN is only able to learn higher-level representations of past or future per se. Instead, Cross-BiLSTM-CNN, which interleaves every layer of the two directions, models cross-context in an additive manner by learning higher-level representations of the whole context of each token. On the other hand, Att-BiLSTM-CNN models cross-context in a multiplicative manner by capturing the interaction between past and future with a dot-product self-attentive mechanism BIBREF5, BIBREF6."
],
"extractive_spans": [
"BiLSTM-CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Previous studies have tried to stack multiple LSTMs for sequence-labeling NER BIBREF1. As they follow the trend of stacking forward and backward LSTMs independently, the Baseline-BiLSTM-CNN is only able to learn higher-level representations of past or future per se."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Many have attempted tackling the NER task with LSTM-based sequence encoders BIBREF7, BIBREF0, BIBREF1, BIBREF8. Among these, the most sophisticated, state-of-the-art is the BiLSTM-CNN proposed by BIBREF1. They stack multiple layers of LSTM cells per direction and also use a CNN to compute character-level word vectors alongside pre-trained word vectors. This paper largely follows their work in constructing the Baseline-BiLSTM-CNN, including the selection of raw features, the CNN, and the multi-layer BiLSTM. A subtle difference is that they send the output of each direction through separate affine-softmax classifiers and then sum their probabilities, while this paper sum the scores from affine layers before computing softmax once. While not changing the modeling capacity regarded in this paper, the baseline model does perform better than their formulation."
],
"extractive_spans": [
"BiLSTM-CNN proposed by BIBREF1"
],
"free_form_answer": "",
"highlighted_evidence": [
"This paper largely follows their work in constructing the Baseline-BiLSTM-CNN, including the selection of raw features, the CNN, and the multi-layer BiLSTM."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Many have attempted tackling the NER task with LSTM-based sequence encoders BIBREF7, BIBREF0, BIBREF1, BIBREF8. Among these, the most sophisticated, state-of-the-art is the BiLSTM-CNN proposed by BIBREF1. They stack multiple layers of LSTM cells per direction and also use a CNN to compute character-level word vectors alongside pre-trained word vectors. This paper largely follows their work in constructing the Baseline-BiLSTM-CNN, including the selection of raw features, the CNN, and the multi-layer BiLSTM. A subtle difference is that they send the output of each direction through separate affine-softmax classifiers and then sum their probabilities, while this paper sum the scores from affine layers before computing softmax once. While not changing the modeling capacity regarded in this paper, the baseline model does perform better than their formulation."
],
"extractive_spans": [
"Baseline-BiLSTM-CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among these, the most sophisticated, state-of-the-art is the BiLSTM-CNN proposed by BIBREF1. They stack multiple layers of LSTM cells per direction and also use a CNN to compute character-level word vectors alongside pre-trained word vectors. This paper largely follows their work in constructing the Baseline-BiLSTM-CNN, including the selection of raw features, the CNN, and the multi-layer BiLSTM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Why is improvement on OntoNotes significantly smaller compared to improvement on WNUT 2017?",
"How is \"complexity\" and \"confusability\" of entity mentions defined in this work?",
"What are the baseline models?"
],
"question_id": [
"2929e92f9b4939297b4d0f799d464d46e8d52063",
"1dcfcfa46dbcffc2fc7be92dd57df9620258097b",
"77bbe1698e001c5889217be3164982ea36e85752"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Datasets (K-tokens / K-entities).",
"Table 2: Overall results. *Used on WNUT for character-based word vectors, reported better than CNN.",
"Table 4: Improvements against Baseline among different mention lengths.",
"Figure 1: Example problematic entities for Baseline-BiLSTM-CNN.",
"Table 5: Entity-chunking ablation results.",
"Figure 2: Attention heat maps for the mentions in Figure 1, best viewed on computer."
],
"file": [
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table4-1.png",
"7-Figure1-1.png",
"7-Table5-1.png",
"8-Figure2-1.png"
]
} | [
"Why is improvement on OntoNotes significantly smaller compared to improvement on WNUT 2017?",
"How is \"complexity\" and \"confusability\" of entity mentions defined in this work?"
] | [
[
"1908.11046-Experiments ::: Overall Results-0",
"1908.11046-Experiments ::: Datasets-1"
],
[
"1908.11046-Experiments ::: Complex and Confusing Entity Mentions-1",
"1908.11046-Experiments ::: Complex and Confusing Entity Mentions-0"
]
] | [
"Ontonotes is less noisy than Wnut 2017",
"Complexity is defined by examples of a singular named entity (e.g. work-of-art and creative-work entities) being represented by multiple surface forms. Mapping all of these forms to a single NE requires a complex understanding of the variations, some of which are genre-specific. Confusability is defined by examples when it becomes more difficult to disambiguate named entities that share the same surface form, such as the \"language\" versus \"NORP\" distinction represented by the surface forms Dutch and English."
] | 132 |
1906.01076 | Episodic Memory in Lifelong Language Learning | We introduce a lifelong language learning setup where a model needs to learn from a stream of text examples without any dataset identifier. We propose an episodic memory model that performs sparse experience replay and local adaptation to mitigate catastrophic forgetting in this setup. Experiments on text classification and question answering demonstrate the complementary benefits of sparse experience replay and local adaptation to allow the model to continuously learn from new datasets. We also show that the space complexity of the episodic memory module can be reduced significantly (~50-90%) by randomly choosing which examples to store in memory with a minimal decrease in performance. We consider an episodic memory component as a crucial building block of general linguistic intelligence and see our model as a first step in that direction. | {
"paragraphs": [
[
"The ability to continuously learn and accumulate knowledge throughout a lifetime and reuse it effectively to adapt to a new problem quickly is a hallmark of general intelligence. State-of-the-art machine learning models work well on a single dataset given enough training examples, but they often fail to isolate and reuse previously acquired knowledge when the data distribution shifts (e.g., when presented with a new dataset)—a phenomenon known as catastrophic forgetting BIBREF0 , BIBREF1 .",
"The three main approaches to address catastrophic forgetting are based on: (i) augmenting the loss function that is being minimized during training with extra terms (e.g., a regularization term, an optimization constraint) to prevent model parameters learned on a new dataset from significantly deviating from parameters learned on previously seen datasets BIBREF2 , BIBREF3 , BIBREF4 , (ii) adding extra learning phases such as a knowledge distillation phase, an experience replay BIBREF5 , BIBREF6 , and (iii) augmenting the model with an episodic memory module BIBREF7 . Recent methods have shown that these approaches can be combined—e.g., by defining optimization constraints using samples from the episodic memory BIBREF8 , BIBREF9 .",
"In language learning, progress in unsupervised pretraining BIBREF10 , BIBREF11 , BIBREF12 has driven advances in many language understanding tasks BIBREF13 , BIBREF14 . However, these models have been shown to require a lot of in-domain training examples, rapidly overfit to particular datasets, and are prone to catastrophic forgetting BIBREF15 , making them unsuitable as a model of general linguistic intelligence.",
"In this paper, we investigate the role of episodic memory for learning a model of language in a lifelong setup. We propose to use such a component for sparse experience replay and local adaptation to allow the model to continually learn from examples drawn from different data distributions. In experience replay, we randomly select examples from memory to retrain on. Our model only performs experience replay very sparsely to consolidate newly acquired knowledge with existing knowledge in the memory into the model. We show that a 1% experience replay to learning new examples ratio is sufficient. Such a process bears some similarity to memory consolidation in human learning BIBREF16 . In local adaptation, we follow Memory-based Parameter Adaptation BIBREF7 and use examples retrieved from memory to update model parameters used to make a prediction of a particular test example.",
"Our setup is different than a typical lifelong learning setup. We assume that the model only makes one pass over the training examples, similar to BIBREF9 . However, we also assume neither our training nor test examples have dataset identifying information (e.g., a dataset identity, a dataset descriptor). Our experiments focus on lifelong language learning on two tasks—text classification and question answering. BIBREF17 show that many language processing tasks (e.g., classification, summarization, natural language inference, etc.) can be formulated as a question answering problem. We argue that our lifelong language learning setup—where a model is presented with question-answer examples without an explicit identifier about which dataset (distribution) the examples come from—is a more realistic setup to learn a general linguistic intelligence model.",
"Our main contributions in this paper are:"
],
[
"We consider a continual (lifelong) learning setup where a model needs to learn from a stream of training examples INLINEFORM0 . We assume that all our training examples in the series come from multiple datasets of the same task (e.g., a text classification task, a question answering task), and each dataset comes one after the other. Since all examples come from the same task, the same model can be used to make predictions on all examples. A crucial difference between our continual learning setup and previous work is that we do not assume that each example comes with a dataset descriptor (e.g., a dataset identity). As a result, the model does not know which dataset an example comes from and when a dataset boundary has been crossed during training. The goal of learning is to find parameters INLINEFORM1 that minimize the negative log probability of training examples under our model: INLINEFORM2 ",
"Our model consists of three main components: (i) an example encoder, (ii) a task decoder, and (iii) an episodic memory module. Figure FIGREF6 shows an illustration of our complete model. We describe each component in detail in the following."
],
[
"Our encoder is based on the Transformer architecture BIBREF19 . We use the state-of-the-art text encoder BERT BIBREF12 to encode our input INLINEFORM0 . BERT is a large Transformer pretrained on a large unlabeled corpus on two unsupervised tasks—masked language modeling and next sentence prediction. Other architectures such as recurrent neural networks or convolutional neural networks can also be used as the example encoder.",
"In text classification, INLINEFORM0 is a document to be classified; BERT produces a vector representation of each token in INLINEFORM1 , which includes a special beginning-of-document symbol CLS as INLINEFORM2 . In question answering, INLINEFORM3 is a concatenation of a context paragraph INLINEFORM4 and a question INLINEFORM5 separated by a special separator symbol SEP."
],
[
"In text classification, following the original BERT model, we take the representation of the first token INLINEFORM0 from BERT (i.e., the special beginning-of-document symbol) and add a linear transformation and a softmax layer to predict the class of INLINEFORM1 . INLINEFORM2 ",
" Note that since there is no dataset descriptor provided to our model, this decoder is used to predict all classes in all datasets, which we assume to be known in advance.",
"For question answering, our decoder predicts an answer span—the start and end indices of the correct answer in the context. Denote the length of the context paragraph by INLINEFORM0 , and INLINEFORM1 . Denote the encoded representation of the INLINEFORM2 -th token in the context by INLINEFORM3 . Our decoder has two sets of parameters: INLINEFORM4 and INLINEFORM5 . The probability of each context token being the start of the answer is computed as: INLINEFORM6 ",
" We compute the probability of the end index of the answer analogously using INLINEFORM0 . The predicted answer is the span with the highest probability after multiplying the start and end probabilities. We take into account that the start index of an answer needs to precede its end index by setting the probabilities of invalid spans to zero."
],
[
"Our model is augmented with an episodic memory module that stores previously seen examples throughout its lifetime. The episodic memory module is used for sparse experience replay and local adaptation to prevent catastrophic forgetting and encourage positive transfer. We first describe the architecture of our episodic memory module, before discussing how it is used at training and inference (prediction) time in § SECREF3 .",
"The module is a key-value memory block. We obtain the key representation of INLINEFORM0 (denoted by INLINEFORM1 ) using a key network—which is a pretrained BERT model separate from the example encoder. We freeze the key network to prevent key representations from drifting as data distribution changes (i.e. the problem that the key of a test example tends to be closer to keys of recently stored examples).",
"For text classification, our key is an encoded representation of the first token of the document to be classified, so INLINEFORM0 (i.e., the special beginning-of-document symbol). For question answering, we first take the question part of the input INLINEFORM1 . We encode it using the key network and take the first token as the key vector INLINEFORM2 . For both tasks, we store the input and the label INLINEFORM3 as its associated memory value.",
"If we assume that the model has unlimited capacity, we can write all training examples into the memory. However, this assumption is unrealistic in practice. We explore a simple writing strategy that relaxes this constraint based on random write. In random write, we randomly decide whether to write a newly seen example into the memory with some probability. We find that this is a strong baseline that outperforms other simple methods based on surprisal BIBREF20 and the concept of forgettable examples BIBREF21 in our preliminary experiments. We leave investigations of more sophisticated selection methods to future work.",
"Our memory has two retrieval mechanisms: (i) random sampling and (ii) INLINEFORM0 -nearest neighbors. We use random sampling to perform sparse experience replay and INLINEFORM1 -nearest neighbors for local adaptation, which are described in § SECREF3 below."
],
[
"Algorithm UID14 and Algorithm UID14 outline our overall training and inference procedures."
],
[
"In this section, we evaluate our proposed model against several baselines on text classification and question answering tasks."
],
[
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). Specifically, we use AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes) datasets. Since classes for Yelp and Amazon datasets have similar semantics (product ratings), we merge the classes for these two datasets. In total, we have 33 classes in our experiments. These datasets have varying sizes. For example, AGNews is ten times smaller than Yahoo. We create a balanced version all datasets used in our experiments by randomly sampling 115,000 training examples and 7,600 test examples from all datasets (i.e., the size of the smallest training and test sets). We leave investigations of lifelong learning in unbalanced datasets to future work. In total, we have 575,000 training examples and 38,000 test examples.",
"We use three question answering datasets: SQuAD 1.1 BIBREF23 , TriviaQA BIBREF24 , and QuAC BIBREF25 . These datasets have different characteristics. SQuAD is a reading comprehension dataset constructed from Wikipedia articles. It includes almost 90,000 training examples and 10,000 validation examples. TriviaQA is a dataset with question-answer pairs written by trivia enthusiasts and evidence collected retrospectively from Wikipedia and the Web. There are two sections of TriviaQA, Web and Wikipedia, which we treat as separate datasets. The Web section contains 76,000 training examples and 10,000 (unverified) validation examples, whereas the Wikipedia section has about 60,000 training examples and 8,000 validation examples. QuAC is an information-seeking dialog-style dataset where a student asks questions about a Wikipedia article and a teacher answers with a short excerpt from the article. It has 80,000 training examples and approximately 7,000 validation examples."
],
[
"We compare the following models in our experiments:",
"Enc-Dec: a standard encoder-decoder model without any episodic memory module.",
"A-GEM BIBREF9 : Average Gradient Episodic Memory model that defines constraints on the gradients that are used to update model parameters based on retrieved examples from the memory. In its original formulation, A-GEM requires dataset identifiers and randomly samples examples from previous datasets. We generalize it to the setting without dataset identities by randomly sampling from the episodic memory module at fixed intervals, similar to our method.",
"Replay: a model that uses stored examples for sparse experience replay without local adaptation. We perform experience replay by sampling 100 examples from the memory and perform a gradient update after every 10,000 training steps, which gives us a 1% replay rate.",
"MbPA BIBREF7 : an episodic memory model that uses stored examples for local adaptation without sparse experience replay. The original MbPA formulation has a trainable key network. Our MbPA baseline uses a fixed key network since MbPA with a trainable key network performs significantly worse.",
"MbPA INLINEFORM0 : an episodic memory model with randomly retrieved examples for local adaptation (no key network).",
"MbPA++: our episodic memory model described in § SECREF2 .",
"MTL: a multitask model trained on all datasets jointly, used as a performance upper bound."
],
[
"We use a pretrained INLINEFORM0 model BIBREF12 as our example encoder and key network. INLINEFORM1 has 12 Transformer layers, 12 self-attention heads, and 768 hidden dimensions (110M parameters in total). We use the default BERT vocabulary in our experiments.",
"We use Adam BIBREF26 as our optimizer. We set dropout BIBREF27 to 0.1 and INLINEFORM0 in Eq. EQREF16 to 0.001. We set the base learning rate to INLINEFORM1 (based on preliminary experiments, in line with the suggested learning rate for using BERT). For text classification, we use a training batch of size 32. For question answering, the batch size is 8. The only hyperparameter that we tune is the local adaptation learning rate INLINEFORM2 . We set the number of neighbors INLINEFORM3 and the number of local adaptation steps INLINEFORM4 . We show results with other INLINEFORM5 and sensitivity to INLINEFORM6 in § SECREF38 .",
"For each experiment, we use 4 Intel Skylake x86-64 CPUs at 2 GHz, 1 Nvidia Tesla V100 GPU, and 20 GB of RAM."
],
[
"The models are trained in one pass on concatenated training sets, and evaluated on the union of the test sets. To ensure robustness of models to training dataset orderings, we evaluate on four different orderings (chosen randomly) for each task. As the multitask model has no inherent dataset ordering, we report results on four different shufflings of combined training examples. We show the exact orderings in Appendix SECREF6 . We tune the local adaptation learning rate using the first dataset ordering for each task and only run the best setting on the other orderings.",
"A main difference between these two tasks is that in text classification the model acquires knowledge about new classes as training progresses (i.e., only a subset of the classes that corresponds to a particular dataset are seen at each training interval), whereas in question answering the span predictor works similarly across datasets.",
"Table TABREF33 provides a summary of our main results. We report (macro-averaged) accuracy for classification and INLINEFORM0 score for question answering. We provide complete per-dataset (non-averaged) results in Appendix SECREF7 . Our results show that A-GEM outperforms the standard encoder-decoder model Enc-Dec, although it is worse than MbPA on both tasks. Local adaptation (MbPA) and sparse experience replay (Replay) help mitigate catastrophic forgetting compared to Enc-Dec, but a combination of them is needed to achieve the best performance (MbPA++).",
"Our experiments also show that retrieving relevant examples from memory is crucial to ensure that the local adaptation phase is useful. Comparing the results from MbPA++ and MbPA INLINEFORM0 , we can see that the model that chooses neighbors randomly is significantly worse than the model that finds and uses similar examples for local adaptation. We emphasize that having a fixed key network is crucial to prevent representation drift. The original MbPA formulation that updates the key network during training results in a model that only performs slightly better than MbPA INLINEFORM1 in our preliminary experiments. Our results suggest that our best model can be improved further by choosing relevant examples for sparse experience replay as well. We leave investigations of such methods to future work.",
"Comparing to the performance of the multitask model MTL—which is as an upper bound on achievable performance—we observe that there is still a gap between continual models and the multitask model. MbPA++ has the smallest performance gap. For text classification, MbPA++ outperforms single-dataset models in terms of averaged performance (70.6 vs. 60.7), demonstrating the success of positive transfer. For question answering, MbPA++ still lags behind single dataset models (62.0 vs. 66.0). Note that the collection of single-dataset models have many more parameters since there is a different set of model parameters per dataset. See Appendix SECREF8 for detailed results of multitask and single-dataset models.",
"Figure FIGREF34 shows INLINEFORM0 score and accuracy of various models on the test set corresponding to the first dataset seen during training as the models are trained on more datasets. The figure illustrates how well each model retains its previously acquired knowledge as it learns new knowledge. We can see that MbPA++ is consistently better compared to other methods."
],
[
"Our results in § SECREF30 assume that we can store all examples in memory (for all models, including the baselines). We investigate variants of MbPA++ that store only 50% and 10% of training examples. We randomly decide whether to write an example to memory or not (with probability 0.5 or 0.1). We show the results in Table TABREF42 . The results demonstrate that while the performance of the model degrades as the number of stored examples decreases, the model is still able to maintain a reasonably high performance even with only 10% memory capacity of the full model.",
"We investigate the effect of the number of retrieved examples for local adaptation to the performance of the model in Table TABREF42 . In both tasks, the model performs better as the number of neighbors increases. Recall that the goal of the local adaptation phase is to shape the output distribution of a test example to peak around relevant classes (or spans) based on retrieved examples from the memory. As a result, it is reasonable for the performance of the model to increase with more neighbors (up to a limit) given a key network that can reliably compute similarities between the test example and stored examples in memory and a good adaptation method.",
"Training MbPA++ takes as much time as training an encoder-decoder model without an episodic memory module since experience replay is performed sparsely (i.e., every 10,000 steps) with only 100 examples. This cost is negligible in practice and we observe no significant difference in terms of wall clock time to the vanilla encoder-decoder baseline. MbPA++ has a higher space complexity for storing seen examples, which could be controlled by limiting the memory capacity.",
"At inference time, MbPA++ requires a local adaptation phase and is thus slower than methods without local adaptation. This can be seen as a limitation of MbPA++ (and MbPA). One way to speed it up is to parallelize predictions across test examples, since each prediction is independent of others. We set the number of local adaptation steps INLINEFORM0 in our experiments. Figure FIGREF44 shows INLINEFORM1 is needed to converge to an optimal performance.",
"Comparing MBpA++ to other episodic memory models, MBpA has roughly the same time and space complexity as MBpA++. A-GEM, on the other hand, is faster at prediction time (no local adaptation), although at training time it is slower due to extra projection steps and uses more memory since it needs to store two sets of gradients (one from the current batch, and one from samples from the memory). We find that this cost is not negligible when using a large encoder such as BERT.",
"We show examples of retrieved neighbors from our episodic memory model in Appendix SECREF9 . We observe that the model manages to retrieve examples that are both syntactically and semantically related to a given query derived from a test example."
],
[
"We introduced a lifelong language learning setup and presented an episodic memory model that performs sparse experience replay and local adaptation to continuously learn and reuse previously acquired knowledge. Our experiments demonstrate that our proposed method mitigates catastrophic forgetting and outperforms baseline methods on text classification and question answering."
],
[
"We use the following dataset orders (chosen randomly) for text classification:",
"For question answering, the orders are:"
],
[
"We show per-dataset breakdown of results in Table TABREF33 in Table TABREF54 and Table TABREF55 for text classification and question answering respectively."
],
[
"We show results of a single dataset model that is only trained on a particular dataset in Table TABREF56 ."
],
[
"We show examples of retrieved examples from memory given a test example in Table TABREF57 ."
]
],
"section_name": [
"Introduction",
"Model",
"Example Encoder",
"Task Decoder",
"Episodic Memory",
"Training and Inference",
"Experiments",
"Datasets",
"Models",
"Implementation Details",
"Results",
"Analysis",
"Conclusion",
"Dataset Order",
"Full Results",
"Single Dataset Models",
"Retrieved Examples"
]
} | {
"answers": [
{
"annotation_id": [
"514b1a2734029de6837ea85a22d65ebca45e7a92",
"d9ce01b3b01a3e80388b91939dd4ab1f7364437b",
"dd47b6035fd7c214ec494f7f535e38a756c3d713"
],
"answer": [
{
"evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). Specifically, we use AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes) datasets. Since classes for Yelp and Amazon datasets have similar semantics (product ratings), we merge the classes for these two datasets. In total, we have 33 classes in our experiments. These datasets have varying sizes. For example, AGNews is ten times smaller than Yahoo. We create a balanced version all datasets used in our experiments by randomly sampling 115,000 training examples and 7,600 test examples from all datasets (i.e., the size of the smallest training and test sets). We leave investigations of lifelong learning in unbalanced datasets to future work. In total, we have 575,000 training examples and 38,000 test examples."
],
"extractive_spans": [
"news classification",
"sentiment analysis",
"Wikipedia article classification",
"questions and answers categorization "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). Specifically, we use AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes) datasets. Since classes for Yelp and Amazon datasets have similar semantics (product ratings), we merge the classes for these two datasets. In total, we have 33 classes in our experiments. These datasets have varying sizes. For example, AGNews is ten times smaller than Yahoo. We create a balanced version all datasets used in our experiments by randomly sampling 115,000 training examples and 7,600 test examples from all datasets (i.e., the size of the smallest training and test sets). We leave investigations of lifelong learning in unbalanced datasets to future work. In total, we have 575,000 training examples and 38,000 test examples."
],
"extractive_spans": [
" AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). Specifically, we use AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes) datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo). Specifically, we use AGNews (4 classes), Yelp (5 classes), DBPedia (14 classes), Amazon (5 classes), and Yahoo (10 classes) datasets. Since classes for Yelp and Amazon datasets have similar semantics (product ratings), we merge the classes for these two datasets. In total, we have 33 classes in our experiments. These datasets have varying sizes. For example, AGNews is ten times smaller than Yahoo. We create a balanced version all datasets used in our experiments by randomly sampling 115,000 training examples and 7,600 test examples from all datasets (i.e., the size of the smallest training and test sets). We leave investigations of lifelong learning in unbalanced datasets to future work. In total, we have 575,000 training examples and 38,000 test examples."
],
"extractive_spans": [
"news classification",
"sentiment analysis",
"Wikipedia article classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use publicly available text classification datasets from BIBREF22 to evaluate our models (http://goo.gl/JyCnZq). This collection of datasets includes text classification datasets from diverse domains such as news classification (AGNews), sentiment analysis (Yelp, Amazon), Wikipedia article classification (DBPedia), and questions and answers categorization (Yahoo)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0091d3a90f3795a47589abf5e9d45b778b705742",
"85f4ad0fac577dcc1f124e58a312d40ddf7e2238",
"cf5a72c882ec297030e483458406c0e2ee98d746"
],
"answer": [
{
"evidence": [
"We compare the following models in our experiments:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the following models in our experiments:"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We compare the following models in our experiments:",
"Enc-Dec: a standard encoder-decoder model without any episodic memory module.",
"A-GEM BIBREF9 : Average Gradient Episodic Memory model that defines constraints on the gradients that are used to update model parameters based on retrieved examples from the memory. In its original formulation, A-GEM requires dataset identifiers and randomly samples examples from previous datasets. We generalize it to the setting without dataset identities by randomly sampling from the episodic memory module at fixed intervals, similar to our method.",
"Replay: a model that uses stored examples for sparse experience replay without local adaptation. We perform experience replay by sampling 100 examples from the memory and perform a gradient update after every 10,000 training steps, which gives us a 1% replay rate.",
"MbPA BIBREF7 : an episodic memory model that uses stored examples for local adaptation without sparse experience replay. The original MbPA formulation has a trainable key network. Our MbPA baseline uses a fixed key network since MbPA with a trainable key network performs significantly worse.",
"MbPA INLINEFORM0 : an episodic memory model with randomly retrieved examples for local adaptation (no key network).",
"MbPA++: our episodic memory model described in § SECREF2 .",
"MTL: a multitask model trained on all datasets jointly, used as a performance upper bound."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the following models in our experiments:\n\nEnc-Dec: a standard encoder-decoder model without any episodic memory module.\n\nA-GEM BIBREF9 : Average Gradient Episodic Memory model that defines constraints on the gradients that are used to update model parameters based on retrieved examples from the memory. In its original formulation, A-GEM requires dataset identifiers and randomly samples examples from previous datasets. We generalize it to the setting without dataset identities by randomly sampling from the episodic memory module at fixed intervals, similar to our method.\n\nReplay: a model that uses stored examples for sparse experience replay without local adaptation. We perform experience replay by sampling 100 examples from the memory and perform a gradient update after every 10,000 training steps, which gives us a 1% replay rate.\n\nMbPA BIBREF7 : an episodic memory model that uses stored examples for local adaptation without sparse experience replay. The original MbPA formulation has a trainable key network. Our MbPA baseline uses a fixed key network since MbPA with a trainable key network performs significantly worse.\n\nMbPA INLINEFORM0 : an episodic memory model with randomly retrieved examples for local adaptation (no key network).\n\nMbPA++: our episodic memory model described in § SECREF2 .\n\nMTL: a multitask model trained on all datasets jointly, used as a performance upper bound."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We compare the following models in our experiments:",
"Enc-Dec: a standard encoder-decoder model without any episodic memory module.",
"A-GEM BIBREF9 : Average Gradient Episodic Memory model that defines constraints on the gradients that are used to update model parameters based on retrieved examples from the memory. In its original formulation, A-GEM requires dataset identifiers and randomly samples examples from previous datasets. We generalize it to the setting without dataset identities by randomly sampling from the episodic memory module at fixed intervals, similar to our method.",
"Replay: a model that uses stored examples for sparse experience replay without local adaptation. We perform experience replay by sampling 100 examples from the memory and perform a gradient update after every 10,000 training steps, which gives us a 1% replay rate.",
"MbPA BIBREF7 : an episodic memory model that uses stored examples for local adaptation without sparse experience replay. The original MbPA formulation has a trainable key network. Our MbPA baseline uses a fixed key network since MbPA with a trainable key network performs significantly worse.",
"MbPA INLINEFORM0 : an episodic memory model with randomly retrieved examples for local adaptation (no key network).",
"MbPA++: our episodic memory model described in § SECREF2 .",
"MTL: a multitask model trained on all datasets jointly, used as a performance upper bound."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the following models in our experiments:\n\nEnc-Dec: a standard encoder-decoder model without any episodic memory module.\n\nA-GEM BIBREF9 : Average Gradient Episodic Memory model that defines constraints on the gradients that are used to update model parameters based on retrieved examples from the memory.",
"Replay: a model that uses stored examples for sparse experience replay without local adaptation.",
"MbPA BIBREF7 : an episodic memory model that uses stored examples for local adaptation without sparse experience replay.",
"MbPA INLINEFORM0 : an episodic memory model with randomly retrieved examples for local adaptation (no key network).\n\nMbPA++: our episodic memory model described in § SECREF2 .\n\nMTL: a multitask model trained on all datasets jointly, used as a performance upper bound."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3555a3a78aec8559ae3792f84e12d879f382f566",
"4e9ad14687ed876e340ff45319c9270cc7647127"
],
"answer": [
{
"evidence": [
"Our model is augmented with an episodic memory module that stores previously seen examples throughout its lifetime. The episodic memory module is used for sparse experience replay and local adaptation to prevent catastrophic forgetting and encourage positive transfer. We first describe the architecture of our episodic memory module, before discussing how it is used at training and inference (prediction) time in § SECREF3 ."
],
"extractive_spans": [
"module that stores previously seen examples throughout its lifetime",
"used for sparse experience replay and local adaptation to prevent catastrophic forgetting and encourage positive transfer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model is augmented with an episodic memory module that stores previously seen examples throughout its lifetime. The episodic memory module is used for sparse experience replay and local adaptation to prevent catastrophic forgetting and encourage positive transfer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our model is augmented with an episodic memory module that stores previously seen examples throughout its lifetime. The episodic memory module is used for sparse experience replay and local adaptation to prevent catastrophic forgetting and encourage positive transfer. We first describe the architecture of our episodic memory module, before discussing how it is used at training and inference (prediction) time in § SECREF3 ."
],
"extractive_spans": [],
"free_form_answer": "It is a memory that stores previously seen examples throughout its lifetime",
"highlighted_evidence": [
"Our model is augmented with an episodic memory module that stores previously seen examples throughout its lifetime. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What text classification tasks are considered?",
"Do they compare against other models?",
"What is episodic memory?"
],
"question_id": [
"b537832bba2eb6d34702a9d71138e661c05a7c3a",
"1002bd01372eba0f3078fb4a951505278ed45f2e",
"3450723bf66956486de777f141bde5073e4a7694"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An illustration of our model and how it interacts with the key-value memory module during training (left) and inference (right). During training, newly seen examples are used to update the base model and stored in the memory. At certain intervals, we sample examples from the memory and perform gradient updates on the base model (experience replay). During inference, we retrieve examples whose keys are similar to a test example under consideration to fine-tune the model (local adaptation). We use the fine-tuned model to make a prediction and then discard it—keeping the base model for other predictions.",
"Figure 2: Performance on test examples corresponding to the first dataset seen during training as training progresses.",
"Table 3: Results for different # of retrieved examples K.",
"Figure 3: F1 scores for MBPA++ and MBPA as the # of local adaptation steps increases.",
"Table 4: Per-dataset results on text classification for each ordering and model.",
"Table 5: Per-dataset results on question answering for each ordering and model.",
"Table 6: Performance of a standard encoder-decoder model on each dataset in our experiments. We report accuracy for text classification and F1 score for question answering. We also show results from a multitask model for comparisons."
],
"file": [
"2-Figure1-1.png",
"7-Figure2-1.png",
"8-Table3-1.png",
"8-Figure3-1.png",
"11-Table4-1.png",
"12-Table5-1.png",
"12-Table6-1.png"
]
} | [
"What is episodic memory?"
] | [
[
"1906.01076-Episodic Memory-0"
]
] | [
"It is a memory that stores previously seen examples throughout its lifetime"
] | 133 |
1908.09919 | Gender Prediction from Tweets: Improving Neural Representations with Hand-Crafted Features | Author profiling is the characterization of an author through some key attributes such as gender, age, and language. In this paper, a RNN model with Attention (RNNwA) is proposed to predict the gender of a twitter user using their tweets. Both word level and tweet level attentions are utilized to learn 'where to look'. This model (this https URL) is improved by concatenating LSA-reduced n-gram features with the learned neural representation of a user. Both models are tested on three languages: English, Spanish, Arabic. The improved version of the proposed model (RNNwA + n-gram) achieves state-of-the-art performance on English and has competitive results on Spanish and Arabic. | {
"paragraphs": [
[
"Author profiling is the characterization of an author through some key attributes such as gender, age, and language. It's an indispensable task especially in security, forensics, and marketing. Recently, social media has become a great data source for the potential learning approaches. Furthermore, gender prediction has been a popular profiling task.",
"The traditional approach to gender prediction problem is extracting a useful set of hand-crafted features and then feeding them into a standard classification algorithm. In their study, BIBREF0 work with the style-based features of message length, stop word usage, frequency of smiley etc. and use different classifiers such as k-nearest neighbor, naive bayes, covering rules, and backpropagation to predict gender on chat messages. Similarly, BIBREF1 select some hand-crafted features and feed them into various classifiers.",
"Most of the work on gender prediction rely on n-gram features BIBREF2. BIBREF3 give Latent Semantic Analysis (LSA)-reduced forms of word and character n-grams into Support Vector Machine (SVM) and achieve state-of-the-art performance. Apart from exploiting n-gram frequencies, there are studies BIBREF4, BIBREF5, BIBREF6 to extract cross-lingual features to determine gender from tweets. Some other work BIBREF4, BIBREF7 exploit user metadata besides using just tweets.",
"Recently, neural network-based models have been proposed to solve this problem. Rather than explicitly extracting features, the aim is to develop an architecture that implicitly learns. In author profiling, both style and content-based features were proved useful BIBREF8 and neural networks are able to capture both syntactic and semantic regularities. In general, syntactic information is drawn from the local context. On the other hand, semantic information is often captured with larger window sizes. Thus, CNNs are preferred to obtain style-based features while RNNs are the methods of choice for addressing content-based features BIBREF9. In literature, CNN BIBREF10 or RNN BIBREF11, BIBREF12, BIBREF13 is used on this task. BIBREF11 obtain state-of-the-art performance among neural methods by proposing a model architecture where they process text through RNN with GRU cells. Also, the presence of an attention layer is shown to boost the performance of neural methods BIBREF11, BIBREF10.",
"In this work, we propose a model that relies on RNN with attention mechanism (RNNwA). A bidirectional RNN with attention mechanism both on word level and tweet level is trained with word embeddings. The final representation of the user is fed to a fully connected layer for prediction. Since combining some hand-crafted features with a learned linear layer has shown to perform well in complex tasks like Semantic Role Labeling (SRL) BIBREF14, an improved version of the model (RNNwA + n-gram) is also tested with hand-crafted features. In the improved version, LSA-reduced n-gram features are concatenated with the neural representation of the user. Then the result is fed into a fully-connected layer to make prediction. Models are tested in three languages; English, Spanish, and Arabic, and the improved version achieves state-of-the-art accuracy on English, and competitive results on Spanish and Arabic corpus.",
"There are many datasets created for this task BIBREF15, BIBREF16. In this work, we have used the dataset and benchmarks provided by the PAN 2018 shared task on author profiling BIBREF15. As the dataset contains a constant number of 100 tweets per user, accuracy tests are performed both on user and tweet level (tweet-level predictions are made by removing the user-level attention). Tweet-level accuracy tests show interesting results during hyperparameter optimization. When the tweet-level predictions are averaged to produce user-level predictions, it is seen that the hyperparameters that gave the best results in terms of tweet-level accuracy, performs worse in user-level accuracy. The better user-level models, with different hyperparameters, that gave the highest user-level accuracy are observed to slightly overfit on tweet-level. It leads us to believe that the overfitting in the tweet-level predictions in best user-level models acts similar to an attention mechanism by over-emphasizing some distinctive tweets and ignoring the rest."
],
[
"In author profiling, both style-based and content-based features must be addressed BIBREF8. An appropriate baseline for this task is a CNN-based model that is able to capture style-based information BIBREF10. The proposed RNN-based model relies on extracting content-based features. In addition, in order to improve its accuracy, the proposed model is combined with some hand-crafted features. For all of the models, Adam optimizer BIBREF17 is used with cross-entropy loss along with the L2 regularization to prevent from overfitting."
],
[
"CNN model (denoted CNNwA on results) is based on BIBREF10 where each character in the tweet is represented with a character embedding of size 25, which is trained along the neural network. All characters are lower-cased. Non-alphabetical characters such as punctuation are kept with a view to capturing some information on the profile of the user since they are heavily used in twitter as emoticons.",
"Filters of size $3\\times 3$, $6\\times 6$ and $9\\times 9$ are used for each language, and the number of filters is determined by performing grid search on validation set. Among the tested range (50-125 with intervals of 25), the number of filters that gives the best accuracy is 100 (per each filter), for all languages."
],
[
"Since the dataset is not big enough to train word embeddings, Glove word embeddings BIBREF18 of size 200 are used in the proposed RNN Model (denoted RNNwA on results) due to their success at various NLP tasks and their multi-linguality: They encompass all the languages in the test set. In addition, the Glove embeddings are also trained on Twitter data which make them reflect the nature of the dataset better than other alternatives.",
"A bidirectional RNN with GRU BIBREF19 cells are used in this model where the number of cells is a hyperparameter. Among the tested range (50-150 with intervals of 25), best accuracy on validation set is obtained by 150 cells in English and 100 cells in Spanish and Arabic. An attention mechanism is used on word-level in addition to tweet-level to capture the important parts of each tweet as shown in Figure FIGREF2.",
"A feature vector for each tweet is created by feeding tweets to RNN separately. In order to discriminate tweets with respect to their information carrying capacity on its author's gender, Bahdanau attention mechanism BIBREF20 is used to combine the tweets rather than concatenating them before feeding to the network or averaging their predictions later. Figure FIGREF4 shows the tweet-level attention layer in detail which is calculated by the following formulas:",
"where $W_\\alpha $ is a learnable weight matrix that is used to multiply each output of the RNN, $t_i$ is the feature vector of $i$th tweet, $b$ is a learnable bias vector, $w_i$ is a learnable attention weight, $A_i$ is the attention context vector, $v_i$ is the attention value for $i$th tweet, $o_i$ is attention output vector for the corresponding tweet, $K$ is the output vector for user. Matrix $W_\\alpha $ and vectors $w_i$ and $b$ are learned parameters.",
"Attention layer outputs a single feature vector that corresponds to a user, which is then fed to a fully-connected layer to lower the dimension to the number of classes.",
"There are two different attention layers on the model. One is a word level attention where it amplifies the signal coming from important words, the other one is on tweet level where it combines the signals coming from each tweet and creates the final representation of a user."
],
[
"For this model (denoted RNNwA + n-gram on results), n-gram features are collected with the same method described in BIBREF3. At the beginning, word level and character level n-gram features are obtained and concatenated. Then they are normalized with tf-idf transformation. For reducing the number of features and sparsity in n-gram vectors, tuples that have frequency less than 2 are ignored. For character level n-gram $N$ is selected as $3,4$, and 5 and for word level n-gram, $N$ is $1,2$ for Spanish and Arabic; $1,2,3$ for English. The dimension of the vector is reduced by LSA to 300. Then the vector is concatenated with neural representation which is produced right after tweet level attention in RNNwA model. The resultant representation is fed to a fully- connected layer that produces predictions."
],
[
"Models are tested on the PAN 2018 author profiling dataset BIBREF15, which provides tweets in three languages: English, Spanish and Arabic with training/test datasets of sizes (3000 users, 1900 users), (3000 users, 2200 users), and (1500 users, 1000 users) respectively, where each user has 100 tweets. Each training set is further partitioned randomly into training and validation sets with the ratio ($0.8$, $0.2$) respectively for hyper-parameter optimization."
],
[
"In order to measure the effectiveness of the attention mechanism, in addition to the CNN baseline model (CNNwA) and RNNwA, two new models (denoted as CNN and RNN) are created by removing the tweet level attention layer (word level attention stays the same) and generating a prediction for each tweet then just simply taking an average to give a user level prediction. Tweet level accuracies for these models are shown in Table TABREF9.",
"In Table TABREF10, user level accuracy results for the proposed model (RNNwA) along with the baseline models are given. As can be seen in the results, tweet level attention mechanism increases the score of all baseline models with the only exception of the CNNwA model in Arabic.",
"Also, compared to the best neural model BIBREF11 where max pooling is used instead of an attention mechanism on the outputs of RNN, the proposed model (RNNwA) gives better results in terms of accuracy on English and Arabic datasets, and produces similar accuracy levels on Spanish dataset (Table TABREF11). These results show that an attention layer is able to learn \"where/how to look\" for features that are helpful in identifying the gender of a user.",
"On the other hand, the improved model (RNNwA + n-gram), where neural and hand-crafted features are concatenated, increases the accuracy of the proposed model by approximately $0,5$% on English and approximately 2% in Spanish and Arabic. This also supports our intuition that the performance of neural models can be improved by hand-crafted features, which is based on the study of BIBREF14. As can be seen in Table TABREF11, the improved model outperforms the state-of-the-art method of BIBREF3 in English and produces competitive results in Spanish and Arabic.",
"There is an interesting observation concerning the models without tweet level attention (RNN and CNN) in hyper-parameter optimization. During the hyperparameter optimization of the models RNN and CNN, we saved both the models that gave the best tweet-level accuracy and the models that gave the best user-level accuracy. The expectation is to see that the best setup on tweet-level also gives the best performance in user-level, but the outcome is the opposite: Best setups on tweet-level always fall behind best user-level setups. Performance differences between various setups can be seen in Figure FIGREF12 where accuracies of the best three models in terms of tweet-level and best three models in terms of user-level are shown for all languages. It can be observed that the best tweet-level setups are almost $4\\%$ worse in terms of user-level accuracy. Deeper investigation shows that the best user-level models exhibit slight overfitting on tweet-level, in training. Although overfitting normally leads to poor generalization, in this case we believe that this overfitting acts similar to an attention mechanism by over-emphasizing some important tweets and ignoring uninformative ones in the process. Even though this leads to poor tweet-level accuracy, it improves the user-level accuracy of the models as it can be seen from the Figure FIGREF12.",
"[1]In their paper, authors report a result of 82.21 in English but we couldn't verify their accuracy in our repetitions by using their software and the same dataset. [2]Since their software is not provided, we directly take the accuracy values from their paper.",
"",
""
],
[
"In this work, a neural network-based model namely RNN with attention (RNNwA) is proposed on the task of gender prediction from tweets. The proposed model is further improved by hand-crafted features which are obtained by LSA-reduced n-grams and concatenated with the neural representation from RNNwA. User representations that is the result of this model is then fed to a fully-connected layer to make prediction. This improved model achieved state-of-the-art accuracy on English and has a competitive performance on Spanish and Arabic.",
"We also would like to kindly remind our readers that although the model is self-learning, there might still exist a gender bias in the evaluation of the model due to the data itself. Since the model learns to predict the gender directly from tweets of the twitter users, any bias the twitter users have might be reflected in the model predictions."
],
[
"We would like to thank Computer Vision Research Group from Izmir Institute of Technology for providing us the hardware for performing the tests in this research.",
"The Titan V used for this research was donated by the NVIDIA Corporation."
]
],
"section_name": [
"Introduction",
"Model architecture",
"Model architecture ::: Baseline CNN model",
"Model architecture ::: RNN Model",
"Model architecture ::: RNN with N-gram Model",
"Model architecture ::: Dataset",
"Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"06efbf97ec352b24dc8efb590e061969a5a5aa6f",
"5e888014d8fe6ecb6ddd29656d72a6988d962e7c",
"83bd4b77db9b7781db6e07538a712a900facf774"
],
"answer": [
{
"evidence": [
"In this work, a neural network-based model namely RNN with attention (RNNwA) is proposed on the task of gender prediction from tweets. The proposed model is further improved by hand-crafted features which are obtained by LSA-reduced n-grams and concatenated with the neural representation from RNNwA. User representations that is the result of this model is then fed to a fully-connected layer to make prediction. This improved model achieved state-of-the-art accuracy on English and has a competitive performance on Spanish and Arabic."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed model is further improved by hand-crafted features which are obtained by LSA-reduced n-grams and concatenated with the neural representation from RNNwA."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The traditional approach to gender prediction problem is extracting a useful set of hand-crafted features and then feeding them into a standard classification algorithm. In their study, BIBREF0 work with the style-based features of message length, stop word usage, frequency of smiley etc. and use different classifiers such as k-nearest neighbor, naive bayes, covering rules, and backpropagation to predict gender on chat messages. Similarly, BIBREF1 select some hand-crafted features and feed them into various classifiers.",
"Most of the work on gender prediction rely on n-gram features BIBREF2. BIBREF3 give Latent Semantic Analysis (LSA)-reduced forms of word and character n-grams into Support Vector Machine (SVM) and achieve state-of-the-art performance. Apart from exploiting n-gram frequencies, there are studies BIBREF4, BIBREF5, BIBREF6 to extract cross-lingual features to determine gender from tweets. Some other work BIBREF4, BIBREF7 exploit user metadata besides using just tweets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The traditional approach to gender prediction problem is extracting a useful set of hand-crafted features and then feeding them into a standard classification algorithm. In their study, BIBREF0 work with the style-based features of message length, stop word usage, frequency of smiley etc. and use different classifiers such as k-nearest neighbor, naive bayes, covering rules, and backpropagation to predict gender on chat messages. Similarly, BIBREF1 select some hand-crafted features and feed them into various classifiers.\n\nMost of the work on gender prediction rely on n-gram features BIBREF2. BIBREF3 give Latent Semantic Analysis (LSA)-reduced forms of word and character n-grams into Support Vector Machine (SVM) and achieve state-of-the-art performance. Apart from exploiting n-gram frequencies, there are studies BIBREF4, BIBREF5, BIBREF6 to extract cross-lingual features to determine gender from tweets. Some other work BIBREF4, BIBREF7 exploit user metadata besides using just tweets."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"For this model (denoted RNNwA + n-gram on results), n-gram features are collected with the same method described in BIBREF3. At the beginning, word level and character level n-gram features are obtained and concatenated. Then they are normalized with tf-idf transformation. For reducing the number of features and sparsity in n-gram vectors, tuples that have frequency less than 2 are ignored. For character level n-gram $N$ is selected as $3,4$, and 5 and for word level n-gram, $N$ is $1,2$ for Spanish and Arabic; $1,2,3$ for English. The dimension of the vector is reduced by LSA to 300. Then the vector is concatenated with neural representation which is produced right after tweet level attention in RNNwA model. The resultant representation is fed to a fully- connected layer that produces predictions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For this model (denoted RNNwA + n-gram on results), n-gram features are collected with the same method described in BIBREF3. At the beginning, word level and character level n-gram features are obtained and concatenated. Then they are normalized with tf-idf transformation. For reducing the number of features and sparsity in n-gram vectors, tuples that have frequency less than 2 are ignored. For character level n-gram $N$ is selected as $3,4$, and 5 and for word level n-gram, $N$ is $1,2$ for Spanish and Arabic; $1,2,3$ for English. The dimension of the vector is reduced by LSA to 300. Then the vector is concatenated with neural representation which is produced right after tweet level attention in RNNwA model. The resultant representation is fed to a fully- connected layer that produces predictions."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2b285645dc77e5952140a59357a911c7dc7b6483",
"3749de3ff7c7637a1c9bf6a50e5a254644d68cd1",
"5d3f641a0aedf0b8383d64bb4f5999e76df09ec8"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy on PAN 2018 test set."
],
"extractive_spans": [],
"free_form_answer": "on PAN 2018 dataset, the accuracy is 82.31% for English, 80.22% for Spanish and 80.50% for Arabic",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy on PAN 2018 test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"On the other hand, the improved model (RNNwA + n-gram), where neural and hand-crafted features are concatenated, increases the accuracy of the proposed model by approximately $0,5$% on English and approximately 2% in Spanish and Arabic. This also supports our intuition that the performance of neural models can be improved by hand-crafted features, which is based on the study of BIBREF14. As can be seen in Table TABREF11, the improved model outperforms the state-of-the-art method of BIBREF3 in English and produces competitive results in Spanish and Arabic.",
"FLOAT SELECTED: Table 3: Accuracy on PAN 2018 test set."
],
"extractive_spans": [],
"free_form_answer": "Accuracy: English 82.31, Spanish 80.22, Arabic 80.50",
"highlighted_evidence": [
"As can be seen in Table TABREF11, the improved model outperforms the state-of-the-art method of BIBREF3 in English and produces competitive results in Spanish and Arabic.",
"FLOAT SELECTED: Table 3: Accuracy on PAN 2018 test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: User Level Accuracy of the Proposed Model (RNNwA) along with the Baselines."
],
"extractive_spans": [],
"free_form_answer": "In terms of accuracy, 81.789% for English, 78.227% for Spanish and 78.5% for Arabic",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: User Level Accuracy of the Proposed Model (RNNwA) along with the Baselines."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"014f8026384acc63c0d3e7bf58fcdacbc598d917",
"c44973cf976f3f4826c748b21a9599d6d687df11"
],
"answer": [
{
"evidence": [
"Model architecture ::: RNN with N-gram Model",
"For this model (denoted RNNwA + n-gram on results), n-gram features are collected with the same method described in BIBREF3. At the beginning, word level and character level n-gram features are obtained and concatenated. Then they are normalized with tf-idf transformation. For reducing the number of features and sparsity in n-gram vectors, tuples that have frequency less than 2 are ignored. For character level n-gram $N$ is selected as $3,4$, and 5 and for word level n-gram, $N$ is $1,2$ for Spanish and Arabic; $1,2,3$ for English. The dimension of the vector is reduced by LSA to 300. Then the vector is concatenated with neural representation which is produced right after tweet level attention in RNNwA model. The resultant representation is fed to a fully- connected layer that produces predictions."
],
"extractive_spans": [],
"free_form_answer": "It's a recurrent neural network with n-gram model",
"highlighted_evidence": [
"Model architecture ::: RNN with N-gram Model\nFor this model (denoted RNNwA + n-gram on results), n-gram features are collected with the same method described in BIBREF3. At the beginning, word level and character level n-gram features are obtained and concatenated. Then they are normalized with tf-idf transformation. For reducing the number of features and sparsity in n-gram vectors, tuples that have frequency less than 2 are ignored. For character level n-gram $N$ is selected as $3,4$, and 5 and for word level n-gram, $N$ is $1,2$ for Spanish and Arabic; $1,2,3$ for English. The dimension of the vector is reduced by LSA to 300. Then the vector is concatenated with neural representation which is produced right after tweet level attention in RNNwA model. The resultant representation is fed to a fully- connected layer that produces predictions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A bidirectional RNN with GRU BIBREF19 cells are used in this model where the number of cells is a hyperparameter. Among the tested range (50-150 with intervals of 25), best accuracy on validation set is obtained by 150 cells in English and 100 cells in Spanish and Arabic. An attention mechanism is used on word-level in addition to tweet-level to capture the important parts of each tweet as shown in Figure FIGREF2."
],
"extractive_spans": [
"bidirectional RNN with GRU"
],
"free_form_answer": "",
"highlighted_evidence": [
"A bidirectional RNN with GRU BIBREF19 cells are used in this model where the number of cells is a hyperparameter."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Are LSA-reduced n-gram features considered hand-crafted features?",
"What is the performance of the model on English, Spanish and Arabic?",
"How is this model different from a LSTM?"
],
"question_id": [
"36cb7ebdd39e0b8a89ff946d3a3aef8a76a6bb43",
"28e50459da60ceda49fe1578c12f3f805b288bd0",
"e1f61500eb733f2b95692b6a9a53f8aaa6f1e1f6"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"Spanish",
"Spanish",
"Spanish"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Proposed model.",
"Figure 2: Tweet-level Attention Layer in Detail.",
"Table 2: User Level Accuracy of the Proposed Model (RNNwA) along with the Baselines.",
"Table 1: Tweet Level Accuracy of the CNN and RNN Models without Attention.",
"Table 3: Accuracy on PAN 2018 test set.",
"Figure 3: Comparison of Tweet-Level and User-level accuracy of RNN Model. Best three user-level models (colored in red) and best three tweet-level models (colored in blue) are selected for each language."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table1-1.png",
"6-Table3-1.png",
"6-Figure3-1.png"
]
} | [
"What is the performance of the model on English, Spanish and Arabic?",
"How is this model different from a LSTM?"
] | [
[
"1908.09919-Results-3",
"1908.09919-6-Table3-1.png",
"1908.09919-5-Table2-1.png"
],
[
"1908.09919-Model architecture ::: RNN with N-gram Model-0",
"1908.09919-Model architecture ::: RNN Model-1"
]
] | [
"In terms of accuracy, 81.789% for English, 78.227% for Spanish and 78.5% for Arabic",
"It's a recurrent neural network with n-gram model"
] | 134 |
1910.10670 | Efficient Dynamic WFST Decoding for Personalized Language Models | We propose a two-layer cache mechanism to speed up dynamic WFST decoding with personalized language models. The first layer is a public cache that stores most of the static part of the graph. This is shared globally among all users. A second layer is a private cache that caches the graph that represents the personalized language model, which is only shared by the utterances from a particular user. We also propose two simple yet effective pre-initialization methods, one based on breadth-first search, and another based on a data-driven exploration of decoder states using previous utterances. Experiments with a calling speech recognition task using a personalized contact list demonstrate that the proposed public cache reduces decoding time by factor of three compared to decoding without pre-initialization. Using the private cache provides additional efficiency gains, reducing the decoding time by a factor of five. | {
"paragraphs": [
[
"Speech input is now a common feature for smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or out of the main vocabulary of the system. One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding, without any noticeable increase in latency.",
"Many state-of-the-art speech recognition decoders are based on the weighted finite state transducer (WFST) paradigm BIBREF0, BIBREF1. A conventional WFST decoder searches a statically composed $H C L G$ graph, where $H$ is the graph that translates HMM states to CD phones, $C$ translates CD phones to graphemes, $L$ translates graphemes to words and $G$ is graph that represents the language model. Using a statically composed graph has two limitations. First, it is both compute and memory intensive when the vocabulary and LM are large. Second, the static graph approach makes it hard to handle personalized language models BIBREF2. Many common tasks a user may want to perform with a voice assistant such as making phone calls, messaging to a specific contact or playing favorite music require a personalized language model. A dynamic WFST decoder is better suited for such cases. As denoted in Eq (DISPLAY_FORM1), in a dynamic WFST decoder, $HCL$ is composed and optimized offline, while $G$ is composed on the fly with lazy (on-demand) composition, denoted as $\\circ $.",
"To handle dynamic entities, a class LM $G_c$ is normally used as background $G$ and a personalized LM $G_p$ is replaced on-the-fly, before applying lazy composition.",
"Since the non-terminal states are composed on-the-fly, it means the states of recognition FST will also contain personalized information that cannot be used by other users or service threads.",
"In previous work, a method was proposed to do a pre-initialized composition for a non-class LM BIBREF3. However, it the dynamic part is still expanded on-the-fly. In this work, we propose two improvements in order to best leverage class language models. First, we use simpler methods for pre-initialization which do not need to pre-generate decoder state statistics. Second, we propose a two-layer pre-initialization mechanism that also avoids performing dynamic expansion on per user basis. In the two-layer pre-initialization method, we make use of a class LM with class tag. We build a personalized FST that contains the members of the class for each user. Using the FST replacement algorithm, we obtain a personalized language transducer BIBREF4. We perform a pre-composition for all FST states whose transitions do not contain class tags. By doing so, the actual on-demand composition is only required for the states in personalized FST. For a multi-threaded service, the pre-composed FST can be shared by all threads, since it does not contain personalized FST states (non-terminals). The personalized part will be shared for all utterances from the same user, which will take full advantage of memory usage.",
"Unlike the previous pre-initialization approach that is based on calculating the state statistics BIBREF3, our simplified pre-initialization methods do not rely on pre-calculated state frequencies. Instead, we directly expand the graph with breadth-first search or through a data-driven approach where a small numbers of utterances are processed by the decoder offline. We found that both methods are effective, but the data-driven approach outperforms the breadth first search algorithm. Both methods can be combined to achieve the best performance. Through a series of experiments on a speech recognition task for the calling domain, we found that pre-initialization on the public graph speeds up the decoding time by a factor of three. Futhermore, sharing the private graph further reduces decoding time and results in factor of five improvement in efficiency."
],
[
"The general composition algorithm is well-explained in BIBREF5, BIBREF6 and a pre-composition algorithm with a non-class LM is described in BIBREF3. Here we will only present our new algorithm focusing on how to pre-compose the graph while avoiding non-terminal states. In this work, we use the same mathematical notation as BIBREF0."
],
[
"A WFST can be written as",
"where $\\mathcal {A}$, $\\mathcal {B}$ are finite label sets for input and output. $Q$ is the finite state set. $I\\subseteq Q$ is the initial state set, $F\\subseteq Q$ is final state set. $E\\subseteq Q\\times (\\mathcal {A} \\cup \\lbrace \\epsilon \\rbrace ) \\times (\\mathcal {B} \\cup \\lbrace \\epsilon \\rbrace ) \\times \\mathbb {K} \\times Q$ is a set of transitional mapping between states in $Q$ with weighted input/output label pair, where $\\mathbb {K}$ is a semiring $(\\mathbb {K}, \\oplus , \\otimes , \\overline{0}, \\overline{1})$.",
"The composition of two weighted FSTs is defined as",
"where $\\mathcal {B} = \\mathcal {B}_1 \\cap \\mathcal {A}_2$ is the intersection of output label set of $T_1$ and input label set of $T_2$. For $a, b, c\\ne \\epsilon $, two transitions $(q_1, a, b, w_1, q_1^{\\prime })$ in $T_1$ and $(q2, b, c, w_2, q_2^{\\prime })$, the composed transition will be $((q_1, q_2), a, c, w_1 \\bigotimes w_2, (q_1^{\\prime }, q_2^{\\prime }))$.",
"For two FSTs $T_1$, $T_2$ over semiring $\\mathbb {K}$,",
"is the class language model transducer obtained by replacing the class labels in generic root FST $G_c$ with class FSTs $G_p$ for different classes, where $\\mathcal {C}$ denotes the set of all supported classes.",
"The calculation for composition is very slow for LM with large vocabulary size. Naive on-the-fly composition is very time-consuming. In BIBREF3, the authors proposed a pre-initialized composition algorithm, which does a partial composition based on the state frequency. This one-time cost calculation can do some composition in advance. During decoding search, the FST will skip the composition of pre-initialized states. However, extending this algorithm to class LMs is non-trivial in practice. For a class LM, the non-terminal states cannot be composed during pre-initialization since we need a pre-initialization that is applicable to all users, which means we need to apply some restrictions to prevent composition of the personalized part.",
"We define $T_P$ as a partial composed FST structure for $T=T_1 \\circ T_2$, where $P \\subseteq Q$ is the set of pre-composed states. In real time decoding, the on-the-fly composition will be performed on top of the pre-initialized $T_P$, which is similar to previous work BIBREF3. In a production environment, multiple threads will share the same pre-composed FST $T_P$ structure, while each thread will own a private FST structure.",
"where $T_D$ is the dynamic cache built on top of $T_P$. $T_D$ may need to copy some states from $T_P$ if we need to update information for those states in $T_P$.",
"In order to support this mechanism, we use a two-layered cached FST for decoding. The first layer is public cache which represents $T_P$. It is a static cache created by pre-initialization. The second layer is the private cache, which is owned by a particular user and constructed on-the-fly. Figure FIGREF9 shows the architecture of our two-layer FST. The solid box denotes the static graph and the dashed ones show the dynamic graph. Personalized states will appear only in $T_D$.",
"The static public cache stores the most frequent states, which greatly reduces the run time factor (RTF) of online decoding. Since $T_D$ has a smaller size than a fully dynamic graph, the marginal memory efficiency for multi-threaded service will be better.",
"Furthermore, the private cache will not be freed after decoding a single utterance. The lifetime of a private cache actually can last for the entire dialog section for a specific user. The private cache keeps updating during the dialog session, making processing the subsequent utterances faster as more states are composed and stored in $T_D$. With this accumulated dynamic cache, a longer dialog can expect a better RTF in theory. In general, the static public cache serves all threads, while the private cache boosts the performance within a dialog session. The private cache will be freed at the end of the dialog."
],
[
"Based on the algorithm described in BIBREF3, we allow the states $(q_1, q_2)$ such that $q_2 = (q_c, q_p), q_c \\in Q_c, q_p=0 $ to be pre-composed, where $q_c$ and $q_p$ denote states in $G_c$ and $G_p$, respectively. States in $G_c$ with a class label transition will be ignored during pre-composition.",
"By applying this restriction, the states in the pre-composed recognition FST $T_P$ will not contain any personalized states, and thus, can be shared by all users and threads.",
"Note that care must taken to account for the special case when the initial states could have transitions with a class label. In this case, the entire graph is blocked (Figure FIGREF12(a)), so we need to add an extra $\\epsilon $ transition before class label in the root FST, which will guarantee all the initial states are composed (Figure FIGREF12(b)). In the pre-composition stage, we don't need the actual class FSTs for each class, so $G_p$ is simply a placeholder FST which only contains a placeholder word $\\left\\langle temp \\right\\rangle $. This means all the transitions following the placeholder transition may be blocked if there is no other path that skips over the placeholder transition. In practice, for a large LM graph with a large vocabulary, the connectivity is usually very high, once the initial states are guaranteed to be composed.",
"This pre-composition algorithm can be applied with lookahead filter BIBREF7. We implemented this algorithm using OpenFst framework BIBREF4, which supports such a lookahead filter in both the pre-composition and decoding stages. In our implementation, the decoding FST has a two-layered cache and state table. The state table is necessary since the add-on composition during decoding must be based on the same state map."
],
[
"In general, we can pre-compose all the states of the decoding FST that are applied to all users, i.e. those unrelated to the personalized language model. However, this full set pre-composition could be very slow and memory consuming. In fact, most of the states are rarely composed during real data traffic, and therefore, performing partial pre-composition is sufficient. Here we propose two simple methods for pre-composition."
],
[
"Naive breath-first-search (BFS) is the most obvious way to perform pre-composition. We iterate over all states within a specific distance from the start state of decoding FST. It generalizes to a full set pre-composition when the search depth is large."
],
[
"Our goal is to pre-compose the most frequently encountered states. However, if some frequent states are far from the start state, they may not be identified by naive BFS. In this case, it is very time and memory consuming to increase the depth of the BFS. Moreover, if we simply use a offline corpus of utterances to analyze the frequency of all states, some highly frequent states could be blocked by less frequent states. Thus, the easiest way is to do pre-composition using real utterances.",
"The decoding FST can be expanded while decoding utterances. We utilize a special decoder in the warm-up stage. This warm-up decoder will apply the same restriction discussed in the previous section. We use an empty contact FST in the warm-up stage to avoid expanding any personalization-related states. This data driven pre-composition will expand most frequent states which are visited during warm-up decoding, especially for some specific patterns."
],
[
"Handling out-of-vocabulary (OOV) words in speech recognition is very important especially for contact name recognition. We replace the normal class (contact) FST with a mono-phone FST by adding monophone words in the lexicon BIBREF2, BIBREF8, BIBREF9. By using s monophone FST, we avoid the necessity of adding new words into lexicon on-the-fly, which significantly simplifies the system. We use silence phone \"SIL\" to represent the word boundary. These monophone words will not be applied with silence phone in lexicon since they are not real words.",
"In Figure FIGREF17, the contact name is represented as monophone words using IPA phone set. SIL is added after each name in contact FST. Names with the same pronunciation also need to be handled using disambiguation symbols. In practice, because of accent and pronunciation variability, we have found that multiple pronunciations of OOV names are required in the personalized class FST."
],
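The sketch below illustrates how contact names could be turned into monophone-word sequences terminated by SIL, as described above; the `fake_g2p` table and phone symbols are placeholders, not the grapheme-to-phoneme model used in the paper.

```python
def build_contact_entries(contacts, g2p, max_prons=5):
    """Turn each contact name into monophone-word sequences ending in SIL.

    `g2p(name)` is a placeholder grapheme-to-phoneme function returning a list
    of candidate pronunciations, each a list of monophone symbols.
    """
    entries = []
    for name in contacts:
        for pron in g2p(name)[:max_prons]:        # keep up to five pronunciations
            # Each monophone acts as a "word"; SIL marks the name boundary.
            entries.append((name, list(pron) + ["SIL"]))
    return entries


def fake_g2p(name):
    # Hypothetical pronunciations; a real system would query a trained G2P model.
    table = {"Jun": [["JH", "AH", "N"]], "Liu": [["L", "Y", "UW"]]}
    return table.get(name, [])


print(build_contact_entries(["Jun", "Liu"], fake_g2p))
```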
[
"We performed a series of experiments on different data sets in order to evaluate the impact on real-time factor (RTF) and word error rate (WER) of the proposed approach. In theory, the pre-composition algorithm will not change the WER, since the search algorithm does not change."
],
[
"In these experiments, speech recognition was performed using a hybrid LSTM-HMM framework. The acoustic model is an LSTM that consumes 40-dimensional log filterbank coefficients as the input and generates the posterior probabilities of 8000 tied context-dependent states as the output. The LM is a pruned 4-gram model trained using various semantic patterns that include a class label as well as a general purpose text corpus. The LM contains $@contact$ as an entity word, which will be replaced by the personalized contact FST. After pruning, the LM has 26 million n-grams.",
"The personalized class FST (contact FST) only contains monophone words. Determinization and minimization are applied to the contact FST with disambiguation symbols. The disambiguation symbols are removed after graph optimization. The decoding experiments are performed on a server with 110 GB memory and 24 processors.",
"Experiments are performed on two data sets. The first contains 7,500 utterances from the calling domain from Facebook employees. This includes commands like “Please call Jun Liu now\". The second consists of approximately 10,000 utterances from other common domains, such as weather, time, and music. Note that we include the contact FST for both calling and non-calling utterances, as we do not assume knowledge of the user's intent a priori. Each user has a contact FST containing 500 contacts on average. We keep up to five pronunciations for each name, generated by a grapheme-to-phoneme model.",
"We experiment with both the naive BFS and the proposed data-driven pre-composition methods. For the data-driven approach, we randomly picked 500 utterances from the evaluation data set as warm up utterances. We use an empty contact FST to be replaced into the root LM to avoid personalized states during warm-up decoding. In order to evaluate the benefit of the proposed private cache to store the personalized language model, we group multiple utterances from a user into virtual dialog sessions of one, two, or five turns."
],
[
"Table TABREF19 shows the WER and RTF for two corpora with different pre-composition methods with ten concurrent speech recognition client requests. The private cache is freed after decoding each utterance. RTF is calculated by $t_{decode}/t_{wav}$, where $t_{decode}$ is the decoding time and $t_{wav}$ is the audio duration. We use 50th and 95th percentile values for the RTF comparison. As expected, the WER remains unchanged for the same data set. With pre-composition, the RTF for both calling and non-calling is reduced by a factor of three.",
"Table TABREF21 shows the additional RTF improvement that can be obtained during multi-turn dialogs from the proposed private cache. When the dialog session is only a single turn, the RTF remains unchanged. However, for multi-turn sessions, additional RTF reductions are obtained for both the calling and non-calling corpora. The decoding time is reduced by a factor of five compared to a fully dynamic graph for dialog sessions of five turns.",
"Figure FIGREF22 shows the RTF and memory usage for teh different pre-composition approaches. The upper graph shows the RTF for different steps of naive BFS using the calling data set. The figure shows that additional BFS steps improves RTF for both 50 and 95 percentiles. However, no improvement is observed beyond five steps, because the most frequent states close to the start state have already been pre-composed. The additional BFS steps only result in more memory usage. With the data-driven warmup, the RTF shows additional improvement. Furthermore, the difference in the p50 and p95 RTF values becomes much smaller than in the BFS approach.",
"The lower graph of Figure FIGREF22 shows the memory usage as a function of the number of concurrent requests. Though the pre-composed graph may use more memory when we have only a small number of threads, the marginal memory cost for additional requests for a fully dynamic graph is roughly 1.5 times larger than for the pre-composed graph. The data-driven method has the best marginal memory efficiency for a large number of concurrent requests."
],
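A small sketch of the RTF bookkeeping used above ($t_{decode}/t_{wav}$ per utterance, summarized at the 50th and 95th percentiles); the timing values are made up for illustration.

```python
import numpy as np


def rtf_percentiles(decode_times, audio_durations):
    """RTF = t_decode / t_wav per utterance, summarized at p50 and p95."""
    rtf = np.asarray(decode_times) / np.asarray(audio_durations)
    return np.percentile(rtf, 50), np.percentile(rtf, 95)


# Hypothetical timings in seconds for three utterances.
p50, p95 = rtf_percentiles([0.9, 1.2, 2.4], [3.0, 4.0, 3.5])
print(f"p50 RTF = {p50:.2f}, p95 RTF = {p95:.2f}")
```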
[
"In this work, we propose new methods for improving the efficiency of dynamic WFST decoding with personalized language models. Experimental results show that using a pre-composed graph can reduce the RTF by a factor of three compared with a fully dynamic graph. Moreover, in multi-utterance dialog sessions, the RTF can be reduced by a factor of 5 using the proposed private cache without harming WER. Though a fully dynamic graph uses less memory for the graph, the pre-composed graph has a better marginal memory cost, which is more memory efficient in large-scale production services that need to support a large number of concurrent requests.",
"Our results also show that increasing the steps of naive BFS will not help the RTF, since it may compose infrequently encountered states, resulting in unnecessary memory usage. Using the proposed data-driven warm-up performs better in both marginal memory efficiency and RTF than naive BFS. Both pre-composition methods can also be combined."
],
[
"We would like to thank Mike Seltzer, Christian Fuegen, Julian Chan, and Dan Povey for useful discussions about the work."
]
],
"section_name": [
"Introduction",
"Architecture and Algorithm",
"Architecture and Algorithm ::: Two-layer cached FST during decoding",
"Architecture and Algorithm ::: Pre-composition algorithm for class language models",
"Architecture and Algorithm ::: Pre-composition methods",
"Architecture and Algorithm ::: Pre-composition methods ::: Distance based method",
"Architecture and Algorithm ::: Pre-composition methods ::: Data-driven warm-up",
"Architecture and Algorithm ::: Out-Of-Vocabulary recognition",
"Experiments",
"Experiments ::: Experimental Setup",
"Experiments ::: Results",
"Conclusions",
"Acknoledgements"
]
} | {
"answers": [
{
"annotation_id": [
"9135471d7c6479797c87213dbaca5fe0ff693c7a",
"d74e5ee67e26a9465394a3a45ca3defeb3d41502"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In order to support this mechanism, we use a two-layered cached FST for decoding. The first layer is public cache which represents $T_P$. It is a static cache created by pre-initialization. The second layer is the private cache, which is owned by a particular user and constructed on-the-fly. Figure FIGREF9 shows the architecture of our two-layer FST. The solid box denotes the static graph and the dashed ones show the dynamic graph. Personalized states will appear only in $T_D$.",
"The static public cache stores the most frequent states, which greatly reduces the run time factor (RTF) of online decoding. Since $T_D$ has a smaller size than a fully dynamic graph, the marginal memory efficiency for multi-threaded service will be better.",
"Furthermore, the private cache will not be freed after decoding a single utterance. The lifetime of a private cache actually can last for the entire dialog section for a specific user. The private cache keeps updating during the dialog session, making processing the subsequent utterances faster as more states are composed and stored in $T_D$. With this accumulated dynamic cache, a longer dialog can expect a better RTF in theory. In general, the static public cache serves all threads, while the private cache boosts the performance within a dialog session. The private cache will be freed at the end of the dialog."
],
"extractive_spans": [
"static public cache stores the most frequent states",
"lifetime of a private cache actually can last for the entire dialog section for a specific user",
"subsequent utterances faster as more states are composed and stored"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to support this mechanism, we use a two-layered cached FST for decoding. The first layer is public cache which represents $T_P$. It is a static cache created by pre-initialization. The second layer is the private cache, which is owned by a particular user and constructed on-the-fly.",
"The static public cache stores the most frequent states, which greatly reduces the run time factor (RTF) of online decoding. Since $T_D$ has a smaller size than a fully dynamic graph, the marginal memory efficiency for multi-threaded service will be better.\n\nFurthermore, the private cache will not be freed after decoding a single utterance. The lifetime of a private cache actually can last for the entire dialog section for a specific user. The private cache keeps updating during the dialog session, making processing the subsequent utterances faster as more states are composed and stored in $T_D$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"29c4cbca2b3d634509bcfd9bd47778bcf5d3c170",
"2c885015133feb021a5c8dbe307e4e4fcf8a7972",
"c7794dfa4108b4de34751ab7b432cacef7d7e0af"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Experiments are performed on two data sets. The first contains 7,500 utterances from the calling domain from Facebook employees. This includes commands like “Please call Jun Liu now\". The second consists of approximately 10,000 utterances from other common domains, such as weather, time, and music. Note that we include the contact FST for both calling and non-calling utterances, as we do not assume knowledge of the user's intent a priori. Each user has a contact FST containing 500 contacts on average. We keep up to five pronunciations for each name, generated by a grapheme-to-phoneme model."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"Experiments are performed on two data sets. The first contains 7,500 utterances from the calling domain from Facebook employees. This includes commands like “Please call Jun Liu now\". The second consists of approximately 10,000 utterances from other common domains, such as weather, time, and music."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"064a7f6030d4576c2e33f599d97efb0c7fa925d5",
"5769d5f2119de030c0462c54b3d4ba5f4402e854",
"f28b073eb28eb85fc67d1a5a3a774014581e4c49"
],
"answer": [
{
"evidence": [
"Speech input is now a common feature for smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or out of the main vocabulary of the system. One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding, without any noticeable increase in latency."
],
"extractive_spans": [],
"free_form_answer": "A model that contains the expected user-specific entities.",
"highlighted_evidence": [
"One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Speech input is now a common feature for smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or out of the main vocabulary of the system. One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding, without any noticeable increase in latency."
],
"extractive_spans": [],
"free_form_answer": "language model which contains user-specific entities",
"highlighted_evidence": [
"One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Speech input is now a common feature for smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or out of the main vocabulary of the system. One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding, without any noticeable increase in latency.",
"Many state-of-the-art speech recognition decoders are based on the weighted finite state transducer (WFST) paradigm BIBREF0, BIBREF1. A conventional WFST decoder searches a statically composed $H C L G$ graph, where $H$ is the graph that translates HMM states to CD phones, $C$ translates CD phones to graphemes, $L$ translates graphemes to words and $G$ is graph that represents the language model. Using a statically composed graph has two limitations. First, it is both compute and memory intensive when the vocabulary and LM are large. Second, the static graph approach makes it hard to handle personalized language models BIBREF2. Many common tasks a user may want to perform with a voice assistant such as making phone calls, messaging to a specific contact or playing favorite music require a personalized language model. A dynamic WFST decoder is better suited for such cases. As denoted in Eq (DISPLAY_FORM1), in a dynamic WFST decoder, $HCL$ is composed and optimized offline, while $G$ is composed on the fly with lazy (on-demand) composition, denoted as $\\circ $."
],
"extractive_spans": [
" contains the expected user-specific entities"
],
"free_form_answer": "",
"highlighted_evidence": [
"One way to improve performance is such cases is through the use of a personal language model (LM) which contains the expected user-specific entities.",
"Many common tasks a user may want to perform with a voice assistant such as making phone calls, messaging to a specific contact or playing favorite music require a personalized language model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What does the cache consist of?",
"What languages is the model tested on?",
"What is a personalized language model?"
],
"question_id": [
"da4d07645edaf7494a8cb5216150a00690da01f7",
"c0cebef0e29b9d13c165b6f19f6ca8393348c671",
"5695908a8c6beb0e3863a1458a1b93aab508fd34"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Architecture of two layer cached FST. TP is the static public cache built in pre-initialization. TD is the private cache for dynamic composition in decoding time. The lifetime of TD varies based on the length of dialog section.",
"Figure 2: Class language model FST with contact tags. (a) Conventional LM with @contact. (b) LM with additional <eps> between start state 0 and @contact. This guarantees the start state is pre-composed.",
"Figure 3: Monophone contact FST. The monophone will be treated as word in the lexicon without a word boundary, so there is an additional silence phone after each name.",
"Table 1: WER and RTF results for different data set and different pre-composition methods.",
"Table 2: RTF results for decoding in session. Decoder will hold the private cache for entire dialog session.",
"Figure 4: RTF and memory usage comparison. Upper: RTF between fully dynamic graph, different steps of BFS and data driven pre-composition. Lower: Memory usage for different graphs. A pre-composed graph has a better marginal memory cost than a fully dynamic graph."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"4-Figure4-1.png"
]
} | [
"What languages is the model tested on?",
"What is a personalized language model?"
] | [
[
"1910.10670-Experiments ::: Experimental Setup-2"
],
[
"1910.10670-Introduction-1",
"1910.10670-Introduction-0"
]
] | [
"English",
"language model which contains user-specific entities"
] | 135 |
1902.06734 | Author Profiling for Hate Speech Detection | The rapid growth of social media in recent years has fed into some highly undesirable phenomena such as proliferation of abusive and offensive language on the Internet. Previous research suggests that such hateful content tends to come from users who share a set of common stereotypes and form communities around them. The current state-of-the-art approaches to hate speech detection are oblivious to user and community information and rely entirely on textual (i.e., lexical and semantic) cues. In this paper, we propose a novel approach to this problem that incorporates community-based profiling features of Twitter users. Experimenting with a dataset of 16k tweets, we show that our methods significantly outperform the current state of the art in hate speech detection. Further, we conduct a qualitative analysis of model characteristics. We release our code, pre-trained models and all the resources used in the public domain. | {
"paragraphs": [
[
"This work is licensed under a Creative Commons Attribution 4.0 International License.",
"License details: http://creativecommons.org/licenses/by/4.0/. Hate speech, a term used to collectively refer to offensive language, racist comments, sexist remarks, etc., is omnipresent in social media. Users on social media platforms are at risk of being exposed to content that may not only be degrading but also harmful to their mental health in the long term. Pew Research Center highlighted the gravity of the situation via a recently released report BIBREF0 . As per the report, 40% of adult Internet users have personally experienced harassment online, and 60% have witnessed the use of offensive names and expletives. Expectedly, the majority (66%) of those who have personally faced harassment have had their most recent incident occur on a social networking website or app. While most of these websites and apps provide ways of flagging offensive and hateful content, only 8.8% of the victims have actually considered using such provisions. These statistics suggest that passive or manual techniques for curbing propagation of hateful content (such as flagging) are neither effective nor easily scalable BIBREF1 . Consequently, the efforts to automate the detection and moderation of such content have been gaining popularity in natural language processing (nlp) BIBREF2 , BIBREF3 .",
"Several approaches to hate speech detection demonstrate the effectiveness of character-level bag-of-words features in a supervised classification setting BIBREF4 , BIBREF5 , BIBREF6 . More recent approaches, and currently the best performing ones, utilize recurrent neural networks (rnns) to transform content into dense low-dimensional semantic representations that are then used for classification BIBREF1 , BIBREF7 . All of these approaches rely solely on lexical and semantic features of the text they are applied to. Waseem and Hovy c53cecce142c48628b3883d13155261c adopted a more user-centric approach based on the idea that perpetrators of hate speech are usually segregated into small demographic groups; they went on to show that gender information of authors (i.e., users who have posted content) is a helpful indicator. However, Waseem and Hovy focused only on coarse demographic features of the users, disregarding information about their communication with others. But previous research suggests that users who subscribe to particular stereotypes that promote hate speech tend to form communities online. For example, Zook zook mapped the locations of racist tweets in response to President Obama's re-election to show that such tweets were not uniformly distributed across the United States but formed clusters instead. In this paper, we present the first approach to hate speech detection that leverages author profiling information based on properties of the authors' social network and investigate its effectiveness.",
"Author profiling has emerged as a powerful tool for NLP applications, leading to substantial performance improvements in several downstream tasks, such as text classification, sentiment analysis and author attribute identification BIBREF8 , BIBREF9 , BIBREF10 . The relevance of information gained from it is best explained by the idea of homophily, i.e., the phenomenon that people, both in real life as well as on the Internet, tend to associate more with those who appear similar. Here, similarity can be defined along various axes, e.g., location, age, language, etc. The strength of author profiling lies in that if we have information about members of a community $c$ defined by some similarity criterion, and we know that the person $p$ belongs to $c$ , we can infer information about $p$ . This concept has a straightforward application to our task: knowing that members of a particular community are prone to creating hateful content, and knowing that the author p is connected to this community, we can leverage information beyond linguistic cues and more accurately predict the use of hateful/non-hateful language from $p$ . The questions that we seek to address here are: are some authors, and the respective communities that they belong to, more hateful than the others? And can such information be effectively utilized to improve the performance of automated hate speech detection methods?",
"In this paper, we answer these questions and develop novel methods that take into account community-based profiling features of authors when examining their tweets for hate speech. Experimenting with a dataset of $16k$ tweets, we show that the addition of such profiling features to the current state-of-the-art methods for hate speech detection significantly enhances their performance. We also release our code (including code that replicates previous work), pre-trained models and the resources we used in the public domain."
],
[
"Amongst the first ones to apply supervised learning to the task of hate speech detection were Yin et al. Yin09detectionof who used a linear svm classifier to identify posts containing harassment based on local (e.g., n-grams), contextual (e.g., similarity of a post to its neighboring posts) and sentiment-based (e.g., presence of expletives) features. Their best results were with all of these features combined.",
"Djuric et al. Djuric:2015:HSD:2740908.2742760 experimented with comments extracted from the Yahoo Finance portal and showed that distributional representations of comments learned using paragraph2vec BIBREF11 outperform simpler bag-of-words (bow) representations in a supervised classification setting for hate speech detection. Nobata et al. Nobata:2016:ALD:2872427.2883062 improved upon the results of Djuric et al. by training their classifier on a combination of features drawn from four different categories: linguistic (e.g., count of insult words), syntactic (e.g., pos tags), distributional semantic (e.g., word and comment embeddings) and bow-based (word and characters n-grams). They reported that while the best results were obtained with all features combined, character n-grams contributed more to performance than all the other features.",
"Waseem and Hovy c53cecce142c48628b3883d13155261c created and experimented with a dataset of racist, sexist and clean tweets. Utilizing a logistic regression (lr) classifier to distinguish amongst them, they found that character n-grams coupled with gender information of users formed the optimal feature set; on the other hand, geographic and word-length distribution features provided little to no improvement. Working with the same dataset, Badjatiya et al. Badjatiya:17 improved on their results by training a gradient-boosted decision tree (gbdt) classifier on averaged word embeddings learnt using a long short-term memory (lstm) network that they initialized with random embeddings.",
"Waseem zeerakW16-5618 sampled $7k$ more tweets in the same manner as Waseem and Hovy c53cecce142c48628b3883d13155261c. They recruited expert and amateur annotators to annotate the tweets as racism, sexism, both or neither in order to study the influence of annotator knowledge on the task of hate speech detection. Combining this dataset with that of Waseem and Hovy c53cecce142c48628b3883d13155261c, Park et al. W17-3006 explored the merits of a two-step classification process. They first used a lr classifier to separate hateful and non-hateful tweets, followed by another lr classifier to distinguish between racist and sexist ones. They showed that this setup had comparable performance to a one-step classification setup built with convolutional neural networks.",
"Davidson et al. davidson created a dataset of about $25k$ tweets wherein each tweet was annotated as being racist, offensive or neither of the two. They tested several multi-class classifiers with the aim of distinguishing clean tweets from racist and offensive tweets while simultaneously being able to separate the racist and offensive ones. Their best model was a lr classifier trained using tf-idf and pos n-gram features, as well as the count of hash tags and number of words.",
"Wulczyn et al. Wulczyn:2017:EMP:3038912.3052591 prepared three different datasets of comments collected from the English Wikipedia Talk page; one was annotated for personal attacks, another for toxicity and the third one for aggression. Their best performing model was a multi-layered perceptron (mlp) classifier trained on character n-gram features. Experimenting with the personal attack and toxicity datasets, Pavlopoulos et al. Pavlopoulos:17 improved the results of Wulczyn et al. by using a gated recurrent unit (gru) model to encode the comments into dense low-dimensional representations, followed by a lr layer to classify the comments based on those representations."
],
[
"Author profiling has been leveraged in several ways for a variety of purposes in nlp. For instance, many studies have relied on demographic information of the authors. Amongst these are Hovy et al. hovy2015demographic and Ebrahimi et al. ebrahimi2016personalized who extracted age and gender-related information to achieve superior performance in a text classification task. Pavalanathan and Eisenstein pavalanathan2015confounds, in their work, further showed the relevance of the same information to automatic text-based geo-location. Researching along the same lines, Johannsen et al. johannsen2015cross and Mirkin et al. mirkin2015motivating utilized demographic factors to improve syntactic parsing and machine translation respectively.",
"While demographic information has proved to be relevant for a number of tasks, it presents a significant drawback: since this information is not always available for all authors in a social network, it is not particularly reliable. Consequently, of late, a new line of research has focused on creating representations of users in a social network by leveraging the information derived from the connections that they have with other users. In this case, node representations (where nodes represent the authors in the social network) are typically induced using neural architectures. Given the graph representing the social network, such methods create low-dimensional representations for each node, which are optimized to predict the nodes close to it in the network. This approach has the advantage of overcoming the absence of information that the previous approaches face. Among those that implement this idea are Yang et al. yang2016toward, who used representations derived from a social graph to achieve better performance in entity linking tasks, and Chen and Ku chen2016utcnn, who used them for stance classification.",
"A considerable amount of literature has also been devoted to sentiment analysis with representations built from demographic factors BIBREF10 , BIBREF12 . Other tasks that have benefited from social representations are sarcasm detection BIBREF13 and political opinion prediction BIBREF14 ."
],
[
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech. The authors retrieved around $136k$ tweets over a period of two months. They bootstrapped their collection process with a search for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities. From the results, they identified terms and references to entities that frequently showed up in hateful tweets. Based on this sample, they used a public Twitter api to collect the entire corpus of ca. $136k$ tweets. After having manually annotated a randomly sampled subset of $16,914$ tweets under the categories racism, sexism or none themselves, they asked an expert to review their annotations in order to mitigate against any biases. The inter-annotator agreement was reported at $\\kappa =0.84$ , with a further insight that $85\\%$ of all the disagreements occurred in the sexism class.",
"The dataset was released as a list of $16,907$ tweet IDs along with their corresponding annotations. Using python's Tweepy library, we could only retrieve $16,202$ of the tweets since some of them have now been deleted or their visibility limited. Of the ones retrieved, 1,939 (12%) are labelled as racism, 3,148 (19.4%) as sexism, and the remaining 11,115 (68.6%) as none; this distribution follows the original dataset very closely (11.7%, 20.0%, 68.3%).",
"We were able to extract community-based information for 1,836 out of the 1,875 unique authors who posted the $16,202$ tweets, covering a cumulative of 16,124 of them; the remaining 39 authors have either deactivated their accounts or are facing suspension. Tweets in the racism class are from 5 of the 1,875 authors, while those in the sexism class are from 527 of them."
],
[
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa. There are a total of 1,836 nodes and 7,561 edges. Approximately 400 of the nodes have no edges, indicating solitary authors who neither follow any other author nor are followed by any. Other nodes have an average degree of 8, with close to 600 of them having a degree of at least 5. The graph is overall sparse with a density of 0.0075.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 . Node2vec applies the skip-gram model of Mikolov et al. mikolov2013efficient to a graph in order to create a representation for each of its nodes based on their positions and their neighbors. Specifically, given a graph with nodes $V = \\lbrace v_1$ , $v_2$ , $\\dots $ , $v_n\\rbrace $ , node2vec seeks to maximize the following log probability: ",
"$$\\nonumber \\sum _{v \\in V} \\log Pr\\,(N_s(v)\\, |\\, v)$$ (Eq. 6) ",
"where $N_s(v)$ denotes the network neighborhood of node $v$ generated through sampling strategy $s$ .",
"In doing so, the framework learns low-dimensional embeddings for nodes in the graph. These embeddings can emphasize either their structural role or the local community they are a part of. This depends on the sampling strategies used to generate the neighborhood: if breadth-first sampling (bfs) is adopted, the model focuses on the immediate neighbors of a node; when depth-first sampling (dfs) is used, the model explores farther regions in the network, which results in embeddings that encode more information about the nodes' structural role (e.g., hub in a cluster, or peripheral node). The balance between these two ways of sampling the neighbors is directly controlled by two node2vec parameters, namely $p$ and $q$ . The default value for these is 1, which ensures a node representation that gives equal weight to both structural and community-oriented information. In our work, we use the default value for both $p$ and $q$ . Additionally, since node2vec does not produce embeddings for solitary authors, we map these to a single zero embedding.",
"Figure 1 shows example snippets from the community graph. Some authors belong to densely-connected communities (left figure), while others are part of more sparse ones (right figure). In either case, node2vec generates embeddings that capture the authors' neighborhood."
],
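Since the paper runs node2vec with $p = q = 1$, the biased walks reduce to uniform random walks, so the author profiles can be approximated with networkx plus gensim's skip-gram Word2Vec as sketched below (gensim 4.x API). The edge list, walk counts, and window size are illustrative, and solitary authors are mapped to a zero vector as in the paper.

```python
import random

import networkx as nx
import numpy as np
from gensim.models import Word2Vec


def author_embeddings(follow_edges, dim=200, walks_per_node=10, walk_length=40):
    """Uniform random-walk embeddings (node2vec with p = q = 1) for authors."""
    graph = nx.Graph()
    graph.add_edges_from(follow_edges)            # edge: u follows v or vice versa
    walks = []
    for _ in range(walks_per_node):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(n) for n in walk])
    model = Word2Vec(walks, vector_size=dim, window=10, min_count=1, sg=1)
    profiles = {str(n): model.wv[str(n)] for n in graph.nodes()}
    return profiles, np.zeros(dim)                # zero vector for solitary authors
```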
[
"We experiment with seven different methods for classifying tweets as one of racism, sexism, or none. We first re-implement three established and currently best-performing hate speech detection methods — based on character n-grams and recurrent neural networks — as our baselines. We then test whether incorporating author profiling features improves their performance.",
"Char n-grams (lr). As our first baseline, we adopt the method used by Waseem and Hovy c53cecce142c48628b3883d13155261c wherein they train a logistic regression (lr) classifier on the Twitter dataset using character n-gram counts. We use uni-grams, bi-grams, tri-grams and four-grams, and l $_2$ -normalize their counts. Character n-grams have been shown to be effective for the task of hate speech detection BIBREF5 .",
"Hidden-state (hs). As our second baseline, we take the “rnn” method of Pavlopoulos et al. Pavlopoulos:17 which achieves state-of-the-art results on the Wikipedia datasets released by Wulczyn et al. Wulczyn:2017:EMP:3038912.3052591. The method comprises a 1-layer gated recurrent unit (gru) that takes a sequence $w_1$ , $\\dots $ , $w_n$ of words represented as $d$ -dimensional embeddings and encodes them into hidden states $h_1$ , $\\dots $ , $h_n$ . This is followed by an lr layer that uses the last hidden state $h_n$ to classify the tweet. We make two minor modifications to the authors' original architecture: we deepen the 1-layer gru to a 2-layer gru and use softmax instead of sigmoid in the lr layer. Like Pavlopoulos et al., we initialize the word embeddings to glove vectors BIBREF16 . In all our methods, words not available in the glove set are randomly initialized in the range $\\pm 0.05$ , indicating the lack of semantic information. By not mapping these words to a single random embedding, we mitigate against the errors that may arise due to their conflation BIBREF17 . A special oov (out of vocabulary) token is also initialized in the same range. All the embeddings are updated during training, allowing some of the randomly-initialized ones to get task-tuned; the ones that do not get tuned lie closely clustered around the oov token, to which unseen words in the test set are mapped.",
"Word-sum (ws). As a third baseline, we adopt the “lstm+glove+gbdt\" method of Badjatiya et al. Badjatiya:17, which achieves state-of-the-art results on the Twitter dataset we are using. The authors first utilize an lstm to task-tune glove-initialized word embeddings by propagating the error back from an lr layer. They then train a gradient boosted decision tree (gbdt) classifier to classify texts based on the average of the embeddings of constituent words. We make two minor modifications to this method: we use a 2-layer gru instead of the lstm to tune the embeddings, and we train the gbdt classifier on the l $_2$ -normalized sum of the embeddings instead of their average. Although the authors achieved state-of-the-art results on Twitter by initializing embeddings randomly rather than with glove (which is what we do here), we found the opposite when performing a 10-fold stratified cross-validation (cv). A possible explanation of this lies in the authors' decision to not use stratification, which for such a highly imbalanced dataset can lead to unexpected outcomes BIBREF18 . Furthermore, the authors train their lstm on the entire dataset (including the test set) without any early stopping criterion, which leads to over-fitting of the randomly-initialized embeddings.",
"Author profile (auth). In order to test whether community-based information of authors is in itself sufficient to correctly classify the content produced by them, we utilize just the author profiles we generated to train a gbdt classifier.",
"Char n-grams + author profile (lr + auth). This method builds upon the lr baseline by appending author profile vectors on to the character n-gram count vectors for training the lr classifier.",
"Hidden-state + author profile (hs + auth) and Word-sum + author profile (ws + auth). These methods are identical to the char n-grams + author profile method except that here we append the author profiling features on to features derived from the hidden-state and word-sum baselines respectively and feed them to a gbdt classifier."
],
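A sketch of the char n-grams + author profile (lr + auth) variant using scikit-learn: l2-normalized character 1-4 gram counts are concatenated with the author embeddings before training a logistic regression classifier. Variable names and hyper-parameters are placeholders and do not reproduce the paper's exact training settings.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize


def train_lr_auth(tweets, author_vectors, labels):
    """LR on l2-normalized char 1-4 gram counts concatenated with author profiles."""
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 4))
    ngrams = normalize(vectorizer.fit_transform(tweets), norm="l2")
    profiles = csr_matrix(np.asarray(author_vectors, dtype=float))
    features = hstack([ngrams, profiles])          # append the author profile features
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels)
    return vectorizer, clf
```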
[
"We normalize the input by lowercasing all words and removing stop words. For the gru architecture, we use exactly the same hyper-parameters as Pavlopoulos et al. Pavlopoulos:17, i.e., 128 hidden units, Glorot initialization, cross-entropy loss, and the Adam optimizer BIBREF19 . Badjatiya et al. Badjatiya:17 also use the same settings except they have fewer hidden units. In all our models, besides dropout regularization BIBREF20 , we hold out a small part of the training set as validation data to prevent over-fitting. We implement the models in Keras BIBREF21 with Theano back-end and use 200-dimensional pre-trained glove word embeddings. We employ Lightgbm BIBREF22 as our gdbt classifier and tune its hyper-parameters using 5-fold grid search. For the node2vec framework, we use the same parameters as in the original paper BIBREF15 except we set the dimensionality of node embeddings to 200 and increase the number of iterations to 25 for better convergence."
],
[
"We perform 10-fold stratified cross validation (cv), as suggested by Forman and Scholz Forman:10, to evaluate all seven methods described in the previous section. Following previous research BIBREF7 , BIBREF23 , we report the average weighted precision, recall, and f $_1$ scores for all the methods. The average weighted precision is calculated as: ",
"$$\\nonumber \\frac{\\sum _{i=1}^{10}\\; (w_r\\cdot \\textrm {P}_r^i + w_s\\cdot \\textrm {P}_s^i + w_n\\cdot \\textrm {P}_n^i)}{10}$$ (Eq. 16) ",
"where $\\textrm {P}_r^i, \\textrm {P}_s^i, \\textrm {P}_n^i$ are precision scores on the racism, sexism, and none classes from the $i^{th}$ fold of the cv. The values $w_r$ , $w_s$ , and $w_n$ are the proportions of the racism, sexism, and none classes in the dataset respectively; since we use stratification, these proportions are constant ( $w_r=0.12$ , $w_s=0.19$ , $w_n=0.69$ ) across all folds. Average weighted recall and f $_1$ are calculated in the same manner.",
"The results are presented in Table 1 . For all three baseline methods (lr, ws, and hs), the addition of author profiling features significantly improves performance ( $p < 0.05$ under 10-fold cv paired t-test). The lr + auth method yields the highest performance of f $_1$ $=87.57$ , exceeding its respective baseline by nearly 4 points. A similar trend can be observed for the other methods as well. These results point to the importance of community-based information and author profiling in hate speech detection and demonstrate that our approach can further improve the performance of existing state-of-the-art methods.",
"In Table 2 , we further compare the performance of the different methods on the racism and sexism classes individually. As in the previous experiments, the scores are averaged over 10 folds of cv. Of particular interest are the scores for the sexism class where the f $_1$ increases by over 10 points upon the addition of author profiling features. Upon analysis, we find that such a substantial increase in performance stems from the fact that many of the 527 unique authors of the sexist tweets are closely connected in the community graph. This allows for their penchant for sexism to be expressed in their respective author profiles.",
"The author profiling features on their own (auth) achieve impressive results overall and in particular on the sexism class, where their performance is typical of a community-based generalization, i.e., low precision but high recall. For the racism class on the other hand, the performance of auth on its own is quite poor. This contrast can be explained by the fact that tweets in the racism class come from only 5 unique authors who: (i) are isolated in the community graph, or (ii) have also authored several tweets in the sexism class, or (iii) are densely connected to authors from the sexism and none classes which possibly camouflages their racist nature.",
"We believe that the gains in performance will be more pronounced as the underlying community graph grows since there will be less solitary authors and more edges worth harnessing information from. Even when the data is skewed and there is an imbalance of hateful vs. non-hateful authors, we do expect our approach to still be able to identify clusters of authors with similar views."
],
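The evaluation protocol above (average weighted precision, recall, and F1 over 10 stratified folds) can be sketched with scikit-learn as follows; `fit_predict` is a placeholder for any of the seven methods, and under stratification the per-fold class weights coincide with the $w_r$, $w_s$, $w_n$ in the formula.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold


def cross_validate(texts, labels, fit_predict, n_splits=10, seed=42):
    """Average weighted P/R/F1 over stratified folds."""
    texts, labels = np.asarray(texts), np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(texts, labels):
        preds = fit_predict(texts[train_idx], labels[train_idx], texts[test_idx])
        scores.append(precision_recall_fscore_support(
            labels[test_idx], preds, average="weighted", zero_division=0)[:3])
    return np.mean(scores, axis=0)                 # (precision, recall, f1)
```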
[
"We conduct a qualitative analysis of system errors and the cases where author profiling leads to the correct classification of previously misclassified examples. Table 3 shows examples of hateful tweets from the dataset that are misclassified by the lr method, but are correctly classified upon the addition of author profiling features, i.e., by the lr + auth method. It is worth noting that some of the wins scored by the latter are on tweets that are part of a larger hateful discourse or contain links to hateful content while not explicitly having textual cues that are indicative of hate speech per se. The addition of author profiling features may then be viewed as a proxy for wider discourse information, thus allowing us to correctly resolve the cases where lexical and semantic features alone are insufficient.",
"However, a number of hateful tweets still remain misclassified despite the addition of author profiling features. According to our analysis, many of these tend to contain urls to hateful content, e.g., “@salmonfarmer1: Logic in the world of Islam http://t.co/6nALv2HPc3\" and “@juliarforster Yes. http://t.co/ixbt0uc7HN\". Since Twitter shortens all urls into a standard format, there is no indication of what they refer to. One way to deal with this limitation could be to additionally maintain a blacklist of links. Another source of system errors is the deliberate obfuscation of words by authors in order to evade detection, e.g., “Kat, a massive c*nt. The biggest ever on #mkr #cuntandandre\". Current hate speech detection methods, including ours, do not directly attempt to address this issue. While this is a challenge for bag-of-word based methods such as lr, we hypothesize that neural networks operating at the character level may be helpful in recognizing obfuscated words.",
"We further conducted an analysis of the author embeddings generated by node2vec, in order to validate that they capture the relevant aspects of the community graph. We visualized the author embeddings in 2-dimensional space using t-sne BIBREF24 , as shown in Figure 2 . We observe that, as in the community graph, there are a few densely populated regions in the visualization that represent authors in closely knit groups who exhibit similar characteristics. The other regions are largely sparse with smaller clusters. Note that we exclude solitary users from this visualization since we have to use a single zero embedding to represent them.",
"Figure 3 further provides visualizations for authors from the sexism and none classes separately. While the authors from the none class are spread out in the embedding space, the ones from the sexism class are more tightly clustered. Note that we do not visualize the 5 authors from the racism class since 4 of them are already covered in the sexism class."
],
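A sketch of the t-SNE projection of author embeddings used in the analysis above, with scikit-learn and matplotlib; the embedding matrix and class labels are placeholders, and zero embeddings (solitary authors) are dropped as in the paper.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE


def plot_author_embeddings(embeddings, labels):
    """Project author embeddings to 2-D with t-SNE and color them by class."""
    X = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    keep = np.abs(X).sum(axis=1) > 0               # drop solitary (zero) embeddings
    points = TSNE(n_components=2, random_state=0).fit_transform(X[keep])
    for cls in sorted(set(labels[keep])):
        mask = labels[keep] == cls
        plt.scatter(points[mask, 0], points[mask, 1], s=8, label=str(cls))
    plt.legend()
    plt.show()
```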
[
"In this paper, we explored the effectiveness of community-based information about authors for the purpose of identifying hate speech. Working with a dataset of $16k$ tweets annotated for racism and sexism, we first comprehensively replicated three established and currently best-performing hate speech detection methods based on character n-grams and recurrent neural networks as our baselines. We then constructed a graph of all the authors of tweets in our dataset and extracted community-based information in the form of dense low-dimensional embeddings for each of them using node2vec. We showed that the inclusion of author embeddings significantly improves system performance over the baselines and advances the state of the art in this task. Users prone to hate speech do tend to form social groups online, and this stresses the importance of utilizing community-based information for automatic hate speech detection. In the future, we wish to explore the effectiveness of community-based author profiling in other tasks such as stereotype identification and metaphor detection."
]
],
"section_name": [
"Introduction",
"Hate speech detection",
"Author profiling",
"Dataset",
"Representing authors",
"Classifying content",
"Experimental setup",
"Results",
"Analysis and discussion",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"74f6f2d9f7f8944b6e9883dd61a8be1fb4ce60ea",
"8cc7b6c439f94016a9eebcaa66f732422e251275",
"8ec6e54cb39330a1d13fb274fd62668a7878d148"
],
"answer": [
{
"evidence": [
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech. The authors retrieved around $136k$ tweets over a period of two months. They bootstrapped their collection process with a search for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities. From the results, they identified terms and references to entities that frequently showed up in hateful tweets. Based on this sample, they used a public Twitter api to collect the entire corpus of ca. $136k$ tweets. After having manually annotated a randomly sampled subset of $16,914$ tweets under the categories racism, sexism or none themselves, they asked an expert to review their annotations in order to mitigate against any biases. The inter-annotator agreement was reported at $\\kappa =0.84$ , with a further insight that $85\\%$ of all the disagreements occurred in the sexism class.",
"The dataset was released as a list of $16,907$ tweet IDs along with their corresponding annotations. Using python's Tweepy library, we could only retrieve $16,202$ of the tweets since some of them have now been deleted or their visibility limited. Of the ones retrieved, 1,939 (12%) are labelled as racism, 3,148 (19.4%) as sexism, and the remaining 11,115 (68.6%) as none; this distribution follows the original dataset very closely (11.7%, 20.0%, 68.3%)."
],
"extractive_spans": [],
"free_form_answer": "Yes, in Waseem and Hovy (2016)",
"highlighted_evidence": [
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech. The authors retrieved around $136k$ tweets over a period of two months. They bootstrapped their collection process with a search for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities. From the results, they identified terms and references to entities that frequently showed up in hateful tweets. Based on this sample, they used a public Twitter api to collect the entire corpus of ca. $136k$ tweets. After having manually annotated a randomly sampled subset of $16,914$ tweets under the categories racism, sexism or none themselves, they asked an expert to review their annotations in order to mitigate against any biases.",
"The dataset was released as a list of $16,907$ tweet IDs along with their corresponding annotations. Using python's Tweepy library, we could only retrieve $16,202$ of the tweets since some of them have now been deleted or their visibility limited. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech. The authors retrieved around $136k$ tweets over a period of two months. They bootstrapped their collection process with a search for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities. From the results, they identified terms and references to entities that frequently showed up in hateful tweets. Based on this sample, they used a public Twitter api to collect the entire corpus of ca. $136k$ tweets. After having manually annotated a randomly sampled subset of $16,914$ tweets under the categories racism, sexism or none themselves, they asked an expert to review their annotations in order to mitigate against any biases. The inter-annotator agreement was reported at $\\kappa =0.84$ , with a further insight that $85\\%$ of all the disagreements occurred in the sexism class."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We experiment with the dataset of Waseem and Hovy c53cecce142c48628b3883d13155261c, containing tweets manually annotated for hate speech. The authors retrieved around $136k$ tweets over a period of two months. They bootstrapped their collection process with a search for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities. From the results, they identified terms and references to entities that frequently showed up in hateful tweets. Based on this sample, they used a public Twitter api to collect the entire corpus of ca. $136k$ tweets. After having manually annotated a randomly sampled subset of $16,914$ tweets under the categories racism, sexism or none themselves, they asked an expert to review their annotations in order to mitigate against any biases. The inter-annotator agreement was reported at $\\kappa =0.84$ , with a further insight that $85\\%$ of all the disagreements occurred in the sexism class."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with the dataset of Waseem and Hovy"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"d7fa951f1cda831d9831cc5aae648a035cdf332c",
"f160b92bdbb01f9b86b6539bc034ce38123fcdc0"
],
"answer": [
{
"evidence": [
"We conduct a qualitative analysis of system errors and the cases where author profiling leads to the correct classification of previously misclassified examples. Table 3 shows examples of hateful tweets from the dataset that are misclassified by the lr method, but are correctly classified upon the addition of author profiling features, i.e., by the lr + auth method. It is worth noting that some of the wins scored by the latter are on tweets that are part of a larger hateful discourse or contain links to hateful content while not explicitly having textual cues that are indicative of hate speech per se. The addition of author profiling features may then be viewed as a proxy for wider discourse information, thus allowing us to correctly resolve the cases where lexical and semantic features alone are insufficient."
],
"extractive_spans": [
"tweets that are part of a larger hateful discourse or contain links to hateful content while not explicitly having textual cues"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table 3 shows examples of hateful tweets from the dataset that are misclassified by the lr method, but are correctly classified upon the addition of author profiling features, i.e., by the lr + auth method. It is worth noting that some of the wins scored by the latter are on tweets that are part of a larger hateful discourse or contain links to hateful content while not explicitly having textual cues that are indicative of hate speech per se. The addition of author profiling features may then be viewed as a proxy for wider discourse information, thus allowing us to correctly resolve the cases where lexical and semantic features alone are insufficient."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct a qualitative analysis of system errors and the cases where author profiling leads to the correct classification of previously misclassified examples. Table 3 shows examples of hateful tweets from the dataset that are misclassified by the lr method, but are correctly classified upon the addition of author profiling features, i.e., by the lr + auth method. It is worth noting that some of the wins scored by the latter are on tweets that are part of a larger hateful discourse or contain links to hateful content while not explicitly having textual cues that are indicative of hate speech per se. The addition of author profiling features may then be viewed as a proxy for wider discourse information, thus allowing us to correctly resolve the cases where lexical and semantic features alone are insufficient."
],
"extractive_spans": [],
"free_form_answer": "They don't provide wider discourse information",
"highlighted_evidence": [
"The addition of author profiling features may then be viewed as a proxy for wider discourse information, thus allowing us to correctly resolve the cases where lexical and semantic features alone are insufficient."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"07e713a402634243fbedbbb14f1ee80d3520e426",
"6716d6c93d081622c82bd0c8e1d82c6e24c9c159",
"9e5fcaf1ed05d8d07a8020142508083772349ec6"
],
"answer": [
{
"evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa. There are a total of 1,836 nodes and 7,561 edges. Approximately 400 of the nodes have no edges, indicating solitary authors who neither follow any other author nor are followed by any. Other nodes have an average degree of 8, with close to 600 of them having a degree of at least 5. The graph is overall sparse with a density of 0.0075.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 . Node2vec applies the skip-gram model of Mikolov et al. mikolov2013efficient to a graph in order to create a representation for each of its nodes based on their positions and their neighbors. Specifically, given a graph with nodes $V = \\lbrace v_1$ , $v_2$ , $\\dots $ , $v_n\\rbrace $ , node2vec seeks to maximize the following log probability:"
],
"extractive_spans": [],
"free_form_answer": "The features are the outputs from node2vec when run on a community graph where nodes are users and edges are connections if one user follows the other on Twitter.",
"highlighted_evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa. There are a total of 1,836 nodes and 7,561 edges. Approximately 400 of the nodes have no edges, indicating solitary authors who neither follow any other author nor are followed by any. Other nodes have an average degree of 8, with close to 600 of them having a degree of at least 5. The graph is overall sparse with a density of 0.0075.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 . Node2vec applies the skip-gram model of Mikolov et al. mikolov2013efficient to a graph in order to create a representation for each of its nodes based on their positions and their neighbors. Specifically, given a graph with nodes $V = \\lbrace v_1$ , $v_2$ , $\\dots $ , $v_n\\rbrace $ , node2vec seeks to maximize the following log probability:"
],
"extractive_spans": [],
"free_form_answer": "The features are the output of running node2vec on a community graph where the nodes are users, and they are connected if one of them follows the other on Twitter.",
"highlighted_evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa. There are a total of 1,836 nodes and 7,561 edges. Approximately 400 of the nodes have no edges, indicating solitary authors who neither follow any other author nor are followed by any. Other nodes have an average degree of 8, with close to 600 of them having a degree of at least 5. The graph is overall sparse with a density of 0.0075.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 . Node2vec applies the skip-gram model of Mikolov et al. mikolov2013efficient to a graph in order to create a representation for each of its nodes based on their positions and their neighbors. Specifically, given a graph with nodes $V = \\lbrace v_1$ , $v_2$ , $\\dots $ , $v_n\\rbrace $ , node2vec seeks to maximize the following log probability:"
],
"extractive_spans": [],
"free_form_answer": "The features are the output of running node2vec on a community graph where the nodes are users, and they are connected if one of them follows the other on Twitter.",
"highlighted_evidence": [
"In order to leverage community-based information for the authors whose tweets form our dataset, we create an undirected unlabeled community graph wherein nodes are the authors and edges are the connections between them. An edge is instantiated between two authors $u$ and $v$ if $u$ follows $v$ on Twitter or vice versa.",
"From this community graph, we obtain a vector representation, i.e., an embedding that we refer to as author profile, for each author using the node2vec framework BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f",
"197290cb509b9a046b311719c6ce1ce408f3be8a"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Is the dataset used in other work?",
"What is the drawback to methods that rely on textual cues?",
"What community-based profiling features are used?"
],
"question_id": [
"fa800a21469a70fa6490bfc67cabdcc8bf086fb5",
"6883767bbdf14e124c61df4f76335d3e91bfcb03",
"11679d1feba747c64bbbc62939a20fbb69ada0f3"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Snippets from the community graph for our Twitter data.",
"Table 1: Average weighted precision, recall and F1 scores of the different methods on the Twitter datasest. All improvements are significant (p < 0.05) under 10-fold CV paired t-test.",
"Table 2: Performance of the methods on the racism and sexism classes separately. All improvements are significant (p < 0.05) under 10-fold CV paired t-test.",
"Table 3: Examples of improved classification upon the addition of author profiling features (AUTH).",
"Figure 2: Visualization of author embeddings in 2-dimensional space.",
"Figure 3: Visualization of authors from different classes."
],
"file": [
"5-Figure1-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Figure2-1.png",
"9-Figure3-1.png"
]
} | [
"Is the dataset used in other work?",
"What is the drawback to methods that rely on textual cues?",
"What community-based profiling features are used?"
] | [
[
"1902.06734-Dataset-0",
"1902.06734-Dataset-1"
],
[
"1902.06734-Analysis and discussion-0"
],
[
"1902.06734-Representing authors-0"
]
] | [
"Yes, in Waseem and Hovy (2016)",
"They don't provide wider discourse information",
"The features are the output of running node2vec on a community graph where the nodes are users, and they are connected if one of them follows the other on Twitter."
] | 136 |
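A note on the layout between records: each pipe-separated row above pairs a paper (id, title, abstract, full text, figure captions) with QASPER-style QA annotations, namely a list of questions, per-annotator answers (extractive spans, free-form answers, yes/no flags, highlighted evidence), gold retrieval paragraphs keyed as "<paper id>-<section name>-<paragraph index>", and one gold answer string per question. A minimal sketch of how such rows might be consumed follows; the column names mirror this dump's schema, while the `rows` iterable, the helper names, and the assumption that rows arrive as Python dicts are added purely for illustration and are not part of the dataset.

# Illustrative sketch only: `rows` is assumed to be an iterable of dicts shaped like
# the records in this dump; the loading code itself is not part of the dataset.
def iter_gold_qa(rows):
    for row in rows:
        # The top-level question / retrieval_gt / answer_gt fields are index-aligned.
        for question, gold_paras, gold_answer in zip(
            row["question"], row["retrieval_gt"], row["answer_gt"]
        ):
            yield {
                "paper_id": row["id"],
                "question": question,
                "gold_paragraphs": gold_paras,   # e.g. ["1902.06734-Dataset-0", ...]
                "gold_answer": gold_answer,
            }

def annotator_answers(row, question_text):
    # Detailed per-annotator answers (extractive spans, free-form answers,
    # highlighted evidence) live under row["qas"]; they are matched back to a
    # question by its text, since the qas block can contain more questions than
    # the gold-answer columns do.
    qas = row["qas"]
    idx = qas["question"].index(question_text)
    return qas["answers"][idx]["answer"]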
1907.08540 | Predicting Human Activities from User-Generated Content | The activities we do are linked to our interests, personality, political preferences, and decisions we make about the future. In this paper, we explore the task of predicting human activities from user-generated content. We collect a dataset containing instances of social media users writing about a range of everyday activities. We then use a state-of-the-art sentence embedding framework tailored to recognize the semantics of human activities and perform an automatic clustering of these activities. We train a neural network model to make predictions about which clusters contain activities that were performed by a given user based on the text of their previous posts and self-description. Additionally, we explore the degree to which incorporating inferred user traits into our model helps with this prediction task. | {
"paragraphs": [
[
"What a person does says a lot about who they are. Information about the types of activities that a person engages in can provide insights about their interests BIBREF0 , personality BIBREF1 , physical health BIBREF2 , the activities that they are likely to do in the future BIBREF3 , and other psychological phenomena like personal values BIBREF4 . For example, it has been shown that university students who exhibit traits of interpersonal affect and self-esteem are more likely to attend parties BIBREF5 , and those that value stimulation are likely to watch movies that can be categorized as thrillers BIBREF6 .",
"Several studies have applied computational approaches to the understanding and modeling of human behavior at scale BIBREF7 and in real time BIBREF8 . However, this previous work has mainly relied on specific devices or platforms that require structured definitions of behaviors to be measured. While this leads to an accurate understanding of the types of activities being done by the involved users, these methods capture a relatively narrow set of behaviors compared to the huge range of things that people do on a day-to-day basis. On the other hand, publicly available social media data provide us with information about an extremely rich and diverse set of human activities, but the data are rarely structured or categorized, and they mostly exist in the form of natural language. Recently, however, natural language processing research has provided several examples of methodologies for extracting and representing human activities from text BIBREF9 , BIBREF10 and even multimodal data BIBREF11 .",
"In this paper, we explore the task of predicting human activities from user-generated text data, which will allow us to gain a deeper understanding of the kinds of everyday activities that people discuss online with one another. Throughout the paper, we use the word “activity” to refer to what an individual user does or has done in their daily life. Unlike the typical use of this term in the computer vision community BIBREF12 , BIBREF13 , in this paper we use it in a broad sense, to also encompass non-visual activities such as “make vacation plans\" or “have a dream” We do not focus on fine-grained sequences actions such as “pick up a camera”, “hold a camera to one's face”, “press the shutter release button”, and others. Rather, we focus on the high-level activity as a person would report to others: “take a picture”. Additionally, we specifically focus on everyday human activities done by the users themselves, rather than larger-scale events BIBREF14 , which are typically characterized by the involvement or interest of many users, often at a specific time and location.",
"Given that the space of possible phrases describing human activities is nearly limitless, we propose a set of human activity clusters that summarize a large set of several hundred-thousand self-reported activities. We then construct predictive models that are able to estimate the likelihood that a user has reported that they have performed an activity from any cluster.",
"The paper makes the following main contributions. First, starting with a set of nearly 30,000 human activity patterns, we compile a very large dataset of more than 200,000 users undertaking one of the human activities matching these patterns, along with over 500 million total tweets from these users. Second, we use a state-of-the-art sentence embedding framework tailored to recognize the semantics of human activities and create a set of activity clusters of variable granularity. Third, we explore a neural model that can predict human activities based on natural language data, and in the process also investigate the relationships between everyday human activities and other social variables such as personal values."
],
[
"While we do not expect to know exactly what a person is doing at any given time, it is fairly common for people to publicly share the types of activities that they are doing by making posts, written in natural language, on social media platforms like Twitter. However, when taking a randomly sampled stream of tweets, we find that only a small fraction of the content was directly related to activities that the users were doing in the real world – instead, most instances are more conversational in nature, or contain the sharing of opinions about the world or links to websites or images. Using such a random sample would require us to filter out a large percentage of the total data collected, making the data collection process inefficient.",
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
[
"The Event2Mind dataset contains a large number of event phrases which are annotated for intent and reaction. The events themselves come from four sources of phrasal events (stories, common n-grams found in web data, blogs, and English idioms), and many of them fall under our classification of human activities, making Event2Mind a great resource in our search for concrete examples of human activities. We consider events for which a person is the subject (e.g, “PersonX listens to PersonX's music”) to be human activities, and remove the rest (e.g., “It is Christmas morning”). We then use several simple rules to convert the Event2Mind instances into first-person past-tense activities. Since all events were already filtered so that they begin with “PersonX”, we replace the first occurrence of “PersonX” in each event with “I” and all subsequent occurrences with “me”. All occurrences of “PersonX's” become “my”, and the main verb in each phrase is conjugated to its past-tense form using the Pattern python module. For example, the event “PersonX teaches PersonX's son” becomes the query “I taught my son”. Since Event2Mind also contains wildcard placeholders that can match any span of text within the same phrase (e.g., “PersonX buys INLINEFORM0 at the store”) but the Twitter API doesn't provide a mechanism for wildcard search, we split the event on the string INLINEFORM1 and generate a query that requires all substrings to appear in the tweet. We then check all candidate tweets after retrieval and remove any for which the substrings do not appear in the same order as the original pattern."
],
[
"In order to get an even richer set of human activities, we also ask a set of 1,000 people across the United States to list any five activities that they had done in the past week. We collect our responses using Amazon Mechanical Turk, and manually verify that all responses are reasonable. We remove any duplicate strings and automatically convert them into first-person and past-tense (if they were not in that form already). For this set of queries, there are no wildcards and we only search for exact matches. Example queries obtained using this approach include “I went to the gym” and “I watched a documentary”."
],
[
"Using our combined set of unique human activity queries, we use the Twitter Search API to collect the most recent 100 matches per query (the maximum allowed by the API per request), as available, and we refer to these tweets as our set of queried tweets. We then filter the queried tweets as follows: first, we verify that for any tweets requiring the match of multiple substrings (due to wildcards in the original activity phrase), the substrings appear in the correct order and do not span multiple sentences. Next, we remove activity phrases that are preceded with indications that the author of the tweet did not actually perform the activity, such as “I wish” or “should I ...?”. We refer to the set of tweets left after this filtering as valid queried tweets (see Table TABREF8 for more details).",
"In order to gather other potentially useful information about the users who wrote at least one valid queried tweet, we collect both their self-written profile and their previously written tweets (up to 3,200 past tweets per user, as allowed by the Twitter API), and we refer to these as our set of additional tweets. We ensure that there is no overlap between the sets of queried tweets and additional tweets, so in the unlikely case that a user has posted the same tweet multiple times, it cannot be included in both sets.",
"Further, we use a simple pattern-matching approach to extract additional activities from these additional tweets. We search for strings that match I <VBD> .* <EOS> where <VBD> is any past-tense verb, .* matches any string (non-greedy), and <EOS> matches the end of a sentence. We then perform the same filtering as before for indications that the person did not actually do the activity, and we refer to these filtered matches as our set of additional activities (see Table TABREF11 for more information). Note that since these additional activities can contain any range of verbs, they are naturally noisier than our set of valid query tweets, and we therefore do not treat them as a reliable “ground truth” source of self-reported human activities, but as a potentially useful signal of activity-related information that can be associated with users in our dataset.",
"For our final dataset, we also filter our set of users. From the set of users who posted at least one valid queried tweet, we remove those who had empty user profiles, those with less than 25 additional tweets, and those with less than 5 additional activities (Table TABREF12 )."
],
[
"Given that the set of possible human activity phrases is extremely large and it is unlikely that the same phrase will appear multiple times, we make this space more manageable by first performing a clustering over the set of activity phrase instances that we extract from all valid queried tweets. We define an activity phrase instance as the set of words matching an activity query, plus all following words through the end of the sentence in which the match appears. By doing this clustering, our models will be able to make a prediction about the likelihood that a user has mentioned activities from each cluster, rather than only making predictions about a single point in the semantic space of human activities.",
"In order to cluster our activity phrase instances, we need to define a notion of distance between any pair of instances. For this, we turn to prior work on models to determine semantic similarity between human activity phrases BIBREF16 in which the authors utilized transfer learning in order to fine-tune the Infersent BIBREF17 sentence similarity model to specifically capture relationships between human activity phrases. We use the authors' BiLSTM-max sentence encoder trained to capture the relatedness dimension of human activity phrases to obtain vector representations of each of our activity phrases. The measure of distance between vectors produced by this model was shown to be strongly correlated with human judgments of general activity relatedness (Spearman's INLINEFORM0 between the model and human ratings, while inter-annotator agreement is INLINEFORM1 ).",
"While the relationship between two activity phrases can be defined in a number of ways BIBREF10 , we we chose a model that was optimized to capture relatedness so that our clusters would contain groups of related activities without enforcing that they are strictly the same activity. Since the model that we employed was trained on activity phrases in the infinitive form, we again use the Pattern python library, this time to convert all of our past-tense activities to this form. We also omit the leading first person pronoun from each phrase, and remove user mentions (@<user>), hashtags, and URLs. We then define the distance between any two vectors using cosine distance, i.e., INLINEFORM0 , for vectors INLINEFORM1 and INLINEFORM2 .",
"We use K-means clustering in order to find a set of INLINEFORM0 clusters that can be used to represent the semantic space in which the activity vectors lie. We experiment with INLINEFORM1 with INLINEFORM2 and evaluate the clustering results using several metrics that do not require supervision: within-cluster variance, silhouette coefficient BIBREF18 , Calinski-Harabaz criterion BIBREF19 , and Davies-Bouldin criterion BIBREF20 . In practice, however, we find that these metrics are strongly correlated (either positively or negatively) with the INLINEFORM3 , making it difficult to quantitatively compare the results of using a different number of clusters, and we therefore make a decision based on a qualitative analysis of the clusters. For the purpose of making these kinds of predictions about clusters, it is beneficial to have a smaller number of larger clusters, but clusters that are too large are no longer meaningful since they contain sets of activities that are less strongly related to one another. In the end, we find that using INLINEFORM4 clusters leads to a good balance between cluster size and specificity, and we use this configuration for our prediction experiments moving forward. Examples of activities that were assigned the same cluster label are shown in Table TABREF15 , and Table TABREF16 illustrates the notion of distance within our newly defined semantic space of human activities. For example, two cooking-related clusters are near to one another, while a photography-related cluster is very distant from both."
],
[
"Given a set of activity clusters and knowledge about the users who have reported to have participated in these activities, we explore the ability of machine learning models to make inferences about which activities are likely to be next performed by a user. Here we describe the supervised learning setup, evaluation, and neural architecture used for the prediction task."
],
[
"We formulate our prediction problem as follows: for a given user, we would like to produce a probability distribution over all activity clusters such that: INLINEFORM0 ",
"where INLINEFORM0 is a set of activity clusters, INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are vectors that represent the user's history, profile, and attributes, respectively, and INLINEFORM4 is the target cluster. The target cluster is the cluster label of an activity cluster that contains an activity that is known to have been performed by the user.",
"If a model is able to accurately predict the target cluster, then it is able to estimate the general type of activity that the user is likely to write about doing in the future given some set of information about the user and what they have written in the past. By also generating a probability distribution over the clusters, we can assign a likelihood that each user will write about performing each group of activities in the future. For example, such a model could predict the likelihood that a person will claim to engage in a “Cooking” activity or a “Pet/Animal related” activity.",
"The ability to predict the exact activity cluster correctly is an extremely difficult task, and in fact, achieving that alone would be a less informative result than producing predictions about the likelihood of all clusters. Further, in our setup, we only have knowledge about a sample of activities that people actually have done. In reality, it is very likely that users have participated in activities that belong to a huge variety of clusters, regardless of which activities were actually reported on social media. Therefore, it should be sufficient for a model to give a relatively high probability to any activity that has been reported by a user, even if there is no report of the user having performed an activity from the cluster with the highest probability for that user."
],
[
"As input to our activity prediction model, we use three major components: a user's history, profile, and attributes. We represent a history as a sequence of documents, INLINEFORM0 , written by the user, that contain information about the kinds of activities that they have done. Let INLINEFORM1 , and each document in INLINEFORM2 is represented as a sequence of tokens. We experiment with two sources for INLINEFORM3 : all additional tweets written by a user, or only the additional activities contained in tweets written by a user, which is a direct subset of the text contained in the full set of tweets.",
"A user's profile is a single document, also represented as a sequence of tokens. For each user, we populate the profile input using the plain text user description associated with their account, which often contains terms which express self-identity such as “republican” or “athiest.”",
"We represent the tokens in both the user's history and profile with the pretrained 100-dimensional GloVe-Twitter word embeddings BIBREF21 , and preprocess all text with the script included with these embeddings.",
"Finally, our model allows the inclusion of any additional attributes that might be known or inferred in order to aid the prediction task, which can be passed to the model as a INLINEFORM0 dimensional real-valued vector. For instance, we can use personal values as a set of attributes, as described in Section SECREF26 .",
"We train a deep neural model, summarized in Figure FIGREF21 , to take a user's history, profile, and attributes, and output a probability distribution over the set of INLINEFORM0 clusters of human activities, indicating the likelihood that the user has reported to have performed an activity in each cluster. There are four major components of our network:",
"This is applied to each of the INLINEFORM0 documents in the history– either an activity phrase or a full tweet. For document INLINEFORM1 in INLINEFORM2 , it takes a sequence of token embeddings as input and produces a INLINEFORM3 dimensional vector, INLINEFORM4 as output.",
"This layer takes the sequence INLINEFORM0 as input and produces a single INLINEFORM1 dimensional vector, INLINEFORM2 , as output, intended to represent high-level features extracted from the entire history of the user.",
"Takes each token in the user's profile as input and produces a single INLINEFORM0 dimensional vector, INLINEFORM1 as output.",
"As input, this module takes the concatenation INLINEFORM0 , where INLINEFORM1 is the predefined attribute vector associated with the user. Then, a prediction is made for each of the INLINEFORM2 clusters, first applying softmax in order to obtain a probability distribution. We refer to the dimension of the output as INLINEFORM3 .",
"For any of the three encoder layers, several layer types can be used, including recurrent, convolutional, or self-attention based BIBREF22 layers. The classifier layer is the only layer that does not take a sequence as input and we implement it using a simple feed-forward multi-layer network containing INLINEFORM0 layers with INLINEFORM1 hidden units each. The network is trained with cross-entropy loss, which has been shown to perform competitively when optimizing for top-k classification tasks BIBREF23 ."
],
[
"While the attributes vector INLINEFORM0 can be used to encode any information of interest about a user, we choose to experiment with the use of personal values because of their theoretical connection to human activities BIBREF6 . In order to get a representation of a user's values, we turn to the hierarchical personal values lexicon from BIBREF24 . In this lexicon, there are 50 value dimensions, represented as sets of words and phrases that characterize that value. Since users' profiles often contain value-related content, we use the Distributed Dictionary Representations (DDR) method BIBREF25 to compute a score, INLINEFORM1 for each value dimension, INLINEFORM2 , using cosine similarity as follows: INLINEFORM3 ",
"where INLINEFORM0 is a representation of a set of vectors, which, for the DDR method, is defined as the mean vector of the set; INLINEFORM1 is a set of word embeddings, one for each token in the user's profile; and INLINEFORM2 is another set of word embeddings, one for each token in the lexicon for value dimension INLINEFORM3 . Finally, we set INLINEFORM4 where INLINEFORM5 , the number of value dimensions in the lexicon. Examples of profiles with high scores for sample value dimensions are shown in Table TABREF27 .",
"Further, we explore the types of activity clusters that contain activities reported by users with high scores for various value dimensions. For a given value, we compute a score for each cluster INLINEFORM0 by taking the average INLINEFORM1 of all users who tweeted about doing activities in the cluster. For each value INLINEFORM2 , we can then rank all clusters by their INLINEFORM3 score. Examples of those with the highest scores are presented in Table TABREF28 . We observe that users whose profiles had high scores for Family were likely to report doing activities including family members, those with high scores for Nature tweeted about travel, and those with high Work-Ethic scores reported performing writing related tasks."
],
[
"We evaluate our activity prediction models using a number of metrics that consider not only the most likely cluster, but also the set of INLINEFORM0 most likely clusters. First, we evaluate the average per-class accuracy of the model's ability to rank INLINEFORM1 , the target cluster, within the top INLINEFORM2 clusters. These scores tell us how well the model is able to make predictions about the kinds of activities that each user is likely to do.",
"Second, we test how well the model is able to sort users by their likelihood of having reported to do an activity from a cluster. This average comparison rank (ACR) score is computed as follows: for each user in the test set, we sample INLINEFORM0 other users who do not have the same activity label. Then, we use the probabilities assigned by the model to rank all INLINEFORM1 users by their likelihood of being assigned INLINEFORM3 , and the comparison rank score is the percentage of users who were ranked ahead of the target user (lower is better). We then average this comparison rank across all users in the test set to get the ACR. The ACR score tells us how well the model is able to find a rank users based on their likelihood of writing about doing a given activity, which could be useful for finding, e.g., the users who are most likely to claim that they “purchased some pants” or least likely to mention that they “went to the gym” in the future."
],
[
"We split our data at the user-level, and from our set of valid users we use 200,000 instances for training data, 10,000 as test data, and the rest as our validation set.",
"For the document encoder and profile encoder we use Bi-LSTMs with max pooling BIBREF17 , with INLINEFORM0 and INLINEFORM1 . For the history encoder, we empirically found that single mean pooling layer over the set of all document embeddings outperformed other more complicated architectures, and so that is what we use in our experiments. Finally, the classifier is a 3-layer feed-forward network with and INLINEFORM2 for the hidden layers, followed by a softmax over the INLINEFORM3 -dimensional output. We use Adam BIBREF26 as our optimizer, set the maximum number of epochs to 100, and shuffle the order of the training data at each epoch. During each training step, we represent each user's history as a new random sample of INLINEFORM4 documents if there are more than INLINEFORM5 documents available for the user, and we use a batch size of 32 users. Since there is a class imbalance in our data, we use sample weighting in order to prevent the model from converging to a solution that simply predicts the most common classes present in the training data. Each sample is weighted according to its class, INLINEFORM6 , using the following formula: INLINEFORM7 ",
"where INLINEFORM0 is the number of training instances belonging to class INLINEFORM1 . We evaluate our model on the development data after each epoch and save the model with the highest per-class accuracy. Finally, we compute the results on the test data using this model, and report these results.",
"We test several configurations of our model. We use the complete model described in section SECREF19 using either the set of additional tweets written by a user as their history ( INLINEFORM0 ), or only the set of additional activities contained in those tweets ( INLINEFORM1 ). Then, to test the effect of the various model components, we systematically ablate the attributes vector input INLINEFORM2 , the profile text (and subsequently, the Profile Encoder layer) INLINEFORM3 , and the set of documents, D, comprising the history along with the Document and History Encoders, thereby removing the INLINEFORM4 vector as input to the classifier. We also explore removing pairs of these inputs at the same time. To contextualize the results, we also include the theoretical scores achieved by random guessing, labeled as rand.",
"We consider two variations on our dataset: the first is a simplified, 50-class classification problem. We choose the 50 most common clusters out of our full set of INLINEFORM0 and only make predictions about users who have reportedly performed an activity in one of these clusters. The second variation uses the entire dataset, but rather than making predictions about all INLINEFORM1 classes, we only make fine-grained predictions about those classes for which INLINEFORM2 . We do this under the assumption that training an adequate classifier for a given class requires at least INLINEFORM3 examples. All classes for which INLINEFORM4 are assigned an “other” label. In this way, we still make a prediction for every instance in the dataset, but we avoid allowing the model to try to fit to a huge landscape of outputs when the training data for some of these outputs is insufficient. By setting INLINEFORM5 to 100, we are left with 805 out of 1024 classes, and an 806th “other” class for our 806-class setup. Note that this version includes all activities from all 1024 clusters, it is just that the smallest clusters are grouped together with the “other” label.",
"While our models are able to make predictions indicating that learning has taken place, it is clear that this prediction task is difficult. In the 50-class setup, the INLINEFORM0 model consistently had the strongest average per-class accuracy for all values of INLINEFORM1 and the lowest (best) ACR score (Table TABREF31 ). The INLINEFORM2 model performed nearly as well, showing that using only the human-activity relevant content from a user's history gives similar results to using the full set of content available. When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates.",
"In the 806-class version of the task, we observe the effects of including a larger range of activities, including many that do not appear as often as others in the training data (Table TABREF34 ). This version of the task also simulates a more realistic scenario, since predictions can be made for the “other” class when the model does to expect the user to claim to do an activity from any of the known clusters. In this setting, we see that the INLINEFORM0 model works well for INLINEFORM1 , suggesting that the use of the INLINEFORM2 vectors helps, especially when predicting the correct cluster within the top 25 is important. For INLINEFORM3 , the same INLINEFORM4 model that worked best in the 50-class setup again outperforms the others. Here, in contrast to the 50-class setting, using the full set of tweets usually performs better than focusing only on the human activity content. Interestingly, the best ACR scores are even lower in the 806-class setup, showing that it is just as easy to rank users by their likelihood of writing about an activity, even when considering many more activity clusters."
],
[
"In this paper, we addressed the task of predicting human activities from user-generated content. We collected a large Twitter dataset consisting of posts from more than 200,000 users mentioning at least one of the nearly 30,000 everyday activities that we explored. Using sentence embedding models, we projected activity instances into a vector space and perform clustering in order to learn about the high-level groups of behaviors that are commonly mentioned online. We trained predictive models to make inferences about the likelihood that a user had reported to have done activities across the range of clusters that we discovered, and found that these models were able to achieve results significantly higher than random guessing baselines for the metrics that we consider. While the overall prediction scores are not very high, the models that we trained do show that they are able to generalize findings from one set of users to another. This is evidence that the task is feasible, but very difficult, and it could benefit from further investigation.",
"We make the activity clusters, models, and code for the prediction task available at http://lit.eecs.umich.edu/downloads.html"
],
[
"This research was supported in part through computational resources and services provided by the Advanced Research Computing at the University of Michigan. This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), by the John Templeton Foundation (grant #61156), and by DARPA (grant #HR001117S0026-AIDA-FP-045). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, the John Templeton Foundation, or DARPA. Many thanks to the anonymous reviewers who provided helpful feedback."
]
],
"section_name": [
"Introduction",
"Data",
"Event2Mind Activities",
"Short Survey Activities",
"Query Results",
"Creating Human Activity Clusters",
"Methodology",
"Problem Statement",
"Model Architecture",
"Incorporating Personal Values",
"Evaluation",
"Experiments and Results",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"38f9bc78397a0861c3df02694ba35fa961aae4ab",
"71f68ef0a40442a7bf8eb3972b556f678db0212d",
"8f0ee71b3d00d4276066129a78e666b2eaf12458"
],
"answer": [
{
"evidence": [
"While the attributes vector INLINEFORM0 can be used to encode any information of interest about a user, we choose to experiment with the use of personal values because of their theoretical connection to human activities BIBREF6 . In order to get a representation of a user's values, we turn to the hierarchical personal values lexicon from BIBREF24 . In this lexicon, there are 50 value dimensions, represented as sets of words and phrases that characterize that value. Since users' profiles often contain value-related content, we use the Distributed Dictionary Representations (DDR) method BIBREF25 to compute a score, INLINEFORM1 for each value dimension, INLINEFORM2 , using cosine similarity as follows: INLINEFORM3"
],
"extractive_spans": [],
"free_form_answer": "The hierarchical personal values lexicon with 50 sets of words and phrases that represent the user's value.",
"highlighted_evidence": [
"In order to get a representation of a user's values, we turn to the hierarchical personal values lexicon from BIBREF24 . In this lexicon, there are 50 value dimensions, represented as sets of words and phrases that characterize that value."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"While the attributes vector INLINEFORM0 can be used to encode any information of interest about a user, we choose to experiment with the use of personal values because of their theoretical connection to human activities BIBREF6 . In order to get a representation of a user's values, we turn to the hierarchical personal values lexicon from BIBREF24 . In this lexicon, there are 50 value dimensions, represented as sets of words and phrases that characterize that value. Since users' profiles often contain value-related content, we use the Distributed Dictionary Representations (DDR) method BIBREF25 to compute a score, INLINEFORM1 for each value dimension, INLINEFORM2 , using cosine similarity as follows: INLINEFORM3"
],
"extractive_spans": [
"personal values"
],
"free_form_answer": "",
"highlighted_evidence": [
"While the attributes vector INLINEFORM0 can be used to encode any information of interest about a user, we choose to experiment with the use of personal values because of their theoretical connection to human activities BIBREF6 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 8: Profiles scoring the highest for various values categories when measured with the values lexicon."
],
"extractive_spans": [],
"free_form_answer": "Family, Nature, Work-Ethic, Religion",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8: Profiles scoring the highest for various values categories when measured with the values lexicon."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"d27bcaf41cc89cd52581e39950206be911fb7639",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"391896a615acf134e0391001302e4c944d11455a",
"3fd77b2f69e70671a4abd5e821bf939bd17d95dc",
"4e9e8f511526322e822bf3bcc74d8cfaa71b74db"
],
"answer": [
{
"evidence": [
"While our models are able to make predictions indicating that learning has taken place, it is clear that this prediction task is difficult. In the 50-class setup, the INLINEFORM0 model consistently had the strongest average per-class accuracy for all values of INLINEFORM1 and the lowest (best) ACR score (Table TABREF31 ). The INLINEFORM2 model performed nearly as well, showing that using only the human-activity relevant content from a user's history gives similar results to using the full set of content available. When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"While our models are able to make predictions indicating that learning has taken place, it is clear that this prediction task is difficult. In the 50-class setup, the INLINEFORM0 model consistently had the strongest average per-class accuracy for all values of INLINEFORM1 and the lowest (best) ACR score (Table TABREF31 ). The INLINEFORM2 model performed nearly as well, showing that using only the human-activity relevant content from a user's history gives similar results to using the full set of content available. When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"While our models are able to make predictions indicating that learning has taken place, it is clear that this prediction task is difficult. In the 50-class setup, the INLINEFORM0 model consistently had the strongest average per-class accuracy for all values of INLINEFORM1 and the lowest (best) ACR score (Table TABREF31 ). The INLINEFORM2 model performed nearly as well, showing that using only the human-activity relevant content from a user's history gives similar results to using the full set of content available. When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates.",
"In the 806-class version of the task, we observe the effects of including a larger range of activities, including many that do not appear as often as others in the training data (Table TABREF34 ). This version of the task also simulates a more realistic scenario, since predictions can be made for the “other” class when the model does to expect the user to claim to do an activity from any of the known clusters. In this setting, we see that the INLINEFORM0 model works well for INLINEFORM1 , suggesting that the use of the INLINEFORM2 vectors helps, especially when predicting the correct cluster within the top 25 is important. For INLINEFORM3 , the same INLINEFORM4 model that worked best in the 50-class setup again outperforms the others. Here, in contrast to the 50-class setting, using the full set of tweets usually performs better than focusing only on the human activity content. Interestingly, the best ACR scores are even lower in the 806-class setup, showing that it is just as easy to rank users by their likelihood of writing about an activity, even when considering many more activity clusters."
],
"extractive_spans": [],
"free_form_answer": "only in the 806-class task predicting <= 25 clusters",
"highlighted_evidence": [
"In the 50-class setup, the INLINEFORM0 model consistently had the strongest average per-class accuracy for all values of INLINEFORM1 and the lowest (best) ACR score (Table TABREF31 ).",
" In this setting, we see that the INLINEFORM0 model works well for INLINEFORM1 , suggesting that the use of the INLINEFORM2 vectors helps, especially when predicting the correct cluster within the top 25 is important. For INLINEFORM3 , the same INLINEFORM4 model that worked best in the 50-class setup again outperforms the others."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"d27bcaf41cc89cd52581e39950206be911fb7639"
]
},
{
"annotation_id": [
"6e69109d8f48ee3158cf9b4535af3b828494dc20",
"92458786a5cfe447b47f4151d88fa3183b2fcf6f",
"991c9de9a2fdf7a8d41ecb9dc13a75266bc95eac"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Number of human activity queries from multiple sources."
],
"extractive_spans": [],
"free_form_answer": "29,494",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Number of human activity queries from multiple sources."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Number of human activity queries from multiple sources."
],
"extractive_spans": [],
"free_form_answer": "29537",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Number of human activity queries from multiple sources."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The paper makes the following main contributions. First, starting with a set of nearly 30,000 human activity patterns, we compile a very large dataset of more than 200,000 users undertaking one of the human activities matching these patterns, along with over 500 million total tweets from these users. Second, we use a state-of-the-art sentence embedding framework tailored to recognize the semantics of human activities and create a set of activity clusters of variable granularity. Third, we explore a neural model that can predict human activities based on natural language data, and in the process also investigate the relationships between everyday human activities and other social variables such as personal values."
],
"extractive_spans": [
"30,000"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, starting with a set of nearly 30,000 human activity patterns, we compile a very large dataset of more than 200,000 users undertaking one of the human activities matching these patterns, along with over 500 million total tweets from these users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d27bcaf41cc89cd52581e39950206be911fb7639",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"18935d5484caaa0bbe67a72485cc9ad6d83ac941",
"a7b55b6a49f9cdad3231953ae187edc7ee027ba4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In order to get an even richer set of human activities, we also ask a set of 1,000 people across the United States to list any five activities that they had done in the past week. We collect our responses using Amazon Mechanical Turk, and manually verify that all responses are reasonable. We remove any duplicate strings and automatically convert them into first-person and past-tense (if they were not in that form already). For this set of queries, there are no wildcards and we only search for exact matches. Example queries obtained using this approach include “I went to the gym” and “I watched a documentary”."
],
"extractive_spans": [],
"free_form_answer": "1000 people",
"highlighted_evidence": [
"In order to get an even richer set of human activities, we also ask a set of 1,000 people across the United States to list any five activities that they had done in the past week. We collect our responses using Amazon Mechanical Turk, and manually verify that all responses are reasonable. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d27bcaf41cc89cd52581e39950206be911fb7639",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4e5526d9620e4df157ee933e241dd2928b3ba6b8",
"bb9552d42f3ae47324c6e341b21298cab87c7709"
],
"answer": [
{
"evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
"extractive_spans": [
" query contains a first-person, past-tense verb within a phrase that describes a common activity that people do"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
"extractive_spans": [],
"free_form_answer": "By querying Twitter Search API for the tweets containing a first-person and a past-tense verb that describes a common activity.",
"highlighted_evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0a51272d2877b2e7822e48d8e98c30b2c44516a4",
"928bb8b317dbe2b3febca7a30015073c4bce8ed8",
"b2b83c29d4864b8a344a7f7dd9ed57dbe955d167"
],
"answer": [
{
"evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
"extractive_spans": [
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
"extractive_spans": [
"Twitter "
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table TABREF1 ). We build our set of human activity queries from two sources: the Event2Mind dataset BIBREF15 and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table TABREF2 ) ."
],
"extractive_spans": [
" Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d27bcaf41cc89cd52581e39950206be911fb7639",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"what user traits are taken into account?",
"does incorporating user traits help the task?",
"how many activities are in the dataset?",
"who annotated the datset?",
"how were the data instances chosen?",
"what social media platform was the data collected from?"
],
"question_id": [
"e0c80d31d590df46d33502169b1d32f0aa1ea6e3",
"7a8b24062a5bb63a8b4c729f6247a7fd2fec7f07",
"cab082973e1648b0f0cc651ab4e0298a5ca012b5",
"1cc394bdfdfd187fc0af28500ad47a0a764d5645",
"16cc37e4f8e2db99eaf89337a3d9ada431170d5b",
"cc78a08f5bfe233405c99cb3dac1f11f3a9268b1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Effect of targeted query approach on activity frequency in tweets. “Valid activities” are defined as first-person verb phrases that clearly indicate that the author of the text has actually performed the concrete activity being described. For each set of tweets, a random subset of 100 was chosen and manually annotated for validity.",
"Table 2: Number of human activity queries from multiple sources.",
"Table 3: Summary of query results.",
"Table 4: Summary of additional data.",
"Table 5: Summary valid user filtering.",
"Table 6: Examples of clustered activities (with manually provided labels, for reference purposes only).",
"Table 7: Three sample clusters and their distances from the first cluster in Table 6, showing the closest cluster, a somewhat distant cluster, and a very distant cluster.",
"Figure 1: Predictive model architecture.",
"Table 8: Profiles scoring the highest for various values categories when measured with the values lexicon.",
"Table 9: Activity clusters associated with the highest scoring users for various values categories when measured with the values lexicon.",
"Table 10: Per-class accuracy (%) @ keval and ACR scores for the 50-class prediction task. Note that removing h from either fullT or fullA gives the same model. For ACR only, lower is better.",
"Table 11: Per-class accuracy (%) @ keval and ACR scores for the 806-class prediction task. Note that removing h from either fullT or fullA gives the same model. For ACR only, lower is better."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png",
"3-Table5-1.png",
"4-Table6-1.png",
"5-Table7-1.png",
"6-Figure1-1.png",
"7-Table8-1.png",
"7-Table9-1.png",
"8-Table10-1.png",
"9-Table11-1.png"
]
} | [
"what user traits are taken into account?",
"does incorporating user traits help the task?",
"how many activities are in the dataset?",
"who annotated the datset?",
"how were the data instances chosen?"
] | [
[
"1907.08540-7-Table8-1.png"
],
[
"1907.08540-Experiments and Results-5",
"1907.08540-Experiments and Results-6"
],
[
"1907.08540-Introduction-4",
"1907.08540-2-Table2-1.png"
],
[
"1907.08540-Short Survey Activities-0"
],
[
"1907.08540-Data-1"
]
] | [
"Family, Nature, Work-Ethic, Religion",
"only in the 806-class task predicting <= 25 clusters",
"29537",
"1000 people",
"By querying Twitter Search API for the tweets containing a first-person and a past-tense verb that describes a common activity."
] | 137 |
1611.04887 | Interpreting the Syntactic and Social Elements of the Tweet Representations via Elementary Property Prediction Tasks | Research in social media analysis is experiencing a recent surge with a large number of works applying representation learning models to solve high-level syntactico-semantic tasks such as sentiment analysis, semantic textual similarity computation, hashtag prediction and so on. Although the performance of the representation learning models are better than the traditional baselines for the tasks, little is known about the core properties of a tweet encoded within the representations. Understanding these core properties would empower us in making generalizable conclusions about the quality of representations. Our work presented here constitutes the first step in opening the black-box of vector embedding for social media posts, with emphasis on tweets in particular. In order to understand the core properties encoded in a tweet representation, we evaluate the representations to estimate the extent to which it can model each of those properties such as tweet length, presence of words, hashtags, mentions, capitalization, and so on. This is done with the help of multiple classifiers which take the representation as input. Essentially, each classifier evaluates one of the syntactic or social properties which are arguably salient for a tweet. This is also the first holistic study on extensively analysing the ability to encode these properties for a wide variety of tweet representation models including the traditional unsupervised methods (BOW, LDA), unsupervised representation learning methods (Siamese CBOW, Tweet2Vec) as well as supervised methods (CNN, BLSTM). | {
"paragraphs": [
[
"Research in social media analysis is recently seeing a surge in the number of research works applying representation learning models to solve high-level syntactico-semantic tasks such as sentiment analysis [1], semantic textual similarity computation [2], hashtag prediction [3] and so on. Though the performance of the representation learning models are better than the traditional models for all the tasks, little is known about the core properties of a tweet encoded within the representations. In a recent work, Hill et al. [4] perform a comparison of different sentence representation models by evaluating them for different high-level semantic tasks such as paraphrase identification, sentiment classification, question answering, document retrieval and so on. This type of coarse-grained analysis is opaque as it does not clearly reveal the kind of information encoded by the representations. Our work presented here constitutes the first step in opening the black-box of vector embeddings for social media posts, particularly tweets.",
"Essentially we ask the following question: “What are the core properties encoded in the given tweet representation?”. We explicitly group the set of these properties into two categories: syntactic and social. Syntactic category includes properties such as tweet length, the order of words in it, the words themselves, slang words, hashtags and named entities in the tweet. On the other hand, social properties consist of `is reply', and `reply time'. We investigate the degree to which the tweet representations encode these properties. We assume that if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation. For example, the model which preserves the tweet length should perform well in predicting the length given the representation generated from the model. Though these elementary property prediction tasks are not directly related to any downstream application, knowing that the model is good at modeling a particular property (e.g., the social properties) indicates that it could excel in correlated applications (e.g., user profiling task). In this work we perform an extensive evaluation of 9 unsupervised and 4 supervised tweet representation models, using 8 different properties. The most relevant work is that of Adi et al. [5], which investigates three sentence properties in comparing unsupervised sentence representation models such as average of words vectors and LSTM auto-encoders. We differ from their work in two ways: (1) While they focus on sentences, we focus on social media posts which opens up the challenge of considering multiple salient properties such as hashtags, named entities, conversations and so on. (2) While they work with only unsupervised representation-learning models, we investigate the traditional unsupervised methods (BOW, LDA), unsupervised representation learning methods (Siamese CBOW, Tweet2Vec), as well as supervised methods (CNN, BLSTM).",
"Our main contributions are summarized below.",
"The paper is organized as follows. Sections 2 and 3 discuss the set of proposed elementary property prediction tasks and the models considered for this study respectively. Section 4 and 5 presents the experiment setup and result analysis respectively. We conclude the work with a brief summary in Section 5."
],
[
"In this section we list down the set of proposed elementary property prediction tasks to test the characteristics of a tweet embedding. Table TABREF4 explains all the tasks considered in this study. Note that we use a neural network to build the elementary property prediction task classifier which has the following two layers in order: the representation layer and the softmax layer on top whose size varies according to the specific task. When there are more than one input for a task, we concatenate embeddings for each input.",
"[1]https://noisy-text.github.io/norm-shared-task.html"
],
[
"In this section we list down the set of models considered in the study."
],
[
"Bag Of Words (BOW) [17] - This simple representation captures the TF-IDF value of an n-gram. We pick top 50K n-grams, with the value of `n' going up to 5.",
"Latent Dirichlet Allocation (LDA) [18] - We use the topic distribution resulting by running LDA with number of topics as 200, as tweet representation.",
"Bag Of Means (BOM) - We take the average of the word embeddings obtained by running the Glove [12] model on 2 billion tweets with embedding size as 200.",
"Deep Structured Semantic Models (DSSM) [9] - This is a deep encoder trained to represent query and document in common space, for document ranking. We use the publicly available pre-trained encoder to encode the tweets.",
"Convolutional DSSM (CDSSM) [10] - This is the convolutional variant of DSSM.",
"Paragraph2Vec (PV) [13] - This model based on Word2Vec [15] learns embedding for a document which is good in predicting the words within it. We use the BOW variant with embedding size and window size of 200 and 10 respectively.",
"Skip-Thought Vectors (STV) [6] - This is a GRU [16] encoder trained to predict adjacent sentences in a books corpus. We use the recommended combine-skip (4800-dimensional) vectors from the publicly available encoder.",
"Tweet2Vec (T2V) [3] - This is a character composition model working directly on the character sequences to predict the user-annotated hashtags in a tweet. We use publicly available encoder, which was trained on 2 million tweets.",
"Siamese CBOW (SCBOW) [2] - This model uses averaging of word vectors to represent a sentence, and the objective and data used here is the same as that for STV. Note that this is different from BOW because the word vectors here are optimized for sentence representation."
],
[
"Convolutional Neural Network (CNN) - This is a simple CNN proposed in [7].",
"Long Short Term Memory Network (LSTM) [14] - This is a vanilla LSTM based recurrent model, applied from start to the end of a tweet, and the last hidden vector is used as tweet representation.",
"Bi-directional LSTM (BLSTM) [14] - This extends LSTM by using two LSTM networks, processing a tweet left-to-right and right-to-left respectively. Tweet is represented by concatenating the last hidden vector of both the LSTMs.",
"FastText (FT) [8] - This is a simple architecture which averages the n-gram vectors to represent a tweet, followed by the softmax in the final layer. This simple model has been shown to be effective for the text classification task."
],
[
"In this section we perform an extensive evaluation of all the models in an attempt to find the significance of different representation models. Essentially we study every model (with optimal settings reported in the corresponding paper) with respect to the following three perspectives."
],
[
"Fine-grained analysis of various supervised and unsupervised models discussed in Section SECREF3 , across various dimensions discussed in Section SECREF4 , is presented in Table TABREF30 . The codes used to conduct our experiments are publicly accessible at: https://github.com/ganeshjawahar/fine-tweet/."
],
[
"We summarize the results of property prediction tasks in Table TABREF31 . Length prediction turns out to be a difficult task for most of the models. Models which rely on the recurrent architectures such as LSTM, STV, T2V have sufficient capacity to perform well in modeling the tweet length. Also BLSTM is the best in modeling slang words. BLSTM outperforms the LSTM variant in all the tasks except `Content', which signifies the power of using the information flowing from both the directions of the tweet. T2V which is expected to perform well in this task because of its ability to work at a more fine level (i.e., characters) performs the worst. In fact T2V does not outperform other models in any task, which could be mainly due to the fact that the hashtags which are used for supervision in learning tweet representations reduces the generalization capability of the tweets beyond hashtag prediction. Prediction tasks such as `Content' and `Hashtag' seem to be less difficult as all the models perform nearly optimal for them. The superior performance of all the models for the `Content' task in particular is unlike the relatively lower performance reported for in [5], mainly because of the short length of the tweets. The most surprising result is when the BOM model turned out to be the best in `Word Order' task, as the model by nature loses the word order. This might be due to the correlation between word order patterns and the occurrences of specific words. BOM has also proven to perform well for identifying the named entities in the tweet.",
"STV is good for most of the social tasks. We believe the main reason for STV's performance is two-fold: (a) the inter-sentential features extracted from STV's encoder by the prediction of the surrounding sentences in the books corpus contains rich social elements that are vital for social tasks (e.g., user profiling), (b) the recurrent structure in both the encoder and decoder persists useful information in the memory nicely. The second claim is further substantiated by observing the poor performance of SCBOW whose objective is also similar to STV, but with a simpler architecture (i.e., word vector averaging). In future it would be interesting to create such a model for Twitter conversations or chronologically ordered topical tweets so as to directly capture the latent social features from Twitter."
],
[
"This setup captures the behavior of the model with the increase in the context size, which is defined in terms of number of words. For `Word Order' task, we see the performance of all the models to be negatively correlated with the tweet length, which is expected. On the other hand, there is no correlation between the tweet length and the performance of all the models for the tasks such as `Slang Words', `Content', `Hashtag', `NE', and `Is Reply'. For social tasks such as `Is Reply' and `Reply Time', we see a positive correlation between the tweet length and the performance of all the models. This finding is intuitive in social media analysis where additional context is mostly helpful in modeling the social behavior."
],
[
"This test essentially captures the importance of “natural word order”. We found that LDA was invariant to the reordering of the words in the tweet for most of the tasks. This result is not surprising as LDA considers each word in the tweet independently. CNN, LSTM and BLSTM rely on the word order significantly to perform well for most of the prediction tasks."
],
[
"This work proposed a set of elementary property prediction tasks to understand different tweet representations in an application independent, fine-grained fashion. The open nature of social media not only poses a plethora of opportunities to understand the basic characteristics of the posts, but also helped us draw novel insights about different representation models. We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
[
"[1] Tang, D., Wei, F., Qin, B., Yang, N., Liu, T., & Zhou, M.: Sentiment Embeddings with Applications to Sentiment Analysis. In: TKDE. (2016) 28(2) 496-509",
"[2] Kenter, T., Borisov, A., & de Rijke, M.: Siamese CBOW: Optimizing Word Embeddings for Sentence Representations. In: ACL. (2016) 941-951",
"[3] Dhingra, B., Zhou, Z., Fitzpatrick, D., Muehl, M., & Cohen, W. W.: Tweet2Vec: Character-Based Distributed Representations for Social Media. In: ACL. (2016)",
"[4] Hill, F., Cho, K.,& Korhonen, A.: Learning distributed representations of sentences from unlabelled data. In: NAACL. (2016)",
"[5] Adi, Y., Kermany, E., Belinkov, Y., Lavi, O., & Goldberg, Y.: Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. arXiv preprint arXiv:1608.04207. (2016)",
"[6] Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., & Fidler, S.: Skip-thought vectors. In: NIPS. (2015) 3294-3302",
"[7] Kim, Y.: Convolutional neural networks for sentence classification. In: EMNLP. (2014)",
"[8] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T.: Bag of Tricks for Efficient Text Classification. arXiv preprint arXiv:1607.01759. (2016)",
"[9] Huang, P. S., He, X., Gao, J., Deng, L., Acero, A., & Heck, L.: Learning deep structured semantic models for web search using clickthrough data. In: CIKM. (2013)",
"[10] Shen, Y., He, X., Gao, J., Deng, L., & Mesnil, G.: A latent semantic model with convolutional-pooling structure for information retrieval. In: CIKM. (2014)",
"[11] Ritter, A., Clark, S., Mausam, & Etzioni, O.: Named entity recognition in tweets: an experimental study. In: EMNLP. (2011) 1524-1534",
"[12] Pennington, J., Socher, R., & Manning, C. D.: Glove: Global Vectors for Word Representation. In: EMNLP. (2014) 1532-43",
"[13] Le, Q. V., & Mikolov, T.: Distributed Representations of Sentences and Documents. In: ICML. (2014) 1188-1196",
"[14] Graves, A., Mohamed, A. R., & Hinton, G.: Speech recognition with deep recurrent neural networks. In: ICASSP. (2013) 6645-6649",
"[15] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J.: Distributed representations of words and phrases and their compositionality. In: NIPS. (2013) 3111-3119",
"[16] Cho, K., Van Merriënboer, B., Bahdanau, D., & Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. (2014)",
"[17] Harris, Z. S.: Distributional structure. In: Word. (1954) 146-162",
"[18] Blei, D. M., Ng, A. Y., & Jordan, M. I.: Latent dirichlet allocation. In: JMLR. (2003)"
]
],
"section_name": [
"Introduction",
"Elementary Property Prediction Tasks",
"Representation Models",
"Unsupervised",
"Supervised",
"Experiments",
"Results and Analysis",
"Property Prediction Task Accuracy",
"Property Prediction Task Accuracy versus Tweet Length",
"Sensitivity of Property Prediction Task to Word Order",
"Conclusion",
"References"
]
} | {
"answers": [
{
"annotation_id": [
"3bfe2ba9f2e7b80f08301ea11e5be01e152dbf89",
"f4ece7ae01fbb0e1a6be236c0612f86bd846dc9c",
"ab73b1bc233e61c527e17567ace8ac6555e46fb1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In this section we list down the set of models considered in the study.",
"Bag Of Means (BOM) - We take the average of the word embeddings obtained by running the Glove [12] model on 2 billion tweets with embedding size as 200.",
"Skip-Thought Vectors (STV) [6] - This is a GRU [16] encoder trained to predict adjacent sentences in a books corpus. We use the recommended combine-skip (4800-dimensional) vectors from the publicly available encoder.",
"Tweet2Vec (T2V) [3] - This is a character composition model working directly on the character sequences to predict the user-annotated hashtags in a tweet. We use publicly available encoder, which was trained on 2 million tweets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we list down the set of models considered in the study.",
"Bag Of Means (BOM) - We take the average of the word embeddings obtained by running the Glove [12] model on 2 billion tweets with embedding size as 200",
"Skip-Thought Vectors (STV) [6] - This is a GRU [16] encoder trained to predict adjacent sentences in a books corpus. We use the recommended combine-skip (4800-dimensional) vectors from the publicly available encoder.",
"Tweet2Vec (T2V) [3] - This is a character composition model working directly on the character sequences to predict the user-annotated hashtags in a tweet. We use publicly available encoder, which was trained on 2 million tweets."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5cec15b210fee4d504af12246b4b8cdd8045b97d",
"60df00189d4fcaaff60d7d29f5a2b462aea44153",
"fe498c32b92d7b7151535dbe288a591dcf62d677"
],
"answer": [
{
"evidence": [
"This work proposed a set of elementary property prediction tasks to understand different tweet representations in an application independent, fine-grained fashion. The open nature of social media not only poses a plethora of opportunities to understand the basic characteristics of the posts, but also helped us draw novel insights about different representation models. We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"extractive_spans": [
"among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models",
"Tweet length affects the task prediction accuracies",
"LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive"
],
"free_form_answer": "",
"highlighted_evidence": [
"We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Essentially we ask the following question: “What are the core properties encoded in the given tweet representation?”. We explicitly group the set of these properties into two categories: syntactic and social. Syntactic category includes properties such as tweet length, the order of words in it, the words themselves, slang words, hashtags and named entities in the tweet. On the other hand, social properties consist of `is reply', and `reply time'. We investigate the degree to which the tweet representations encode these properties. We assume that if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation. For example, the model which preserves the tweet length should perform well in predicting the length given the representation generated from the model. Though these elementary property prediction tasks are not directly related to any downstream application, knowing that the model is good at modeling a particular property (e.g., the social properties) indicates that it could excel in correlated applications (e.g., user profiling task). In this work we perform an extensive evaluation of 9 unsupervised and 4 supervised tweet representation models, using 8 different properties. The most relevant work is that of Adi et al. [5], which investigates three sentence properties in comparing unsupervised sentence representation models such as average of words vectors and LSTM auto-encoders. We differ from their work in two ways: (1) While they focus on sentences, we focus on social media posts which opens up the challenge of considering multiple salient properties such as hashtags, named entities, conversations and so on. (2) While they work with only unsupervised representation-learning models, we investigate the traditional unsupervised methods (BOW, LDA), unsupervised representation learning methods (Siamese CBOW, Tweet2Vec), as well as supervised methods (CNN, BLSTM).",
"Bag Of Words (BOW) [17] - This simple representation captures the TF-IDF value of an n-gram. We pick top 50K n-grams, with the value of `n' going up to 5.",
"Latent Dirichlet Allocation (LDA) [18] - We use the topic distribution resulting by running LDA with number of topics as 200, as tweet representation.",
"Bag Of Means (BOM) - We take the average of the word embeddings obtained by running the Glove [12] model on 2 billion tweets with embedding size as 200.",
"Deep Structured Semantic Models (DSSM) [9] - This is a deep encoder trained to represent query and document in common space, for document ranking. We use the publicly available pre-trained encoder to encode the tweets.",
"Convolutional DSSM (CDSSM) [10] - This is the convolutional variant of DSSM.",
"Paragraph2Vec (PV) [13] - This model based on Word2Vec [15] learns embedding for a document which is good in predicting the words within it. We use the BOW variant with embedding size and window size of 200 and 10 respectively.",
"Skip-Thought Vectors (STV) [6] - This is a GRU [16] encoder trained to predict adjacent sentences in a books corpus. We use the recommended combine-skip (4800-dimensional) vectors from the publicly available encoder.",
"Tweet2Vec (T2V) [3] - This is a character composition model working directly on the character sequences to predict the user-annotated hashtags in a tweet. We use publicly available encoder, which was trained on 2 million tweets.",
"Siamese CBOW (SCBOW) [2] - This model uses averaging of word vectors to represent a sentence, and the objective and data used here is the same as that for STV. Note that this is different from BOW because the word vectors here are optimized for sentence representation.",
"Convolutional Neural Network (CNN) - This is a simple CNN proposed in [7].",
"Long Short Term Memory Network (LSTM) [14] - This is a vanilla LSTM based recurrent model, applied from start to the end of a tweet, and the last hidden vector is used as tweet representation.",
"Bi-directional LSTM (BLSTM) [14] - This extends LSTM by using two LSTM networks, processing a tweet left-to-right and right-to-left respectively. Tweet is represented by concatenating the last hidden vector of both the LSTMs.",
"FastText (FT) [8] - This is a simple architecture which averages the n-gram vectors to represent a tweet, followed by the softmax in the final layer. This simple model has been shown to be effective for the text classification task.",
"This work proposed a set of elementary property prediction tasks to understand different tweet representations in an application independent, fine-grained fashion. The open nature of social media not only poses a plethora of opportunities to understand the basic characteristics of the posts, but also helped us draw novel insights about different representation models. We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"extractive_spans": [
"CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models",
"Tweet length affects the task prediction accuracies,",
"CNN, LSTM and BLSTM are extremely sensitive to word order"
],
"free_form_answer": "",
"highlighted_evidence": [
"Essentially we ask the following question: “What are the core properties encoded in the given tweet representation?”. We explicitly group the set of these properties into two categories: syntactic and social. Syntactic category includes properties such as tweet length, the order of words in it, the words themselves, slang words, hashtags and named entities in the tweet. On the other hand, social properties consist of `is reply', and `reply time'. We investigate the degree to which the tweet representations encode these properties.",
"Bag Of Words (BOW) [17] - This simple representation captures the TF-IDF value of an n-gram. We pick top 50K n-grams, with the value of `n' going up to 5.\n\nLatent Dirichlet Allocation (LDA) [18] - We use the topic distribution resulting by running LDA with number of topics as 200, as tweet representation.\n\nBag Of Means (BOM) - We take the average of the word embeddings obtained by running the Glove [12] model on 2 billion tweets with embedding size as 200.\n\nDeep Structured Semantic Models (DSSM) [9] - This is a deep encoder trained to represent query and document in common space, for document ranking. We use the publicly available pre-trained encoder to encode the tweets.\n\nConvolutional DSSM (CDSSM) [10] - This is the convolutional variant of DSSM.\n\nParagraph2Vec (PV) [13] - This model based on Word2Vec [15] learns embedding for a document which is good in predicting the words within it. We use the BOW variant with embedding size and window size of 200 and 10 respectively.\n\nSkip-Thought Vectors (STV) [6] - This is a GRU [16] encoder trained to predict adjacent sentences in a books corpus. We use the recommended combine-skip (4800-dimensional) vectors from the publicly available encoder.\n\nTweet2Vec (T2V) [3] - This is a character composition model working directly on the character sequences to predict the user-annotated hashtags in a tweet. We use publicly available encoder, which was trained on 2 million tweets.\n\nSiamese CBOW (SCBOW) [2] - This model uses averaging of word vectors to represent a sentence, and the objective and data used here is the same as that for STV. Note that this is different from BOW because the word vectors here are optimized for sentence representation.\n\n",
"Convolutional Neural Network (CNN) - This is a simple CNN proposed in [7].\n\nLong Short Term Memory Network (LSTM) [14] - This is a vanilla LSTM based recurrent model, applied from start to the end of a tweet, and the last hidden vector is used as tweet representation.\n\nBi-directional LSTM (BLSTM) [14] - This extends LSTM by using two LSTM networks, processing a tweet left-to-right and right-to-left respectively. Tweet is represented by concatenating the last hidden vector of both the LSTMs.\n\nFastText (FT) [8] - This is a simple architecture which averages the n-gram vectors to represent a tweet, followed by the softmax in the final layer. This simple model has been shown to be effective for the text classification task.\n\n",
"We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order.",
"We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This work proposed a set of elementary property prediction tasks to understand different tweet representations in an application independent, fine-grained fashion. The open nature of social media not only poses a plethora of opportunities to understand the basic characteristics of the posts, but also helped us draw novel insights about different representation models. We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"extractive_spans": [],
"free_form_answer": "Supervised models CNN, LSTM and BLSTM and unsupervised models BOW, DSSM, STV and T2V can encapsulate most of the syntactic and social properties. Tweet length affects the task prediction accuracies for all models. LDA is insensitive to input word order, but, CNN, LSTM\nand BLSTM are not.",
"highlighted_evidence": [
"We observed that among supervised models, CNN, LSTM and BLSTM encapsulates most of the syntactic and social properties with a great accuracy, while BOW, DSSM, STV and T2V does that among the unsupervised models. Tweet length affects the task prediction accuracies, but we found that all models behave similarly under variation in tweet length. Finally while LDA is insensitive to input word order, CNN, LSTM and BLSTM are extremely sensitive to word order."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0ab12cb28724b881bb7e1a73903c56899c1cd7eb",
"2e487c43fe87fba20bdc40f453cd90ff96a68121"
],
"answer": [
{
"evidence": [
"Essentially we ask the following question: “What are the core properties encoded in the given tweet representation?”. We explicitly group the set of these properties into two categories: syntactic and social. Syntactic category includes properties such as tweet length, the order of words in it, the words themselves, slang words, hashtags and named entities in the tweet. On the other hand, social properties consist of `is reply', and `reply time'. We investigate the degree to which the tweet representations encode these properties. We assume that if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation. For example, the model which preserves the tweet length should perform well in predicting the length given the representation generated from the model. Though these elementary property prediction tasks are not directly related to any downstream application, knowing that the model is good at modeling a particular property (e.g., the social properties) indicates that it could excel in correlated applications (e.g., user profiling task). In this work we perform an extensive evaluation of 9 unsupervised and 4 supervised tweet representation models, using 8 different properties. The most relevant work is that of Adi et al. [5], which investigates three sentence properties in comparing unsupervised sentence representation models such as average of words vectors and LSTM auto-encoders. We differ from their work in two ways: (1) While they focus on sentences, we focus on social media posts which opens up the challenge of considering multiple salient properties such as hashtags, named entities, conversations and so on. (2) While they work with only unsupervised representation-learning models, we investigate the traditional unsupervised methods (BOW, LDA), unsupervised representation learning methods (Siamese CBOW, Tweet2Vec), as well as supervised methods (CNN, BLSTM)."
],
"extractive_spans": [
" if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We assume that if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation. For example, the model which preserves the tweet length should perform well in predicting the length given the representation generated from the model. Though these elementary property prediction tasks are not directly related to any downstream application, knowing that the model is good at modeling a particular property (e.g., the social properties) indicates that it could excel in correlated applications (e.g., user profiling task)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section we list down the set of proposed elementary property prediction tasks to test the characteristics of a tweet embedding. Table TABREF4 explains all the tasks considered in this study. Note that we use a neural network to build the elementary property prediction task classifier which has the following two layers in order: the representation layer and the softmax layer on top whose size varies according to the specific task. When there are more than one input for a task, we concatenate embeddings for each input.",
"Essentially we ask the following question: “What are the core properties encoded in the given tweet representation?”. We explicitly group the set of these properties into two categories: syntactic and social. Syntactic category includes properties such as tweet length, the order of words in it, the words themselves, slang words, hashtags and named entities in the tweet. On the other hand, social properties consist of `is reply', and `reply time'. We investigate the degree to which the tweet representations encode these properties. We assume that if we cannot train a classifier to predict a property based on its tweet representation, then this property is not encoded in this representation. For example, the model which preserves the tweet length should perform well in predicting the length given the representation generated from the model. Though these elementary property prediction tasks are not directly related to any downstream application, knowing that the model is good at modeling a particular property (e.g., the social properties) indicates that it could excel in correlated applications (e.g., user profiling task). In this work we perform an extensive evaluation of 9 unsupervised and 4 supervised tweet representation models, using 8 different properties. The most relevant work is that of Adi et al. [5], which investigates three sentence properties in comparing unsupervised sentence representation models such as average of words vectors and LSTM auto-encoders. We differ from their work in two ways: (1) While they focus on sentences, we focus on social media posts which opens up the challenge of considering multiple salient properties such as hashtags, named entities, conversations and so on. (2) While they work with only unsupervised representation-learning models, we investigate the traditional unsupervised methods (BOW, LDA), unsupervised representation learning methods (Siamese CBOW, Tweet2Vec), as well as supervised methods (CNN, BLSTM)."
],
"extractive_spans": [],
"free_form_answer": "Through 8 different property prediction tasks",
"highlighted_evidence": [
"In this section we list down the set of proposed elementary property prediction tasks to test the characteristics of a tweet embedding. ",
"In this work we perform an extensive evaluation of 9 unsupervised and 4 supervised tweet representation models, using 8 different properties. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"",
"",
""
],
"question": [
"Do they report results only for English data?",
"What conclusions do the authors draw from their experiments?",
"In what way does each classifier evaluate one of the syntactic or social properties which are salient for a tweet?"
],
"question_id": [
"101d7a355e8bf6d1860917876ee0b9971eae7a2f",
"4288621e960ffbfce59ef1c740d30baac1588b9b",
"c3befe7006ca81ce64397df654c31c11482dafbe"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Details of the Set of Proposed Elementary Property Prediction Tasks",
"Table 2: Fine-grained analysis of supervised/unsupervised models",
"Table 3: Elementary Property Prediction Task F1-Score (%) - Performance Comparison"
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What conclusions do the authors draw from their experiments?",
"In what way does each classifier evaluate one of the syntactic or social properties which are salient for a tweet?"
] | [
[
"1611.04887-Unsupervised-2",
"1611.04887-Unsupervised-0",
"1611.04887-Unsupervised-5",
"1611.04887-Unsupervised-3",
"1611.04887-Introduction-1",
"1611.04887-Unsupervised-7",
"1611.04887-Supervised-1",
"1611.04887-Supervised-2",
"1611.04887-Conclusion-0",
"1611.04887-Unsupervised-8",
"1611.04887-Unsupervised-6",
"1611.04887-Supervised-0",
"1611.04887-Unsupervised-1",
"1611.04887-Supervised-3",
"1611.04887-Unsupervised-4"
],
[
"1611.04887-Introduction-1",
"1611.04887-Elementary Property Prediction Tasks-0"
]
] | [
"Supervised models CNN, LSTM and BLSTM and unsupervised models BOW, DSSM, STV and T2V can encapsulate most of the syntactic and social properties. Tweet length affects the task prediction accuracies for all models. LDA is insensitive to input word order, but, CNN, LSTM\nand BLSTM are not.",
"Through 8 different property prediction tasks"
] | 138 |
1808.10006 | Correcting Length Bias in Neural Machine Translation | We study two problems in neural machine translation (NMT). First, in beam search, whereas a wider beam should in principle help translation, it often hurts NMT. Second, NMT has a tendency to produce translations that are too short. Here, we argue that these problems are closely related and both rooted in label bias. We show that correcting the brevity problem almost eliminates the beam problem; we compare some commonly-used methods for doing this, finding that a simple per-word reward works well; and we introduce a simple and quick way to tune this reward using the perceptron algorithm. | {
"paragraphs": [
[
"Although highly successful, neural machine translation (NMT) systems continue to be plagued by a number of problems. We focus on two here: the beam problem and the brevity problem.",
"First, machine translation systems rely on heuristics to search through the intractably large space of possible translations. Most commonly, beam search is used during the decoding process. Traditional statistical machine translation systems often rely on large beams to find good translations. However, in neural machine translation, increasing the beam size has been shown to degrade performance. This is the last of the six challenges identified by BIBREF0 .",
"The second problem, noted by several authors, is that NMT tends to generate translations that are too short. BIBREF1 and BIBREF0 address this by dividing translation scores by their length, inspired by work on audio chords BIBREF2 . A similar method is also used by Google's production system BIBREF3 . A third simple method used by various authors BIBREF4 , BIBREF5 , BIBREF6 is a tunable reward added for each output word. BIBREF7 and BIBREF8 propose variations of this reward that enable better guarantees during search.",
"In this paper, we argue that these two problems are related (as hinted at by BIBREF0 ) and that both stem from label bias, an undesirable property of models that generate sentences word by word instead of all at once.",
"The typical solution is to introduce a sentence-level correction to the model. We show that making such a correction almost completely eliminates the beam problem. We compare two commonly-used corrections, length normalization and a word reward, and show that the word reward is slightly better.",
"Finally, instead of tuning the word reward using grid search, we introduce a way to learn it using a perceptron-like tuning method. We show that the optimal value is sensitive both to task and beam size, implying that it is important to tune for every model trained. Fortunately, tuning is a quick post-training step."
],
[
"Current neural machine translation models are examples of locally normalized models, which estimate the probability of generating an output sequence INLINEFORM0 as INLINEFORM1 ",
"For any partial output sequence INLINEFORM0 , let us call INLINEFORM1 , where INLINEFORM2 ranges over all possible completions of INLINEFORM3 , the suffix distribution of INLINEFORM4 . The suffix distribution must sum to one, so if the model overestimates INLINEFORM5 , there is no way for the suffix distribution to downgrade it. This is known as label bias BIBREF9 , BIBREF10 ."
],
[
"Label bias was originally identified in the context of HMMs and MEMMs for sequence-labeling tasks, where the input sequence INLINEFORM0 and output sequence INLINEFORM1 have the same length, and INLINEFORM2 is conditioned only on the partial input sequence INLINEFORM3 . In this case, since INLINEFORM4 has no knowledge of future inputs, it's much more likely to be incorrectly estimated. For example, suppose we had to translate, word-by-word, un hélicoptère to a helicopter (Figure FIGREF2 ). Given just the partial input un, there is no way to know whether to translate it as a or an. Therefore, the probability for the incorrect translation INLINEFORM5 will turn out to be an overestimate. As a result, the model will overweight translations beginning with an, regardless of the next input word.",
"This effect is most noticeable when the suffix distribution has low entropy, because even when new input (hélicoptère) is revealed, the model will tend to ignore it. For example, suppose that the available translations for hélicoptère are helicopter, chopper, whirlybird, and autogyro. The partial translation a must divide its probability mass among the three translations that start with a consonant, while an gives all its probability mass to autogyro, causing the incorrect translation an autogyro to end up with the highest probability.",
"In this example, INLINEFORM0 , even though overestimated, is still lower than INLINEFORM1 , and wins only because its suffixes have higher probability. Greedy search would prune the incorrect prefix an and yield the correct output. In general, then, we might expect greedy or beam search to alleviate some symptoms of label bias. Namely, a prefix with a low-entropy suffix distribution can be pruned if its probability is, even though overestimated, not among the highest probabilities. Such an observation was made by BIBREF11 in the context of dependency parsing, and we will see next that precisely such a situation affects output length in NMT."
],
[
"In NMT, unlike the word-by-word translation example in the previous section, each output symbol is conditioned on the entire input sequence. Nevertheless, it's still possible to overestimate or underestimate INLINEFORM0 , so the possibility of label bias still exists. We expect that it will be more visible with weaker models, that is, with less training data.",
"Moreover, in NMT, the output sequence is of variable length, and generation of the output sequence stops when </s> is generated. In effect, for any prefix ending with </s>, the suffix distribution has zero entropy. This situation parallels example of the previous section closely: if the model overestimates the probability of outputting </s>, it may proceed to ignore the rest of the input and generate a truncated translation.",
"Figure FIGREF4 illustrates how this can happen. Although the model can learn not to prefer shorter translations by predicting a low probability for INLINEFORM0 early on, at each time step, the score of INLINEFORM1 puts a limit on the total remaining score a translation can have; in the figure, the empty translation has score INLINEFORM2 , so that no translation can have score lower than INLINEFORM3 . This lays a heavy burden on the model to correctly guess the total score of the whole translation at the outset.",
"As in our label-bias example, greedy search would prune the incorrect empty translation. More generally, consider beam search: at time step INLINEFORM0 , only the top INLINEFORM1 partial or complete translations are retained while the rest are pruned. (Implementations of beam search vary in the details, but this variant is simplest for the sake of argument.) Even if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer. We call this problem (which is BIBREF0 's sixth challenge) the beam problem. Our claim, hinted at by BIBREF0 , is that the brevity problem and the beam problem are essentially the same, and that solving one will solve the other."
],
[
"To address the brevity problem, many designers of NMT systems add corrections to the model. These corrections are often presented as modifications to the search procedure. But, in our view, the brevity problem is essentially a modeling problem, and these corrections should be seen as modifications to the model (Section SECREF5 ). Furthermore, since the root of the problem is local normalization, our view is that these modifications should be trained as globally-normalized models (Section SECREF6 )."
],
[
"Without any length correction, the standard model score (higher is better) is: INLINEFORM0 ",
"To our knowledge, there are three methods in common use for adjusting the model to favor longer sentences.",
"Length normalization divides the score by INLINEFORM0 BIBREF0 , BIBREF1 , BIBREF2 : INLINEFORM1 ",
"Google's NMT system BIBREF3 relies on a more complicated correction: INLINEFORM0 ",
"Finally, some systems add a constant word reward BIBREF5 : INLINEFORM0 ",
"If INLINEFORM0 , this reduces to the baseline model. The advantage of this simple reward is that it can be computed on partial translations, making it easier to integrate into beam search."
],
[
"All of the above modifications can be viewed as modifications to the base model so that it is no longer a locally-normalized probability model.",
"To train this model, in principle, we should use something like the globally-normalized negative log-likelihood: INLINEFORM0 ",
" where INLINEFORM0 is the reference translation. However, optimizing this is expensive, as it requires performing inference on every training example or heuristic approximations BIBREF12 , BIBREF13 .",
"Alternatively, we can adopt a two-tiered model, familiar from phrase-based translation BIBREF4 , first training INLINEFORM0 and then training INLINEFORM1 while keeping the parameters of INLINEFORM2 fixed, possibly on a smaller dataset. A variety of methods, like minimum error rate training BIBREF14 , BIBREF5 , are possible, but keeping with the globally-normalized negative log-likelihood, we obtain, for the constant word reward, the gradient: INLINEFORM3 ",
" where INLINEFORM0 is the 1-best translation. Then the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1 ",
" although below, we update on a batch of sentences rather than a single sentence. Since there is only one parameter to train, we can train it on a relatively small dataset.",
"Length normalization does not have any additional parameters, with the result (in our opinion, strange) that a change is made to the model without any corresponding change to training. We could use gradient-based methods to tune the INLINEFORM0 in the GNMT correction, but the perceptron approximation turns out to drive INLINEFORM1 to INLINEFORM2 , so a different method would be needed."
],
[
"We compare the above methods in four settings, a high-resource German–English system, a medium-resource Russian–English system, and two low-resource French–English and English–French systems. For all settings, we show that larger beams lead to large BLEU and METEOR drops if not corrected. We also show that the optimal parameters can depend on the task, language pair, training data size, as well as the beam size. These values can affect performance strongly."
],
[
"Most of the experimental settings below follow the recommendations of BIBREF15 . Our high-resource, German–English data is from the 2016 WMT shared task BIBREF16 . We use a bidirectional encoder-decoder model with attention BIBREF17 . Our word representation layer has 512 hidden units, while other hidden layers have 1024 nodes. Our model is trained using Adam with a learning rate of 0.0002. We use 32k byte-pair encoding (BPE) operations learned on the combined source and target training data BIBREF19 . We train on minibatches of size 2012 words and validate every 100k sentences, selecting the final model based on development perplexity. Our medium-resource, Russian–English system uses data from the 2017 WMT translation task, which consists of roughly 1 million training sentences BIBREF20 . We use the same architecture as our German–English system, but only have 512 nodes in all layers. We use 16k BPE operations and dropout of 0.2. We train on minibatches of 512 words and validate every 50k sentences.",
"Our low-resource systems use French and English data from the 2010 IWSLT TALK shared task BIBREF21 . We build both French–English and English–French systems. These networks are the same as for the medium Russian-English task, but use only 6k BPE operations. We train on minibatches of 512 words and validate every 30k sentences, restarting Adam when the development perplexity goes up.",
"To tune our correction parameters, we use 1000 sentences from the German–English development dataset, 1000 sentences from the Russian–English development dataset, and the entire development dataset for French–English (892 sentences). We initialize the parameter, INLINEFORM0 . We use batch gradient descent, which we found to be much more stable than stochastic gradient descent, and use a learning rate of INLINEFORM1 , clipping gradients for INLINEFORM2 to 0.5. Training stops if all parameters have an update of less than 0.03 or a max of 25 epochs was reached."
],
[
"Here, we first show that the beam problem is indeed the brevity problem. We then demonstrate that solving the length problem does solve the beam problem. Tables TABREF10 , TABREF11 , and TABREF12 show the results of our German–English, Russian–English, and French–English systems respectively. Each table looks at the impact on BLEU, METEOR, and the ratio of the lengths of generated sentences compared to the gold lengths BIBREF22 , BIBREF23 . The baseline method is a standard model without any length correction. The reward method is the tuned constant word reward discussed in the previous section. Norm refers to the normalization method, where a hypothesis' score is divided by its length.",
"The top sections of Tables TABREF10 , TABREF11 , TABREF12 illustrate the brevity and beam problems in the baseline models. As beam size increases, the BLEU and METEOR scores drop significantly. This is due to the brevity problem, which is illustrated by the length ratio numbers that also drop with increased beam size. For larger beam sizes, the length of the generated output sentences are a fraction of the lengths of the correct translations. For the lower-resource French–English task, the drop is more than 8 BLEU when increasing the beam size from 10 to 150. The issue is even more evident in our Russian-English system where we increase the beam to 1000 and BLEU scores drop by more than 20 points.",
"The results of tuning the word reward, INLINEFORM0 , as described in Section SECREF6 , is shown in the second section of Tables TABREF10 , TABREF11 , and TABREF12 . In contrast to our baseline systems, our tuned word reward always fixes the brevity problem (length ratios are approximately 1.0), and generally fixes the beam problem. An optimized word reward score always leads to improvements in METEOR scores over any of the best baselines. Across all language pairs, reward and norm have close METEOR scores, though the reward method wins out slightly. BLEU scores for reward and norm also increase over the baseline in most cases, despite BLEU's inherent bias towards shorter sentences. Most notably, whereas the baseline Russian–English system lost more than 20 BLEU points when the beam was increased to 1000, our tuned reward score resulted in a BLEU gain over any baseline beam size. Whereas in our baseline systems, the length ratio decreases with larger beam sizes, our tuned word reward results in length ratios of nearly 1.0 across all language pairs, mitigating many of the issues of the brevity problem.",
"We note that the beam problem in NMT exists for relatively small beam sizes – especially when compared to traditional beam sizes in SMT systems. On our medium-resource Russian–English system, we investigate the full impact of this problem using a much larger beam size of 1000. In Table TABREF10 , we can see that the beam problem is particularly pronounced. The first row of the table shows the uncorrected, baseline score. From a beam of 10 to a beam of 1000, the drop in BLEU scores is over 20 points. This is largely due to the brevity problem discussed earlier. The second row of the table shows the length of the translated outputs compared to the lengths of the correct translations. Though the problem persists even at a beam size of 10, at a beam size of 1000, our baseline system generates less than one third the number of words that are in the correct translations. Furthermore, 37.3% of our translated outputs have sentences of length 0. In other words, the most likely translation is to immediately generate the stop symbol. This is the problem visualized in Figure FIGREF4 .",
"However, when we tune our word reward score with a beam of 1000, the problem mostly goes away. Over the uncorrected baseline, we see a 22.0 BLEU point difference for a beam of 1000. Over the uncorrected baseline with a beam of 10, the corrected beam of 1000 gets a BLEU gain of 0.8 BLEU. However, the beam of 1000 still sees a drop of less than 1.0 BLEU over the best corrected version. The word reward method beats the uncorrected baseline and the length normalization correction in almost all cases.",
"Another way to demonstrate that the beam problem is the same as the brevity problem is to look at the translations generated by baseline systems on shorter sentences. Figure FIGREF18 shows the BLEU scores of the Russian–English system for beams of size 10 and 1000 on sentences of varying lengths, with and without correcting lengths. The x-axes of the figure are cumulative: length 20 includes sentences of length 0–20, while length 10 includes 0–10. It is worth noting that BLEU is a word-level metric, but the systems were built using BPE; so the sequences actually generated are longer than the x-axes would suggest.",
"The baseline system on sentences with 10 words or less still has relatively high BLEU scores—even for a beam of 1000. Though there is a slight drop in BLEU (less than 2), it is not nearly as severe as when looking at the entire test set (more than 20). When correcting for length with normalization or word reward, the problem nearly disappears when considering the entire test set, with reward doing slightly better. For comparison, the rightmost points in each of the subplots correspond to the BLEU scores in columns 10 and 1000 of Table TABREF10 . This suggests that the beam problem is strongly related to the brevity problem.",
"The interaction between the length problem and the beam problem can be visualized in the histograms of Figure FIGREF19 on the Russian–English system. In the upper left plot, the uncorrected model with beam 10 has the majority of the generated sentences with a length ratio close to 1.0, the gold lengths. Going down the column, as the beam size increases, the distribution of length ratios skews closer to 0. By a beam size of 1000, 37% of the sentences have a length of 0. However, both the word reward and the normalized models remain very peaked around a length ratio of 1.0 even as the beam size increases."
],
[
"Above, we have shown that fixing the length problem with a word reward score fixes the beam problem. However these results are contingent upon choosing an adequate word reward score, which we have done in our experiments by optimization using a perceptron loss. Here, we show the sensitivity of systems to the value of this penalty, as well as the fact that there is not one correct penalty for all tasks. It is dependent on a myriad of factors including, beam size, dataset, and language pair.",
"In order to investigate how sensitive a system is to the reward score, we varied values of INLINEFORM0 from 0 to INLINEFORM1 on both our German–English and Russian–English systems with a beam size of 50. BLEU scores and length ratios on 1000 heldout development sentences are shown in Figure FIGREF27 . The length ratio is correlated with the word reward as expected, and the BLEU score varies by more than 5 points for German–English and over 4.5 points for Russian–English. On German–English, our method found a value of INLINEFORM2 , which is slightly higher than optimal; this is because the heldout sentences have a slightly shorter length ratio than the training sentences. Conversely, on Russian–English, our found value of INLINEFORM3 is slightly lower than optimal as these heldout sentences have a slightly higher length ratio than the sentences used in training.",
"Tuning the reward penalty using the method described in Section SECREF6 resulted in consistent improvements in METEOR scores and length ratios across all of our systems and language pairs. Tables TABREF10 , TABREF11 , and TABREF12 show the optimized value of INLINEFORM0 for each beam size. Within a language pair, the optimal value of INLINEFORM1 is different for every beam size. Likewise, for a given beam size, the optimal value is different for every system. Our French–English and English–French systems in Table TABREF12 have the exact same architecture, data, and training criteria. Yet, even for the same beam size, the tuned word reward scores are very different.",
"Low-resource neural machine translation performs significantly worse than high-resource machine translation BIBREF0 . Table TABREF26 looks at the impact of training data size on BLEU scores and the beam problem by using 10% and 50% of the available Russian–English data. Once again, the optimal value of INLINEFORM0 is different across all systems and beam sizes. Interestingly, as the amount of training data decreases, the gains in BLEU using a tuned reward penalty increase with larger beam sizes. This suggests that the beam problem is more prevalent in lower-resource settings, likely due to the fact that less training data can increase the effects of label bias.",
"Fortunately, the tuning process is very inexpensive. Although it requires decoding on a development dataset multiple times, we only need a small dataset. The time required for tuning our French–English and German–English systems is shown in Table TABREF13 . These experiments were run on an Nvidia GeForce GTX 1080Ti. The tuning usually takes a few minutes to hours, which is just a fraction of the overall training time. We note that there are numerous optimizations that could be taken to speed this up even more, such as storing the decoding lattice for partial reuse. However, we leave this for future work."
],
[
"Tuning the word reward score generally had higher METEOR scores than length normalization across all of our settings. With BLEU, length normalization beat the word reward on German-English and French–English, but tied on English-French and lost on Russian–English. For the largest beam of 1000, the tuned word reward had a higher BLEU than length normalization. Overall, the two methods have relatively similar performance, but the tuned word reward has the more theoretically justified, globally-normalized derivation – especially in the context of label bias' influence on the brevity problem."
],
[
"We have explored simple and effective ways to alleviate or eliminate the beam problem. We showed that the beam problem can largely be explained by the brevity problem, which results from the locally-normalized structure of the model. We compared two corrections to the model and introduced a method to learn the parameters of these corrections. Because this method is helpful and easy, we hope to see it included to make stronger baseline NMT systems.",
"We have argued that the brevity problem is an example of label bias, and that the solution is a very limited form of globally-normalized model. These can be seen as the simplest case of the more general problem of label bias and the more general solution of globally-normalized models for NMT BIBREF24 , BIBREF25 , BIBREF26 , BIBREF13 . Some questions for future research are:"
],
[
"This research was supported in part by University of Southern California, subcontract 67108176 under DARPA contract HR0011-15-C-0115, and an Amazon Research Award to Chiang."
]
],
"section_name": [
"Introduction",
"Problem",
"Label bias in sequence labeling",
"Length bias in NMT",
"Correcting Length",
"Models",
"Training",
"Experiments",
"Data and settings",
"Solving the length problem solves the beam problem",
"Tuning word reward",
"Word reward vs. length normalization",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"5bef034add26189d63f3eae5708939703d3b9435",
"ee254426978e8cf2f433e37dfe3b048129018ac0"
],
"answer": [
{
"evidence": [
"Alternatively, we can adopt a two-tiered model, familiar from phrase-based translation BIBREF4 , first training INLINEFORM0 and then training INLINEFORM1 while keeping the parameters of INLINEFORM2 fixed, possibly on a smaller dataset. A variety of methods, like minimum error rate training BIBREF14 , BIBREF5 , are possible, but keeping with the globally-normalized negative log-likelihood, we obtain, for the constant word reward, the gradient: INLINEFORM3",
"where INLINEFORM0 is the 1-best translation. Then the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1",
"although below, we update on a batch of sentences rather than a single sentence. Since there is only one parameter to train, we can train it on a relatively small dataset."
],
"extractive_spans": [],
"free_form_answer": "Optimal per-word reward is found using SGD, which in this case is the same as the perceptron algorithm",
"highlighted_evidence": [
"A variety of methods, like minimum error rate training BIBREF14 , BIBREF5 , are possible, but keeping with the globally-normalized negative log-likelihood, we obtain, for the constant word reward, the gradient: INLINEFORM3\n\nwhere INLINEFORM0 is the 1-best translation. Then the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1\n\nalthough below, we update on a batch of sentences rather than a single sentence."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Alternatively, we can adopt a two-tiered model, familiar from phrase-based translation BIBREF4 , first training INLINEFORM0 and then training INLINEFORM1 while keeping the parameters of INLINEFORM2 fixed, possibly on a smaller dataset. A variety of methods, like minimum error rate training BIBREF14 , BIBREF5 , are possible, but keeping with the globally-normalized negative log-likelihood, we obtain, for the constant word reward, the gradient: INLINEFORM3",
"where INLINEFORM0 is the 1-best translation. Then the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1",
"although below, we update on a batch of sentences rather than a single sentence. Since there is only one parameter to train, we can train it on a relatively small dataset."
],
"extractive_spans": [
"hen the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1"
],
"free_form_answer": "",
"highlighted_evidence": [
"Alternatively, we can adopt a two-tiered model, familiar from phrase-based translation BIBREF4 , first training INLINEFORM0 and then training INLINEFORM1 while keeping the parameters of INLINEFORM2 fixed, possibly on a smaller dataset. A variety of methods, like minimum error rate training BIBREF14 , BIBREF5 , are possible, but keeping with the globally-normalized negative log-likelihood, we obtain, for the constant word reward, the gradient: INLINEFORM3\n\nwhere INLINEFORM0 is the 1-best translation. Then the stochastic gradient descent update is just the familiar perceptron rule: INLINEFORM1\n\nalthough below, we update on a batch of sentences rather than a single sentence. Since there is only one parameter to train, we can train it on a relatively small dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"0b41a6f9eda11a31e702aeef0ae39569ceb72836",
"0ffb80a5def565d7ce4dbdc910aa10253823088f",
"5e1ff3924687a74d8ec0203c8a912693107960db"
],
"answer": [
{
"evidence": [
"The results of tuning the word reward, INLINEFORM0 , as described in Section SECREF6 , is shown in the second section of Tables TABREF10 , TABREF11 , and TABREF12 . In contrast to our baseline systems, our tuned word reward always fixes the brevity problem (length ratios are approximately 1.0), and generally fixes the beam problem. An optimized word reward score always leads to improvements in METEOR scores over any of the best baselines. Across all language pairs, reward and norm have close METEOR scores, though the reward method wins out slightly. BLEU scores for reward and norm also increase over the baseline in most cases, despite BLEU's inherent bias towards shorter sentences. Most notably, whereas the baseline Russian–English system lost more than 20 BLEU points when the beam was increased to 1000, our tuned reward score resulted in a BLEU gain over any baseline beam size. Whereas in our baseline systems, the length ratio decreases with larger beam sizes, our tuned word reward results in length ratios of nearly 1.0 across all language pairs, mitigating many of the issues of the brevity problem."
],
"extractive_spans": [
" tuned word reward "
],
"free_form_answer": "",
"highlighted_evidence": [
"The results of tuning the word reward, INLINEFORM0 , as described in Section SECREF6 , is shown in the second section of Tables TABREF10 , TABREF11 , and TABREF12 . In contrast to our baseline systems, our tuned word reward always fixes the brevity problem (length ratios are approximately 1.0), and generally fixes the beam problem. An optimized word reward score always leads to improvements in METEOR scores over any of the best baselines."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To our knowledge, there are three methods in common use for adjusting the model to favor longer sentences.",
"Length normalization divides the score by INLINEFORM0 BIBREF0 , BIBREF1 , BIBREF2 : INLINEFORM1",
"Google's NMT system BIBREF3 relies on a more complicated correction: INLINEFORM0",
"Finally, some systems add a constant word reward BIBREF5 : INLINEFORM0"
],
"extractive_spans": [],
"free_form_answer": "Length normalization; Google’s NMT correction; constant word reward",
"highlighted_evidence": [
"To our knowledge, there are three methods in common use for adjusting the model to favor longer sentences.\n\nLength normalization divides the score by INLINEFORM0 BIBREF0 , BIBREF1 , BIBREF2 : INLINEFORM1\n\nGoogle's NMT system BIBREF3 relies on a more complicated correction: INLINEFORM0\n\nFinally, some systems add a constant word reward BIBREF5 : INLINEFORM0"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Length normalization divides the score by INLINEFORM0 BIBREF0 , BIBREF1 , BIBREF2 : INLINEFORM1",
"Google's NMT system BIBREF3 relies on a more complicated correction: INLINEFORM0",
"Finally, some systems add a constant word reward BIBREF5 : INLINEFORM0"
],
"extractive_spans": [
"Length normalization",
"Google's NMT",
"constant word reward"
],
"free_form_answer": "",
"highlighted_evidence": [
"Length normalization divides the score by INLINEFORM0 BIBREF0 , BIBREF1 , BIBREF2 : INLINEFORM1\n\nGoogle's NMT system BIBREF3 relies on a more complicated correction: INLINEFORM0\n\nFinally, some systems add a constant word reward BIBREF5 : INLINEFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2e187a49c157d5dbe63f323ebf903762336ff3a9",
"ac8acf1a1b77d03b0d3572c668ea24988ef907be",
"f898c6aeb12e316d09cc25bac30f2694a8792ac9"
],
"answer": [
{
"evidence": [
"As in our label-bias example, greedy search would prune the incorrect empty translation. More generally, consider beam search: at time step INLINEFORM0 , only the top INLINEFORM1 partial or complete translations are retained while the rest are pruned. (Implementations of beam search vary in the details, but this variant is simplest for the sake of argument.) Even if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer. We call this problem (which is BIBREF0 's sixth challenge) the beam problem. Our claim, hinted at by BIBREF0 , is that the brevity problem and the beam problem are essentially the same, and that solving one will solve the other."
],
"extractive_spans": [],
"free_form_answer": "Using a wider beam increases the probability of a shorter translation to remain in the top k variants and eventually score higher than any longer and more accurate translation variant",
"highlighted_evidence": [
"More generally, consider beam search: at time step INLINEFORM0 , only the top INLINEFORM1 partial or complete translations are retained while the rest are pruned.",
"Even if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer.",
"Wideing "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We note that the beam problem in NMT exists for relatively small beam sizes – especially when compared to traditional beam sizes in SMT systems. On our medium-resource Russian–English system, we investigate the full impact of this problem using a much larger beam size of 1000. In Table TABREF10 , we can see that the beam problem is particularly pronounced. The first row of the table shows the uncorrected, baseline score. From a beam of 10 to a beam of 1000, the drop in BLEU scores is over 20 points. This is largely due to the brevity problem discussed earlier. The second row of the table shows the length of the translated outputs compared to the lengths of the correct translations. Though the problem persists even at a beam size of 10, at a beam size of 1000, our baseline system generates less than one third the number of words that are in the correct translations. Furthermore, 37.3% of our translated outputs have sentences of length 0. In other words, the most likely translation is to immediately generate the stop symbol. This is the problem visualized in Figure FIGREF4 ."
],
"extractive_spans": [
"brevity problem"
],
"free_form_answer": "",
"highlighted_evidence": [
"From a beam of 10 to a beam of 1000, the drop in BLEU scores is over 20 points. This is largely due to the brevity problem discussed earlier."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As in our label-bias example, greedy search would prune the incorrect empty translation. More generally, consider beam search: at time step INLINEFORM0 , only the top INLINEFORM1 partial or complete translations are retained while the rest are pruned. (Implementations of beam search vary in the details, but this variant is simplest for the sake of argument.) Even if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer. We call this problem (which is BIBREF0 's sixth challenge) the beam problem. Our claim, hinted at by BIBREF0 , is that the brevity problem and the beam problem are essentially the same, and that solving one will solve the other."
],
"extractive_spans": [
"if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer. We call this problem (which is BIBREF0 's sixth challenge) the beam problem."
],
"free_form_answer": "",
"highlighted_evidence": [
"As in our label-bias example, greedy search would prune the incorrect empty translation. More generally, consider beam search: at time step INLINEFORM0 , only the top INLINEFORM1 partial or complete translations are retained while the rest are pruned. (Implementations of beam search vary in the details, but this variant is simplest for the sake of argument.) Even if a translation ending at time INLINEFORM2 scores higher than a longer translation, as long as it does not fall within the top INLINEFORM3 when compared with partial translations of length INLINEFORM4 (or complete translations of length at most INLINEFORM5 ), it will be pruned and unable to block the longer translation. But if we widen the beam ( INLINEFORM6 ), then translation accuracy will suffer. We call this problem (which is BIBREF0 's sixth challenge) the beam problem. Our claim, hinted at by BIBREF0 , is that the brevity problem and the beam problem are essentially the same, and that solving one will solve the other."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is a per-word reward tuned with the perceptron algorithm?",
"What methods are used to correct the brevity problem?",
"Why does wider beam search hurt NMT?"
],
"question_id": [
"5d0a3f8ca3882f87773cf8c2ef1b4f72b9cc241e",
"dce27c49b9bf1919ca545e04663507d83bb42dbe",
"991ea04072b3412928be5e6e903cfa54eeac3951"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Label bias causes this toy word-by-word translation model to translate French un hélicoptère incorrectly to an autogyro.",
"Figure 2: A locally normalized model must determine, at each time step, a “budget” for the total remaining log-probability. In this example sentence, “The British women won Olymp ic gold in p airs row ing,” the empty translation has initial position 622 in the beam. Already by the third step of decoding, the correct translation has a lower score than the empty translation. However, using greedy search, a nonempty translation would be returned.",
"Table 1: Results of the Russian–English translation system. We report BLEU and METEOR scores, as well as the ratio of the length of generated sentences compared to the correct translations (length). γ is the word reward score discovered during training. Here, we examine a much larger beam (1000). The beam problem is more pronounced at this scale, with the baseline system losing over 20 BLEU points when increasing the beam from size 10 to 1000. However, both our tuned length reward score and length normalization recover most of this loss.",
"Table 2: Results of the high-resource German–English system. Rows: BLEU, METEOR, length = ratio of output to reference length; γ = learned parameter value. While baseline performance decreases with beam size due to the brevity problem, other methods perform more consistently across beam sizes. Length normalization (norm) gets the best BLEU scores, but similar METEOR scores to the word reward.",
"Table 3: Results of low-resource French–English and English–French systems. Rows: BLEU, METEOR, length = ratio of output to reference length; γ = learned parameter value. While baseline performance decreases with beam size due to the brevity problem, other methods perform more consistently across beam sizes. Word reward gets the best scores in both directions on METEOR. Length normalization (norm) gets the best BLEU scores in Fra-Eng due to the slight bias of BLEU towards shorter translations.",
"Table 4: Tuning time on top of baseline training time. Times are in minutes on 1000 dev examples (German– English) or 892 dev examples (French–English). Due to the much larger model size, we only looked at beam sizes up to 75 for German–English.",
"Figure 3: Impact of beam size on BLEU score when varying reference sentence lengths (in words) for Russian– English. The x-axis is cumulative moving right; length 20 includes sentences of length 0-20, while length 10 includes 0-10. As reference length increases, the BLEU scores of a baseline system with beam size of 10 remain nearly constant. However, a baseline system with beam 1000 has a high BLEU score for shorter sentences, but a very low score when the entire test set is used. Our tuned reward and normalized models do not suffer from this problem on the entire test set, but take a slight performance hit on the shortest sentences.",
"Figure 4: Histogram of length ratio between generated sentences and gold varied across methods and beam size for Russian–English. Note that the baseline method skews closer 0 as the beam size increases, while our other methods remain peaked around 1.0. There are a few outliers to the right that have been cut off, as well as the peaks at 0.0 and 1.0.",
"Figure 5: Effect of word penalty on BLEU and hypothesis length for Russian–English (top) and GermanEnglish (bottom) on 1000 unseen dev examples with beams of 50. Note that the vertical bars represent the word reward that was found during tuning.",
"Table 5: Varying the size of the Russian–English training dataset results in different optimal word reward scores (γ). In all settings, the tuned score alleviates the beam problem. As the datasets get smaller, using a tuned larger beam improves the BLEU score over a smaller tuned beam. This suggests that lower-resource systems are more susceptible to the beam problem."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"8-Figure3-1.png",
"8-Figure4-1.png",
"9-Figure5-1.png",
"10-Table5-1.png"
]
} | [
"What methods are used to correct the brevity problem?",
"Why does wider beam search hurt NMT?"
] | [
[
"1808.10006-Solving the length problem solves the beam problem-2",
"1808.10006-Models-1"
],
[
"1808.10006-Solving the length problem solves the beam problem-3",
"1808.10006-Length bias in NMT-3"
]
] | [
"Length normalization; Google’s NMT correction; constant word reward",
"Using a wider beam increases the probability of a shorter translation to remain in the top k variants and eventually score higher than any longer and more accurate translation variant"
] | 139 |
1702.02584 | Predicting Audience's Laughter Using Convolutional Neural Network | For the purpose of automatically evaluating speakers' humor usage, we build a presentation corpus containing humorous utterances based on TED talks. Compared to previous data resources supporting humor recognition research, ours has several advantages, including (a) both positive and negative instances coming from a homogeneous data set, (b) containing a large number of speakers, and (c) being open. Focusing on using lexical cues for humor recognition, we systematically compare a newly emerging text classification method based on Convolutional Neural Networks (CNNs) with a well-established conventional method using linguistic knowledge. The advantages of the CNN method are both getting higher detection accuracies and being able to learn essential features automatically. | {
"paragraphs": [
[
"The ability to make effective presentations has been found to be linked with success at school and in the workplace. Humor plays an important role in successful public speaking, e.g., helping to reduce public speaking anxiety often regarded as the most prevalent type of social phobia, generating shared amusement to boost persuasive power, and serving as a means to attract attention and reduce tension BIBREF0 .",
"Automatically simulating an audience's reactions to humor will not only be useful for presentation training, but also improve conversational systems by giving machines more empathetic power. The present study reports our efforts in recognizing utterances that cause laughter in presentations. These include building a corpus from TED talks and using Convolutional Neural Networks (CNNs) in the recognition.",
"The remainder of the paper is organized as follows: Section SECREF2 briefly reviews the previous related research; Section SECREF3 describes the corpus we collected from TED talks; Section SECREF4 describes the text classification methods; Section SECREF5 reports on our experiments; finally, Section SECREF6 discusses the findings of our study and plans for future work."
],
[
"Humor recognition refers to the task of deciding whether a sentence/spoken-utterance expresses a certain degree of humor. In most of the previous studies BIBREF1 , BIBREF2 , BIBREF3 , humor recognition was modeled as a binary classification task. In the seminal work BIBREF1 , a corpus of INLINEFORM0 “one-liners\" was created using daily joke websites to collect humorous instances while using formal writing resources (e.g., news titles) to obtain non-humorous instances. Three humor-specific stylistic features, including alliteration, antonymy, and adult slang were utilized together with content-based features to build classifiers. In a recent work BIBREF3 , a new corpus was constructed from the Pun of the Day website. BIBREF3 explained and computed latent semantic structure features based on the following four aspects: (a) Incongruity, (b) Ambiguity, (c) Interpersonal Effect, and (d) Phonetic Style. In addition, Word2Vec BIBREF4 distributed representations were utilized in the model building.",
"Beyond lexical cues from text inputs, other research has also utilized speakers' acoustic cues BIBREF2 , BIBREF5 . These studies have typically used audio tracks from TV shows and their corresponding captions in order to categorize characters' speaking turns as humorous or non-humorous. Utterances prior to canned laughter that was manually inserted into the shows were treated as humorous, while other utterances were treated as negative cases.",
"Convolutional Neural Networks (CNNs) have recently been successfully used in several text categorization tasks (e.g., review rating, sentiment recognition, and question type recognition). Kim2014,Johnson2015,Zhang2015 suggested that using a simple CNN setup, which entails one layer of convolution on top of word embedding vectors, achieves excellent results on multiple tasks. Deep learning recently has been applied to computational humor research BIBREF5 , BIBREF6 . In Bertero2016LREC, CNN was found to be the best model that uses both acoustic and lexical cues for humor recognition. By using Long Short Time Memory (LSTM) cells BIBREF7 , Bertero2016NAACL showed that Recurrent Neural Networks (RNNs) perform better on modeling sequential information than Conditional Random Fields (CRFs) BIBREF8 .",
"From the brief review, it is clear that corpora used in humor research so far are limited to one-line puns or jokes and conversations from TV comedy shows. There is a great need for an open corpus that can support investigating humor in presentations. CNN-based text categorization methods have been applied to humor recognition (e.g., in BIBREF5 ) but with limitations: (a) a rigorous comparison with the state-of-the-art conventional method examined in yang-EtAl:2015:EMNLP2 is missing; (b) CNN's performance in the previous research is not quite clear; and (c) some important techniques that can improve CNN performance (e.g., using varied-sized filters and dropout regularization BIBREF10 ) were not applied. Therefore, the present study is meant to address these limitations."
],
[
"TED Talks are recordings from TED conferences and other special TED programs. In the present study, we focused on the transcripts of the talks. Most transcripts of the talks contain the markup `(Laughter)', which represents where audiences laughed aloud during the talks. This special markup was used to determine utterance labels.",
"We collected INLINEFORM0 TED Talk transcripts. An example transcription is given in Figure FIGREF4 . The collected transcripts were split into sentences using the Stanford CoreNLP tool BIBREF11 . In this study, sentences containing or immediately followed by `(Laughter)' were used as `Laughter' sentences, as shown in Figure FIGREF4 ; all other sentences were defined as `No-Laughter' sentences. Following BIBREF1 and BIBREF3 , we selected the same numbers ( INLINEFORM1 ) of `Laughter' and `No-Laughter' sentences. To minimize possible topic shifts between positive and negative instances, for each positive instance, we picked one negative instance nearby (the context window was 7 sentences in this study). For example, in Figure FIGREF4 , a negative instance (corresponding to `sent-2') was selected from the nearby sentences ranging from `sent-7' to `sent+7'."
],
[
""
],
[
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 ). More details can be found in BIBREF3 ."
],
[
"Our CNN-based text classification's setup follows Kim2014. Figure FIGREF17 depicts the model's details. From the left side's input texts to the right side's prediction labels, different shapes of tensors flow through the entire network for solving the classification task in an end-to-end mode.",
"Firstly, tokenized text strings were converted to a INLINEFORM0 tensor with shape INLINEFORM1 , where INLINEFORM2 represents sentences' maximum length while INLINEFORM3 represents the word-embedding dimension. In this study, we utilized the Word2Vec BIBREF4 embedding vectors ( INLINEFORM4 ) that were trained on 100 billion words of Google News. Next, the embedding matrix was fed into a INLINEFORM5 convolution network with multiple filters. To cover varied reception fields, we used filters of sizes of INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . For each filter size, INLINEFORM9 filters were utilized. Then, max pooling, which stands for finding the largest value from a vector, was applied to each feature map (total INLINEFORM10 feature maps) output by the INLINEFORM11 convolution. Finally, maximum values from all of INLINEFORM12 filters were formed as a flattened vector to go through a fully connected (FC) layer to predict two possible labels (Laughter vs. No-Laughter). Note that for INLINEFORM13 convolution and FC layer's input, we applied `dropout' BIBREF10 regularization, which entails randomly setting a proportion of network weights to be zero during model training, to overcome over-fitting. By using cross-entropy as the learning metric, the whole sequential network (all weights and bias) could be optimized by using any SGD optimization, e.g., Adam BIBREF13 , Adadelta BIBREF14 , and so on."
],
[
"",
"We used two corpora: the TED Talk corpus (denoted as TED) and the Pun of the Day corpus (denoted as Pun). Note that we normalized words in the Pun data to lowercase to avoid a possibly elevated result caused by a special pattern: in the original format, all negative instances started with capital letters. The Pun data allows us to verify that our implementation is consistent with the work reported in yang-EtAl:2015:EMNLP2.",
"In our experiment, we firstly divided each corpus into two parts. The smaller part (the Dev set) was used for setting various hyper-parameters used in text classifiers. The larger portion (the CV set) was then formulated as a 10-fold cross-validation setup for obtaining a stable and comprehensive model evaluation result. For the PUN data, the Dev contains 482 sentences, while the CV set contains 4344 sentences. For the TED data, the Dev set contains 1046 utterances, while the CV set contains 8406 utterances. Note that, with a goal of building a speaker-independent humor detector, when partitioning our TED data set, we always kept all utterances of a single talk within the same partition. To our knowledge, this is the first time that such a strict experimental setup has been used in recognizing humor in conversations, and it makes the humor recognition task on the TED data quite challenging.",
"When building conventional models, we developed our own feature extraction scripts and used the SKLL python package for building Random Forest models. When implementing CNN, we used the Keras Python package. Regarding hyper-parameter tweaking, we utilized the Tree Parzen Estimation (TPE) method as detailed in TPE. After running 200 iterations of tweaking, we ended up with the following selection: INLINEFORM0 is 6 (entailing that the various filter sizes are INLINEFORM1 ), INLINEFORM2 is 100, INLINEFORM3 is INLINEFORM4 and INLINEFORM5 is INLINEFORM6 , optimization uses Adam BIBREF13 . When training the CNN model, we randomly selected INLINEFORM7 of the training data as the validation set for using early stopping to avoid over-fitting.",
"On the Pun data, the CNN model shows consistent improved performance over the conventional model, as suggested in BIBREF3 . In particular, precision has been greatly increased from INLINEFORM0 to INLINEFORM1 . On the TED data, we also observed that the CNN model helps to increase precision (from INLINEFORM2 to INLINEFORM3 ) and accuracy (from INLINEFORM4 to INLINEFORM5 ). The empirical evaluation results suggest that the CNN-based model has an advantage on the humor recognition task. In addition, focusing on the system development time, generating and implementing those features in the conventional model would take days or even weeks. However, the CNN model automatically learns its optimal feature representation and can adjust the features automatically across data sets. This makes the CNN model quite versatile for supporting different tasks and data domains. Compared with the humor recognition results on the Pun data, the results on the TED data are still quite low, and more research is needed to fully handle humor in authentic presentations."
],
[
"",
"For the purpose of monitoring how well speakers can use humor during their presentations, we have created a corpus from TED talks. Compared to the existing (albeit limited) corpora for humor recognition research, ours has the following advantages: (a) it was collected from authentic talks, rather than from TV shows performed by professional actors based on scripts; (b) it contains about 100 times more speakers compared to the limited number of actors in existing corpora. We compared two types of leading text-based humor recognition methods: a conventional classifier (e.g., Random Forest) based on human-engineered features vs. an end-to-end CNN method, which relies on its inherent representation learning. We found that the CNN method has better performance. More importantly, the representation learning of the CNN method makes it very efficient when facing new data sets.",
"Stemming from the present study, we envision that more research is worth pursuing: (a) for presentations, cues from other modalities such as audio or video will be included, similar to Bertero2016LREC; (b) context information from multiple utterances will be modeled by using sequential modeling methods."
]
],
"section_name": [
"Introduction",
"Previous Research",
"TED Talk Data",
"Methods",
"Conventional Model",
"CNN model",
"Experiments",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"4acc294c0143fe6299cc27673c3598186b20480a",
"6ad57c463d4a25f4808d3a43ae9fdfc2dbc1061a",
"f62a12f23f440c6968aa1a07fd001b599371468c"
],
"answer": [
{
"evidence": [
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 ). More details can be found in BIBREF3 ."
],
"extractive_spans": [],
"free_form_answer": "Random Forest to perform humor recognition by using the following two groups of features: latent semantic structural features and semantic distance features.",
"highlighted_evidence": [
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 ). More details can be found in BIBREF3 ."
],
"extractive_spans": [
"Random Forest BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Conventional Model",
"Following yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 ). More details can be found in BIBREF3 ."
],
"extractive_spans": [],
"free_form_answer": "Random Forest classifier using latent semantic structural features, semantic distance features and sentences' averaged Word2Vec representations",
"highlighted_evidence": [
"Conventional Model\nFollowing yang-EtAl:2015:EMNLP2, we applied Random Forest BIBREF12 to perform humor recognition by using the following two groups of features. The first group are latent semantic structural features covering the following 4 categories: Incongruity (2), Ambiguity (6), Interpersonal Effect (4), and Phonetic Pattern (4). The second group are semantic distance features, including the humor label classes from 5 sentences in the training set that are closest to this sentence (found by using a k-Nearest Neighbors (kNN) method), and each sentence's averaged Word2Vec representations ( INLINEFORM0 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"263e5eef1a46ce9bd66e7732a86d1015f12b0a6e",
"8f0965c1893cb08b6fccd08625c0bfced549e82a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Convolutional Neural Networks (CNNs) have recently been successfully used in several text categorization tasks (e.g., review rating, sentiment recognition, and question type recognition). Kim2014,Johnson2015,Zhang2015 suggested that using a simple CNN setup, which entails one layer of convolution on top of word embedding vectors, achieves excellent results on multiple tasks. Deep learning recently has been applied to computational humor research BIBREF5 , BIBREF6 . In Bertero2016LREC, CNN was found to be the best model that uses both acoustic and lexical cues for humor recognition. By using Long Short Time Memory (LSTM) cells BIBREF7 , Bertero2016NAACL showed that Recurrent Neural Networks (RNNs) perform better on modeling sequential information than Conditional Random Fields (CRFs) BIBREF8 ."
],
"extractive_spans": [
"one layer of convolution on top of word embedding vectors, achieves excellent results on multiple tasks"
],
"free_form_answer": "",
"highlighted_evidence": [
"Convolutional Neural Networks (CNNs) have recently been successfully used in several text categorization tasks (e.g., review rating, sentiment recognition, and question type recognition). Kim2014,Johnson2015,Zhang2015 suggested that using a simple CNN setup, which entails one layer of convolution on top of word embedding vectors, achieves excellent results on multiple tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5391fea9fc674e54281de49f15dc621ff243a341",
"d16c48697bc81bde04859cfe9ef595ee24c057af"
],
"answer": [
{
"evidence": [
"Humor recognition refers to the task of deciding whether a sentence/spoken-utterance expresses a certain degree of humor. In most of the previous studies BIBREF1 , BIBREF2 , BIBREF3 , humor recognition was modeled as a binary classification task. In the seminal work BIBREF1 , a corpus of INLINEFORM0 “one-liners\" was created using daily joke websites to collect humorous instances while using formal writing resources (e.g., news titles) to obtain non-humorous instances. Three humor-specific stylistic features, including alliteration, antonymy, and adult slang were utilized together with content-based features to build classifiers. In a recent work BIBREF3 , a new corpus was constructed from the Pun of the Day website. BIBREF3 explained and computed latent semantic structure features based on the following four aspects: (a) Incongruity, (b) Ambiguity, (c) Interpersonal Effect, and (d) Phonetic Style. In addition, Word2Vec BIBREF4 distributed representations were utilized in the model building."
],
"extractive_spans": [
"Incongruity",
"Ambiguity",
"Interpersonal Effect",
"Phonetic Style"
],
"free_form_answer": "",
"highlighted_evidence": [
"In a recent work BIBREF3 , a new corpus was constructed from the Pun of the Day website. BIBREF3 explained and computed latent semantic structure features based on the following four aspects: (a) Incongruity, (b) Ambiguity, (c) Interpersonal Effect, and (d) Phonetic Style. In addition, Word2Vec BIBREF4 distributed representations were utilized in the model building."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Humor recognition refers to the task of deciding whether a sentence/spoken-utterance expresses a certain degree of humor. In most of the previous studies BIBREF1 , BIBREF2 , BIBREF3 , humor recognition was modeled as a binary classification task. In the seminal work BIBREF1 , a corpus of INLINEFORM0 “one-liners\" was created using daily joke websites to collect humorous instances while using formal writing resources (e.g., news titles) to obtain non-humorous instances. Three humor-specific stylistic features, including alliteration, antonymy, and adult slang were utilized together with content-based features to build classifiers. In a recent work BIBREF3 , a new corpus was constructed from the Pun of the Day website. BIBREF3 explained and computed latent semantic structure features based on the following four aspects: (a) Incongruity, (b) Ambiguity, (c) Interpersonal Effect, and (d) Phonetic Style. In addition, Word2Vec BIBREF4 distributed representations were utilized in the model building."
],
"extractive_spans": [
"alliteration",
"antonymy",
"adult slang"
],
"free_form_answer": "",
"highlighted_evidence": [
"Three humor-specific stylistic features, including alliteration, antonymy, and adult slang were utilized together with content-based features to build classifiers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6bd9e78e3bb13aa9c5fd274ec018aadefc27b38b",
"bef3526ac217efd2215e41e925bbe290076ab43a",
"e05ce7b902b9a4ff1d4d3465043a0eca0da76c90"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We used two corpora: the TED Talk corpus (denoted as TED) and the Pun of the Day corpus (denoted as Pun). Note that we normalized words in the Pun data to lowercase to avoid a possibly elevated result caused by a special pattern: in the original format, all negative instances started with capital letters. The Pun data allows us to verify that our implementation is consistent with the work reported in yang-EtAl:2015:EMNLP2."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We used two corpora: the TED Talk corpus (denoted as TED) and the Pun of the Day corpus (denoted as Pun)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"45a45040f2c3efa3d2e10fac483a90c00184bbc6",
"6f627da6f5052eca03baa0ab2b8c0d3e35abc0e6",
"edb8d7b0cc6d5b88ef6d8d1d92a96b74335dd4c6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0dc86f95dad6ed9995a811fa01e64492a57280db",
"90e923707cd11a20ddf7dadb537ea95afaa59430",
"ddda9d763a5c5554d5c161e5564d05b508c367fd"
],
"answer": [
{
"evidence": [
"We collected INLINEFORM0 TED Talk transcripts. An example transcription is given in Figure FIGREF4 . The collected transcripts were split into sentences using the Stanford CoreNLP tool BIBREF11 . In this study, sentences containing or immediately followed by `(Laughter)' were used as `Laughter' sentences, as shown in Figure FIGREF4 ; all other sentences were defined as `No-Laughter' sentences. Following BIBREF1 and BIBREF3 , we selected the same numbers ( INLINEFORM1 ) of `Laughter' and `No-Laughter' sentences. To minimize possible topic shifts between positive and negative instances, for each positive instance, we picked one negative instance nearby (the context window was 7 sentences in this study). For example, in Figure FIGREF4 , a negative instance (corresponding to `sent-2') was selected from the nearby sentences ranging from `sent-7' to `sent+7'."
],
"extractive_spans": [],
"free_form_answer": "Laughter from the audience.",
"highlighted_evidence": [
"In this study, sentences containing or immediately followed by `(Laughter)' were used as `Laughter' sentences, as shown in Figure FIGREF4 ; all other sentences were defined as `No-Laughter' sentences."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected INLINEFORM0 TED Talk transcripts. An example transcription is given in Figure FIGREF4 . The collected transcripts were split into sentences using the Stanford CoreNLP tool BIBREF11 . In this study, sentences containing or immediately followed by `(Laughter)' were used as `Laughter' sentences, as shown in Figure FIGREF4 ; all other sentences were defined as `No-Laughter' sentences. Following BIBREF1 and BIBREF3 , we selected the same numbers ( INLINEFORM1 ) of `Laughter' and `No-Laughter' sentences. To minimize possible topic shifts between positive and negative instances, for each positive instance, we picked one negative instance nearby (the context window was 7 sentences in this study). For example, in Figure FIGREF4 , a negative instance (corresponding to `sent-2') was selected from the nearby sentences ranging from `sent-7' to `sent+7'."
],
"extractive_spans": [],
"free_form_answer": "by laughter",
"highlighted_evidence": [
" In this study, sentences containing or immediately followed by `(Laughter)' were used as `Laughter' sentences, as shown in Figure FIGREF4 ; all other sentences were defined as `No-Laughter' sentences. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"TED Talks are recordings from TED conferences and other special TED programs. In the present study, we focused on the transcripts of the talks. Most transcripts of the talks contain the markup `(Laughter)', which represents where audiences laughed aloud during the talks. This special markup was used to determine utterance labels."
],
"extractive_spans": [],
"free_form_answer": "By laughter from the audience",
"highlighted_evidence": [
"Most transcripts of the talks contain the markup `(Laughter)', which represents where audiences laughed aloud during the talks. This special markup was used to determine utterance labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What linguistic model does the conventional method use?",
"What is novel about the newly emerging CNN method, in comparison to well-established conventional method?",
"What lexical cues are used for humor recogition?",
"Do they evaluate only on English data?",
"How many speakers are included in the dataset?",
"How are the positive instances annotated? e.g. by annotators, or by laughter from the audience?"
],
"question_id": [
"a82a12a22a45d9507bc359635ffe9574f15e0810",
"355cf303ba61f84b580e2016fcb24e438abeafa7",
"88757bc49ccab76e587fba7521f0981d6a1af2f7",
"2f9a31f5a2b668acf3bce8958f5daa67ab8b2c83",
"4830459e3d1d204e431025ce7e596ef3f8d757d2",
"74ebfba06f37cc95dfe59c3790ebe6165e6be19c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"humor",
"humor",
"humor",
"humor",
"humor",
"humor"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: An excerpt from TED talk “Tim Urban: Inside the mind of a master procrastinator” (http: //bit.ly/2l1P3RJ)",
"Table 1: Humor recognition on both Pun and TED data sets by using (a) random prediction (Chance), conventional method (Base) and CNN method; the sizes of the dev and CV partitions are provided for each data set."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What linguistic model does the conventional method use?",
"How are the positive instances annotated? e.g. by annotators, or by laughter from the audience?"
] | [
[
"1702.02584-Conventional Model-0"
],
[
"1702.02584-TED Talk Data-0",
"1702.02584-TED Talk Data-1"
]
] | [
"Random Forest classifier using latent semantic structural features, semantic distance features and sentences' averaged Word2Vec representations",
"By laughter from the audience"
] | 140 |
1709.05453 | Augmenting End-to-End Dialog Systems with Commonsense Knowledge | Building dialog agents that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human responses in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialog. Our model represents the first attempt to integrating a large commonsense knowledge base into end-to-end conversational models. In the retrieval-based scenario, we propose the Tri-LSTM model to jointly take into account message and commonsense for selecting an appropriate response. Our experiments suggest that the knowledge-augmented models are superior to their knowledge-free counterparts in automatic evaluation. | {
"paragraphs": [
[
"In recent years, data-driven approaches to building conversation models have been made possible by the proliferation of social media conversation data and the increase of computing power. By relying on a large number of message-response pairs, the Seq2Seq framework BIBREF0 attempts to produce an appropriate response based solely on the message itself, without any memory module.",
"In human-to-human conversations, however, people respond to each other's utterances in a meaningful way not only by paying attention to the latest utterance of the conversational partner itself, but also by recalling relevant information about the concepts covered in the dialogue and integrating it into their responses. Such information may contain personal experience, recent events, commonsense knowledge and more (Figure 1 ). As a result, it is speculated that a conversational model with a “memory look-up” module can mimic human conversations more closely BIBREF1 , BIBREF2 . In open-domain human-computer conversation, where the model is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively.",
"In the context of artificial intelligence (AI), commonsense knowledge is the set of background information that an individual is intended to know or assume and the ability to use it when appropriate BIBREF3 , BIBREF4 , BIBREF5 . Due to the vastness of such kind of knowledge, we speculate that this goal is better suited by employing an external memory module containing commonsense knowledge rather than forcing the system to encode it in model parameters as in traditional methods.",
"In this paper, we investigate how to improve end-to-end dialogue systems by augmenting them with commonsense knowledge, integrated in the form of external memory. The remainder of this paper is as follows: next section proposes related work in the context of conversational models and commonsense knowledge; following, a section describes the proposed model in detail; later, a section illustrates experimental results; finally, the last section proposes concluding remarks and future work."
],
[
"Data-driven conversational models generally fall into two categories: retrieval-based methods BIBREF6 , BIBREF7 , BIBREF8 , which select a response from a predefined repository, and generation-based methods BIBREF9 , BIBREF10 , BIBREF11 , which employ an encoder-decoder framework where the message is encoded into a vector representation and, then, fed to the decoder to generate the response. The latter is more natural (as it does not require a response repository) yet suffers from generating dull or vague responses and generally needs a great amount of training data.",
"The use of an external memory module in natural language processing (NLP) tasks has received considerable attention recently, such as in question answering BIBREF12 and language modeling BIBREF13 . It has also been employed in dialogue modeling in several limited settings. With memory networks, BIBREF14 used a set of fact triples about movies as long-term memory when modeling reddit dialogues, movie recommendation and factoid question answering. Similarly in a restaurant reservation setting, BIBREF2 provided local restaurant information to the conversational model.",
"Researchers have also proposed several methods to incorporate knowledge as external memory into the Seq2Seq framework. BIBREF15 incorporated the topic words of the message obtained from a pre-trained latent Dirichlet allocation (LDA) model into the context vector through a joint attention mechanism. BIBREF1 mined FoodSquare tips to be searched by an input message in the food domain and encoded such tips into the context vector through one-turn hop. The model we propose in this work shares similarities with BIBREF16 , which encoded unstructured textual knowledge with a recurrent neural network (RNN). Our work distinguishes itself from previous research in that we consider a large heterogeneous commonsense knowledge base in an open-domain retrieval-based dialogue setting."
],
[
"Several commonsense knowledge bases have been constructed during the past decade, such as ConceptNet BIBREF17 and SenticNet BIBREF18 . The aim of commonsense knowledge representation and reasoning is to give a foundation of real-world knowledge to a variety of AI applications, e.g., sentiment analysis BIBREF19 , handwriting recognition BIBREF20 , e-health BIBREF21 , aspect extraction BIBREF22 , and many more. Typically, a commonsense knowledge base can be seen as a semantic network where concepts are nodes in the graph and relations are edges (Figure 2 ). Each $<concept1, relation, concept2 >$ triple is termed an assertion.",
"Based on the Open Mind Common Sense project BIBREF23 , ConceptNet not only contains objective facts such as “Paris is the capital of France” that are constantly true, but also captures informal relations between common concepts that are part of everyday knowledge such as “A dog is a pet”. This feature of ConceptNet is desirable in our experiments, because the ability to recognize the informal relations between common concepts is necessary in the open-domain conversation setting we are considering in this paper."
],
[
"In this work, we concentrate on integrating commonsense knowledge into retrieval-based conversational models, because they are easier to evaluate BIBREF24 , BIBREF7 and generally take a lot less data to train. We leave the generation-based scenario to future work.",
"Message (context) $x$ and response $y$ are a sequence of tokens from vocabulary $V$ . Given $x$ and a set of response candidates $[y_1,y_2,y_3...,y_K]\\in Y$ , the model chooses the most appropriate response $\\hat{y}$ according to: ",
"$$\\hat{y}=\\mathop {\\arg \\max }_{y\\in {Y}}f(x,y),$$ (Eq. 6) ",
"where $f(x,y)$ is a scoring function measuring the “compatibility” of $x$ and $y$ . The model is trained on $<message, response, label >$ triples with cross entropy loss, where $label$ is binary indicating whether the $<message, response >$ pair comes from real data or is randomly combined."
],
[
"As a variation of vanilla RNN, a long short-term memory (LSTM) network BIBREF25 is good at handling long-term dependencies and can be used to map an utterance to its last hidden state as fixed-size embedding representation. The Dual-LSTM encoder BIBREF6 represents the message $x$ and response $y$ as fixed-size embeddings $\\vec{x}$ and $\\vec{y}$ with the last hidden states of the same LSTM. The compatibility function of the two is thus defined by: ",
"$$f(x,y) = \\sigma (\\vec{x}^{T}W\\vec{y}),$$ (Eq. 8) ",
"where matrix $W \\in \\mathcal {R}^{D\\times D}$ is learned during training."
],
[
"In this paper, we assume that a commonsense knowledge base is composed of assertions $A$ about concepts $C$ . Each assertion $a \\in A$ takes the form of a triple $<c_1,r,c_2 >$ , where $r \\in R$ is a relation between $c_1$ and $c_2$ , such as IsA, CapableOf, etc. $c_1,c_2$ are concepts in $C$ . The relation set $R$ is typically much smaller than $C$0 . $C$1 can either be a single word (e.g., “dog” and “book”) or a multi-word expression (e.g., “take_a_stand” and “go_shopping”). We build a dictionary $C$2 out of $C$3 where every concept $C$4 is a key and a list of all assertions in $C$5 concerning $C$6 , i.e., $C$7 or $C$8 , is the value. Our goal is to retrieve commonsense knowledge about every concept covered in the message.",
"We define $A_x$ as the set of commonsense assertions concerned with message $x$ . To recover concepts in message $x$ , we use simple $n$ -gram matching ( $n\\le N$ ). Every $n$ -gram in $c$ is considered a potential concept. If the $n$ -gram is a key in $x$0 , the corresponding value, i.e., all assertions in $x$1 concerning the concept, is added to $x$2 (Figure 4 )."
],
[
"Our main approach to integrating commonsense knowledge into the conversational model involves using another LSTM for encoding all assertions $a$ in $A_x$ , as illustrated in Figure 3 . Each $a$ , originally in the form of $<c_1,r,c_2 >$ , is transformed into a sequence of tokens by chunking $c_1$ , $c_2$ , concepts which are potentially multi-word phrases, into $[c_{11},c_{12},c_{13}...]$ and $[c_{21},c_{22},c_{23}...]$ . Thus, $a=[c_{11},c_{12},c_{13}...,r,c_{21},c_{22},c_{23}...]$ .",
"We add $R$ to vocabulary $V$ , that is, each $r$ in $R$ will be treated like any regular word in $V$ during encoding. We decide not to use each concept $c$ as a unit for encoding $a$ because $C$ is typically too large ( $>$ 1M). $a$ is encoded as embedding representation $V$0 using another LSTM. Note that this encoding scheme is suitable for any natural utterances containing commonsense knowledge in addition to well-structured assertions. We define the match score of assertion $V$1 and response $V$2 as: ",
"$$m(a,y) = \\vec{a}^{T}W_a\\vec{y},$$ (Eq. 16) ",
"where $W_a \\in \\mathcal {R}^{D\\times D}$ is learned during training. Commonsense assertions $A_x$ associated with a message is usually large ( $>$ 100 in our experiment). We observe that in a lot of cases of open-domain conversation, response $y$ can be seen as triggered by certain perception of message $x$ defined by one or more assertions in $A_x$ , as illustrated in Figure 4 . We can see the difference between message and response pair when commonsense knowledge is used. For example, the word `Insomnia' in the message is mapped to the commonsense assertion `Insomnia, IsA, sleep $\\_$ problem'. The appropriate response is then matched to `sleep $\\_$ problem' that is `go to bed'. Similarly, the word `Hawaii' in the message is mapped to the commonsense assertion `Hawaii, UsedFor, tourism'. The appropriate response is then matched to `tourism' that is `enjoy vacation'. In this way, new words can be mapped to the commonly used vocabulary and improve response accuracy.",
"Our assumption is that $A_x$ is helpful in selecting an appropriate response $y$ . However, usually very few assertions in $A_x$ are related to a particular response $y$ in the open-domain setting. As a result, we define the match score of $A_x$ and $y$ as ",
"$$m(A_x,y)=\\mathop {\\max }_{a\\in {A_x}} m(a,y),$$ (Eq. 17) ",
"that is, we only consider the commonsense assertion $a$ with the highest match score with $y$ , as most of $A_x$ are not relevant to $y$ . Incorporating $m(A_x,y)$ into the Dual-LSTM encoder, our Tri-LSTM encoder model is thus defined as: ",
"$$f(x,y) = \\sigma (\\vec{x}^{T}W\\vec{y} + m(A_x,y)),$$ (Eq. 18) ",
"i.e., we use simple addition to supplement $x$ with $A_x$ , without introducing a mechanism for any further interaction between $x$ and $A_x$ . This simple approach is suitable for response selection and proves effective in practice.",
"The intuition we are trying to capture here is that an appropriate response $y$ should not only be compatible with $x$ , but also related to certain memory recall triggered by $x$ as captured by $m(A_x,y)$ . In our case, the memory is commonsense knowledge about the world. In cases where $A_x = \\emptyset $ , i.e., no commonsense knowledge is recalled, $m(A_x,y)=0$ and the model degenerates to Dual-LSTM encoder."
],
[
"We follow BIBREF2 , BIBREF14 and use supervised word embeddings as a baseline. Word embeddings are most well-known in the context of unsupervised training on raw text as in BIBREF27 , yet they can also be used to score message-response pairs. The embedding vectors are trained directly for this goal. In this setting, the “compatibility” function of $x$ and $y$ is defined as: ",
"$$f(x,y)=\\vec{x}^T\\vec{y}$$ (Eq. 21) ",
"In this setting, $\\vec{x},\\vec{y}$ are bag-of-words embeddings. With retrieved commonsense assertions $A_x$ , we embed each $a\\in {A_x}$ to bag-of-words representation $\\vec{a}$ and have: ",
"$$f(x,y)=\\vec{x}^T\\vec{y}+\\mathop {\\max }_{a\\in {A_x}} \\ \\ \\vec{a}^T\\vec{y}.$$ (Eq. 22) ",
"This linear model differs from Tri-LSTM encoder in that it represents an utterance with its bag-of-words embedding instead of RNNs.",
"Memory networks BIBREF13 , BIBREF28 are a class of models that perform language understanding by incorporating a memory component. They perform attention over memory to retrieve all relevant information that may help with the task. In our dialogue modeling setting, we use $A_x$ as the memory component. Our implementation of memory networks, similar to BIBREF2 , BIBREF14 , differs from supervised word embeddings described above in only one aspect: how to treat multiple entries in memory. In memory networks, output memory representation $\\vec{o}=\\sum _{i}p_i\\vec{a}_i$ , where $\\vec{a}_i$ is the bag-of-words embedding of $a_i\\in {A_x}$ and $p_i$ is the attention signal over memory $A_x$ calculated by $p_i=softmax(\\vec{x}^T\\vec{a_i})$ . The “compatibility” function of $x$ and $y$ is defined as: ",
"$$f(x,y)=(\\vec{x}+\\vec{o})^T\\vec{y}=\\vec{x}^T\\vec{y}+(\\sum _{i}p_i\\vec{a}_i)^T\\vec{y}$$ (Eq. 24) ",
"In contrast to supervised word embeddings described above, attention over memory is determined by message $x$ . This mechanism was originally designed to retrieve information from memory that is relevant to the context, which in our setting is already achieved during commonsense knowledge retrieval. As speculated, the attention over multiple memory entries is better determined by response $y$ in our setting. We empirically prove this point below."
],
[
"To the best of our knowledge, there is currently no well-established open-domain response selection benchmark dataset available, although certain Twitter datasets have been used in the response generation setting BIBREF29 , BIBREF30 . We thus evaluate our method against state-of-the-art approaches in the response selection task on Twitter dialogues.",
"1.4M Twitter <message, response $>$ pairs are used for our experiments. They were extracted over a 5-month period, from February through July in 2011. 1M Twitter <message, response $>$ pairs are used for training. With the original response as ground truth, we construct 1M <message, response, label=1 $>$ triples as positive instances. Another 1M negative instances <message, response, label=0 $>$ are constructed by replacing the ground truth response with a random response in the training set.",
"For tuning and evaluation, we use 20K <message, response $>$ pairs that constitute the validation set (10K) and test set (10K). They are selected by a criterion that encourages interestingness and relevance: both the message and response have to be at least 3 tokens long and contain at least one non-stopword. For every message, at least one concept has to be found in the commonsense knowledge base. For each instance, we collect another 9 random responses from elsewhere to constitute the response candidates.",
"Preprocessing of the dataset includes normalizing hashtags, “@User”, URLs, emoticons. Vocabulary $V$ is built out of the training set with 5 as minimum word frequency, containing 62535 words and an extra $<UNK >$ token representing all unknown words."
],
[
"In our experiment, ConceptNet is used as the commonsense knowledge base. Preprocessing of this knowledge base involves removing assertions containing non-English characters or any word outside vocabulary $V$ . 1.4M concepts remain. 0.8M concepts are unigrams, 0.43M are bi-grams and the other 0.17M are tri-grams or more. Each concept is associated with an average of 4.3 assertions. More than half of the concepts are associated with only one assertion.",
"An average of 2.8 concepts can be found in ConceptNet for each message in our Twitter Dialogue Dataset, yielding an average of 150 commonsense assertions (the size of $A_x$ ). Unsurprisingly, common concepts with more assertions associated are favored in actual human conversations.",
"It is worth noting that ConceptNet is also noisy due to uncertainties in the constructing process, where 15.5% of all assertions are considered “false” or “vague” by human evaluators BIBREF17 . Our max-pooling strategy used in Tri-LSTM encoder and supervised word embeddings is partly designed to alleviate this weakness."
],
[
"In all our models excluding term frequency–inverse document frequency (TF-IDF) BIBREF31 , we initialize word embeddings with pretrained GloVe embedding vectors BIBREF32 . The size of hidden units in LSTM models is set to 256 and the word embedding dimension is 100. We use stochastic gradient descent (SGD) for optimizing with batch size of 64. We fixed training rate at 0.001."
],
[
"The main results for TF-IDF, word embeddings, memory networks and LSTM models are summarized in Table 1 . We observe that:",
"(1) LSTMs perform better at modeling dialogues than word embeddings on our dataset, as shown by the comparison between Tri-LSTM and word embeddings.",
"(2) Integrating commonsense knowledge into conversational models boosts model performance, as Tri-LSTM outperforms Dual-LSTM by a certain margin.",
"(3) Max-pooling over all commonsense assertions depending on response $y$ is a better method for utilizing commonsense knowledge than attention over memory in our setting, as demonstrated by the gain of performance of word embeddings over memory networks.",
"We also analyze samples from the test set to gain an insight on how commonsense knowledge supplements the message itself in response selection by comparing Tri-LSTM encoder and Dual-LSTM encoder.",
"As illustrated in Table 2 , instances 1,2 represent cases where commonsense assertions as an external memory module provide certain clues that the other model failed to capture. For example in instance 2, Tri-LSTM selects the response “...improve your french” to message “bonjour madame” based on a retrieved assertion “ $bonjour, IsA, hello\\_in\\_french$ ”, while Dual-LSTM selects an irrelevant response. Unsurprisingly, Dual-LSTM is also able to select the correct response in some cases where certain commonsense knowledge is necessary, as illustrated in instance 3. Both models select “... pink or black” in response to message “...what color shoes...”, even though Dual-LSTM does not have access to a helpful assertion “ $pink, RelatedTo,\ncolor$ ”.",
"Informally speaking, such cases suggest that to some extent, Dual-LSTM (models with no memory) is able to encode certain commonsense knowledge in model parameters (e.g., word embeddings) in an implicit way. In other cases, e.g., instance 4, the message itself is enough for the selection of the correct response, where both models do equally well."
],
[
"In this paper, we emphasized the role of memory in conversational models. In the open-domain chit-chat setting, we experimented with commonsense knowledge as external memory and proposed to exploit LSTM to encode commonsense assertions to enhance response selection.",
"In the other research line of response generation, such knowledge can potentially be used to condition the decoder in favor of more interesting and relevant responses. Although the gains presented by our new method is not spectacular according to Recall@ $k$ , our view represents a promising attempt at integrating a large heterogeneous knowledge base that potentially describes the world into conversational models as a memory component.",
"Our future work includes extending the commonsense knowledge with common (or factual) knowledge, e.g., to extend the knowledge base coverage by linking more named entities to commonsense knowledge concepts BIBREF34 , and developing a better mechanism for utilizing such knowledge instead of the simple max-pooling scheme used in this paper. We would also like to explore the memory of the model for multiple message response pairs in a long conversation.",
"Lastly, we plan to integrate affective knowledge from SenticNet in the dialogue system in order to enhance its emotional intelligence and, hence, achieve a more human-like interaction. The question, after all, is not whether intelligent machines can have any emotions, but whether machines can be intelligent without any emotions BIBREF35 ."
],
[
"We gratefully acknowledge the help of Alan Ritter for sharing the twitter dialogue dataset and the NTU PDCC center for providing computing resources."
]
],
"section_name": [
"Introduction",
"Conversational Models",
"Commonsense Knowledge",
"Task Definition",
"Dual-LSTM Encoder",
"Commonsense Knowledge Retrieval",
"Tri-LSTM Encoder",
"Comparison Approaches",
"Twitter Dialogue Dataset",
"ConceptNet",
"Parameter Settings",
"Results and Analysis",
"Conclusion and Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"122cb008543fc0b3a86b4814bf60f441af928232",
"9a87863f5f98e960c4843444b683b4220780888c",
"cbdca68ed7ec49d9c7586a2ae8d643f2af6990fd"
],
"answer": [
{
"evidence": [
"In the context of artificial intelligence (AI), commonsense knowledge is the set of background information that an individual is intended to know or assume and the ability to use it when appropriate BIBREF3 , BIBREF4 , BIBREF5 . Due to the vastness of such kind of knowledge, we speculate that this goal is better suited by employing an external memory module containing commonsense knowledge rather than forcing the system to encode it in model parameters as in traditional methods."
],
"extractive_spans": [
"by employing an external memory module containing commonsense knowledge"
],
"free_form_answer": "",
"highlighted_evidence": [
"Due to the vastness of such kind of knowledge, we speculate that this goal is better suited by employing an external memory module containing commonsense knowledge rather than forcing the system to encode it in model parameters as in traditional methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [
"using another LSTM for encoding all assertions $a$ in $A_x$ , as illustrated in Figure 3 . Each $a$ , originally in the form of $$ , is transformed into a sequence of tokens by chunking $c_1$ , $c_2$ , concepts which are potentially multi-word phrases, into $[c_{11},c_{12},c_{13}...]$ and $[c_{21},c_{22},c_{23}...]$ . Thus, $a=[c_{11},c_{12},c_{13}...,r,c_{21},c_{22},c_{23}...]$ ."
],
"free_form_answer": "",
"highlighted_evidence": [
"Our main approach to integrating commonsense knowledge into the conversational model involves using another LSTM for encoding all assertions $a$ in $A_x$ , as illustrated in Figure 3 . Each $a$ , originally in the form of $$ , is transformed into a sequence of tokens by chunking $c_1$ , $c_2$ , concepts which are potentially multi-word phrases, into $[c_{11},c_{12},c_{13}...]$ and $[c_{21},c_{22},c_{23}...]$ . Thus, $a=[c_{11},c_{12},c_{13}...,r,c_{21},c_{22},c_{23}...]$ ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We define $A_x$ as the set of commonsense assertions concerned with message $x$ . To recover concepts in message $x$ , we use simple $n$ -gram matching ( $n\\le N$ ). Every $n$ -gram in $c$ is considered a potential concept. If the $n$ -gram is a key in $x$0 , the corresponding value, i.e., all assertions in $x$1 concerning the concept, is added to $x$2 (Figure 4 ).",
"Our main approach to integrating commonsense knowledge into the conversational model involves using another LSTM for encoding all assertions $a$ in $A_x$ , as illustrated in Figure 3 . Each $a$ , originally in the form of $<c_1,r,c_2 >$ , is transformed into a sequence of tokens by chunking $c_1$ , $c_2$ , concepts which are potentially multi-word phrases, into $[c_{11},c_{12},c_{13}...]$ and $[c_{21},c_{22},c_{23}...]$ . Thus, $a=[c_{11},c_{12},c_{13}...,r,c_{21},c_{22},c_{23}...]$ ."
],
"extractive_spans": [],
"free_form_answer": "using another LSTM for encoding commonsense assertions",
"highlighted_evidence": [
"In this paper, we assume that a commonsense knowledge base is composed of assertions $A$ about concepts $C$ . Each assertion $a \\in A$ takes the form of a triple $$ , where $r \\in R$ is a relation between $c_1$ and $c_2$ , such as IsA, CapableOf, etc. $c_1,c_2$ are concepts in $C$ . The relation set $R$ is typically much smaller than $C$0 . $C$1 can either be a single word (e.g., “dog” and “book”) or a multi-word expression (e.g., “take_a_stand” and “go_shopping”). We build a dictionary $C$2 out of $C$3 where every concept $C$4 is a key and a list of all assertions in $C$5 concerning $C$6 , i.e., $C$7 or $C$8 , is the value. Our goal is to retrieve commonsense knowledge about every concept covered in the message.\n\nWe define $A_x$ as the set of commonsense assertions concerned with message $x$ . To recover concepts in message $x$ , we use simple $n$ -gram matching ( $n\\le N$ ). Every $n$ -gram in $c$ is considered a potential concept. If the $n$ -gram is a key in $x$0 , the corresponding value, i.e., all assertions in $x$1 concerning the concept, is added to $x$2 (Figure 4 ).",
"Our main approach to integrating commonsense knowledge into the conversational model involves using another LSTM for encoding all assertions $a$ in $A_x$ , as illustrated in Figure 3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"197290cb509b9a046b311719c6ce1ce408f3be8a"
]
},
{
"annotation_id": [
"8049677fcec5a9ea229b192b991a3ae5af0dd7fe",
"a82a0bc741da79e84b59f0eb2f32720a0121596b"
],
"answer": [
{
"evidence": [
"Researchers have also proposed several methods to incorporate knowledge as external memory into the Seq2Seq framework. BIBREF15 incorporated the topic words of the message obtained from a pre-trained latent Dirichlet allocation (LDA) model into the context vector through a joint attention mechanism. BIBREF1 mined FoodSquare tips to be searched by an input message in the food domain and encoded such tips into the context vector through one-turn hop. The model we propose in this work shares similarities with BIBREF16 , which encoded unstructured textual knowledge with a recurrent neural network (RNN). Our work distinguishes itself from previous research in that we consider a large heterogeneous commonsense knowledge base in an open-domain retrieval-based dialogue setting."
],
"extractive_spans": [
"open-domain"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our work distinguishes itself from previous research in that we consider a large heterogeneous commonsense knowledge base in an open-domain retrieval-based dialogue setting."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To the best of our knowledge, there is currently no well-established open-domain response selection benchmark dataset available, although certain Twitter datasets have been used in the response generation setting BIBREF29 , BIBREF30 . We thus evaluate our method against state-of-the-art approaches in the response selection task on Twitter dialogues."
],
"extractive_spans": [],
"free_form_answer": "open-domain Twitter dialogues",
"highlighted_evidence": [
"To the best of our knowledge, there is currently no well-established open-domain response selection benchmark dataset available, although certain Twitter datasets have been used in the response generation setting BIBREF29 , BIBREF30 . We thus evaluate our method against state-of-the-art approaches in the response selection task on Twitter dialogues."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"99ee399059d3214bd1c5922dfa37a983a251afe9",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"5ef50afae884939830bde0b8d4dc172863429154",
"dd558da9dcceaeaafcdc106dbe25364fd6b1c236",
"fdc88844753c23e67156295400b0d53535479045"
],
"answer": [
{
"evidence": [
"In our experiment, ConceptNet is used as the commonsense knowledge base. Preprocessing of this knowledge base involves removing assertions containing non-English characters or any word outside vocabulary $V$ . 1.4M concepts remain. 0.8M concepts are unigrams, 0.43M are bi-grams and the other 0.17M are tri-grams or more. Each concept is associated with an average of 4.3 assertions. More than half of the concepts are associated with only one assertion."
],
"extractive_spans": [
"ConceptNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experiment, ConceptNet is used as the commonsense knowledge base."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on the Open Mind Common Sense project BIBREF23 , ConceptNet not only contains objective facts such as “Paris is the capital of France” that are constantly true, but also captures informal relations between common concepts that are part of everyday knowledge such as “A dog is a pet”. This feature of ConceptNet is desirable in our experiments, because the ability to recognize the informal relations between common concepts is necessary in the open-domain conversation setting we are considering in this paper.",
"In our experiment, ConceptNet is used as the commonsense knowledge base. Preprocessing of this knowledge base involves removing assertions containing non-English characters or any word outside vocabulary $V$ . 1.4M concepts remain. 0.8M concepts are unigrams, 0.43M are bi-grams and the other 0.17M are tri-grams or more. Each concept is associated with an average of 4.3 assertions. More than half of the concepts are associated with only one assertion."
],
"extractive_spans": [
"ConceptNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"Based on the Open Mind Common Sense project BIBREF23 , ConceptNet not only contains objective facts such as “Paris is the capital of France” that are constantly true, but also captures informal relations between common concepts that are part of everyday knowledge such as “A dog is a pet”. This feature of ConceptNet is desirable in our experiments, because the ability to recognize the informal relations between common concepts is necessary in the open-domain conversation setting we are considering in this paper.",
"In our experiment, ConceptNet is used as the commonsense knowledge base."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In our experiment, ConceptNet is used as the commonsense knowledge base. Preprocessing of this knowledge base involves removing assertions containing non-English characters or any word outside vocabulary $V$ . 1.4M concepts remain. 0.8M concepts are unigrams, 0.43M are bi-grams and the other 0.17M are tri-grams or more. Each concept is associated with an average of 4.3 assertions. More than half of the concepts are associated with only one assertion."
],
"extractive_spans": [
"ConceptNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experiment, ConceptNet is used as the commonsense knowledge base."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
}
],
"nlp_background": [
"two",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How you incorporate commonsense into an LSTM?",
"Which domain are the conversations in?",
"Which commonsense knowledge base are they using?"
],
"question_id": [
"3a01dc85ac983002fd631f1c28fc1cbe16094c24",
"00ffe2c59a3ba18d6d2b353d6ab062a152c88526",
"042800c3336ed5f4826203616a39747c61382ba6"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"common sense",
"commonsense",
"commonsense"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Left: In traditional dialogue systems, the response is determined solely by the message itself (arrows denote dependencies). Right: The responder recalls relevant information from memory; memory and message content jointly determine the response. In the illustrated example, the responder retrieves the event “Left dictionary on book shelf” from memory, which triggers a meaningful response.",
"Figure 2: A sketch of SenticNet semantic network.",
"Figure 3: Tri-LSTM encoder. We use LSTM to encode message, response and commonsense assertions. LSTM weights for message and response are tied. The lower box is equal to a Dual-LSTM encoder. The upper box is the memory module encoding all commonsense assertions.",
"Figure 4: In the illustrated case, five concepts are identified in the message. All assertions associated with the five concepts constitute Ax. We show three appropriate responses for this single message. Each of them is associated with (same color) only one or two commonsense assertions, which is a paradigm in open-domain conversation and provides ground for our maxpooling strategy. It is also possible that an appropriate response is not relevant to any of the common assertions in Ax at all, in which case our method falls back to Dual-LSTM.",
"Table 1: Model evaluation. ∗ indicates models with commonsense knowledge integrated. The TF-IDF model is trained following (Lowe et al. 2015b). The “Recall@k” method is used for evaluation (Lowe et al. 2016b). The model is asked to rank a total of N responses containing one positive response and N − 1 negative responses (N = 10 according to our test set). If the ranking of the positive response is not larger than k, Recall@k is positive for that instance.",
"Table 2: Case studies for the impact of commonsense assertions. “Activated Assertion” is the commonsense assertion entry in Ax chosen by max-pooling. ♦ indicates correct selection. All 4 instances displayed are taken from the test set."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"6-Table1-1.png",
"6-Table2-1.png"
]
} | [
"How you incorporate commonsense into an LSTM?",
"Which domain are the conversations in?"
] | [
[
"1709.05453-Commonsense Knowledge Retrieval-1",
"1709.05453-Introduction-2",
"1709.05453-Tri-LSTM Encoder-0"
],
[
"1709.05453-Twitter Dialogue Dataset-0",
"1709.05453-Conversational Models-2"
]
] | [
"using another LSTM for encoding commonsense assertions",
"open-domain Twitter dialogues"
] | 141 |
2002.06854 | HotelRec: a Novel Very Large-Scale Hotel Recommendation Dataset | Today, recommender systems are an inevitable part of everyone's daily digital routine and are present on most internet platforms. State-of-the-art deep learning-based models require a large number of data to achieve their best performance. Many datasets fulfilling this criterion have been proposed for multiple domains, such as Amazon products, restaurants, or beers. However, works and datasets in the hotel domain are limited: the largest hotel review dataset is below the million samples. Additionally, the hotel domain suffers from a higher data sparsity than traditional recommendation datasets and therefore, traditional collaborative-filtering approaches cannot be applied to such data. In this paper, we propose HotelRec, a very large-scale hotel recommendation dataset, based on TripAdvisor, containing 50 million reviews. To the best of our knowledge, HotelRec is the largest publicly available dataset in the hotel domain (50M versus 0.9M) and additionally, the largest recommendation dataset in a single domain and with textual reviews (50M versus 22M). We release HotelRec for further research: this https URL. | {
"paragraphs": [
[
"The increasing flood of information on the web creates a need for selecting content according to the end user's preferences. Today, recommender systems are deployed on most internet platforms and play an important role in everybody's daily digital routine, including e-commerce websites, social networks, music streaming, or hotel booking.",
"Recommender systems have been investigated over more than thirty years BIBREF0. Over the years, many models and datasets in different domains and various sizes have been developed: movies BIBREF1, Amazon products BIBREF2, BIBREF3, or music BIBREF4. With the tremendous success of large deep learning-based recommender systems, in better capturing user-item interactions, the recommendation quality has been significantly improved BIBREF5.",
"However, the increase in recommendation performance with deep learning-based models comes at the cost of large datasets. Most recent state-of-the-art models, such as BIBREF6, BIBREF7, or BIBREF8 necessitate large datasets (i.e., millions) to achieve high performance.",
"In the hotel domain, only a few works have studied hotel recommendation, such as BIBREF9 or BIBREF10. Additionally, to the best of our knowledge, the largest publicly available hotel review dataset contains $870k$ samples BIBREF11. Unlike commonly used recommendation datasets, the hotel domain suffers from higher data sparsity and therefore, traditional collaborative-filtering approaches cannot be applied BIBREF10, BIBREF12, BIBREF13. Furthermore, rating a hotel is different than traditional products, because the whole experience lasts longer, and there are more facets to review BIBREF12.",
"In contrast, we propose in this work HotelRec, a novel large-scale hotel recommendation dataset based on hotel reviews from TripAdvisor, and containing approximately 50 million reviews. A sample review is shown in Figure FIGREF1. To the best of our knowledge, HotelRec is the largest publicly available hotel review dataset (at least 60 times larger than previous datasets). Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance. Although reasonable performance is achieved by a state-of-the-art method, there is still room for improvement. We believe that HotelRec will offer opportunities to apply and develop new large recommender systems, and push furthermore the recommendation for hotels, which differs from traditional datasets."
],
[
"Recommendation is an old problem that has been studied from a wide range of areas, such as Amazon products BIBREF14, beers BIBREF15, restaurants, images BIBREF16, music BIBREF4, and movies BIBREF1. The size of the datasets generally varies from hundreds of thousands to tens of millions of user-item interactions; an interaction always contains a rating and could have additional attributes, such as a user-written text, sub-ratings, the date, or whether the review was helpful. At the time of writing, and to the best of our knowledge, the largest available recommendation corpus on a specific domain and with textual reviews, is based on Amazon Books and proposed by he2016ups. It contains a total of 22 million book reviews. In comparison, HotelRec has $2.3$ times more reviews and is based on hotels. Consequently, HotelRec is the largest domain-specific public recommendation dataset with textual reviews and on a single domain. We highlight with textual reviews, because some other datasets (e.g., Netflix Prize BIBREF17) contain more interactions, that only includes the rating and the date.",
"To the best of our knowledge, only a few number of datasets for hotel reviews have been created: 35k BIBREF9, 68k BIBREF18, 140k BIBREF19, 142k BIBREF20, 235k BIBREF9, 435k BIBREF13, and 870k BIBREF11. However, the number of users, items, and interactions is limited compared to traditional recommendation datasets. In contrast, the HotelRec dataset has at least two orders of magnitude more examples. Statistics of HotelRec is available in Table TABREF2."
],
[
"Everyday a large number of people write hotel reviews on on-line platforms (e.g., Booking, TripAdvisor) to share their opinions toward multiple aspects, such as their Overall experience, the Service, or the Location. Among the most popular platforms, we selected TripAdvisor: according to their third quarterly report of November 2019, on the U.S. Securities and Exchange Commission website, TripAdvisor is the world's largest online travel site with approximately $1.4$ million hotels. Consequently, we created our dataset HotelRec based on TripAdvisor hotel reviews. The statistics of the HotelRec dataset, the 5-core, and 20-core versions are shown in Table TABREF2; each contains at least $k$ reviews for each user or item.",
"In this section, we first discuss about the data collection process (Section SECREF8), followed by general descriptive statistics (Section SECREF12). Finally, Section SECREF18 analyzes the overall rating and sub-ratings."
],
[
"We first crawled all areas listed on TripAdvisor's SiteIndex. Each area link leads to another page containing different information, such as a list of accommodations, or restaurants; we gathered all links corresponding to hotels. Our robot then opened each of the hotel links and filtered out hotels without any review. In total, in July 2019, there were $365\\,056$ out of $2\\,502\\,140$ hotels with at least one review.",
"Although the pagination of reviews for each hotel is accessible via a URL, the automatic scraping is discouraged: loading a page takes approximately one second, some pop-ups might appear randomly, and the robot will be eventually blocked because of its speed. We circumvented all these methods by mimicking a human behavior with the program Selenium, that we have linked with Python. However, each action (i.e., disabling the calendar, going to the next page of reviews) had to be separated by a time gap of one second. Moreover, each hotel employed a review pagination system displaying only five reviews at the same time, which majorly slowed down the crawling.",
"An example review is shown in Figure FIGREF1. For each review, we collected: the URL of the user's profile and hotel, the date, the overall rating, the summary (i.e., the title of the review), the written text, and the multiple sub-ratings when provided. These sub-ratings correspond to a fine-grained evaluation of a specific aspect, such as Service, Cleanliness, or Location. The full list of fine-grained aspects is available in Figure FIGREF1, and their correlation in Section SECREF18",
"We naively parallelized the crawling on approximately 100 cores for two months. After removing duplicated reviews, as in mcauley2013hidden, we finally collected $50\\,264\\,531$ hotel reviews."
],
[
"HotelRec includes $50\\,264\\,531$ hotel reviews from TripAdvisor in a period of nineteen years (from February 1, 2001 to May 14, 2019). The distribution of reviews over the years is available in Figure FIGREF13. There is a significant activity increase of users from 2001 to 2010. After this period, the number of reviews per year grows slowly and oscillates between one to ten million.",
"In total, there are $21\\,891\\,294$ users. The distribution of reviews per user is shown in Figure FIGREF13. Similarly to other recommender datasets BIBREF3, BIBREF21, the distribution resembles a Power-law distribution: many users write one or a few reviews. In HotelRec, $67.55\\%$ users have written only one review, and $90.73\\%$ with less than five reviews. Additionally, in the 5-core subset, less than $15\\%$ of $2\\,012\\,162$ users had a peer with whom they have co-rated three or more hotels. Finally, the average user has $2.24$ reviews, and the median is $1.00$.",
"Relating to the items, there are $365\\,056$ hotels, which is roughly 60 times smaller than the number of users. This ratio is also consistent with other datasets BIBREF14, BIBREF15.",
"Figure FIGREF13 displays the distribution of reviews per hotel. The distribution also has a shape of a Power-law distribution, but its center is closer to $3\\,000$ than the 100 of the user distribution. However, in comparison, only $0.26\\%$ hotels have less than five reviews and thus, the average reviews per hotel and the median are higher: $137.69$ and $41.00$.",
"Finally, we analyze the distribution of words per review, to understand how much people write about hotels. The distribution of words per review is shown in Figure FIGREF13. The average review length is $125.57$ words, which is consistent with other studies BIBREF14."
],
[
"When writing a review, the Overall rating is mandatory: it represents the evaluation of the whole user experience towards a hotel. It is consequently available for all reviews in HotelRec. However, sub-ratings only assess one or more particular aspects (up to eight), such as Service, Cleanliness, or Location. Additionally, they are optional: the user can choose how many and what aspects to evaluate. Among all the reviews, $35\\,836\\,414$ ($71.30\\%$) have one or several sub-ratings, with a maximum of eight aspects. The distribution of the number of assessed fine-grained aspects is shown in Table TABREF19, where All represents the coverage over the whole set of reviews, and With Sub-Ratings over the set of reviews having sub-ratings (i.e., approximately 35 million). Interestingly, most of the sub-ratings are evaluated in a group of three or six aspects. We hypothesize that this phenomenon came from a limitation of TripAdvisor on the user interface, where the set of aspects to evaluate was predefined.",
"We analyze in Table TABREF20 the distribution of the reviews with fine-grained and Overall ratings. Unsurprisingly, the Overall rating is always available as it is mandatory. In terms of aspects, there is a group of six that are majorly predominant (following the observation in Table TABREF19), and two that are rarely rated: Check-In and Business Service. Surprisingly, these two aspects are not sharing similar rating averages and percentiles than the others. We explain this difference due to the small number of reviews rating them (approximately $2\\%$). Furthermore, most ratings across aspects are positive: the 25th percentile is 4, with an average of $4.23$ and a median of 5.",
"Finally, in Figure FIGREF21, we computed the Pearson correlation of ratings between all pairs of aspects, including fine-grained and Overall ones. Interesting, all aspect-pairs have a correlation between $0.46$ and $0.83$. We observe that Service, Value, and Rooms correlate the most with the Overall ratings. Unsurprisingly, the aspect pair Service-Check In and Rooms-Cleanliness have a correlation of $0.80$, because people often evaluate them together in a similar fashion. Interestingly, Location is the aspect that correlates the least with the others, followed by Business Service, and Check-In."
],
[
"In this section, we first describe two different $k$-core subsets of the HotelRec dataset that we used to evaluate multiple baselines on two tasks: rating prediction and recommendation performance. We then detail the models we employed, and discuss their results."
],
[
"We used the aforementioned dataset HotelRec, containing approximately 50 million hotel reviews. The characteristics of this dataset are described in Section SECREF12 and Section SECREF18 Following the literature BIBREF8, BIBREF22, we focused our evaluation on two $k$-core subsets of HotelRec, with at least $k$ reviews for each user or item. In this paper, we employed the most common values for $k$: 5 and 20. We randomly divided each of the datasets into $80/10/10$ for training, validation, and testing subsets.",
"From each review, we kept the corresponding \"userID\", \"itemID\", rating (from 1 to 5 stars), written text, and date. We preprocessed the text by lowering and tokenizing it. Statistics of both subsets are shown in Table TABREF2."
],
[
"We evaluated different models on the HotelRec subsets, 5-core and 20-core, on two tasks: rating prediction and recommendation performance. We have separated the evaluation because most models are only tailored for one of the tasks but not both. Therefore, we applied different models for each task and evaluated them separately.",
"For the rating prediction task, following the literature, we reported the results in terms of Mean Square Error (MSE) and Root Mean Square Error (RMSE). We assessed the recommendation performance of a ranked list by Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) BIBREF23, as in he2017neural. We truncated the ranked list at 5, 10 and 20. The HR measures whether a new item is on the top-$k$ list and NDCG measures the position of the hit by assigning higher scores to hits at top ranks. As in he2017neural, we computed both metrics for each test user and reported the average score. Regarding the models, we employed the following baselines:",
"Mean: A simple model that predicts a rating by the mean ratings of the desired item. It is a good baseline in recommendation BIBREF13;",
"HFT BIBREF14: A latent-factor approach combined with a topic model that aims to find topics in the review text that correlate with latent factors of the users and the items;",
"TransNet(-Ext): The model is based on zheng2017joint, which learns a user and item profile based on former reviews using convolutional neural networks, and predicts the ratings using matrix factorization methods afterward. They added a regularizer network to improve performance. TransNet-Ext is an extension of TransNet by using a collaborative-filtering component in addition to user and item reviews history.",
"For the recommendation performance task, we used the following models :",
"RAND: A simple model recommending random items;",
"POP BIBREF24: Another non-personalized recommender method, where items are recommended based on their popularity (i.e., the number of interactions with users). It is a common baseline to benchmark the recommendation performance;",
"ItemKNN/UserKNN BIBREF25: Two standard item-based (respectively user-based) collaborative filtering methods, using $k$ nearest neighbors;",
"PureSVD BIBREF26: A similarity based approach that constructs a similarity matrix through the SVD decomposition of the rating matrix;",
"GMF BIBREF8: A generalization of the matrix factorization method that applies a linear kernel to model the latent feature interactions;",
"MLP BIBREF8: Similar than GMF, but it models the interaction of latent features with a neural network instead of a linear kernel;",
"NeuMF BIBREF8: A model combining GMF and MLP to better model the complex user-item interactions.",
"Due to the large size of the HotelRec dataset, especially in the 5-core setting (around 20 million reviews), running an extensive hyper-parameter tuning for each neural model would require a high time and resource budget. Therefore, for the neural model, we used the default parameters from the original implementation and a random search of three trials. For all other models (i.e., HFT, ItemKNN, UserKNN, PureSVD), we ran a standard grid search over the parameter sets."
],
[
"We show in Table TABREF35 the performance in terms of the mean square error (MSE) and the root mean square error (RMSE). Surprisingly, we observe that the neural network TransNet and its extension perform poorly in comparison to the matrix factorization model HFT and the simple Mean baselines. Although TransNet learns a user and item profile based on the most recent reviews, it cannot capture efficiently the interaction from these profiles. Moreover, the additional collaborative-filtering component in TransNet-Ext seems to worsen the performance, which is consistent with the results of musat2013recommendation; in the hotel domain, the set users who have rated the same hotels is sparser than usual recommendation datasets.",
"Interestingly, the Mean model obtains the best performance on the 20-core subset, while HFT achieves the best performance on the 5-core subset. We hypothesize that HFT and TransNet(-Ext) models perform better on the 5-core than 20-core subset, because of the number of data. More specifically, HFT employs Latent Dirichlet Allocation BIBREF27 to approximate topic and word distributions. Thus, the probabilities are more accurate with a text corpus approximately ten times larger."
],
[
"The results of the baselines are available in Table TABREF36. HR@$k$ and NDCG@$k$ correspond to the Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG), evaluated on the top-$k$ computed ranked items for a particular test user, and then averaged over all test users.",
"First, we can see that NeuMF significantly outperforms all other baselines on both $k$-core subsets. The other methods GMF and MLP - both used within NeuMF - also show quite strong performance and comparable performance. However, NeuFM achieves higher results by fusing GMF and MNLP within the same model. Second, if we compare ItemKNN and UserKNN, we observe that on both subsets, the user collaborative filtering approach underperform compared to its item-based variant, that matches the founding in the rating prediction task of the previous section, and the work of musat2013recommendation,musat2015personalizing. Additionally, PureSVD achieves comparable results with UserKNN.",
"Finally, the two non-personalized baselines RAND and POP obtain unsurprisingly low results, indicating the necessity of modeling user's preferences to a personalized recommendation."
],
[
"In this work, we introduce HotelRec, a novel large-scale dataset of hotel reviews based on TripAdvisor, and containing approximately 50 million reviews. Each review includes the user profile, the hotel URL, the overall rating, the summary, the user-written text, the date, and multiple sub-ratings of aspects when provided. To the best of our knowledge, HotelRec is the largest publicly available dataset in the hotel domain ($50M$ versus $0.9M$) and additionally, the largest recommendation dataset in a single domain and with textual reviews ($50M$ versus $22M$).",
"We further analyze the HotelRec dataset and provide benchmark results for two tasks: rating prediction and recommendation performance. We apply multiple common baselines, from non-personalized methods to competitive models, and show that reasonable performance could be obtained, but still far from results achieved in other domains in the literature.",
"In future work, we could easily increase the dataset with other languages and use it for multilingual recommendation. We release HotelRec for further research: https://github.com/Diego999/HotelRec."
]
],
"section_name": [
"Introduction",
"Related Work",
"HotelRec",
"HotelRec ::: Data Collection",
"HotelRec ::: Descriptive Statistics",
"HotelRec ::: Overall and Sub-Ratings",
"Experiments and Results",
"Experiments and Results ::: Datasets",
"Experiments and Results ::: Evaluation Metrics and Baselines",
"Experiments and Results ::: Rating Prediction",
"Experiments and Results ::: Recommendation Performance",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1507b37c15070202975d05798fa2160ec37a9c87",
"1e766419f4bffed23703d63bb0dbce9abf97322d",
"2ae0896bb978fa4989dd04603d4b3f4c57f11f64"
],
"answer": [
{
"evidence": [
"We first crawled all areas listed on TripAdvisor's SiteIndex. Each area link leads to another page containing different information, such as a list of accommodations, or restaurants; we gathered all links corresponding to hotels. Our robot then opened each of the hotel links and filtered out hotels without any review. In total, in July 2019, there were $365\\,056$ out of $2\\,502\\,140$ hotels with at least one review.",
"Although the pagination of reviews for each hotel is accessible via a URL, the automatic scraping is discouraged: loading a page takes approximately one second, some pop-ups might appear randomly, and the robot will be eventually blocked because of its speed. We circumvented all these methods by mimicking a human behavior with the program Selenium, that we have linked with Python. However, each action (i.e., disabling the calendar, going to the next page of reviews) had to be separated by a time gap of one second. Moreover, each hotel employed a review pagination system displaying only five reviews at the same time, which majorly slowed down the crawling.",
"An example review is shown in Figure FIGREF1. For each review, we collected: the URL of the user's profile and hotel, the date, the overall rating, the summary (i.e., the title of the review), the written text, and the multiple sub-ratings when provided. These sub-ratings correspond to a fine-grained evaluation of a specific aspect, such as Service, Cleanliness, or Location. The full list of fine-grained aspects is available in Figure FIGREF1, and their correlation in Section SECREF18",
"We naively parallelized the crawling on approximately 100 cores for two months. After removing duplicated reviews, as in mcauley2013hidden, we finally collected $50\\,264\\,531$ hotel reviews."
],
"extractive_spans": [],
"free_form_answer": "The authors crawled all areas listed an TripAdvisor's SiteIndex and gathered all links related to hotels. Using Selenium, they put a time gap between opening each page, to mimic human behaviour and avoid having their scraper being detected. They discarded pages without a review and for pages with a review, they collected the review's profile, the overall rating, the summary, the written text and subratings, where given. ",
"highlighted_evidence": [
"We first crawled all areas listed on TripAdvisor's SiteIndex. Each area link leads to another page containing different information, such as a list of accommodations, or restaurants; we gathered all links corresponding to hotels. Our robot then opened each of the hotel links and filtered out hotels without any review. In total, in July 2019, there were $365\\,056$ out of $2\\,502\\,140$ hotels with at least one review.",
"Although the pagination of reviews for each hotel is accessible via a URL, the automatic scraping is discouraged: loading a page takes approximately one second, some pop-ups might appear randomly, and the robot will be eventually blocked because of its speed. We circumvented all these methods by mimicking a human behavior with the program Selenium, that we have linked with Python. However, each action (i.e., disabling the calendar, going to the next page of reviews) had to be separated by a time gap of one second. Moreover, each hotel employed a review pagination system displaying only five reviews at the same time, which majorly slowed down the crawling.",
"For each review, we collected: the URL of the user's profile and hotel, the date, the overall rating, the summary (i.e., the title of the review), the written text, and the multiple sub-ratings when provided. These sub-ratings correspond to a fine-grained evaluation of a specific aspect, such as Service, Cleanliness, or Location",
"We naively parallelized the crawling on approximately 100 cores for two months. After removing duplicated reviews, as in mcauley2013hidden, we finally collected $50\\,264\\,531$ hotel reviews."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In contrast, we propose in this work HotelRec, a novel large-scale hotel recommendation dataset based on hotel reviews from TripAdvisor, and containing approximately 50 million reviews. A sample review is shown in Figure FIGREF1. To the best of our knowledge, HotelRec is the largest publicly available hotel review dataset (at least 60 times larger than previous datasets). Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance. Although reasonable performance is achieved by a state-of-the-art method, there is still room for improvement. We believe that HotelRec will offer opportunities to apply and develop new large recommender systems, and push furthermore the recommendation for hotels, which differs from traditional datasets."
],
"extractive_spans": [
"hotel reviews from TripAdvisor"
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast, we propose in this work HotelRec, a novel large-scale hotel recommendation dataset based on hotel reviews from TripAdvisor, and containing approximately 50 million reviews."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Everyday a large number of people write hotel reviews on on-line platforms (e.g., Booking, TripAdvisor) to share their opinions toward multiple aspects, such as their Overall experience, the Service, or the Location. Among the most popular platforms, we selected TripAdvisor: according to their third quarterly report of November 2019, on the U.S. Securities and Exchange Commission website, TripAdvisor is the world's largest online travel site with approximately $1.4$ million hotels. Consequently, we created our dataset HotelRec based on TripAdvisor hotel reviews. The statistics of the HotelRec dataset, the 5-core, and 20-core versions are shown in Table TABREF2; each contains at least $k$ reviews for each user or item."
],
"extractive_spans": [
"TripAdvisor hotel reviews"
],
"free_form_answer": "",
"highlighted_evidence": [
"Everyday a large number of people write hotel reviews on on-line platforms (e.g., Booking, TripAdvisor) to share their opinions toward multiple aspects, such as their Overall experience, the Service, or the Location. Among the most popular platforms, we selected TripAdvisor: according to their third quarterly report of November 2019, on the U.S. Securities and Exchange Commission website, TripAdvisor is the world's largest online travel site with approximately $1.4$ million hotels. Consequently, we created our dataset HotelRec based on TripAdvisor hotel reviews. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"599952c0c9a07a242cb5b1e0e5839c48febd5430",
"9c55d77bb1e12a678435fafe7e7e76776b7c7de7"
],
"answer": [
{
"evidence": [
"Relating to the items, there are $365\\,056$ hotels, which is roughly 60 times smaller than the number of users. This ratio is also consistent with other datasets BIBREF14, BIBREF15."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Relating to the items, there are $365\\,056$ hotels, which is roughly 60 times smaller than the number of users. This ratio is also consistent with other datasets BIBREF14, BIBREF15."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"229be2e10f6ecf7fd456feb321045b9ffda9b936",
"5005ab5df40aa859a3bee03581c027f09acf676b",
"af4f96fe776b481c3f6c629234955134fc468724"
],
"answer": [
{
"evidence": [
"In contrast, we propose in this work HotelRec, a novel large-scale hotel recommendation dataset based on hotel reviews from TripAdvisor, and containing approximately 50 million reviews. A sample review is shown in Figure FIGREF1. To the best of our knowledge, HotelRec is the largest publicly available hotel review dataset (at least 60 times larger than previous datasets). Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance. Although reasonable performance is achieved by a state-of-the-art method, there is still room for improvement. We believe that HotelRec will offer opportunities to apply and develop new large recommender systems, and push furthermore the recommendation for hotels, which differs from traditional datasets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In this section, we first describe two different $k$-core subsets of the HotelRec dataset that we used to evaluate multiple baselines on two tasks: rating prediction and recommendation performance. We then detail the models we employed, and discuss their results."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we first describe two different $k$-core subsets of the HotelRec dataset that we used to evaluate multiple baselines on two tasks: rating prediction and recommendation performance. We then detail the models we employed, and discuss their results."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In contrast, we propose in this work HotelRec, a novel large-scale hotel recommendation dataset based on hotel reviews from TripAdvisor, and containing approximately 50 million reviews. A sample review is shown in Figure FIGREF1. To the best of our knowledge, HotelRec is the largest publicly available hotel review dataset (at least 60 times larger than previous datasets). Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance. Although reasonable performance is achieved by a state-of-the-art method, there is still room for improvement. We believe that HotelRec will offer opportunities to apply and develop new large recommender systems, and push furthermore the recommendation for hotels, which differs from traditional datasets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Furthermore, we analyze various aspects of the HotelRec dataset and benchmark the performance of different models on two tasks: rating prediction and recommendation performance. Although reasonable performance is achieved by a state-of-the-art method, there is still room for improvement."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How did they obtain the dataset?",
"Are the recommendations specific to a region?",
"Did they experiment on this dataset?"
],
"question_id": [
"52868394eb2b3b37eb5f47f51c06ad53061f4495",
"59dc6b1d3da74a2e67a6fb1ce940b28d9e3d8de0",
"713e1c7b0ab17759ba85d7cd2041e387831661df"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of the whole HotelRec dataset and its kcore subsets (number of users, items, interactions, and the sparsity ratio).",
"Figure 1: Review from TripAdvisor, with sub-ratings.",
"Table 3: Descriptive statistics of the ratings of the Overall and fine-grained aspect ratings (e.g., Service, Rooms). Coverage describes the ratio of reviews having a particular fine-grained rating. The other columns represent the average, and the 25th, 50th (median), 75th percentiles of the individual ratings.",
"Table 2: Statistics of the number of rated fine-grained aspects in the HotelRec dataset. Coverage is the ratio of reviews having i sub-ratings over: All reviews, and only reviews With Sub-Ratings available.",
"Figure 2: Histograms of multiple attributes of HotelRec, in logarithmic scales: number of reviews per user, item and year, and number of words per review.",
"Figure 3: Pearson correlation between all fine-grained and overall ratings. All aspect pairs are highly correlated.",
"Table 4: Evaluation of rating prediction in terms of Mean Square Error (MSE) and Root Mean Square Error (RMSE).",
"Table 5: Evaluation of Top-K recommendation performance in terms of Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG)."
],
"file": [
"1-Table1-1.png",
"1-Figure1-1.png",
"3-Table3-1.png",
"3-Table2-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table4-1.png",
"6-Table5-1.png"
]
} | [
"How did they obtain the dataset?"
] | [
[
"2002.06854-HotelRec-0",
"2002.06854-HotelRec ::: Data Collection-3",
"2002.06854-Introduction-4",
"2002.06854-HotelRec ::: Data Collection-1",
"2002.06854-HotelRec ::: Data Collection-0",
"2002.06854-HotelRec ::: Data Collection-2"
]
] | [
"The authors crawled all areas listed an TripAdvisor's SiteIndex and gathered all links related to hotels. Using Selenium, they put a time gap between opening each page, to mimic human behaviour and avoid having their scraper being detected. They discarded pages without a review and for pages with a review, they collected the review's profile, the overall rating, the summary, the written text and subratings, where given. "
] | 142 |
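
The rating-prediction and top-k recommendation benchmarks in the record above are reported in MSE/RMSE and in HR@k / NDCG@k averaged over test users. A minimal Python sketch of how such metrics are typically computed for one test user is given below, assuming a single held-out relevant item per user; the function names, toy ratings, and hotel ids are illustrative assumptions rather than anything taken from the HotelRec code.

import math

def mse_rmse(y_true, y_pred):
    # Mean squared error over rating predictions, and its square root (RMSE).
    se = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    mse = sum(se) / len(se)
    return mse, math.sqrt(mse)

def hit_ratio_at_k(ranked_items, held_out_item, k):
    # 1.0 if the held-out item appears among the top-k ranked items, else 0.0.
    return 1.0 if held_out_item in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, held_out_item, k):
    # With a single relevant item, NDCG reduces to 1 / log2(rank + 1) for a 1-based rank in the top-k.
    topk = ranked_items[:k]
    if held_out_item in topk:
        return 1.0 / math.log2(topk.index(held_out_item) + 2)
    return 0.0

# Toy usage for one test user; per-user HR@k and NDCG@k scores are then averaged over all test users.
print(mse_rmse([4.0, 5.0, 3.0], [3.5, 4.5, 3.0]))
print(hit_ratio_at_k(["h7", "h2", "h9", "h1"], "h9", k=3))
print(ndcg_at_k(["h7", "h2", "h9", "h1"], "h9", k=3))
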
1906.05506 | Character n-gram Embeddings to Improve RNN Language Models | This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks. | {
"paragraphs": [
[
"Neural language models have played a crucial role in recent advances of neural network based methods in natural language processing (NLP). For example, neural encoder-decoder models, which are becoming the de facto standard for various natural language generation tasks including machine translation BIBREF1 , summarization BIBREF2 , dialogue BIBREF3 , and caption generation BIBREF4 can be interpreted as conditional neural language models. Moreover, neural language models can be used for rescoring outputs from traditional methods, and they significantly improve the performance of automatic speech recognition BIBREF5 . This implies that better neural language models improve the performance of application tasks.",
"In general, neural language models require word embeddings as an input BIBREF6 . However, as described by BIBREF7 , this approach cannot make use of the internal structure of words although the internal structure is often an effective clue for considering the meaning of a word. For example, we can comprehend that the word `causal' is related to `cause' immediately because both words include the same character sequence `caus'. Thus, if we incorporate a method that handles the internal structure such as character information, we can improve the quality of neural language models and probably make them robust to infrequent words.",
"To incorporate the internal structure, BIBREF7 concatenated character embeddings with an input word embedding. They demonstrated that incorporating character embeddings improved the performance of RNN language models. Moreover, BIBREF8 and BIBREF9 applied Convolutional Neural Networks (CNN) to construct word embeddings from character embeddings.",
"On the other hand, in the field of word embedding construction, some previous researchers found that character INLINEFORM0 -grams are more useful than single characters BIBREF0 , BIBREF10 . In particular, BIBREF0 demonstrated that constructing word embeddings from character INLINEFORM1 -gram embeddings outperformed the methods that construct word embeddings from character embeddings by using CNN or a Long Short-Term Memory (LSTM).",
"Based on their reports, in this paper, we propose a neural language model that utilizes character INLINEFORM0 -gram embeddings. Our proposed method encodes character INLINEFORM1 -gram embeddings to a word embedding with simplified Multi-dimensional Self-attention (MS) BIBREF11 . We refer to this constructed embedding as char INLINEFORM2 -MS-vec. The proposed method regards char INLINEFORM3 -MS-vec as an input in addition to a word embedding.",
"We conduct experiments on the well-known benchmark datasets: Penn Treebank, WikiText-2, and WikiText-103. Our experiments indicate that the proposed method outperforms neural language models trained with well-tuned hyperparameters and achieves state-of-the-art scores on each dataset. In addition, we incorporate our proposed method into a standard neural encoder-decoder model and investigate its effect on machine translation and headline generation. We indicate that the proposed method also has a positive effect on such tasks."
],
[
"In this study, we focus on RNN language models, which are widely used in the literature. This section briefly overviews the basic RNN language model.",
"In language modeling, we compute joint probability by using the product of conditional probabilities. Let INLINEFORM0 be a word sequence with length INLINEFORM1 , namely, INLINEFORM2 . We formally obtain the joint probability of word sequence INLINEFORM3 as follows: DISPLAYFORM0 ",
" INLINEFORM0 is generally assumed to be 1 in this literature, i.e., INLINEFORM1 , and thus we can ignore its calculation.",
"To estimate the conditional probability INLINEFORM0 , RNN language models encode sequence INLINEFORM1 into a fixed-length vector and compute the probability distribution of each word from this fixed-length vector. Let INLINEFORM2 be the vocabulary size and let INLINEFORM3 be the probability distribution of the vocabulary at timestep INLINEFORM4 . Moreover, let INLINEFORM5 be the dimension of the hidden state of an RNN and let INLINEFORM6 be the dimensions of embedding vectors. Then, RNN language models predict the probability distribution INLINEFORM7 by the following equation: DISPLAYFORM0 ",
" where INLINEFORM0 is a weight matrix, INLINEFORM1 is a bias term, and INLINEFORM2 is a word embedding matrix. INLINEFORM3 and INLINEFORM4 are a one-hot vector of an input word INLINEFORM5 and the hidden state of the RNN at timestep INLINEFORM6 , respectively. We define INLINEFORM7 at timestep INLINEFORM8 as a zero vector, that is, INLINEFORM9 . Let INLINEFORM10 represent an abstract function of an RNN, which might be the LSTM, the Quasi-Recurrent Neural Network (QRNN) BIBREF12 , or any other RNN variants."
],
[
"We incorporate char INLINEFORM0 -MS-vec, which is an embedding constructed from character INLINEFORM1 -gram embeddings, into RNN language models since, as discussed earlier, previous studies revealed that we can construct better word embeddings by using character INLINEFORM2 -gram embeddings BIBREF0 , BIBREF10 . In particular, we expect char INLINEFORM3 -MS-vec to help represent infrequent words by taking advantage of the internal structure.",
"Figure FIGREF4 is the overview of the proposed method using character 3-gram embeddings (char3-MS-vec). As illustrated in this figure, our proposed method regards the sum of char3-MS-vec and the standard word embedding as an input of an RNN. In other words, let INLINEFORM0 be char INLINEFORM1 -MS-vec and we replace Equation with the following: DISPLAYFORM0 "
],
[
"To compute INLINEFORM0 , we apply an encoder to character INLINEFORM1 -gram embeddings. Previous studies demonstrated that additive composition, which computes the (weighted) sum of embeddings, is a suitable method for embedding construction BIBREF13 , BIBREF0 . Thus, we adopt (simplified) multi-dimensional self-attention BIBREF11 , which computes weights for each dimension of given embeddings and sums up the weighted embeddings (i.e., element-wise weighted sum) as an encoder. Let INLINEFORM2 be the character INLINEFORM3 -gram embeddings of an input word, let INLINEFORM4 be the number of character INLINEFORM5 -grams extracted from the word, and let INLINEFORM6 be the matrix whose INLINEFORM7 -th column corresponds to INLINEFORM8 , that is, INLINEFORM9 . The multi-dimensional self-attention constructs the word embedding INLINEFORM10 by the following equations: DISPLAYFORM0 ",
" where INLINEFORM0 means element-wise product of vectors, INLINEFORM1 is a weight matrix, INLINEFORM2 is the INLINEFORM3 -th column of a given matrix, and INLINEFORM4 is the INLINEFORM5 -th element of a given vector. In short, Equation applies the softmax function to each row of INLINEFORM6 and extracts the INLINEFORM7 -th column as INLINEFORM8 .",
"Let us consider the case where an input word is `the' and we use character 3-gram in Figure FIGREF4 . We prepare special characters `' and `$' to represent the beginning and end of the word, respectively. Then, `the' is composed of three character 3-grams: `th', `the', and `he$'. We multiply the embeddings of these 3-grams by transformation matrix INLINEFORM0 and apply the softmax function to each row as in Equation . As a result of the softmax, we obtain a matrix that contains weights for each embedding. The size of the computed matrix is identical to the input embedding matrix: INLINEFORM1 . We then compute Equation EQREF7 , i.e., the weighted sum of the embeddings, and add the resulting vector to the word embedding of `the'. Finally, we input the vector into an RNN to predict the next word."
],
[
" BIBREF14 and BIBREF15 proposed a word tying method (WT) that shares the word embedding matrix ( INLINEFORM0 in Equation ) with the weight matrix to compute probability distributions ( INLINEFORM1 in Equation EQREF3 ). They demonstrated that WT significantly improves the performance of RNN language models.",
"In this study, we adopt char INLINEFORM0 -MS-vec as the weight matrix in language modeling. Concretely, we use INLINEFORM1 instead of INLINEFORM2 in Equation EQREF3 , where INLINEFORM3 contains char INLINEFORM4 -MS-vec for all words in the vocabulary."
],
[
"We investigate the effect of char INLINEFORM0 -MS-vec on the word-level language modeling task. In detail, we examine the following four research questions;"
],
[
"We used the standard benchmark datasets for the word-level language modeling: Penn Treebank (PTB) BIBREF16 , WikiText-2 (WT2), and WikiText-103 (WT103) BIBREF17 . BIBREF18 and BIBREF17 published pre-processed PTB, WT2, and WT103. Following the previous studies, we used these pre-processed datasets for our experiments.",
"Table TABREF14 describes the statistics of the datasets. Table TABREF14 demonstrates that the vocabulary size of WT103 is too large, and thus it is impractical to compute char INLINEFORM0 -MS-vec for all words at every step. Therefore, we did not use INLINEFORM1 for word tying. In other words, we used only word embeddings INLINEFORM2 as the weight matrix INLINEFORM3 in WT103.",
"For machine translation, we used two kinds of language pairs: English-French and English-German sentences in the IWSLT 2016 dataset. The dataset contains about 208K English-French pairs and 189K English-German pairs. We conducted four translation tasks: from English to each language (En-Fr and En-De), and their reverses (Fr-En and De-En).",
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . The training set contains about 3.8M sentence-headline pairs. For evaluation, we exclude the test set constructed by BIBREF2 because it contains some invalid instances, as reported in BIBREF33 . We instead used the test sets constructed by BIBREF33 and BIBREF34 ."
],
[
"For base RNN language models, we adopted the state-of-the-art LSTM language model BIBREF19 for PTB and WT2, and QRNN for WT103 BIBREF12 . BIBREF20 demonstrated that the standard LSTM trained with appropriate hyperparameters outperformed various architectures such as Recurrent Highway Networks (RHN) BIBREF21 . In addition to several regularizations, BIBREF19 introduced Averaged Stochastic Gradient Descent (ASGD) BIBREF22 to train the 3-layered LSTM language model. As a result, their ASGD Weight-Dropped LSTM (AWD-LSTM) achieved state-of-the-art results on PTB and WT2. For WT103, BIBREF23 achieved the top score with the 4-layered QRNN. Thus, we used AWD-LSTM for PTB and WT2, and QRNN for WT103 as the base language models, respectively. We used their implementations for our experiments."
],
[
"Table TABREF15 shows perplexities of the baselines and the proposed method. We varied INLINEFORM0 for char INLINEFORM1 -MS-vec from 2 to 4. For the baseline, we also applied two word embeddings to investigate the performance in the case where we use more kinds of word embeddings. In detail, we prepared INLINEFORM2 and used INLINEFORM3 instead of INLINEFORM4 in Equation . Table TABREF15 also shows the number of character INLINEFORM5 -grams in each dataset. This table indicates that char INLINEFORM6 -MS-vec improved the performance of state-of-the-art models except for char4-MS-vec on WT103. These results indicate that char INLINEFORM7 -MS-vec can raise the quality of word-level language models. In particular, Table TABREF15 shows that char3-MS-vec achieved the best scores consistently. In contrast, an additional word embedding did not improve the performance. This fact implies that the improvement of char INLINEFORM8 -MS-vec is caused by using character INLINEFORM9 -grams. Thus, we answer yes to the first research question.",
"Table TABREF16 shows the training time spent on each epoch. We calculated it on the NVIDIA Tesla P100. Table TABREF16 indicates that the proposed method requires more computational time than the baseline unfortunately. We leave exploring a faster structure for our future work.",
"Table TABREF17 shows perplexities on the PTB dataset where the frequency of an input word is lower than 2,000 in the training data. This table indicates that the proposed method can improve the performance even if an input word is infrequent. In other words, char INLINEFORM0 -MS-vec helps represent the meanings of infrequent words. Therefore, we answer yes to the second research question in the case of our experimental settings.",
"We explored the effectiveness of multi-dimensional self-attention for word embedding construction. Table TABREF24 shows perplexities of using several encoders on the PTB dataset. As in BIBREF8 , we applied CNN to construct word embeddings (charCNN in Table TABREF24 ). Moreover, we applied the summation and standard self-attention, which computes the scalar value as a weight for a character INLINEFORM0 -gram embedding, to construct word embeddings (char INLINEFORM1 -Sum-vec and char INLINEFORM2 -SS-vec, respectively). For CNN, we used hyperparameters identical to BIBREF8 (“Original Settings” in Table TABREF24 ) but the setting has two differences from other architectures: 1. The dimension of the computed vectors is much larger than the dimension of the baseline word embeddings and 2. The dimension of the input character embeddings is much smaller than the dimension of the baseline word embeddings. Therefore, we added two configurations: assigning the dimension of the computed vectors and input character embeddings a value identical to the baseline word embeddings (in Table TABREF24 , “Small CNN result dims” and “Large embedding dims”, respectively).",
"Table TABREF24 shows that the proposed char INLINEFORM0 -MS-vec outperformed charCNN even though the original settings of charCNN had much larger parameters. Moreover, we trained charCNN with two additional settings but CNN did not improve the baseline performance. This result implies that char INLINEFORM1 -MS-vec is better embeddings than ones constructed by applying CNN to character embeddings. Table TABREF24 also indicates that char INLINEFORM2 -Sum-vec was harmful to the performance. Moreover, char INLINEFORM3 -SS-vec did not have a positive effect on the baseline. These results answer yes to the third research question; our use of multi-dimensional self-attention is more appropriate for constructing word embeddings from character INLINEFORM4 -gram embeddings.",
"Table TABREF24 also shows that excluding INLINEFORM0 from word tying (“Exclude INLINEFORM1 from word tying”) achieved almost the same score as the baseline. Moreover, this table indicates that performance fails as the the number of parameters is increased. Thus, we need to assign INLINEFORM2 to word tying to prevent over-fitting for the PTB dataset. In addition, this result implies that the performance of WT103 in Table TABREF15 might be raised if we can apply word tying to WT103.",
"Moreover, to investigate the effect of only char INLINEFORM0 -MS-vec, we ignore INLINEFORM1 in Equation EQREF5 . We refer to this setting as “Remove word embeddings INLINEFORM2 ” in Table TABREF24 . Table TABREF24 shows cahr3-MS-vec and char4-MS-vec are superior to char2-MS-vec. In the view of perplexity, char3-MS-vec and char4-MS-vec achieved comparable scores to each other. On the other hand, char3-MS-vec is composed of much smaller parameters. Furthermore, we decreased the embedding size INLINEFORM3 to adjust the number of parameters to the same size as the baseline (“Same #Params as baseline” in Table TABREF24 ). In this setting, char3-MS-vec achieved the best perplexity. Therefore, we consider that char3-MS-vec is more useful than char4-MS-vec, which is the answer to the fourth research question. We use the combination of the char3-MS-vec INLINEFORM4 and word embedding INLINEFORM5 in the following experiments.",
"Finally, we compare the proposed method with the published scores reported in previous studies. Tables TABREF25 , TABREF26 , and TABREF27 , respectively, show perplexities of the proposed method and previous studies on PTB, WT2, and WT103. Since AWD-LSTM-MoS BIBREF28 and AWD-LSTM-DOC BIBREF29 achieved the state-of-the-art scores on PTB and WT2, we combined char3-MS-vec with them. These tables show that the proposed method improved the performance of the base model and outperformed the state-of-the-art scores on all datasets. In particular, char3-MS-vec improved perplexity by at least 1 point from current best scores on the WT103 dataset.",
"Tables TABREF31 and TABREF32 show the results of machine translation and headline generation, respectively. These tables show that EncDec+char3-MS-vec outperformed EncDec in all test data. In other words, these results indicate that our proposed method also has a positive effect on the neural encoder-decoder model. Moreover, it is noteworthy that char3-MS-vec improved the performance of EncDec even though the vocabulary set constructed by BPE contains subwords. This implies that character INLINEFORM0 -gram embeddings improve the quality of not only word embeddings but also subword embeddings.",
"In addition to the results of our implementations, the lower portion of Table TABREF32 contains results reported in previous studies. Table TABREF32 shows that EncDec+char3-MS-vec also outperformed the methods proposed in previous studies. Therefore, EncDec+char3-MS-vec achieved the top scores in the test sets constructed by BIBREF33 and BIBREF34 even though it does not have a task-specific architecture such as the selective gate proposed by BIBREF33 .",
"In these experiments, we only applied char3-MS-vec to EncDec but BIBREF38 indicated that combining multiple kinds of subword units can improve the performance. We will investigate the effect of combining several character INLINEFORM0 -gram embeddings in future work."
],
[
"As described in Section SECREF1 , neural encoder-decoder models can be interpreted as conditional neural language models. Therefore, to investigate if the proposed method contributes to encoder-decoder models, we conduct experiments on machine translation and headline generation tasks."
],
[
"We employed the neural encoder-decoder with attention mechanism described in BIBREF34 as the base model. Its encoder consists of a 2-layer bidirectional LSTM and its decoder consists of a 2-layer LSTM with attention mechanism proposed by BIBREF36 . We refer to this neural encoder-decoder as EncDec. To investigate the effect of the proposed method, we introduced char3-MS-vec into EncDec. Here, we applied char3-MS-vec to both the encoder and decoder. Moreover, we did not apply word tying technique to EncDec because it is default setting in the widely-used encoder-decoder implementation.",
"We set the embedding size and dimension of the LSTM hidden state to 500 for machine translation and 400 for headline generation. The mini-batch size is 64 for machine translation and 256 for headline generation. For other hyperparameters, we followed the configurations described in BIBREF34 . We constructed the vocabulary set by using Byte-Pair-Encoding (BPE) BIBREF37 because BPE is a currently widely-used technique for vocabulary construction. We set the number of BPE merge operations to 16K for machine translation and 5K for headline generation."
],
[
"In this paper, we incorporated character information with RNN language models. Based on the research in the field of word embedding construction BIBREF0 , we focused on character INLINEFORM0 -gram embeddings to construct word embeddings. We used multi-dimensional self-attention BIBREF11 to encode character INLINEFORM1 -gram embeddings. Our proposed char INLINEFORM2 -MS-vec improved the performance of state-of-the-art RNN language models and achieved the best perplexities on Penn Treebank, WikiText-2, and WikiText-103. Moreover, we investigated the effect of char INLINEFORM3 -MS-vec on application tasks, specifically, machine translation and headline generation. Our experiments show that char INLINEFORM4 -MS-vec also improved the performance of a neural encoder-decoder on both tasks."
],
[
"This work was supported by JSPS KAKENHI Grant Number JP18K18119. We would like to thank the anonymous reviewers for their helpful suggestions and comments."
]
],
"section_name": [
"Introduction",
"RNN Language Model",
"Incorporating Character nn-gram Embeddings",
"Multi-dimensional Self-attention",
"Word Tying",
"Experiments on Language Modeling",
"Datasets",
"Baseline RNN Language Model",
"Results",
"Experiments on Applications",
"Experimental Settings",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1616eb202279e4d8fb461fc8994cc932f6c4832e",
"1b937f718af4efca2b469c6128082152f9ab9fba",
"663eb48855a9b6bd1e4ffd53e2e1066f30b8f11f"
],
"answer": [
{
"evidence": [
"Moreover, to investigate the effect of only char INLINEFORM0 -MS-vec, we ignore INLINEFORM1 in Equation EQREF5 . We refer to this setting as “Remove word embeddings INLINEFORM2 ” in Table TABREF24 . Table TABREF24 shows cahr3-MS-vec and char4-MS-vec are superior to char2-MS-vec. In the view of perplexity, char3-MS-vec and char4-MS-vec achieved comparable scores to each other. On the other hand, char3-MS-vec is composed of much smaller parameters. Furthermore, we decreased the embedding size INLINEFORM3 to adjust the number of parameters to the same size as the baseline (“Same #Params as baseline” in Table TABREF24 ). In this setting, char3-MS-vec achieved the best perplexity. Therefore, we consider that char3-MS-vec is more useful than char4-MS-vec, which is the answer to the fourth research question. We use the combination of the char3-MS-vec INLINEFORM4 and word embedding INLINEFORM5 in the following experiments."
],
"extractive_spans": [
"cahr3-MS-vec",
"char4-MS-vec",
"char2-MS-vec"
],
"free_form_answer": "",
"highlighted_evidence": [
"Moreover, to investigate the effect of only char INLINEFORM0 -MS-vec, we ignore INLINEFORM1 in Equation EQREF5 . We refer to this setting as “Remove word embeddings INLINEFORM2 ” in Table TABREF24 . Table TABREF24 shows cahr3-MS-vec and char4-MS-vec are superior to char2-MS-vec."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF15 shows perplexities of the baselines and the proposed method. We varied INLINEFORM0 for char INLINEFORM1 -MS-vec from 2 to 4. For the baseline, we also applied two word embeddings to investigate the performance in the case where we use more kinds of word embeddings. In detail, we prepared INLINEFORM2 and used INLINEFORM3 instead of INLINEFORM4 in Equation . Table TABREF15 also shows the number of character INLINEFORM5 -grams in each dataset. This table indicates that char INLINEFORM6 -MS-vec improved the performance of state-of-the-art models except for char4-MS-vec on WT103. These results indicate that char INLINEFORM7 -MS-vec can raise the quality of word-level language models. In particular, Table TABREF15 shows that char3-MS-vec achieved the best scores consistently. In contrast, an additional word embedding did not improve the performance. This fact implies that the improvement of char INLINEFORM8 -MS-vec is caused by using character INLINEFORM9 -grams. Thus, we answer yes to the first research question."
],
"extractive_spans": [],
"free_form_answer": "2, 3 and 4",
"highlighted_evidence": [
"We varied INLINEFORM0 for char INLINEFORM1 -MS-vec from 2 to 4."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF15 shows perplexities of the baselines and the proposed method. We varied INLINEFORM0 for char INLINEFORM1 -MS-vec from 2 to 4. For the baseline, we also applied two word embeddings to investigate the performance in the case where we use more kinds of word embeddings. In detail, we prepared INLINEFORM2 and used INLINEFORM3 instead of INLINEFORM4 in Equation . Table TABREF15 also shows the number of character INLINEFORM5 -grams in each dataset. This table indicates that char INLINEFORM6 -MS-vec improved the performance of state-of-the-art models except for char4-MS-vec on WT103. These results indicate that char INLINEFORM7 -MS-vec can raise the quality of word-level language models. In particular, Table TABREF15 shows that char3-MS-vec achieved the best scores consistently. In contrast, an additional word embedding did not improve the performance. This fact implies that the improvement of char INLINEFORM8 -MS-vec is caused by using character INLINEFORM9 -grams. Thus, we answer yes to the first research question."
],
"extractive_spans": [
"char3"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF15 shows that char3-MS-vec achieved the best scores consistently. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1f75747df613b4930c0d3f348e6db774d4f30f1a",
"b9d75380db322f95b0f9ce4bf5ccd33cf60a7ddd",
"b977be661f3ee3aa3e5762d6f2b89aa886f78d96"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3644d38477dd196a1ea1bd93b12d5c9873e8bbc1",
"68954c947eac3fe24142f876f90686130ee5add9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3d8dfc1c483967e298c62170248b06396767b008",
"7d9dcb363af3adb0eb93cf8e5a176bbd85601fe1",
"a907336c418cd2fa81704549f1e27c79b75d9231"
],
"answer": [
{
"evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . The training set contains about 3.8M sentence-headline pairs. For evaluation, we exclude the test set constructed by BIBREF2 because it contains some invalid instances, as reported in BIBREF33 . We instead used the test sets constructed by BIBREF33 and BIBREF34 ."
],
"extractive_spans": [
"English Gigaword corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . ",
" For evaluation, we exclude the test set constructed by BIBREF2 because it contains some invalid instances, as reported in BIBREF33 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . The training set contains about 3.8M sentence-headline pairs. For evaluation, we exclude the test set constructed by BIBREF2 because it contains some invalid instances, as reported in BIBREF33 . We instead used the test sets constructed by BIBREF33 and BIBREF34 ."
],
"extractive_spans": [
"English Gigaword corpus BIBREF35"
],
"free_form_answer": "",
"highlighted_evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . The training set contains about 3.8M sentence-headline pairs. For evaluation, we exclude the test set constructed by BIBREF2 because it contains some invalid instances, as reported in BIBREF33 . We instead used the test sets constructed by BIBREF33 and BIBREF34 ."
],
"extractive_spans": [
" the annotated English Gigaword corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"For headline generation, we used sentence-headline pairs extracted from the annotated English Gigaword corpus BIBREF35 in the same manner as BIBREF2 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b3554febdb7c02b593c8df5d3ba32767afd07911",
"d1415cb57e576f376e9240214a469849dcc1111b",
"ff84c4ca444364b9bcd70d04bb742f19363bfe48"
],
"answer": [
{
"evidence": [
"Tables TABREF31 and TABREF32 show the results of machine translation and headline generation, respectively. These tables show that EncDec+char3-MS-vec outperformed EncDec in all test data. In other words, these results indicate that our proposed method also has a positive effect on the neural encoder-decoder model. Moreover, it is noteworthy that char3-MS-vec improved the performance of EncDec even though the vocabulary set constructed by BPE contains subwords. This implies that character INLINEFORM0 -gram embeddings improve the quality of not only word embeddings but also subword embeddings.",
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"extractive_spans": [],
"free_form_answer": "BLEU score of 35.48 on En-Fr, 23.27 on En-De, 34.43 on Fr-En, 28.86 on De-En",
"highlighted_evidence": [
"Tables TABREF31 and TABREF32 show the results of machine translation and headline generation, respectively.",
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"extractive_spans": [],
"free_form_answer": "BLEU scores are: En-Fr(35.84), En-De(23.27), Fr-En(34.43) and De-En(28.86).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Tables TABREF31 and TABREF32 show the results of machine translation and headline generation, respectively. These tables show that EncDec+char3-MS-vec outperformed EncDec in all test data. In other words, these results indicate that our proposed method also has a positive effect on the neural encoder-decoder model. Moreover, it is noteworthy that char3-MS-vec improved the performance of EncDec even though the vocabulary set constructed by BPE contains subwords. This implies that character INLINEFORM0 -gram embeddings improve the quality of not only word embeddings but also subword embeddings.",
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"extractive_spans": [],
"free_form_answer": "Bleu on IWSLT16: En-FR 35.48, En-De 23.27, Fr-En 34.43, De-En 28.86",
"highlighted_evidence": [
"Tables TABREF31 and TABREF32 show the results of machine translation and headline generation, respectively.",
"FLOAT SELECTED: Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"39aa26ae5841a4bdb7f27da78924ebd6c033832c",
"7d88da2fa4798b0893615aa827ca800d59f0072a"
],
"answer": [
{
"evidence": [
"Figure FIGREF4 is the overview of the proposed method using character 3-gram embeddings (char3-MS-vec). As illustrated in this figure, our proposed method regards the sum of char3-MS-vec and the standard word embedding as an input of an RNN. In other words, let INLINEFORM0 be char INLINEFORM1 -MS-vec and we replace Equation with the following: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "They use a sum of charn-MS-vec and the standard word embedding as an input of an RNN",
"highlighted_evidence": [
"As illustrated in this figure, our proposed method regards the sum of char3-MS-vec and the standard word embedding as an input of an RNN."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on their reports, in this paper, we propose a neural language model that utilizes character INLINEFORM0 -gram embeddings. Our proposed method encodes character INLINEFORM1 -gram embeddings to a word embedding with simplified Multi-dimensional Self-attention (MS) BIBREF11 . We refer to this constructed embedding as char INLINEFORM2 -MS-vec. The proposed method regards char INLINEFORM3 -MS-vec as an input in addition to a word embedding."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed method regards char INLINEFORM3 -MS-vec as an input in addition to a word embedding."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What sized character n-grams do they use?",
"Do they experiment with fine-tuning their embeddings?",
"Which word embeddings do they compare against?",
"Which dataset do they evaluate on for headline generation?",
"What results do their embeddings obtain on machine translation?",
"How do they combine ordinary word embeddings and ones constructed from character n-grams?"
],
"question_id": [
"00db191facf903cef18fb1727d1cab638c277e0a",
"1edfe390828f02a2db9a88454421c7f3d4cdd611",
"3dad6b792044018bb968ac0d0fd4628653f9e4b7",
"a28c73a6a8c46a43a1eec2b42b542dd7fde1e30e",
"5f1ffaa738fedd5b6668ec8b58a027ddea6867ce",
"8e26c471ca0ee1b9779da04c0b81918fd310d0f3"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of the proposed method. The proposed method computes charn-MS-vec from character n-gram (3- gram in this figure) embeddings and inputs the sum of it and the standard word embedding into an RNN.",
"Table 1: Statistics of PTB, WT2, and WT103.",
"Table 2: Perplexities on each dataset. We varied the n for charn-MS-vec from 2 to 4.",
"Table 3: Computational speed of the baseline and proposed method on NVIDIA Tesla P100.",
"Table 4: Perplexities on the PTB dataset where an input word is infrequent in the training data, which means its frequency is lower than 2,000.",
"Table 5: Perplexities of each structure on PTB dataset.",
"Table 6: Perplexities of the proposed method and as reported in previous studies on the PTB dataset.",
"Table 7: Perplexities of the proposed method and as reported in previous studies on the WT2 dataset.",
"Table 8: Perplexities of the proposed method and as reported in previous studies on the WT103 dataset.",
"Table 9: BLEU scores on the IWSLT16 dataset. We report the average score of 3 runs.",
"Table 10: ROUGE F1 scores on the headline generation test sets provided by (Zhou et al. 2017) and (Kiyono et al. 2017). The upper part is the results of our implementation and the lower part shows the scores reported in previous studies. In the upper part, we report the average score of 3 runs."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"5-Table8-1.png",
"7-Table9-1.png",
"7-Table10-1.png"
]
} | [
"What sized character n-grams do they use?",
"What results do their embeddings obtain on machine translation?",
"How do they combine ordinary word embeddings and ones constructed from character n-grams?"
] | [
[
"1906.05506-Results-0",
"1906.05506-Results-6"
],
[
"1906.05506-7-Table9-1.png",
"1906.05506-Results-8"
],
[
"1906.05506-Introduction-4"
]
] | [
"2, 3 and 4",
"Bleu on IWSLT16: En-FR 35.48, En-De 23.27, Fr-En 34.43, De-En 28.86",
"They use a sum of charn-MS-vec and the standard word embedding as an input of an RNN"
] | 143 |
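
The record above builds a word vector (charn-MS-vec) from character n-gram embeddings with a simplified multi-dimensional self-attention: a learned matrix maps the stacked n-gram embeddings to per-dimension scores, a softmax over the n-grams is taken separately for each embedding dimension, and the n-gram embeddings are summed with these element-wise weights before being added to the ordinary word embedding. A small numpy sketch of that computation follows; the shapes track the description in the record, while the '^' beginning-of-word marker, the random toy values, and all function names are assumptions made for illustration (only the '$' end-of-word marker is legible in the record).

import numpy as np

def char_ngrams(word, n=3):
    # Character n-grams of a word padded with boundary markers, e.g. "the" -> "^th", "the", "he$"
    # (the "^" start marker is an assumed placeholder for the paper's beginning-of-word symbol).
    padded = "^" + word + "$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def multi_dim_self_attention(C, W):
    # C: (d, N) matrix whose columns are the N character n-gram embeddings of one word.
    # W: (d, d) learned transformation producing per-dimension attention scores.
    scores = W @ C
    scores = scores - scores.max(axis=1, keepdims=True)             # numerical stability
    A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over n-grams, per dimension
    return (A * C).sum(axis=1)                                      # element-wise weighted sum -> (d,)

# Toy usage with random embeddings; in the paper the resulting vector is added to the standard
# word embedding before the RNN, and reused in the output layer for word tying.
rng = np.random.default_rng(0)
d = 8
grams = char_ngrams("the", n=3)
C = rng.normal(size=(d, len(grams)))
W = rng.normal(size=(d, d))
print(grams, multi_dim_self_attention(C, W).shape)
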
1808.00957 | SWDE : A Sub-Word And Document Embedding Based Engine for Clickbait Detection | In order to expand their reach and increase website ad revenue, media outlets have started using clickbait techniques to lure readers to click on articles on their digital platform. Having successfully enticed the user to open the article, the article fails to satiate his curiosity serving only to boost click-through rates. Initial methods for this task were dependent on feature engineering, which varies with each dataset. Industry systems have relied on an exhaustive set of rules to get the job done. Neural networks have barely been explored to perform this task. We propose a novel approach considering different textual embeddings of a news headline and the related article. We generate sub-word level embeddings of the title using Convolutional Neural Networks and use them to train a bidirectional LSTM architecture. An attention layer allows for calculation of significance of each term towards the nature of the post. We also generate Doc2Vec embeddings of the title and article text and model how they interact, following which it is concatenated with the output of the previous component. Finally, this representation is passed through a neural network to obtain a score for the headline. We test our model over 2538 posts (having trained it on 17000 records) and achieve an accuracy of 83.49% outscoring previous state-of-the-art approaches. | {
"paragraphs": [
[
"In recent years, content delivery has changed drastically, shifting from offline methods to the Internet. It is now the primary source of information for a majority of the populace, especially for ever-changing news updates. This has also caused a shift in users' preferred sources. Previously, these preferences were static, sticking to a particular news source. Now, with the plethora of information available easily, there is no differentiation in the source it has been gathered from, with users opting to go for whatever is convenient.",
"Keeping up with the times, news agencies have expanded their digital presence, increasing their reach exponentially. They generate revenue by (1) advertisements on their websites, or (2) a subscription based model for articles that might interest users. Since multiple agencies offer similar content, the user has his pick. To lure in more readers and increase the number of clicks on their content, subsequently enhancing their agency's revenue, writers have begun adopting a new technique - clickbait.",
"Merriam-Webster defines clickbait as something (such as a headline) to encourage readers to click on hyperlinks based on snippets of information accompanying it, especially when those links lead to content of dubious value or interest. It is built to create, and consequently capitalise, on the Loewenstein information gap BIBREF0 by purposefully misrepresenting or promising what can be expected while reading a story on the web, be it through a headline, image or related text.",
"We propose a two-pronged approach to detect such headlines. The first component leverages distributional semantics of the title text and models its temporal and sequential properties. The article title is represented as a concatenation of its sub-word level embeddings. The sub-word representation serves as input to a bidirectional LSTM network. The contribution of a sub-word towards the clickbait nature of the headline is calculated in a differential manner since the output of the LSTM is passed into an attention layer BIBREF1 , following which it goes through a dense layer. The second component focuses on Doc2Vec embeddings of the title and article content, performing an element wise multiplication of the two. This is concatenated with the dense layer output from the previous component. The obtained output is then passed through multiple hidden layers which performs the final classification.",
"Previous work in this field that has exploited the power of embeddings has considered either word vectors, for their ability to create context-sensitive word representations, or character-level word embeddings to model the orthographic features of a word. We propose the use of sub-word level representations since it incorporates the word's morphological features. Attaching an attention mechanism to it helps us identify the surprise associated with each representation within the clickbait. One of the identifying characteristics of clickbait is that the article title differs from the text attached to it. For this reason, we define a component to capture the interaction between these attributes and augment our model."
],
[
"The importance of detecting clickbait headlines has increased exponentially in recent years. Initial work in this domain can be traced back to BIBREF2 , relying on heavy feature engineering on a specific news dataset. These works define the various types of clickbait and focus on the presence of linguistic peculiarities in the headline text, including various informality metrics and the use of forward references. Applying such techniques over a social media stream was first attempted by BIBREF3 as the authors crowdsourced a dataset of tweets BIBREF4 and performed feature engineering to accomplish the task. BIBREF5 have tried to expand the work done for news headlines they collected from trusted sources.",
" BIBREF6 used the same collection of headlines as BIBREF5 and proposed the first neural network based approach in the field. They employed various recurrent neural network architectures to model sequential data and its dependencies, taking as its inputs a concatenation of the word and character-level embeddings of the headline. Their experiments yielded that bidirectional LSTMs BIBREF7 were best suited for the same. BIBREF8 built BiLSTMs to model each textual attribute of the post (post-text, target-title, target-paragraphs, target-description, target-keywords, post-time) available in the corpus BIBREF4 , concatenating their outputs and feeding it to a fully connected layer to classify the post. Attention mechanisms BIBREF1 have grown popular for various text classification tasks, like aspect based sentiment analysis. Utilising this technique, BIBREF9 deployed a self-attentive bidirectional GRU to infer the importance of each tweet token and model the annotation distribution of headlines in the corpus.",
"Word vectors and character vectors have been used across various approaches proposed to solve this problem. However, we suggest the use of subword representations to better analyse the morphology of possible clickbait-y words. We also attempt to model the interaction between the title of an article and its text."
],
[
"We now describe our approach to clickbait detection and the reasons behind devising such a model. Our approach is a fusion of multiple components, each exploiting a particular type of embedding: (1) BiLSTM with attention, and (2) Doc2Vec enrichment. Figure FIGREF14 lays out our proposed architecture.",
"We start with an explanation of the various types of embeddings we have used and proceed to describe the various components of our model, both individually and together. Finally, we cover how the parameters are learned."
],
[
"Word2Vec BIBREF10 has fast become the most popular text embedding method for text since it models a word based on its context. BIBREF11 proposed a convolutional neural network architecture to generate subword-level representations of words in order to capture word orthography. Sub-word level embeddings learn representations for character n-grams and represent words as the sum of the n-gram vectors BIBREF12 . Such representations also take into account word roots and inflections, rather than just word context. They work well even with highly noisy text with containing misspellings due to the model learning morpheme-level feature maps. They have proven to be extremely useful in tasks such as sentiment analysis BIBREF13 , PoS tagging BIBREF14 and language modeling BIBREF11 . These intermediate sub-word feature representations are learned by the filters during the convolution operation. We generate such an embedding by passing the characters of a sentence individually into 3 layer 1D convolutional neural network. Each filter then acts as a learned sub-word level feature. A representation for this architecture can be found in Figure FIGREF1 ."
],
[
"Doc2Vec BIBREF15 is an unsupervised approach to generate vector representations for slightly larger bodies of text, such as sentences, paragraphs and documents. It has been adapted from Word2Vec BIBREF10 which is used to generate vectors for words in large unlabeled corpora. The vectors generated by this approach come handy in tasks like calculating similarity metrics for sentences, paragraphs and documents. In sequential models like RNNs, the word sequence is captured in the generated sentence vectors. However, in Doc2Vec, the representations are order independent. We use GenSim BIBREF16 to learn 300 dimensional Doc2Vec embeddings for each target description and post title available."
],
[
"Recurrent Neural Network (RNN) is a class of artificial neural networks which utilizes sequential information and maintains history through its intermediate layers. A standard RNN has an internal state whose output at every time-step which can be expressed in terms of that of previous time-steps. However, it has been seen that standard RNNs suffer from a problem of vanishing gradients BIBREF17 . This means it will not be able to efficiently model dependencies and interactions between sub-word representations that are a few steps apart. LSTMs are able to tackle this issue by their use of gating mechanisms. We convert each article headline into its corresponding sub-word level representation to act as input to our bidirectional LSTMs.",
" INLINEFORM0 represent forward states of the LSTM and its state updates satisfy the following equations: DISPLAYFORM0 DISPLAYFORM1 ",
"here INLINEFORM0 is the logistic sigmoid function, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 represent the forget, input and output gates respectively. INLINEFORM4 denotes the input at time INLINEFORM5 and INLINEFORM6 denotes the latent state, INLINEFORM7 and INLINEFORM8 represent the bias terms. The forget, input and output gates control the flow of information throughout the sequence. INLINEFORM9 and INLINEFORM10 are matrices which represent the weights associated with the connections.",
" INLINEFORM0 denote the backward states and its updates can be computed similarly.",
"The number of bidirectional LSTM units is set to a constant K, which is the maximum length of all title lengths of records used in training. The forward and backward states are then concatenated to obtain INLINEFORM0 , where DISPLAYFORM0 ",
"Finally, we are left with the task of figuring out the significance of each word in the sequence i.e. how much a particular sub-word representation influences the clickbait-y nature of the post. The effectiveness of attention mechanisms have been proven for the task of neural machine translation BIBREF1 and it has the same effect in this case. The goal of attention mechanisms in such tasks is to derive context vectors which capture relevant source side information and help predict the current target representation. The sequence of annotations generated by the encoder to come up with a context vector capturing how each sub-word contributes to the record's clickbait quotient is of paramount importance to this model. In a typical RNN encoder-decoder framework BIBREF1 , a context vector is generated at each time-step to predict the target sub-word. However, we only need it for calculation of context vector for a single time-step. DISPLAYFORM0 ",
"where, INLINEFORM0 ,..., INLINEFORM1 represents the sequence of annotations to which the encoder maps the post title vector and each INLINEFORM2 represents the respective weight corresponding to each annotation INLINEFORM3 . This is represented as the left most component in Figure FIGREF14 ."
],
[
"Each record in the dataset has a target description attached with it. This is the entire text of the article whose title has been given. By definition, clickbait articles differ from the content described in their headline. We generate document embeddings for both the title and the article text and perform element wise multiplication over the two. This allows us to capture the interaction between the two, something which has not been used before. Since the title is supposed to mislead the reader with respect to the content, modeling this interaction in terms of their similarity gives an added dimenstion to our approach. It augments the output obtained from the first component."
],
[
"The outputs from the aforementioned components are now concatenated and passed through two dense layers and finally goes into a fully connected layer. This layer finally gives out the probability that a post can be marked clickbait."
],
[
"We use binary cross-entropy as the loss optimization function for our model. The cross-entropy method BIBREF18 is an iterative procedure where each iteration can be divided into two stages:",
"(1) Generate a random data sample (vectors, trajectories etc.) according to a specified mechanism.",
"(2) Update the parameters of the random mechanism based on the data to produce a \"better\" sample in the next iteration."
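For reference, the binary cross-entropy objective named above reduces to the following per-example computation (a plain-Python illustration; y is the gold label and p the predicted clickbait probability).

```python
import math

def binary_cross_entropy(y, p, eps=1e-12):
    # Standard per-example binary cross-entropy with a small epsilon for stability.
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

print(binary_cross_entropy(1, 0.9))   # ~0.105: confident correct prediction, low loss
print(binary_cross_entropy(1, 0.1))   # ~2.303: confident wrong prediction, high loss
```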
],
[
" BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered."
],
[
"We randomly partition the training set of over 17000 posts into training and validation set in a 4:1 ratio. This ensures that the two sets do not overlap. The model hyperparameters are tuned over the validation set. We initialise the fully connected network weights with the uniform distribution in the range INLINEFORM0 and INLINEFORM1 BIBREF19 . We used a batch size of 256 and adadelta BIBREF20 as a gradient based optimizer for learning the model parameters."
],
[
"In Table 1, we evaluate our model against the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection. Feature engineering models rely on a selection of handcrafted attributes which may not be able to consider all the factors involved in making a post clickbait. The approach proposed in BIBREF8 takes into account each of the textual features available in an individual fashion, considering them to be independent of each other, which is not the case since, by definition of clickbait, the content of the article title and text are not mutually exclusive. BIBREF21 proposed the integration of multimodal embeddings. BIBREF6 utilise word and character embeddings which do not capture morpheme-level information that may incorporate a surprise element."
],
[
"We have devised an approach to detecting clickbait that puts emphasis on utilising the linguistic value of words by learning its morphological features through its sub-word representations. These embeddings and their dependencies are, in turn, modeled by the LSTM. Attention mechanism allows us to understand the importance of individual representations towards the nature of the post. Using the document embeddings for title and article text allows us to augment the generated embeddings and use as input to a neural network to finally classify the post. In the future, we would like to explore the possibility of integrating the sub-word representations with deep neural networks to better model the temporal and sequential properties of text."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Architecture",
"Sub-word Level Representation",
"Document Embeddings",
"Bidirectional LSTM with Attention",
"Doc2Vec Enrichment",
"Fusion of Components",
"Learning the Parameters",
"Evaluation Results",
"Training",
"Model Comparison",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1adf37302ee7fa276dab076257d373b67a11f6ee",
"3b46192eff34d8ed0ffbcbc296033f1d9c5d7271",
"6ed754d255e52ba5201487e71cc344c54811ad43"
],
"answer": [
{
"evidence": [
"BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered."
],
"extractive_spans": [],
"free_form_answer": "A crowdsourced twitter dataset containing 19358 tweets",
"highlighted_evidence": [
"BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. ",
"We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered."
],
"extractive_spans": [
"BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BIBREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered."
],
"extractive_spans": [],
"free_form_answer": "19538 tweets from BIBREF4",
"highlighted_evidence": [
"BREF4 crowdsourced the annotation of 19538 tweets they had curated, into various levels of their clickbait-y nature. These tweets contained the title and text of the article and also included supplementary information such as target description, target keywords and linked images. We trained our model over 17000 records in the described dataset and test it over 2538 disjoint instances from the same. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"d9135203a92ded14d260a7d551b7a447c8b7c910"
]
},
{
"annotation_id": [
"1720492fdb45763cd9ec3531fd7b07144abec6a1",
"72f2d611ce2b22be1dad133807d6f556ddcb7bed",
"c830918762da40ca21885f81ec4970ae04d961da"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"extractive_spans": [],
"free_form_answer": "BiLSTM for 0.02 F1, Feature Engineering SotA for 0.08 F1, and Concatenated NN Architecture for 0.24 F1.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In Table 1, we evaluate our model against the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection. Feature engineering models rely on a selection of handcrafted attributes which may not be able to consider all the factors involved in making a post clickbait. The approach proposed in BIBREF8 takes into account each of the textual features available in an individual fashion, considering them to be independent of each other, which is not the case since, by definition of clickbait, the content of the article title and text are not mutually exclusive. BIBREF21 proposed the integration of multimodal embeddings. BIBREF6 utilise word and character embeddings which do not capture morpheme-level information that may incorporate a surprise element.",
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"extractive_spans": [],
"free_form_answer": "Proposed model had 0.63 F1 score and 83.49% accuracy compared to the 0.61 F1 and 83.28% accuracy of best compared method.",
"highlighted_evidence": [
"In Table 1, we evaluate our model against the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection.",
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In Table 1, we evaluate our model against the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection. Feature engineering models rely on a selection of handcrafted attributes which may not be able to consider all the factors involved in making a post clickbait. The approach proposed in BIBREF8 takes into account each of the textual features available in an individual fashion, considering them to be independent of each other, which is not the case since, by definition of clickbait, the content of the article title and text are not mutually exclusive. BIBREF21 proposed the integration of multimodal embeddings. BIBREF6 utilise word and character embeddings which do not capture morpheme-level information that may incorporate a surprise element.",
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"extractive_spans": [],
"free_form_answer": "By more than 0.02 with F1 score and 0.21% with accuracy",
"highlighted_evidence": [
"In Table 1, we evaluate our model against the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task.",
"FLOAT SELECTED: Table 1: Model Performance Comparison"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"d9135203a92ded14d260a7d551b7a447c8b7c910",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"382f83b1f2fa0e003b97d5d4354795931b475aaf",
"bfcdb68885bdf1350102f50d764e82372ff83d8a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which dataset do they use?",
"By how much do they outperform previous state-of-the-art approaches?",
"Do they analyze attention outputs to determine which terms in general contribute to clickbait titles?"
],
"question_id": [
"a398c9b061f28543bc77c2951d0dfc5d1bee9e87",
"dae9caf8434ce43c9bc5913ebf062bc057a27cfe",
"e9b6b14b8061b71d73a73d8138c8dab8eda4ba3f"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Architecture for learning Sub-word Level Representations using CNN",
"Figure 2: Full Model Architecture",
"Table 1: Model Performance Comparison"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png"
]
} | [
"By how much do they outperform previous state-of-the-art approaches?"
] | [
[
"1808.00957-Model Comparison-0",
"1808.00957-5-Table1-1.png"
]
] | [
"By more than 0.02 with F1 score and 0.21% with accuracy"
] | 144 |
1808.07231 | Reducing Gender Bias in Abusive Language Detection | Abusive language detection models tend to have a problem of being biased toward identity words of a certain group of people because of imbalanced training datasets. For example,"You are a good woman"was considered"sexist"when trained on an existing dataset. Such model bias is an obstacle for models to be robust enough for practical use. In this work, we measure gender biases on models trained with different abusive language datasets, while analyzing the effect of different pre-trained word embeddings and model architectures. We also experiment with three bias mitigation methods: (1) debiased word embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a larger corpus. These methods can effectively reduce gender bias by 90-98% and can be extended to correct model bias in other scenarios. | {
"paragraphs": [
[
"Automatic detection of abusive language is an important task since such language in online space can lead to personal trauma, cyber-bullying, hate crime, and discrimination. As more and more people freely express their opinions in social media, the amount of textual contents produced every day grows almost exponentially, rendering it difficult to effectively moderate user content. For this reason, using machine learning and natural language processing (NLP) systems to automatically detect abusive language is useful for many websites or social media services.",
"Although many works already tackled on training machine learning models to automatically detect abusive language, recent works have raised concerns about the robustness of those systems. BIBREF0 have shown how to easily cause false predictions with adversarial examples in Google's API, and BIBREF1 show that classifiers can have unfair biases toward certain groups of people.",
"We focus on the fact that the representations of abusive language learned in only supervised learning setting may not be able to generalize well enough for practical use since they tend to overfit to certain words that are neutral but occur frequently in the training samples. To such classifiers, sentences like “You are a good woman” are considered “sexist” probably because of the word “woman.”",
"This phenomenon, called false positive bias, has been reported by BIBREF1 . They further defined this model bias as unintended, “a model contains unintended bias if it performs better for comments containing some particular identity terms than for comments containing others.”",
"Such model bias is important but often unmeasurable in the usual experiment settings since the validation/test sets we use for evaluation are already biased. For this reason, we tackle the issue of measuring and mitigating unintended bias. Without achieving certain level of generalization ability, abusive language detection models may not be suitable for real-life situations.",
"In this work, we address model biases specific to gender identities (gender bias) existing in abusive language datasets by measuring them with a generated unbiased test set and propose three reduction methods: (1) debiased word embedding, (2) gender swap data augmentation, (3) fine-tuning with a larger corpus. Moreover, we compare the effects of different pre-trained word embeddings and model architectures on gender bias."
],
[
"So far, many efforts were put into defining and constructing abusive language datasets from different sources and labeling them through crowd-sourcing or user moderation BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many deep learning approaches have been explored to train a classifier with those datasets to develop an automatic abusive language detection system BIBREF6 , BIBREF7 , BIBREF8 . However, these works do not explicitly address any model bias in their models.",
"Addressing biases in NLP models/systems have recently started to gain more interest in the research community, not only because fairness in AI is important but also because bias correction can improve the robustness of the models. BIBREF9 is one of the first works to point out the gender stereotypes inside word2vec BIBREF10 and propose an algorithm to correct them. BIBREF11 also propose a method called Word Embedding Association Test (WEAT) to measure model bias inside word embeddings and finds that many of those pretrained embeddings contain problematic bias toward gender or race. BIBREF1 is one of the first works that point out existing “unintended” bias in abusive language detection models. BIBREF12 compare 219 sentiment analysis systems participating in SemEval competition with their proposed dataset, which can be used for evaluating racial and gender bias of those systems. BIBREF13 shows the effectiveness of measuring and correcting gender biases in co-reference resolution tasks. We later show how we extend a few of these works into ours."
],
[
"This dataset consists of tweets with sexist tweets collected from Twitter by searching for tweets that contain common terms pertaining to sexism such as “feminazi.” The tweets were then annotated by experts based on criteria founded in critical race theory. The original dataset also contained a relatively small number of “racist” label tweets, but we only retain “sexist” samples to focus on gender biases. BIBREF2 , BIBREF3 , the creators of the dataset, describe “sexist” and “racist” languages as specific subsets of abusive language."
],
[
"Recently, BIBREF4 has published a large scale crowdsourced abusive tweet dataset with 60K tweets. Their work incrementally and iteratively investigated methods such as boosted sampling and exploratory rounds, to effectively annotate tweets through crowdsourcing. Through such systematic processes, they identify the most relevant label set in identifying abusive behaviors in Twitter as INLINEFORM0 resulting in 11% as 'Abusive,' 7.5% as 'Hateful', 22.5% as 'Spam', and 59% as 'None'. We transform this dataset for a binary classification problem by concatenating 'None'/'Spam' together, and 'Abusive'/'Hateful' together."
],
[
"Gender bias cannot be measured when evaluated on the original dataset as the test sets will follow the same biased distribution, so normal evaluation set will not suffice. Therefore, we generate a separate unbiased test set for each gender, male and female, using the identity term template method proposed in BIBREF1 .",
"The intuition of this template method is that given a pair of sentences with only the identity terms different (ex. “He is happy” & “She is happy”), the model should be able to generalize well and output same prediction for abusive language. This kind of evaluation has also been performed in SemEval 2018: Task 1 Affect In Tweets BIBREF12 to measure the gender and race bias among the competing systems for sentiment/emotion analysis.",
"Using the released code of BIBREF1 , we generated 1,152 samples (576 pairs) by filling the templates with common gender identity pairs (ex. male/female, man/woman, etc.). We created templates (Table TABREF6 ) that contained both neutral and offensive nouns and adjectives inside the vocabulary (See Table TABREF7 ) to retain balance in neutral and abusive samples.",
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate. False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are defined as below, where INLINEFORM0 . INLINEFORM1 ",
"Since the classifiers output probabilities, equal error rate thresholds are used for prediction decision.",
"While the two AUC scores show the performances of the models in terms of accuracy, the equality difference scores show them in terms of fairness, which we believe is another dimension for evaluating the model's generalization ability.",
"Debiased Word Embeddings (DE) BIBREF9 proposed an algorithm to correct word embeddings by removing gender stereotypical information. All the other experiments used pretrained word2vec to initialized the embedding layer but we substitute the pretrained word2vec with their published embeddings to verify their effectiveness in our task.",
"Gender Swap (GS) We augment the training data by identifying male entities and swapping them with equivalent female entities and vice-versa. This simple method removes correlation between gender and classification decision and has proven to be effective for correcting gender biases in co-reference resolution task BIBREF13 .",
"Bias fine-tuning (FT) We propose a method to use transfer learning from a less biased corpus to reduce the bias. A model is initially trained with a larger, less-biased source corpus with a same or similar task, and fine-tuned with a target corpus with a larger bias. This method is inspired by the fact that model bias mainly rises from the imbalance of labels and the limited size of data samples. Training the model with a larger and less biased dataset may regularize and prevent the model from over-fitting to the small, biased dataset."
],
[
"We first measure gender biases in st and abt datasets. We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 . Hyperparameters are found using the validation set by finding the best performing ones in terms of original AUC scores. These are the used hyperparameters:",
"CNN: Convolution layers with 3 filters with the size of [3,4,5], feature map size=100, Embedding Size=300, Max-pooling, Dropout=0.5",
"GRU: hidden dimension=512, Maximum Sequence Length=100, Embedding Size=300, Dropout=0.3",
" INLINEFORM0 -GRU: hidden dimension=256 (bidirectional, so 512 in total), Maximum Sequence Length=100, Attention Size=512, Embedding Size=300, Dropout=0.3",
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases. Experiments were run 10 times and averaged.",
"Debiased word2vec BIBREF9 is compared with the original word2vec BIBREF10 for evaluation. For gender swapping data augmentation, we use pairs identified through crowd-sourcing by BIBREF13 .",
"After identifying the degree of gender bias of each dataset, we select a source with less bias and a target with more bias. Vocabulary is extracted from training split of both sets. The model is first trained by the source dataset. We then remove final softmax layer and attach a new one initialized for training the target. The target is trained with a slower learning rate. Early stopping is decided by the valid set of the respective dataset.",
"Based on this criterion and results from Section SECREF13 , we choose the abt dataset as source and st dataset as target for bias fine-tuning experiments."
],
[
"Tables TABREF12 and TABREF14 show the bias measurement experiment results for st and abt, respectively. As expected, pre-trained embeddings improved task performance. The score on the unbiased generated test set (Gen. ROC) also improved since word embeddings can provide prior knowledge of words.",
"However, the equality difference scores tended to be larger when pre-trained embeddings were used, especially in the st dataset. This confirms the result of BIBREF9 . In all experiments, direction of the gender bias was towards female identity words. We can infer that this is due to the more frequent appearances of female identities in “sexist” tweets and lack of negative samples, similar to the reports of BIBREF1 . This is problematic since not many NLP datasets are large enough to reflect the true data distribution, more prominent in tasks like abusive language where data collection and annotation are difficult.",
"On the other hand, abt dataset showed significantly better results on the two equality difference scores, of at most 0.04. Performance in the generated test set was better because the models successfully classify abusive samples regardless of the gender identity terms used. Hence, we can assume that abt dataset is less gender-biased than the st dataset, presumably due to its larger size, balance in classes, and systematic collection method.",
"Interestingly, the architecture of the models also influenced the biases. Models that “attend” to certain words, such as CNN's max-pooling or INLINEFORM0 -GRU's self-attention, tended to result in higher false positive equality difference scores in st dataset. These models show effectiveness in catching not only the discriminative features for classification, but also the “unintended” ones causing the model biases."
],
[
"We experiment and discuss various methods to reduce gender biases identified in Section SECREF13 ."
],
[
"Table TABREF16 shows the results of experiments using the three methods proposed. The first rows are the baselines without any method applied. We can see from the second rows of each section that debiased word embeddings alone do not effectively correct the bias of the whole system that well, while gender swapping significantly reduced both the equality difference scores. Meanwhile, fine-tuning bias with a larger, less biased source dataset helped to decrease the equality difference scores and greatly improve the AUC scores from the generated unbiased test set. The latter improvement shows that the model significantly reduced errors on the unbiased set in general.",
"To our surprise, the most effective method was applying both debiased embedding and gender swap to GRU, which reduced the equality differences by 98% & 89% while losing only 1.5% of the original performance. We assume that this may be related to the influence of “attending” model architectures on biases as discussed in Section SECREF13 . On the other hand, using the three methods together improved both generated unbiased set performance and equality differences, but had the largest decrease in the original performance.",
"All methods involved some performance loss when gender biases were reduced. Especially, fine-tuning had the largest decrease in original test set performance. This could be attributed to the difference in the source and target tasks (abusive & sexist). However, the decrease was marginal (less than 4%), while the drop in bias was significant. We assume the performance loss happens because mitigation methods modify the data or the model in a way that sometimes deters the models from discriminating important “unbiased” features."
],
[
"We discussed model biases, especially toward gender identity terms, in abusive language detection. We found out that pre-trained word embeddings, model architecture, and different datasets all can have influence. Also, we found our proposed methods can reduce gender biases up to 90-98%, improving the robustness of the models.",
"As shown in Section SECREF13 , some classification performance drop happens when mitigation methods. We believe that a meaningful extension of our work can be developing bias mitigation methods that maintain (or even increase) the classification performance and reduce the bias at the same time. Some previous works BIBREF17 , BIBREF18 employ adversarial training methods to make the classifiers unbiased toward certain variables. However, those works do not deal with natural language where features like gender and race are latent variables inside the language. Although those approaches are not directly comparable to our methods, it would be interesting to explore adversarial training to tackle this problem in the future.",
"Although our work is preliminary, we hope that our work can further develop the discussion of evaluating NLP systems in different directions, not merely focusing on performance metrics like accuracy or AUC. The idea of improving models by measuring and correcting gender bias is still unfamiliar but we argue that they can be crucial in building systems that are not only ethical but also practical. Although this work focuses on gender terms, the methods we proposed can easily be extended to other identity problems like racial and to different tasks like sentiment analysis by following similar steps, and we hope to work on this in the future."
],
[
"This work is partially funded by ITS/319/16FP of Innovation Technology Commission, HKUST, and 16248016 of Hong Kong Research Grants Council."
]
],
"section_name": [
"Introduction",
"Related Work",
"Sexist Tweets (st)",
"Abusive Tweets (abt)",
"Methodology",
"Experimental Setup",
"Results & Discussions",
"Reducing Gender Biases",
"Results & Discussion",
"Conclusion & Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"60b268461c34175f2d0988a3f644d700bdedcc89",
"6e0251b901dfd18b074f66eee03a4b63e2e929c8",
"ce232c8eec32b8844e3487778cf383c596c36ebf"
],
"answer": [
{
"evidence": [
"Although our work is preliminary, we hope that our work can further develop the discussion of evaluating NLP systems in different directions, not merely focusing on performance metrics like accuracy or AUC. The idea of improving models by measuring and correcting gender bias is still unfamiliar but we argue that they can be crucial in building systems that are not only ethical but also practical. Although this work focuses on gender terms, the methods we proposed can easily be extended to other identity problems like racial and to different tasks like sentiment analysis by following similar steps, and we hope to work on this in the future."
],
"extractive_spans": [
"sentiment analysis ",
"other identity problems like racial"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although this work focuses on gender terms, the methods we proposed can easily be extended to other identity problems like racial and to different tasks like sentiment analysis by following similar steps, and we hope to work on this in the future.\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although our work is preliminary, we hope that our work can further develop the discussion of evaluating NLP systems in different directions, not merely focusing on performance metrics like accuracy or AUC. The idea of improving models by measuring and correcting gender bias is still unfamiliar but we argue that they can be crucial in building systems that are not only ethical but also practical. Although this work focuses on gender terms, the methods we proposed can easily be extended to other identity problems like racial and to different tasks like sentiment analysis by following similar steps, and we hope to work on this in the future."
],
"extractive_spans": [
"other identity problems like racial",
"sentiment analysis"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although this work focuses on gender terms, the methods we proposed can easily be extended to other identity problems like racial and to different tasks like sentiment analysis by following similar steps, and we hope to work on this in the future."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As shown in Section SECREF13 , some classification performance drop happens when mitigation methods. We believe that a meaningful extension of our work can be developing bias mitigation methods that maintain (or even increase) the classification performance and reduce the bias at the same time. Some previous works BIBREF17 , BIBREF18 employ adversarial training methods to make the classifiers unbiased toward certain variables. However, those works do not deal with natural language where features like gender and race are latent variables inside the language. Although those approaches are not directly comparable to our methods, it would be interesting to explore adversarial training to tackle this problem in the future."
],
"extractive_spans": [
"developing bias mitigation methods that maintain (or even increase) the classification performance and reduce the bias at the same time"
],
"free_form_answer": "",
"highlighted_evidence": [
"We believe that a meaningful extension of our work can be developing bias mitigation methods that maintain (or even increase) the classification performance and reduce the bias at the same time."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4cd2c185ed28f5c994fcac3db3905adf45cec01d",
"da9982a09b3966d100eb90ad79a0ae1fafc9cdc0"
],
"answer": [
{
"evidence": [
"To our surprise, the most effective method was applying both debiased embedding and gender swap to GRU, which reduced the equality differences by 98% & 89% while losing only 1.5% of the original performance. We assume that this may be related to the influence of “attending” model architectures on biases as discussed in Section SECREF13 . On the other hand, using the three methods together improved both generated unbiased set performance and equality differences, but had the largest decrease in the original performance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand, using the three methods together improved both generated unbiased set performance and equality differences, but had the largest decrease in the original performance."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Debiased Word Embeddings (DE) BIBREF9 proposed an algorithm to correct word embeddings by removing gender stereotypical information. All the other experiments used pretrained word2vec to initialized the embedding layer but we substitute the pretrained word2vec with their published embeddings to verify their effectiveness in our task.",
"Gender Swap (GS) We augment the training data by identifying male entities and swapping them with equivalent female entities and vice-versa. This simple method removes correlation between gender and classification decision and has proven to be effective for correcting gender biases in co-reference resolution task BIBREF13 .",
"Bias fine-tuning (FT) We propose a method to use transfer learning from a less biased corpus to reduce the bias. A model is initially trained with a larger, less-biased source corpus with a same or similar task, and fine-tuned with a target corpus with a larger bias. This method is inspired by the fact that model bias mainly rises from the imbalance of labels and the limited size of data samples. Training the model with a larger and less biased dataset may regularize and prevent the model from over-fitting to the small, biased dataset."
],
"extractive_spans": [
"Debiased Word Embeddings",
"Gender Swap",
"Bias fine-tuning"
],
"free_form_answer": "",
"highlighted_evidence": [
"Debiased Word Embeddings (DE) BIBREF9 proposed an algorithm to correct word embeddings by removing gender stereotypical information.",
"Gender Swap (GS) We augment the training data by identifying male entities and swapping them with equivalent female entities and vice-versa.",
"Bias fine-tuning (FT) We propose a method to use transfer learning from a less biased corpus to reduce the bias."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"60aaca5f3afeb9fe6307328bdc8b0f9030cad9ff",
"926d817aa8a1ac773e32b2cf32b166714bb8f98c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Results of bias mitigation methods on st dataset. ‘O’ indicates that the corresponding method is applied. See Section 5.3 for more analysis."
],
"extractive_spans": [],
"free_form_answer": "Gender Swap",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Results of bias mitigation methods on st dataset. ‘O’ indicates that the corresponding method is applied. See Section 5.3 for more analysis."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To our surprise, the most effective method was applying both debiased embedding and gender swap to GRU, which reduced the equality differences by 98% & 89% while losing only 1.5% of the original performance. We assume that this may be related to the influence of “attending” model architectures on biases as discussed in Section SECREF13 . On the other hand, using the three methods together improved both generated unbiased set performance and equality differences, but had the largest decrease in the original performance."
],
"extractive_spans": [
"most effective method was applying both debiased embedding and gender swap"
],
"free_form_answer": "",
"highlighted_evidence": [
"To our surprise, the most effective method was applying both debiased embedding and gender swap to GRU, which reduced the equality differences by 98% & 89% while losing only 1.5% of the original performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2375ae33ec7030fd894a982e16c3783e8705eff1",
"352513541974dafc663137f8dab2c3c1d15a54ea",
"a5604343638f503b0203792c7962071366d2be71"
],
"answer": [
{
"evidence": [
"We first measure gender biases in st and abt datasets. We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 . Hyperparameters are found using the validation set by finding the best performing ones in terms of original AUC scores. These are the used hyperparameters:"
],
"extractive_spans": [
"Convolutional Neural Network",
"Gated Recurrent Unit",
"Bidirectional GRU with self-attention"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first measure gender biases in st and abt datasets. We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 . Hyperparameters are found using the validation set by finding the best performing ones in terms of original AUC scores. These are the used hyperparameters:"
],
"extractive_spans": [
"Convolutional Neural Network",
"Gated Recurrent Unit",
"Bidirectional GRU with self-attention"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first measure gender biases in st and abt datasets. We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 . Hyperparameters are found using the validation set by finding the best performing ones in terms of original AUC scores. These are the used hyperparameters:"
],
"extractive_spans": [
"Convolutional Neural Network (CNN)",
"Gated Recurrent Unit (GRU)",
"Bidirectional GRU with self-attention ( INLINEFORM0 -GRU)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We explore three neural models used in previous works on abusive language classification: Convolutional Neural Network (CNN) BIBREF7 , Gated Recurrent Unit (GRU) BIBREF14 , and Bidirectional GRU with self-attention ( INLINEFORM0 -GRU) BIBREF8 , but with a simpler mechanism used in BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"52b107b9857117fc07b11e8f4905e96e0127045d",
"dbec40cab82123db51e22a244e6928c431b9f9a5",
"ded8983ea98e06758ad6d3f7e009ee597ed9408e"
],
"answer": [
{
"evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases. Experiments were run 10 times and averaged."
],
"extractive_spans": [
"word2vec",
"FastText",
"randomly initialized embeddings (random)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases. Experiments were run 10 times and averaged."
],
"extractive_spans": [],
"free_form_answer": "word2vec train on Google News corpus; FastText train on Wikipedia corpus; randomly initialized embeddings",
"highlighted_evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases. Experiments were run 10 times and averaged."
],
"extractive_spans": [
"word2vec BIBREF10 trained on Google News corpus",
"FastText BIBREF16 ) trained on Wikipedia corpus,"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also compare different pre-trained embeddings, word2vec BIBREF10 trained on Google News corpus, FastText BIBREF16 ) trained on Wikipedia corpus, and randomly initialized embeddings (random) to analyze their effects on the biases."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1a5a16137e325f5403883f40323e18e77bc7f742",
"2703e848e124cd55a4eefe41ff83a70a4a0b4e50",
"7667ca528638204d1be83a240217fc7a4ecb3e6d"
],
"answer": [
{
"evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate. False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are defined as below, where INLINEFORM0 . INLINEFORM1"
],
"extractive_spans": [
"False Positive Equality Difference",
"False Negative Equality Difference"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate. False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are defined as below, where INLINEFORM0 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate. False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are defined as below, where INLINEFORM0 . INLINEFORM1"
],
"extractive_spans": [
"AUC scores on the original test set ",
"AUC scores on the unbiased generated test set",
"the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate. False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) are defined as below, where INLINEFORM0 . INLINEFORM1"
],
"extractive_spans": [
"AUC scores on the original test set (Orig. AUC)",
" AUC scores on the unbiased generated test set (Gen. AUC)",
"false positive/negative equality differences"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation metric, we use 1) AUC scores on the original test set (Orig. AUC), 2) AUC scores on the unbiased generated test set (Gen. AUC), and 3) the false positive/negative equality differences proposed in BIBREF1 which aggregates the difference between the overall false positive/negative rate and gender-specific false positive/negative rate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What other scenarios can the bias mitigation methods be applied to?",
"Are the three bias mitigation methods combined in any model?",
"Which of the three bias mitigation methods is most effective?",
"What model architectures are used?",
"What pre-trained word embeddings are used?",
"What metrics are used to measure gender biases?"
],
"question_id": [
"76e17e648a4d1f386eb6bf61b0c24f134af872be",
"7572f6e68a2ed2c41b87c5088ba8680afa0c0a0b",
"5d2bbcc3aa769e639dc21893890bc36b76597a33",
"4ddc53afffaf1622d97695347dd1b3190d156dee",
"5d93245832d90b31aee42ea2bf1e7704c22ebeca",
"c0dbf3f1957f3bff3ced5b48aff60097f3eac7bb"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias",
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Dataset statistics. µ, σ,max are mean, std.dev, and maximum of sentence lengths",
"Table 2: Example of templates used to generated an unbiased test set.",
"Table 4: Results on st. False negative/positive equality differences are larger when pre-trained embedding is used and CNN or α-RNN is trained",
"Table 3: Example of offensive and non-offensive verbs & adjectives used for generating the unbiased test set.",
"Table 5: Results on abt. The false negative/positive equality difference is significantly smaller than the st",
"Table 6: Results of bias mitigation methods on st dataset. ‘O’ indicates that the corresponding method is applied. See Section 5.3 for more analysis."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table4-1.png",
"3-Table3-1.png",
"4-Table5-1.png",
"4-Table6-1.png"
]
} | [
"Which of the three bias mitigation methods is most effective?",
"What pre-trained word embeddings are used?"
] | [
[
"1808.07231-Results & Discussion-1",
"1808.07231-4-Table6-1.png"
],
[
"1808.07231-Experimental Setup-4"
]
] | [
"Gender Swap",
"word2vec train on Google News corpus; FastText train on Wikipedia corpus; randomly initialized embeddings"
] | 145 |
1712.02555 | Hungarian Layer: Logics Empowered Neural Architecture | Neural architecture is a purely numeric framework, which fits the data as a continuous function. However, lacking logic flow (e.g. \textit{if, for, while}), traditional algorithms (e.g. \textit{the Hungarian algorithm, A$^*$ search, decision tree algorithms}) cannot be embedded into this paradigm, which limits both the theory and the applications. In this paper, we recast the calculus graph as a dynamic process, which is guided by logic flow. Within our novel methodology, traditional algorithms can empower numerical neural networks. Specifically, regarding the subject of sentence matching, we reformulate this issue as a task-assignment problem, which is solved by the Hungarian algorithm. First, our model applies a BiLSTM to parse the sentences. Then the Hungarian layer aligns the matching positions. Last, we transform the matching results for soft-max regression with another BiLSTM. Extensive experiments show that our model outperforms other state-of-the-art baselines substantially. | {
"paragraphs": [
[
"Paraphrase identification is an important topic in artificial intelligence and this task justifies whether two sentences expressed in various forms are semantically similar, BIBREF0 . For example, “On Sunday, the boy runs in the yard” and “The child runs outside at the weekend” are identified as paraphrase. This task directly benefits many industrial applications, such as plagiarism identification BIBREF0 , machine translation BIBREF1 and removing redundancy questions in Quora website BIBREF2 . Recently, there emerge many methods, such as ABCNN BIBREF3 , Siamese LSTM BIBREF2 and L.D.C BIBREF4 .",
"Conventionally, neural methodology aligns the sentence pair and then generates a matching score for paraphrase identification, BIBREF4 , BIBREF2 . Regarding the alignment, we conjecture that the aligned unmatched parts are semantically critical, where we define the corresponded word pairs with low similarity as aligned unmatched parts. For an example: “On Sunday, the boy runs in the yard” and “The child runs inside at the weekend”, the matched parts (i.e. (Sunday, weekend), (boy, child), run) barely make contribution to the semantic sentence similarity, but the unmatched parts (i.e. “yard” and “inside”) determine these two sentences are semantically dissimilar. For another example: “On Sunday, the boy runs in the yard” and “The child runs outside at the weekend”, the aligned unmatched parts (i.e. “yard” and “outside”) are semantically similar, which makes the two sentences paraphrase. In conclusion, if the aligned unmatched parts are semantically consistent, the two sentences are paraphrase, otherwise they are non-paraphrase.",
"Traditional alignment methods take advantage of attention mechanism BIBREF2 , which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. For the input sentences in Figure FIGREF1 , the weight between “Sunday” and “run” is lower than the weight between “yard” and “inside”, but the former weight is not the evidence of paraphrase/non-paraphrase, because the former two words that are most dissimilar should not be aligned for an inappropriate comparison.",
"To extract the aligned unmatched parts, in this paper, we embed Hungarian algorithm BIBREF5 into neural architecture as Hungarian layer (Algorithm SECREF7 ). Illustrated in Figure FIGREF1 , the alignment in sentence matching could be formulated as the task-assignment problem, which is tackled by Hungarian algorithm. Simply, Hungarian algorithm works out the theoretically optimal alignment relationship in an exclusive manner and the exclusiveness characterizes the aligned unmatched parts. For the example in Figure FIGREF1 , because Hungarian layer allocates the aligned pairs with exclusiveness, the matched parts (i.e (Sunday, weekend), (boy, child), run) are aligned firstly, then the word “yard” would be assigned to the word “inside” with a negative similarity, making a strong evidence for discrimination.",
"Specifically, our model performs this task in three steps. First, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Regarding the training process of Hungarian layer, we modify the back-propagation algorithm in both directions. In the forward pass, Hungarian layer works out the alignment relationship, according to which, the computational graph is dynamically constructed, as demonstrated in Figure FIGREF13 . Once the computational graph has been dynamically constructed, the backward propagation could be performed as usual in a conventional graph.",
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification. Experimental results demonstrate that our model outperforms other baselines extensively and significantly, which verifies our theory about the aligned unmatched parts and illustrates the effectiveness of our methodology.",
"Contributions. (1.) We offer a new perspective for paraphrase identification, which focuses on the aligned unmatched parts of two sentences. Accordingly, we propose the Hungarian layer to extract the aligned unmatched parts. The proposed method can achieve hard and exclusive alignments between two sequences, while we can learn parameters by end-to-end back-propagation. (2.) Our model outperforms other baselines extensively, verifying the effectiveness of our theory and method.",
"Organization. In Section 2, we survey the related work of paraphrase identification and dynamic differentiable computational graphs. In Section 3, we introduce our neural architecture. In Section 4, we conduct the experiments. In Section 5, we conclude our paper and publish our codes."
],
[
"We have surveyed this task and categorized related papers into three lines."
],
[
"The topic of paraphrase identification raises in the last decade. The development has been through four stages before neural architectures: word specific, syntactic tree specific, semantic matching and probabilistic graph modeling.",
"Firstly, BIBREF6 focuses on simple surface-form matching between bag-of-words, which produces poor accuracy, because of word ambiguities and syntactic complexity. Therefore, syntactic analysis is introduced into this task for semantic understanding, such as deeper semantic analysis BIBREF7 , quasi-synchronous grammars BIBREF8 and tree edit distance BIBREF9 . Notably, most of these methods compare the grammar tree (e.g. syntactic tree, dependency tree, etc.) of sentence pair. Further, semantic information such as negation, hypernym, synonym and antonym is integrated into this task for a better prediction precision, BIBREF10 . Finally, BIBREF11 leverages a semi-Markov CRF to align phrases rather than words, which consumes too many resources for industrial applications.",
"In summary, the advantage of this branch, which roots the foundation in linguistics, is semantically interpretable, while the disadvantage is too simple to understand complex language phenomenon."
],
[
"With the popularity of deep neural network, some neural architectures are proposed to analyze the complex language phenomenon in a data-fitting way, which promotes the performance. First of all, the neural network extracts the abstracted features from each sentence independently, then measures the similarity of the abstracted feature pair. There list two frameworks: CNN-based and RAE-based.",
"Commonly, CNN could be treated as n-gram method, which corresponds to language model. Specifically, BIBREF12 applies a bi-gram CNN to jointly model source and target sequences. BIBREF13 achieves a better performance by following this work. BIBREF14 has proposed a RAE based model to characterize phrase-level representation, which promotes simple pooling method, BIBREF15 . Multi-perspective methods BIBREF2 take the advantage of multiple metric aspects to boost the accuracy.",
"In summary, the advantage of this branch is to model complex and ambiguous linguistic phenomenon in a black-box style. However, the disadvantage is that the encoder could not adjust the abstracted representations according to the correlation of sentence pair, making an imperfect matching process."
],
[
"To emphasize the correlation of sentence pair in encoder, the researchers propose the attention-based neural architectures, which guide the encoding process according to the corresponding part. There introduce the representative methods: ABCNN BIBREF3 and L.D.C BIBREF2 .",
"ABCNN is a CNN-based model. In a single stage, this model computes the attention similarity matrix for the convolution layer, then sums out each row and column as the weighs of pooling layer. The output of convolution layer is weighted by pooling layer in an average manner as the output of this stage. ABCNN could stack at most three stages. This method achieves satisfactory performance in many tasks, because of modeling correlation in sentence encoder. L.D.C model BIBREF4 is an attention-based method, which decomposes the hidden representations into similar and dissimilar parts, then respectively processes each parts to generate the final result. Notably, L.D.C is the state-of-the-art method.",
"In summary, the advantage of this branch is to model alignment or correlation in the encoding process. However, the disadvantage is to focus on the matched parts, rather than the unmatched parts, which are critical in this task as previously discussed."
],
[
"Neural Turing Machine (NTM) BIBREF16 , BIBREF17 is a seminal work to implement instrument-based algorithm in the neural architecture, which attempts to express algorithms by simulating memory and controller. However, NTM leverages the weighting technique, which involves too much noise and makes the learned algorithm fuzzy. Thus, we propose a hard way to embed algorithms into neural architectures.",
"There also exist some papers for dynamical computational graph construction. At the lower level, pointer-switch networks BIBREF18 are a kind of dynamic differentiable neural model. At the higher level, some architecture search models BIBREF19 , BIBREF20 construct new differentiable computational graphs dynamically at every iteration."
],
[
"First, we introduce the basic components of our neural architecture. Then, we analyze the training process of Hungarian layer, that how to dynamically construct the computational graph."
],
[
"Our neural architecture is illustrated in Figure FIGREF6 . Basically our model is composed by four components, namely, word embedding, bi-directional LSTM (BiLSTM), Hungarian layer and cosine similarity.",
"Word Embedding. The goal of this layer is to represent each word INLINEFORM0 in every sentence INLINEFORM1 with INLINEFORM2 -dimensional semantic vectors. The word representations, which are pre-trained by GloVe BIBREF21 , are unmodified within the learning procedure. The inputs of this layer are a pair of sentences as word sequences INLINEFORM3 and INLINEFORM4 , while the outputs are corresponding embedding matrices as INLINEFORM5 and INLINEFORM6 .",
"Bi-Directional LSTM (BiLSTM). The purpose of this layer is to transform lexical representations to hidden contextual representations. For hidden contextual encoding, we employ a parameter-shared bi-directional LSTM (BiLSTM) BIBREF22 to parse the word embeddings into hidden representations, mathematically as: DISPLAYFORM0 ",
"where INLINEFORM0 is the INLINEFORM1 -th hidden representation and INLINEFORM2 corresponds to the INLINEFORM3 -th word embedding in the source/target sentence or INLINEFORM4 / INLINEFORM5 .",
"Hungarian Layer. This layer, which is the matching component of our model, extracts the aligned unmatched parts from the source and target sentences. This layer is composed by two sequential stages.",
"Algorithm SECREF7 demonstrates the first stage. The objective of this stage is to align the source and target hidden representations. The inputs of this stage are INLINEFORM0 source hidden representation vectors INLINEFORM1 and INLINEFORM2 target hidden representation vectors INLINEFORM3 , while the outputs of this stage are INLINEFORM4 aligned hidden representation vector pairs INLINEFORM5 , assuming INLINEFORM6 , where INLINEFORM7 corresponds to the INLINEFORM8 -th aligned source/target hidden representation vector, respectively.",
"Specifically in this stage, there are totally three steps. First, the input hidden representations are crossly dotted to generate the pairwise similarity matrix INLINEFORM0 . Then, Hungarian algorithm works out the aligned source-target position pairs INLINEFORM1 with this similarity matrix. For example in Figure FIGREF1 , assuming the left/top sentence indicates the source/target sequence, the aligned source-target position pairs are listed as INLINEFORM2 . Last, the input hidden representation vectors INLINEFORM3 are re-organized into the aligned source-target hidden representation vector pairs INLINEFORM4 , according to the aligned source-target position pairs INLINEFORM5 .",
"The second stage attempts to extract the aligned unmatched parts by weighting the aligned hidden representations INLINEFORM0 from the first stage. Required by extracting the unmatched parts, if two aligned representations are matched, the weight for them should be small, otherwise, large dissimilarity leads to large weight. For this reason, we introduce cosine dissimilarity, mathematically as: DISPLAYFORM0 ",
"where INLINEFORM0 is the INLINEFORM1 -th aligned cosine dissimilarity and INLINEFORM2 is the INLINEFORM3 -th aligned cosine similarity from the first stage. Thus, the aligned hidden representations are concatenated and then weighted by cosine dissimilarity: DISPLAYFORM0 ",
"where INLINEFORM0 is the INLINEFORM1 -th output of Hungarian layer, INLINEFORM2 is the INLINEFORM3 -th aligned source/target hidden representation generated by Algorithm SECREF7 and INLINEFORM4 is the scalar-vector multiplication. Actually in the practical setting, most of cosine dissimilarity approach 0 and the remaining hidden representations indicate the aligned unmatched parts.",
"[t] Hungarian Layer: First Stage [1] Source and target sentence hidden representations: INLINEFORM0 and INLINEFORM1 . INLINEFORM2 , where INLINEFORM3 and INLINEFORM4 mean the INLINEFORM5 -th aligned hidden representations for source and target respectively, and INLINEFORM6 means the corresponding similarity. Generate the pairwise similarity matrix: INLINEFORM7 ",
"where INLINEFORM0 is the dot product and INLINEFORM1 is the length of vector. Perform Hungarian algorithm BIBREF5 to assign the aligned position pairs INLINEFORM2 , where INLINEFORM3 is INLINEFORM4 -th aligned source/target position of the sentence pair. INLINEFORM5 , where INLINEFORM6 is the length of source sentence. Compute INLINEFORM7 , where INLINEFORM8 is the pairwise similarity for INLINEFORM9 -th matched position. return INLINEFORM10 , where INLINEFORM11 corresponds to the INLINEFORM12 -th aligned source/target hidden representation, while INLINEFORM13 is the INLINEFORM14 -th aligned source-target position pair, INLINEFORM15 are the input source/target hidden representation vectors and INLINEFORM16 is the INLINEFORM17 -th aligned cosine similarity.",
"Cosine Similarity. Last, we average the concatenated hidden representations as the final sentence representation INLINEFORM0 , which is a conventional procedure in neural natural language processing, BIBREF4 . Then, we employ a cosine similarity as the output: DISPLAYFORM0 ",
"where INLINEFORM0 is the matching score, INLINEFORM1 is the length of vector and INLINEFORM2 / INLINEFORM3 is the corresponding source/target part of the final sentence representation INLINEFORM4 . Thus, our output ranges in INLINEFORM5 , where INLINEFORM6 means the two sentences are similar/paraphrase, and INLINEFORM7 means otherwise. For further evaluation of accuracy, we also apply a threshold learned in the development dataset to binary the cosine similarity as paraphrase/non-paraphrase. Notably, the introduction of concatenation layer facilitates the inference and training of Hungarian layer."
],
[
"Previously discussed, Hungarian algorithm is embedded into neural architecture, making a challenge for learning process. We tackle this issue by modifying the back-propagation algorithm in a dynamically graph-constructing manner. In the forward pass, we dynamically construct the links between Hungarian layer and the next layer, according to the aligned position pairs, while in the backward process, the back-propagation is performed through the dynamically constructed links. Next, we illustratively exemplify how the computational graph is dynamically constructed in Hungarian layer as Figure FIGREF13 shows.",
"As Figure FIGREF13 shows, in the forward propagation, Hungarian algorithm works out the aligned position pairs, according to which, neural components are dynamically connected to the next layer. For the example of Figure FIGREF13 , the 1st source and 2nd target word representations are jointly linked to the 1st aligned position of concatenation layer. Once the computational graph has been dynamically constructed in the forward pass, the backward process could propagate through the dynamically constructed links between layers, without any branching and non-differentiated issues. For the example in Figure FIGREF13 , the backward pass firstly propagates to the 1st aligned position of concatenation layer, then respectively propagates to 1st source and 2nd target word representations. In this way, the optimization framework could still adjust the parameters of neural architectures in an end-to-end manner."
],
[
"In this section, we verify our model performance on the famous public benchmark dataset of “Quora Question Pairs”. First, we introduce the experimental settings, in Section 4.1. Then, in Section 4.2, we conduct the performance evaluation. Last, in order to further test our assumptions, that the aligned unmatched parts are semantically critical, we conduct a case study for illustration in Section 4.3."
],
[
"We initialize the word embedding with 300-dimensional GloVe BIBREF21 word vectors pre-trained in the 840B Common Crawl corpus BIBREF21 . For the out-of-vocabulary (OOV) words, we directly apply zero vector as word representation. Regarding the hyper-parameters, we set the hidden dimension as 150 for each BiLSTM. To train the model, we leverage AdaDelta BIBREF23 as our optimizer, with hyper-parameters as moment factor INLINEFORM0 and INLINEFORM1 . We train the model until convergence, but at most 30 rounds. We apply the batch size as 512."
],
[
"Dataset. Actually, to demonstrate the effectiveness of our model, we perform our experiments on the famous public benchmark dataset of “Quora Question Pairs” . For a fair comparison, we follow the splitting rules of BIBREF2 . Specifically, there are over 400,000 question pairs in this dataset, and each question pair is annotated with a binary value indicating whether the two questions are paraphrase of each other or not. We randomly select 5,000 paraphrases and 5,000 non-paraphrases as the development set, and sample another 5,000 paraphrases and 5,000 non-paraphrases as the test set. We keep the remaining instances as the training set. Baselines. To make a sufficient comparison, we choose five state-of-the-art baselines: Siamese CNN, Multi-Perspective CNN, Siamese LSTM, Multi-Perspective LSTM, and L.D.C. Specifically, Siamese CNN and LSTM encode the two input sentences into two sentence vectors by CNN and LSTM, respectively, BIBREF24 . Based on the two sentence vectors, a cosine similarity is leveraged to make the final decision. Multi-Perspective methods leverage different metric aspects to promote the performance, BIBREF2 . L.D.C model BIBREF4 is an attention-based method, which decomposes the hidden representations into similar and dissimilar parts. L.D.C is a powerful model which achieves the state-of-the-art performance.",
"We have tested L.D.C. and our model five times to evaluate the mean and variance, then perform the test for statistical significance.",
" INLINEFORM0 We apply t-test and INLINEFORM1 . Thus, the improvement is statistically significant.",
"Results. Our results are reported in Table TABREF17 . We can conclude that:",
"Our method outperforms all the baselines, which illustrates the effectiveness of our model.",
"In order to evaluate the reliability of the comparison between L.D.C and our model, the results are tested for statistical significance using t-test. In this case, we obtain a p-value = 0.003 INLINEFORM0 0.01. Therefore, the null hypothesis that values are drawn from the same population (i.e., the accuracies of two approaches are virtually equivalent) can be rejected, which means that the improvement is statistically significant.",
"Compared with Siamese LSTM BIBREF24 , which lacks the matching layer, our model could precisely align the input sentences. Thus, our method promotes the performance.",
"Compared with L.D.C. BIBREF4 , which is an attention-based method and still analyzes the dissimilar part, our model could exactly extract the aligned unmatched parts rather than the fuzzy dissimilar parts. Thus, our performance is better.",
"Notably, L.D.C. is a very complex model, which is beaten by our simple model within a statistically significant improvement. This comparison illustrates our model is indeed simple but effective. Thus it is very suitable for industrial applications."
],
[
"We have conducted a case study in the practical setting of “Quora Question Pairs” with our model for paraphrase identification. Illustrated in Figure FIGREF18 , the slashed grids correspond to the aligned matched parts, while the crossed ones indicate the aligned unmatched parts. Notably, we mark the pairwise similarity below INLINEFORM0 as unmatched in this case study.",
"For the example of (a), there exist two input sentences: “What is your review of Hidden Figures -LRB- 2016 movie -RRB-” and “What are your impressions of Hidden Figures -LRB- 2017 movie -RRB-”. From our case analysis, most of the aligned parts are matched, while minor aligned unmatched parts are similar. Thus, our method justifies the two sentences as paraphrase. This is accorded to our assumption.",
"For the example of (b), there exist two input sentences: “Why is saltwater taffy candy imported in Austria” and “Why is salt water taffy candy unknown in Japan”. There are two unmatched parts that “imported/unknown” and “Austria/Japan”, which are conflicted. Thus, the case is classified as non-paraphrase.",
"For the example of (c), the two sentences are: “How can I stop being addicted to love” and “How can I stop being so addicted to my phone”. From our case analysis, there is an extreme conflict that “love/phone”, making this case non-paraphrase, according to our assumption.",
"For the example of (d), the two sentences are: “Is a(n) APK file just a hidden app” and “Where do APK files get stored in Android Studio”. As we know, there are too many conflicts in this case, making a very dissimilar score as non-paraphrase.",
"In summary, this case study justifies our assumption that “the aligned unmatched parts are semantically critical”."
],
[
"In this paper, we leverage Hungarian algorithm to design Hungarian layer, which extracts the aligned matched and unmatched parts exclusively from the sentence pair. Then our model is designed by assuming the aligned unmatched parts are semantically critical. Experimental results on benchmark datasets verify our theory and demonstrate the effectiveness of our proposed method."
]
],
"section_name": [
"Introduction",
"Related Work",
"Non-Neural Architecture for Paraphrase Identification",
"Neural Architecture for Paraphrase Identification: Independent Sentence Encoder",
"Neural Architecture for Paraphrase Identification: Interdependent Sentence Encoder",
"Dynamic Differentiable Computational Graphs",
"Methodology",
"Neural Architecture",
"Training Hungarian Layer",
"Experiment",
"Experimental Setting",
"Performance Evaluation",
"Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"b26a4be590f588025d6cd9c7fc0e17341a9d24b3",
"bcf8077db73b0105efd23787adf99b8371300101"
],
"answer": [
{
"evidence": [
"As Figure FIGREF13 shows, in the forward propagation, Hungarian algorithm works out the aligned position pairs, according to which, neural components are dynamically connected to the next layer. For the example of Figure FIGREF13 , the 1st source and 2nd target word representations are jointly linked to the 1st aligned position of concatenation layer. Once the computational graph has been dynamically constructed in the forward pass, the backward process could propagate through the dynamically constructed links between layers, without any branching and non-differentiated issues. For the example in Figure FIGREF13 , the backward pass firstly propagates to the 1st aligned position of concatenation layer, then respectively propagates to 1st source and 2nd target word representations. In this way, the optimization framework could still adjust the parameters of neural architectures in an end-to-end manner."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As Figure FIGREF13 shows, in the forward propagation, Hungarian algorithm works out the aligned position pairs, according to which, neural components are dynamically connected to the next layer.",
"Once the computational graph has been dynamically constructed in the forward pass, the backward process could propagate through the dynamically constructed links between layers, without any branching and non-differentiated issues."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"As Figure FIGREF13 shows, in the forward propagation, Hungarian algorithm works out the aligned position pairs, according to which, neural components are dynamically connected to the next layer. For the example of Figure FIGREF13 , the 1st source and 2nd target word representations are jointly linked to the 1st aligned position of concatenation layer. Once the computational graph has been dynamically constructed in the forward pass, the backward process could propagate through the dynamically constructed links between layers, without any branching and non-differentiated issues. For the example in Figure FIGREF13 , the backward pass firstly propagates to the 1st aligned position of concatenation layer, then respectively propagates to 1st source and 2nd target word representations. In this way, the optimization framework could still adjust the parameters of neural architectures in an end-to-end manner."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As Figure FIGREF13 shows, in the forward propagation, Hungarian algorithm works out the aligned position pairs, according to which, neural components are dynamically connected to the next layer. For the example of Figure FIGREF13 , the 1st source and 2nd target word representations are jointly linked to the 1st aligned position of concatenation layer. Once the computational graph has been dynamically constructed in the forward pass, the backward process could propagate through the dynamically constructed links between layers, without any branching and non-differentiated issues."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1ff6b3f672ed32bff8ede85bbc9aaf859879a239",
"78ab3dd1f65083db14ca505e0ff54d248eeab94f",
"835ca62fecb2c21cad2522ba613f6a65a318bc84"
],
"answer": [
{
"evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification. Experimental results demonstrate that our model outperforms other baselines extensively and significantly, which verifies our theory about the aligned unmatched parts and illustrates the effectiveness of our methodology."
],
"extractive_spans": [
"Quora Question Pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification. Experimental results demonstrate that our model outperforms other baselines extensively and significantly, which verifies our theory about the aligned unmatched parts and illustrates the effectiveness of our methodology."
],
"extractive_spans": [
"Quora Question Pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification. Experimental results demonstrate that our model outperforms other baselines extensively and significantly, which verifies our theory about the aligned unmatched parts and illustrates the effectiveness of our methodology."
],
"extractive_spans": [
"the public benchmark dataset of “Quora Question Pairs”"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"96d30f143f25bcd27cf86f2a0b46981030453066",
"ac6ad862428eac2aec6a35934d4326b9b3c61563",
"e915b7ec59c653bc57633c16fc09ed5d184bf2a2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"extractive_spans": [],
"free_form_answer": "0.78% over the best state-of-the-art baseline",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"extractive_spans": [],
"free_form_answer": "The average improvement in accuracy of their model over baselines is 3.026 points.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"extractive_spans": [],
"free_form_answer": "by more than 0.18",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Performance Evaluation on “Quora Question Pairs”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they ensure the that the architecture is differentiable everywhere after adding the Hungarian layer?",
"Which dataset(s) do they train on?",
"By how much does their model outperform state-of-the-art baselines?"
],
"question_id": [
"ed7ce13cd95f7664a5e4fc530dcf72dc3808dced",
"26eceba0e6e4c0b6dfa94e5708dd74b63f701731",
"ff69b363ca604f80b2aa7afdc6a32d2ffd2d1f85"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Sentence Matching Formulated as Task Assignment. “Sunday”, “The boy”, “The child”, “run” and “weekend” are assigned to the corresponding parts, exclusively. Thus, as the aligned unmatched part, “yard” is assigned to “inside”, which is the evidence of non-paraphrase.",
"Figure 2: Our Neural Architecture. The sentence pair corresponds to word sequences (p1, p2, ..., pM ) and (q1, q2, ..., qN ). First, word embeddings are parsed by BiLSTM to generate the hidden representations. Then, the hidden representations of BiLSTM are processed by Hungarian layer, the outputs of which correspond to the weighting concatenation of aligned representations. Last, the results of Hungarian layer are measured by cosine similarity.",
"Figure 3: Illustration of Dynamically Constructed Computational Graph. The solid lines correspond to the forward propagation, while the dashed lines correspond to the backward propagation. Hungarian layer computes the aligned position pairs, according to which, the hidden representations are dynamically connected to the next layer. For example, (1, 2) means 1st word in source corresponds to 2nd word in target, while the 1st source and 2nd target word representations are jointly linked to the 1st aligned position of concatenation layer in the forward propagation. Backward propagation is performed through the dynamically constructed links between layers without any branching and non-differentiated issues.",
"Table 1: Performance Evaluation on “Quora Question Pairs”.",
"Figure 4: Illustration of Case Study. The slashed grids correspond to the aligned matched parts, while the crossed ones indicate the aligned unmatched parts."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"7-Table1-1.png",
"8-Figure4-1.png"
]
} | [
"By how much does their model outperform state-of-the-art baselines?"
] | [
[
"1712.02555-7-Table1-1.png"
]
] | [
"by more than 0.18"
] | 146 |
1909.02322 | Informative and Controllable Opinion Summarization | Opinion summarization is the task of automatically generating summaries for a set of opinions about a specific target (e.g., a movie or a product). Since the number of input documents can be prohibitively large, neural network-based methods sacrifice end-to-end elegance and follow a two-stage approach where an extractive model first pre-selects a subset of salient opinions and an abstractive model creates the summary while conditioning on the extracted subset. However, the extractive stage leads to information loss and inflexible generation capability. In this paper we propose a summarization framework that eliminates the need to pre-select salient content. We view opinion summarization as an instance of multi-source transduction, and make use of all input documents by condensing them into multiple dense vectors which serve as input to an abstractive model. Beyond producing more informative summaries, we demonstrate that our approach allows to take user preferences into account based on a simple zero-shot customization technique. Experimental results show that our model improves the state of the art on the Rotten Tomatoes dataset by a wide margin and generates customized summaries effectively. | {
"paragraphs": [
[
"The proliferation of opinions expressed in online reviews, blogs, internet forums, and social media has created a pressing need for automated systems which enable customers, companies, or service providers to make informed decisions without having to absorb large amounts of opinionated text. Opinion summarization is the task of automatically generating summaries for a set of opinions about a specific target BIBREF0. Figure FIGREF1 shows various reviews about the movie “Coach Carter” and example summaries generated by humans and automatic systems. The vast majority of previous work BIBREF1 views opinion summarization as the final stage of a three-step process involving: (1) aspect extraction (i.e., finding features pertaining to the target of interest, such as battery life or sound quality); (2) sentiment prediction (i.e., determining the sentiment of the extracted aspects); and (3) summary generation (i.e., presenting the identified opinions to the user). Textual summaries are created following mostly extractive methods which select representative segments (usually sentences) from the source text BIBREF2, BIBREF3, BIBREF4, BIBREF5. Despite being less popular, abstractive approaches seem more appropriate for the task at hand as they attempt to generate summaries which are maximally informative and minimally redundant without simply rearranging passages from the original opinions BIBREF6, BIBREF7, BIBREF8, BIBREF9. General-purpose summarization approaches have recently shown promising results with end-to-end models which are data-driven and take advantage of the success of sequence-to-sequence neural network architectures. Most approaches BIBREF10, BIBREF11, BIBREF12, BIBREF13 encode documents and then decode the learned representations into an abstractive summary, often by attending to the source input BIBREF14 and copying words from it BIBREF15. Under this modeling paradigm, it is no longer necessary to identify aspects and their sentiment for the opinion summarization task, as these are learned indirectly from training data (i.e., sets of opinions and their corresponding summaries). These models are usually tested on domains where the input is either one document or a small set of documents. However, the number of opinions tends to be very large (150 for the example in Figure FIGREF1). It is therefore practically unfeasible to train a model in an end-to-end fashion, given the memory limitations of modern hardware. As a result, current approaches BIBREF16, BIBREF17, BIBREF18, BIBREF19 sacrifice end-to-end elegance in favor of a two-stage framework which we call Extract-Abstract: an extractive model first selects a subset of opinions and an abstractive model then generates the summary while conditioning on the extracted subset (see Figure FIGREF5). The extractive pass unfortunately has two drawbacks. Firstly, on account of having access to a subset of opinions, the summaries can be less informative and inaccurate, as shown in Figure FIGREF1. And secondly, user preferences cannot be easily taken into account (e.g., the reader may wish to obtain a summary focusing on the acting or plot of a movie as opposed to a general-purpose summary) since more specialized information might have been removed.",
"In this paper, we propose Condense-Abstract, an alternative two-stage framework which uses all input documents when generating the summary (see Figure FIGREF5). We view the opinion summarization problem as an instance of multi-source transduction BIBREF20; we first represent the input documents as multiple encodings, aiming to condense their meaning and distill information relating to sentiment and various aspects of the target being reviewed. These condensed representations are then aggregated using a multi-source fusion module based on which an opinion summary is generated using an abstractive model. We also introduce a zero-shot customization technique allowing users to control important aspects of the generated summary at test time. Our approach enables controllable generation while leveraging the full spectrum of opinions available for a specific target. We perform experiments on a dataset consisting of movie reviews and opinion summaries elicited from the Rotten Tomatoes website (BIBREF16; see Figure FIGREF1). Our framework outperforms state-of-the-art models by a large margin using automatic metrics and in a judgment elicitation study. We also verify that our zero-shot customization technique can effectively generate need-specific summaries."
],
[
"Most opinion summarization models follow extractive methods (see BIBREF21 and BIBREF22 for overviews), with the exception of a few systems which are able to generate novel words and phrases not featured in the source text. BIBREF6 propose a graph-based framework for generating ultra concise opinion summaries, while BIBREF8 represent reviews by discourse trees which they aggregate to a global graph from which they generate a summary. Other work BIBREF7, BIBREF23 takes the distribution of opinions and their aspects into account so as to generate more readable summaries. BIBREF9 present a hybrid system which uses extractive techniques to select salient quotes from the input reviews and embeds them into an abstractive summary to provide evidence for positive or negative opinions. More recent work has seen the effective application of sequence-to-sequence models BIBREF24, BIBREF14 to various abstractive summarization tasks including headline generation BIBREF10, single- BIBREF15, BIBREF25, and multi-document summarization BIBREF16, BIBREF17, BIBREF18. Closest to our approach is the work of BIBREF16 who generate opinion summaries following a two-stage process which first selects documents bearing pertinent information, and then generates the summary by conditioning on these documents. Specifically, they use a ridge regression model with hand-engineered features such as TF-IDF scores and word counts, to estimate the importance of a document relative to its cluster (see also BIBREF17 for a survey of additional document selection methods). The extracted documents are then concatenated into a long sequence and fed to an encoder-decoder model. Our proposed framework eliminates the need to pre-select salient documents which we argue leads to information loss and less flexible generation capability. Instead, a separate model first condenses the source documents into multiple dense vectors which serve as input to a decoder to generate an abstractive summary. Beyond producing more informative summaries, we demonstrate that our approach allows to customize them. Recent conditional generation models have focused on controlling various aspects of the output such as politeness BIBREF26, length BIBREF27, BIBREF28, content BIBREF28, or style BIBREF29. In contrast to these approaches, our customization technique requires neither training examples of documents and corresponding (customized) summaries nor specialized pre-processing to encode which tokens in the input might give rise to customization."
],
[
"We propose an alternative to the Extract first, Abstract later (EA) approach which eliminates the need for an extractive model and enables the use of all input documents when generating the summary. Figure FIGREF5 illustrates our Condense-Abstract (CA) framework. In lieu of an integrated encoder-decoder, we generate summaries using two separate models. The Condense model returns document encodings for $N$ input documents, while the Abstract model uses these encodings to create an abstractive summary. This two-step approach has at least three advantages for multi-document summarization. Firstly, optimization is easier since parameters for the encoder and decoder weights are learned separately. Secondly, CA-based models are more space-efficient, since $N$ documents in the cluster are not treated as one very large instance but as $N$ separate instances when training the Condense model. Finally, it is possible to generate customized summaries targeting specific aspects of the input since the Abstract model operates over the encodings of all available documents."
],
[
"Let $\\mathcal {D}$ denote a cluster of $N$ documents about a specific target (e.g., a movie or product). For each document $X=\\lbrace w_1,w_2,...,w_M\\rbrace \\in \\mathcal {D}$, the Condense model learns an encoding $d$, and word-level encodings $h_1, h_2, ..., h_M$. We use a BiLSTM autoencoder as the Condense model. Specifically, we employ a Bidirectional Long Short Term Memory (BiLSTM) encoder BIBREF31:",
"where $\\overrightarrow{h}_i$ and $\\overleftarrow{h}_i$ are forward and backward hidden states of the BiLSTM at timestep $i$, and $;$ denotes concatenation. Training is performed with a reconstruction objective. Specifically, we use a separate LSTM as the decoder where the first hidden state $z_0$ is set to $d$ (see Equation (5)). Words $w^{\\prime }_t$ are generated using a softmax classifier:",
"The auto-encoder is trained with a maximum likelihood loss:",
"An advantage of using a separate encoder is increased training data, since we treat a single target with $N$ input documents as $N$ different instances. Once training has taken place, we use the Condense model to obtain $N$ pairs of document encodings $\\lbrace d_i\\rbrace $ and word-level encodings $\\lbrace h_{i,1}, h_{i,2}, ..., h_{i,M}\\rbrace $, $1 \\le i \\le N$ as representations for the documents in $\\mathcal {D}$."
],
[
"The Abstract model first fuses the multiple encodings obtained from the Condense stage and then generates a summary using a decoder."
],
[
"The $N$ pairs of document encodings $\\lbrace d_i\\rbrace $ and word-level encodings $\\lbrace h_{i,1}, h_{i,2}, ..., h_{i,M}\\rbrace $, $1 \\le i \\le N$ are aggregated into a single pair of document encoding $d^{\\prime }$ and word-level encodings $h^{\\prime }_1, h^{\\prime }_2, ..., h^{\\prime }_V$, where $V$ is the number of total unique tokens in the input. We fuse document encodings, using an attentive pooling method which gives more weight to important documents. Specifically, we learn a set of weight vectors $a_i \\in \\mathbb {R}^{D_d}$, where $D_d$ is the dimension of $d_i$, to weight-sum the document encodings:",
"where the mean encoding $\\bar{d}$ is used as the query vector, and $W_p \\in \\mathbb {R}^{D_d \\times D_d \\times D_d}$ is a learned tensor. We also fuse word-level encodings, since the same words may appear in multiple documents. To do this, we simply average all encodings of the same word, if multiple tokens of the word exist:",
"where $V_{w_j}$ is the number of tokens for word $w_j$ in the input."
],
[
"The decoder generates summaries conditioned on the reduced document encoding $d^{\\prime }$ and reduced word-level encodings $h^{\\prime }_1,h^{\\prime }_2,...,h^{\\prime }_V$. We use a simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32. We set the first hidden state $s_0$ to $d^{\\prime }$, and run an LSTM to calculate the current hidden state using the previous hidden state $s_{t-1}$ and word $y^{\\prime }_{t-1}$ at time step $t$:",
"At each time step $t$, we use an attention mechanism over word-level encodings to output the attention weight vector $a_t$ and context vector $c_t$:",
"Finally, we employ a copy mechanism over the input words to output the final word probability $p(y^{\\prime }_t)$ as a weighted sum over the generation probability $p_g(y^{\\prime }_t)$ and the copy probability $p_c(y^{\\prime }_t)$:",
"where $W$, $v$, and $b$ are learned parameters, and $t$ is the current timestep."
],
[
"The model presented so far treats all documents as equally important and has no specific mechanism to encourage saliency and eliminate redundancy. In order to encourage the decoder to focus on salient content, we can straightforwardly incorporate information from an extractive step. In experiments, we select $k$ documents using SummaRunner BIBREF33, a state-of-the-art neural extractive model where each document is classified as to whether it should be part of the summary or not. We concatenate $k$ preselected documents into a long sequence and encode it using a separate BiLSTM encoder. The encoded sequence serves as input to an LSTM decoder which generates a salience-biased hidden state $r_t$. We then update hidden state $s_t$ in Equation (DISPLAY_FORM19) as $s_t = [s_t; r_t]$. Notice that we still take all input documents into account, while acknowledging that some might be more descriptive than others."
],
[
"We use two objective functions to train the Abstract model. Firstly, we use a maximum likelihood loss to optimize the generation probability distribution $p(y^{\\prime }_t)$ based on gold summaries $Y=\\lbrace y_1,y_2,...,y_L\\rbrace $ provided at training time:",
"Secondly, we propose a way to introduce supervision and guide the attention pooling weights $W_p$ in Equation () when fusing the document encodings. Our motivation is that the resulting fused encoding $d^{\\prime }$ should be roughly equivalent to the encoding of summary $y$, which can be calculated as $z=\\text{\\textsc {Condense}}(y)$. Specifically, we use a hinge loss that maximizes the inner product between $d^{\\prime }$ and $z$ and simultaneously minimizes the inner product between $d^{\\prime }$ and $n_i$, where $n_i$ is the encoding of one of five randomly sampled negative summaries:",
"The final objective is then the sum of both loss functions:",
""
],
[
"Another advantage of our approach is that at test time, we can either generate a general-purpose summary or a need-specific summary. To generate the former, we run the trained model as is and use beam search to find the sequence of words with the highest cumulative probability. To generate the latter, we employ a simple technique that revises the query vector $\\bar{d}$ in Equation (DISPLAY_FORM16). More concretely, in the movie review domain, we assume that users might wish to obtain a summary that focuses on the positive or negative aspects of a movie, the quality of the acting, or the plot. In a different domain, users might care about the price of a product, its comfort, and so on. We undertake such customization without requiring access to need-specific summaries at training time. Instead, at test time, we assume access to background reviews to represent the user need. For example, if we wish to generate a positive summary, our method requires a set of reviews with positive sentiment which approximately provide some background on how sentiment is communicated in a review. We use these background reviews conveying a user need $x$ (e.g., acting, plot, positive or negative sentiment) during fusion to attend more to input reviews related to $x$. Let $C_x$ denote the set of background reviews. We obtain a new query vector $\\hat{d} = \\sum _{c=1}^{|C_x|} d_c / |C_x|$, where $d_c$ is the document encoding of the $c$'th review in $C_x$, calculated using the Condense model. This change allows the model to focus on input reviews with semantics similar to the user need as conveyed by the background reviews $C_x$. The new query vector $\\hat{d}$ is used instead of $\\bar{d}$ to obtain document encoding $d^{\\prime }$ (see Equation (DISPLAY_FORM16))."
],
[
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). On average, reviews are 19.7 tokens long, while the summary length is 19.6 tokens. The dataset is divided into 2,458 movies for training, 536 movies for development, and 737 movies for testing. Following previous work BIBREF16, we used a generic label for movie titles during training which we replace with the original movie names at test time."
],
[
"For all experiments, our model used word embeddings with 128 dimensions, pretrained on the training data using GloVe BIBREF34. We set the dimensions of all hidden vectors to 256, the batch size to 8, and the beam search size to 5. We applied dropout BIBREF35 at a rate of 0.5. The model was trained using the Adam optimizer BIBREF36 and $l_2$ constraint BIBREF37 of 2. We performed early stopping based on model performance on the development set. Our model is implemented in PyTorch."
],
[
"We present two variants of our approach: (a) AE+Att+Copy uses the Condense and Abstract models described above, but without salience-biased extracts, while (b) AE+Att+Copy+Salient does incorporate them. We further compared our approach against two types of methods: one-pass methods and methods that use the EA framework. Fully extractive methods include (c) LexRank BIBREF38, a PageRank-like summarization algorithm which generates a summary by selecting the $n$ most salient units, until the length of the target summary is reached; (d) SubModular BIBREF39, a supervised learning approach to train submodular scoring functions for extractive multi-document summarization; (e) Opinosis BIBREF6 a graph-based abstractive summarizer that generates concise summaries of highly redundant opinions; and (f) SummaRunner BIBREF33. EA-based methods include (g) Regress+S2S BIBREF16, an instantiation of the EA framework where a ridge regression model with hand-engineered features implements the Extract model, while an attention-based sequence-to-sequence neural network is the Abstract model; (h) SummaRunner+S2S, our implementation of an EA-based system which uses SummaRunner instead of Regress as the Extract model; and (i) SummaRunner+S2S+Copy, the same model as (h) but enhanced with a copy mechanism BIBREF32. For all EA-based systems, we set $k=5$, which is tuned on the development set. Larger $k$ leads to worse performance, possibly because the Abstract model becomes harder to optimize."
],
[
"We considered two evaluation metrics which are also reported in BIBREF16: METEOR BIBREF40, a recall-oriented metric that rewards matching stems, synonyms, and paraphrases, and ROUGE-SU4 BIBREF41 which is calculated as the recall of unigrams and skip-bigrams up to four words. We also report F1 for ROUGE-1, ROUGE-2, and ROUGE-L, which are widely used in summarization BIBREF41. They respectively measure word-overlap, bigram-overlap, and the longest common subsequence between the reference and system summaries. Our results are presented in Table TABREF28. The first block shows one-pass systems, both supervised (SubModular, SummaRunner) and unsupervised (LexRank, Opinosis). We can see that SummaRunner is the best performing system in this block; despite being extractive, it benefits from training data and the ability of neural models to learn task-specific representations. The second block in Table TABREF28 shows several two-pass abstractive systems based on the EA framework. Our implementation of an EA-based system, SummaRunner+S2S+Copy, improves over the purely extractive SummaRunner and the previously reported best EA-based system, Regress+S2S. The third block presents two models using the proposed CA framework. Both systems outperform all other models across all metrics; AE+Att+Copy+Salient is the best model overall which exploits information about all documents and most salient ones."
],
[
"In addition to automatic evaluation, we also assessed system output by eliciting human judgments. Participants compared summaries produced from the best extractive baseline (SummaRunner), and the best EA- and CA-based systems (SummaRunner+S2S+Copy and AE+Att+Copy+Salient, respectively). As an upper bound, we also included Gold standard summaries. The study was conducted on the Amazon Mechanical Turk platform using Best-Worst Scaling (BWS; BIBREF42), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales BIBREF43. Specifically, participants were shown the movie title and basic background information (i.e., synopsis, release year, genre, director, and cast). They were also presented with three system summaries and asked to select the best and worst among them according to Informativeness (i.e., does the summary convey opinions about specific aspects of the movie in a concise manner?), Correctness (i.e., is the information in the summary factually accurate and does it correspond to the information given about the movie?), and Grammaticality (i.e., is the summary fluent and grammatical?). Examples of system summaries are shown in Figure FIGREF1 and Figure FIGREF37. We randomly selected 50 movies from the test set and compared all possible combinations of summary triples for each movie. We collected three judgments for each comparison. The order of summaries and movies was randomized per participant. The score of a system was computed as the percentage of times it was chosen as best minus the percentage of times it was selected as worst. The scores range from -1 (worst) to 1 (best) and are shown in Table TABREF36. Perhaps unsurprisingly, the human-generated gold summaries were considered best, whereas our model (AE+Att+Copy+Salient) was ranked second, indicating that humans find its output more informative, correct, and grammatical compared to other systems. SummaRunner was ranked third followed by SummaRunner+S2S+Copy. We inspected the summaries produced by the latter system and found they were factually incorrect bearing little correspondence to the movie (examples shown in Figure FIGREF37), possibly due to the huge information loss at the extraction stage. All pairwise system differences are statistically significant using a one-way ANOVA with posthoc Tukey HSD tests ($p < 0.01$)."
],
[
"We further assessed the ability of CA-based systems to generate customized summaries at test time. As discussed earlier, customization at test time is not trivially possible for EA-based systems and as a result we cannot compare against them. Instead, we evaluate two CA-based systems, namely AE+Att+Copy and AE+Att+Copy+Salient. Similar to EA-based systems, the latter biases summary generation towards the $k$ most salient extracted opinions using an additional extractive module, which may not contain information relevant to the user's need (we set $k=5$ in our experiments). We thus expect this model to be less effective for customization than AE+Att+Copy which makes no assumptions regarding which summaries to consider. In this experiment, we assume users may wish to control the output summaries in four ways focusing on acting- and plot-related aspects of a movie review, as well as its sentiment, which may be positive or negative. Let Cust($x$) be the zero-shot customization technique discussed in the previous section, where $x$ is an information need (i.e., acting, plot, positive, or negative). We sampled a small set of background reviews $C_x$ ($|C_x|$=1,000) from a corpus of 1 million reviews covering 7,500 movies from the Rotten Tomatoes website, made available in BIBREF29. The reviews contain sentiment labels provided by their authors and heuristically classified aspect labels. We then ran Cust($x$) using both AE+Att+Copy and AE+Att+Copy+Salient models. We show in Figure FIGREF37 customized summaries generated by the two models. To determine which system is better at customization, we again conducted a judgment elicitation study on AMT. Participants read a summary which was created by a general-purpose system or its customized variant. They were then asked to decide if the summary is generic or focuses on a specific aspect (plot or acting) and expresses positive, negative, or neutral sentiment. We selected 50 movies (from the test set) which had mixed reviews and collected judgements from three different participants per summary. The summaries were presented in random order per participant.",
"Table TABREF40 shows what participants thought of summaries produced by non-customized systems (see column No) and systems which had customization switched on (see column Yes). Overall, we observe that AE+Att+Copy is able to customize summaries to a great extent. In all cases, crowdworkers perceive a significant increase in the proportion of aspect $x$ when using Cust($x$). AE+Att+Copy+Salient is unable to generate need-specific summaries, showing no discernible difference between generic and customized summaries. This shows that the use of an extractive module, which is used as one of the main components of EA-based approaches, limits the flexibility of the abstractive model to customize summaries based on a user need."
],
[
"We proposed the Condense-Abstract (CA) framework for opinion summarization. Both automatic and human-based evaluation show that CA-based approaches produce more informative and factually correct summaries compared to purely extractive models and models including an extractive summary pre-selection stage. We also show that a simple zero-shot customization technique is able to generate aspect- and sentiment-based summaries at test time. In the future, we plan to apply CA-based approaches to other multi-document summarization tasks and domains. It would also be interesting to investigate an unsupervised or semi-supervised approach where reviews are available but no (or only a few) gold-standard summaries are given."
]
],
"section_name": [
"Introduction",
"Related Work",
"Condense-Abstract Framework",
"Condense-Abstract Framework ::: The Condense Model",
"Condense-Abstract Framework ::: The Abstract Model",
"Condense-Abstract Framework ::: The Abstract Model ::: Multi-source Fusion",
"Condense-Abstract Framework ::: The Abstract Model ::: Decoder",
"Condense-Abstract Framework ::: The Abstract Model ::: Salience-biased Extracts",
"Condense-Abstract Framework ::: The Abstract Model ::: Training",
"Condense-Abstract Framework ::: Zero-shot Customization",
"Experimental Setup ::: Dataset",
"Experimental Setup ::: Training Configuration",
"Experimental Setup ::: Comparison Systems",
"Results ::: Automatic Evaluation",
"Results ::: Human Evaluation",
"Results ::: Customizing Summaries",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"2889b6797dda40ac56ef6fdbb50b125b74bf7de9",
"3e6d62f979fd3b7021fb690ab02a50dbd2cf94b9",
"966a4aeecfae68ca946e32606ab47ab73d33442c"
],
"answer": [
{
"evidence": [
"We present two variants of our approach: (a) AE+Att+Copy uses the Condense and Abstract models described above, but without salience-biased extracts, while (b) AE+Att+Copy+Salient does incorporate them. We further compared our approach against two types of methods: one-pass methods and methods that use the EA framework. Fully extractive methods include (c) LexRank BIBREF38, a PageRank-like summarization algorithm which generates a summary by selecting the $n$ most salient units, until the length of the target summary is reached; (d) SubModular BIBREF39, a supervised learning approach to train submodular scoring functions for extractive multi-document summarization; (e) Opinosis BIBREF6 a graph-based abstractive summarizer that generates concise summaries of highly redundant opinions; and (f) SummaRunner BIBREF33. EA-based methods include (g) Regress+S2S BIBREF16, an instantiation of the EA framework where a ridge regression model with hand-engineered features implements the Extract model, while an attention-based sequence-to-sequence neural network is the Abstract model; (h) SummaRunner+S2S, our implementation of an EA-based system which uses SummaRunner instead of Regress as the Extract model; and (i) SummaRunner+S2S+Copy, the same model as (h) but enhanced with a copy mechanism BIBREF32. For all EA-based systems, we set $k=5$, which is tuned on the development set. Larger $k$ leads to worse performance, possibly because the Abstract model becomes harder to optimize."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We further compared our approach against two types of methods: one-pass methods and methods that use the EA framework."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We present two variants of our approach: (a) AE+Att+Copy uses the Condense and Abstract models described above, but without salience-biased extracts, while (b) AE+Att+Copy+Salient does incorporate them. We further compared our approach against two types of methods: one-pass methods and methods that use the EA framework. Fully extractive methods include (c) LexRank BIBREF38, a PageRank-like summarization algorithm which generates a summary by selecting the $n$ most salient units, until the length of the target summary is reached; (d) SubModular BIBREF39, a supervised learning approach to train submodular scoring functions for extractive multi-document summarization; (e) Opinosis BIBREF6 a graph-based abstractive summarizer that generates concise summaries of highly redundant opinions; and (f) SummaRunner BIBREF33. EA-based methods include (g) Regress+S2S BIBREF16, an instantiation of the EA framework where a ridge regression model with hand-engineered features implements the Extract model, while an attention-based sequence-to-sequence neural network is the Abstract model; (h) SummaRunner+S2S, our implementation of an EA-based system which uses SummaRunner instead of Regress as the Extract model; and (i) SummaRunner+S2S+Copy, the same model as (h) but enhanced with a copy mechanism BIBREF32. For all EA-based systems, we set $k=5$, which is tuned on the development set. Larger $k$ leads to worse performance, possibly because the Abstract model becomes harder to optimize."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We further compared our approach against two types of methods: one-pass methods and methods that use the EA framework. Fully extractive methods include (c) LexRank BIBREF38, a PageRank-like summarization algorithm which generates a summary by selecting the $n$ most salient units, until the length of the target summary is reached; (d) SubModular BIBREF39, a supervised learning approach to train submodular scoring functions for extractive multi-document summarization; (e) Opinosis BIBREF6 a graph-based abstractive summarizer that generates concise summaries of highly redundant opinions; and (f) SummaRunner BIBREF33. EA-based methods include (g) Regress+S2S BIBREF16, an instantiation of the EA framework where a ridge regression model with hand-engineered features implements the Extract model, while an attention-based sequence-to-sequence neural network is the Abstract model; (h) SummaRunner+S2S, our implementation of an EA-based system which uses SummaRunner instead of Regress as the Extract model; and (i) SummaRunner+S2S+Copy, the same model as (h) but enhanced with a copy mechanism BIBREF32."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b5e0c1d93bf6d76bfb728b99b03a7eba52a3fd60",
"ec670256d13dfe5c44aa16315f816b03b7ffe456"
],
"answer": [
{
"evidence": [
"We propose an alternative to the Extract first, Abstract later (EA) approach which eliminates the need for an extractive model and enables the use of all input documents when generating the summary. Figure FIGREF5 illustrates our Condense-Abstract (CA) framework. In lieu of an integrated encoder-decoder, we generate summaries using two separate models. The Condense model returns document encodings for $N$ input documents, while the Abstract model uses these encodings to create an abstractive summary. This two-step approach has at least three advantages for multi-document summarization. Firstly, optimization is easier since parameters for the encoder and decoder weights are learned separately. Secondly, CA-based models are more space-efficient, since $N$ documents in the cluster are not treated as one very large instance but as $N$ separate instances when training the Condense model. Finally, it is possible to generate customized summaries targeting specific aspects of the input since the Abstract model operates over the encodings of all available documents.",
"Let $\\mathcal {D}$ denote a cluster of $N$ documents about a specific target (e.g., a movie or product). For each document $X=\\lbrace w_1,w_2,...,w_M\\rbrace \\in \\mathcal {D}$, the Condense model learns an encoding $d$, and word-level encodings $h_1, h_2, ..., h_M$. We use a BiLSTM autoencoder as the Condense model. Specifically, we employ a Bidirectional Long Short Term Memory (BiLSTM) encoder BIBREF31:",
"The decoder generates summaries conditioned on the reduced document encoding $d^{\\prime }$ and reduced word-level encodings $h^{\\prime }_1,h^{\\prime }_2,...,h^{\\prime }_V$. We use a simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32. We set the first hidden state $s_0$ to $d^{\\prime }$, and run an LSTM to calculate the current hidden state using the previous hidden state $s_{t-1}$ and word $y^{\\prime }_{t-1}$ at time step $t$:"
],
"extractive_spans": [],
"free_form_answer": "Condense-Abstract Framework, consisting of BiLSTM autoencoder and LSTM decoder with attention.",
"highlighted_evidence": [
" Figure FIGREF5 illustrates our Condense-Abstract (CA) framework. ",
" We use a BiLSTM autoencoder as the Condense model.",
"We use a simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We propose an alternative to the Extract first, Abstract later (EA) approach which eliminates the need for an extractive model and enables the use of all input documents when generating the summary. Figure FIGREF5 illustrates our Condense-Abstract (CA) framework. In lieu of an integrated encoder-decoder, we generate summaries using two separate models. The Condense model returns document encodings for $N$ input documents, while the Abstract model uses these encodings to create an abstractive summary. This two-step approach has at least three advantages for multi-document summarization. Firstly, optimization is easier since parameters for the encoder and decoder weights are learned separately. Secondly, CA-based models are more space-efficient, since $N$ documents in the cluster are not treated as one very large instance but as $N$ separate instances when training the Condense model. Finally, it is possible to generate customized summaries targeting specific aspects of the input since the Abstract model operates over the encodings of all available documents.",
"Let $\\mathcal {D}$ denote a cluster of $N$ documents about a specific target (e.g., a movie or product). For each document $X=\\lbrace w_1,w_2,...,w_M\\rbrace \\in \\mathcal {D}$, the Condense model learns an encoding $d$, and word-level encodings $h_1, h_2, ..., h_M$. We use a BiLSTM autoencoder as the Condense model. Specifically, we employ a Bidirectional Long Short Term Memory (BiLSTM) encoder BIBREF31:",
"The Abstract model first fuses the multiple encodings obtained from the Condense stage and then generates a summary using a decoder.",
"The decoder generates summaries conditioned on the reduced document encoding $d^{\\prime }$ and reduced word-level encodings $h^{\\prime }_1,h^{\\prime }_2,...,h^{\\prime }_V$. We use a simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32. We set the first hidden state $s_0$ to $d^{\\prime }$, and run an LSTM to calculate the current hidden state using the previous hidden state $s_{t-1}$ and word $y^{\\prime }_{t-1}$ at time step $t$:"
],
"extractive_spans": [
"BiLSTM autoencoder as the Condense model",
"simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Condense model returns document encodings for $N$ input documents, while the Abstract model uses these encodings to create an abstractive summary.",
"We use a BiLSTM autoencoder as the Condense model.",
"The Abstract model first fuses the multiple encodings obtained from the Condense stage and then generates a summary using a decoder.",
"We use a simple LSTM decoder enhanced with attention BIBREF14 and copy mechanisms BIBREF32."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b77a7a1d9939ea49afd4c3796161edffd74b7ac4",
"b9bd292e9055561e7656b86b862af537177dd8b5",
"d6d4419ae0121bfa9d5d26fb16bf978fc0000fae"
],
"answer": [
{
"evidence": [
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). On average, reviews are 19.7 tokens long, while the summary length is 19.6 tokens. The dataset is divided into 2,458 movies for training, 536 movies for development, and 737 movies for testing. Following previous work BIBREF16, we used a generic label for movie titles during training which we replace with the original movie names at test time."
],
"extractive_spans": [],
"free_form_answer": "3731 movies containing around 372353 reviews",
"highlighted_evidence": [
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). On average, reviews are 19.7 tokens long, while the summary length is 19.6 tokens. The dataset is divided into 2,458 movies for training, 536 movies for development, and 737 movies for testing. Following previous work BIBREF16, we used a generic label for movie titles during training which we replace with the original movie names at test time."
],
"extractive_spans": [],
"free_form_answer": "3731",
"highlighted_evidence": [
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We performed experiments on the Rotten Tomatoes dataset provided in BIBREF16. It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). On average, reviews are 19.7 tokens long, while the summary length is 19.6 tokens. The dataset is divided into 2,458 movies for training, 536 movies for development, and 737 movies for testing. Following previous work BIBREF16, we used a generic label for movie titles during training which we replace with the original movie names at test time."
],
"extractive_spans": [
"3,731 movies; for each movie we are given a large set of reviews (99.8 on average)"
],
"free_form_answer": "",
"highlighted_evidence": [
"It contains 3,731 movies; for each movie we are given a large set of reviews (99.8 on average) written by professional critics and users and a gold-standard consensus, i.e. a summary written by an editor (see an example in Figure FIGREF1). On average, reviews are 19.7 tokens long, while the summary length is 19.6 tokens."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they compare to previous work?",
"What is the model trained?",
"How large is the dataset used?"
],
"question_id": [
"ee19fd54997f2eec7c87c7d4a2169026fe208285",
"74fcb741d29892918903702dbb145fef372d1de3",
"de0d135b94ba3b3a4f4a0fb03df38a84f9dc9da4"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Three out of 150 reviews for the movie “Coach Carter”, and summaries written by the editor, and generated by a model following the EXTRACT-ABSTRACT approach and the proposed CONDENSE-ABSTRACT framework. The latter produces more informative and factual summaries whilst allowing to control aspects of the generated summary (such as the acting or plot of the movie).",
"Figure 2: Illustration of EA and CA frameworks for opinion summarization. In the CA framework, users can obtain needspecific summaries at test time (e.g., give me a summary focusing on acting).",
"Table 1: Automatic evaluation results. Systems whose results are taken from Wang and Ling (2016) are marked with an asterisk *. Best performing results per metric are boldfaced.",
"Table 2: System ranking based on human judgments, using Best-Worst Scaling.",
"Table 3: Proportion of summaries which mention a specific aspect/sentiment. Boldfaced values show a significant increase (p < 0.01; using two-sample bootstrap tests) compared to the non-customized system variant. Aspects are not mutually exclusive (e.g. a summary may talk about both acting and plot), thus the total percentage may exceed 100%.",
"Figure 3: Examples of general-purpose and need-specific summaries generated by four systems. We also show the consensus summary (GOLD). :::::::::"
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure3-1.png"
]
} | [
"What is the model trained?",
"How large is the dataset used?"
] | [
[
"1909.02322-Condense-Abstract Framework-0",
"1909.02322-Condense-Abstract Framework ::: The Abstract Model ::: Decoder-0",
"1909.02322-Condense-Abstract Framework ::: The Condense Model-0",
"1909.02322-Condense-Abstract Framework ::: The Abstract Model-0"
],
[
"1909.02322-Experimental Setup ::: Dataset-0"
]
] | [
"Condense-Abstract Framework, consisting of BiLSTM autoencoder and LSTM decoder with attention.",
"3731"
] | 147 |
1805.04579 | Using Statistical and Semantic Models for Multi-Document Summarization | We report a series of experiments with different semantic models on top of various statistical models for extractive text summarization. Though statistical models may better capture word co-occurrences and distribution across the text, they fail to detect the context and the sense of sentences/words as a whole. Semantic models help us gain better insight into the context of sentences. We show how tuning the weights between different models can help us achieve significant results on various benchmarks. Further training the pre-trained vectors used in the semantic models on the given corpus can give an additional boost in performance. Using weighting techniques between the different statistical models further refines our results. For statistical models, we have used TF/IDF, TextRank, and Jaccard/cosine similarities. For semantic models, we have used a WordNet-based model and proposed two models based on GloVe vectors and Facebook's InferSent. We tested our approach on the DUC 2004 dataset, generating 100-word summaries. We have discussed the system, algorithms, and analysis, and also proposed and tested possible improvements. ROUGE scores were used to compare to other summarizers. | {
"paragraphs": [
[
"Automatic Text Summarization deals with the task of condensing documents into a summary, whose level is similar to a human-generated summary. It is mostly distributed into two distinct domains, i.e., Abstractive Summarization and Extractive Summarization. Abstractive summarization( Dejong et al. ,1978) involves models to deduce the crux of the document. It then presents a summary consisting of words and phrases that were not there in the actual document, sometimes even paraphrasing BIBREF1 . A state of art method proposed by Wenyuan Zeng BIBREF2 produces such summaries with length restricted to 75. There have been many recent developments that produce optimal results, but it is still in a developing phase. It highly relies on natural language processing techniques, which is still evolving to match human standards. These shortcomings make abstractive summarization highly domain selective. As a result, their application is skewed to the areas where NLP techniques have been superlative. Extractive Summarization, on the other hand, uses different methods to identify the most informative/dominant sentences through the text, and then present the results, ranking them accordingly. In this paper, we have proposed two novel stand-alone summarization methods.The first method is based on Glove Model BIBREF3 ,and other is based on Facebook's InferSent BIBREF4 . We have also discussed how we can effectively subdue shortcomings of one model by using it in coalition with models which capture the view that other faintly held."
],
[
"A vast number of methods have been used for document summarization. Some of the methods include determining the length and positioning of sentences in the text BIBREF5 , deducing centroid terms to find the importance of text BIBREF5 and setting a threshold on average TF-IDF scores. Bag-of-words approach, i.e., making sentence/Word freq matrix, using a signature set of words and assigning them weights to use them as a criterion for importance measure BIBREF6 have also been used. Summarization using weights on high-frequency words BIBREF7 describes that high-frequency terms can be used to deduce the core of document.",
"While semantic summarizers like Lexical similarity is based on the assumption that important sentences are identified by strong chains BIBREF8 , BIBREF9 , BIBREF10 . In other words, it relates sentences that employ words with the same meaning (synonyms) or other semantic relation. It uses WordNet BIBREF11 to find similarity among words that apply to Word Frequency algorithm.POS(Part of Speech) Tagging and WSD(Word Sense Disambiguation) are common among semantic summarizers. Graphical summarizers like TextRank have also provided great benchmark results.TextRank assigns weights to important keywords from the document using graph-based model and sentences which capture most of those concepts/keywords are ranked higher) BIBREF9 , BIBREF12 TextRank uses Google's PageRank (Brin and Page, 1998) for graphical modeling. Though semantic and graphical models may better capture the sense of document but miss out on statistical view.",
"There is a void of hybrid summarizers; there haven't been many studies made in the area.Wong BIBREF13 conducted some preliminary research but there isn't much there on benchmark tests to our knowledge. We use a mixture of statistical and semantic models, assign weights among them by training on field-specific corpora. As there is a significant variation in choices among different fields. We support our proposal with expectations that shortcomings posed by one model can be filled with positives from others. We deploy experimental analysis to test our proposition."
],
[
"For Statistical analysis we use Similarity matrices, word co-occurrence/ n-gram model, andTF/IDF matrix. For semantic analysis we use custom Glove based model, WordNet based Model and Facebook InferSent BIBREF4 based Model. For Multi-Document Summarization,after training on corpus, we assign weights among the different techniques .We store the sense vector for documents, along with weights, for future reference. For Single document summarization, firstly we calculate the sense vector for that document and calculate the nearest vector from the stored Vectors, we use the weights of the nearest vector. We will describe the flow for semantic and statistical models separately."
],
[
"We discuss, in detail, the steps that are common for both statistical and semantic models.",
"We use NLTK sentence tokenizer sent_tokenize(), based on PUNKT tokenizer, pre-trained on a corpus. It can differentiate between Mr. , Mrs. and other abbreviations etc. and the normal sentence boundaries. BIBREF14 ",
"Given a document INLINEFORM0 we tokenize it into sentences as < INLINEFORM1 >.",
"Replacing all the special characters with spaces for easier word-tagging and Tokenizing.",
"We use NLTK word tokenizer, which is a Penn Treebank–style tokenizer, to tokenize words.We calculate the total unique words in the Document. If we can write any sentence as:-",
" INLINEFORM0 < INLINEFORM1 >, INLINEFORM2 ",
"Then the number of unique words can be represented as:- INLINEFORM0 INLINEFORM1 "
],
[
"paragraph4 3.25ex plus1ex minus.2ex -1em Frequency Matrix generation: Our tokenized words contain redundancy due to digits and transitional words such as “and”, “but” etc., which carry little information. Such words are termed stop words. BIBREF15 We removed stop words and words occurring in <0.2% and >15% of the documents (considering the word frequency over all documents). After the removal, the no. of unique words left in the particular document be p where p<m (where m is the total no. of unique words in our tokenized list originally). We now formulate a matrix INLINEFORM0 where n is the total number of sentences and p is the total number of unique words left in the document. Element INLINEFORM1 in the matrix INLINEFORM2 denotes frequency of INLINEFORM3 unique word in the INLINEFORM4 sentence.",
"paragraph4 3.25ex plus1ex minus.2ex -1em Similarity/Correlation Matrix generation: We now have have sentence word frequency vector INLINEFORM0 as < INLINEFORM1 > where INLINEFORM2 denotes frequency of INLINEFORM3 unique word in the INLINEFORM4 sentence. We now compute, INLINEFORM5 ",
"We use two similarity measures :",
"Jaccard Similarity",
"Cosine Similarity",
"We generate the similarity matrix INLINEFORM0 for each of the similarity Measure, where INLINEFORM1 indexes the similarity Measure. Element INLINEFORM2 of INLINEFORM3 denotes similarity between INLINEFORM4 and INLINEFORM5 sentence. Consequentially, we will end up with INLINEFORM6 and INLINEFORM7 , corresponding to each similarity measure.",
"For some sets A and B, <a,b,c,... >and <x,y,z,... >respectively, the Jaccard Similarity is defined as:- INLINEFORM0 ",
"The Cosine distance between `u' and `v', is defined as:- INLINEFORM0 ",
"where INLINEFORM0 is the dot product of INLINEFORM1 and INLINEFORM2 .",
"PageRank algorithm BIBREF16 , devised to rank web pages, forms the core of Google Search. It roughly works by ranking pages according to the number and quality of outsourcing links from the page. For NLP, a PageRank based technique ,TextRank has been a major breakthrough in the field. TextRank based summarization has seeded exemplary results on benchmarks. We use a naive TextRank analogous for our task.",
"Given INLINEFORM0 sentences < INLINEFORM1 >, we intend to generate PageRank or probability distribution matrix INLINEFORM2 , INLINEFORM3 ",
", where INLINEFORM0 in original paper denoted probability with which a randomly browsing user lands on a particular page. For the summarization task, they denote how strongly a sentence is connected with rest of document, or how well sentence captures multiple views/concepts. The steps are as:",
"Initialize INLINEFORM0 as, INLINEFORM1 ",
"Define INLINEFORM0 , probability that randomly chosen sentence is in summary and INLINEFORM1 as measure of change i.e. to stop computation when difference between to successive INLINEFORM2 computations recedes below INLINEFORM3 .",
"Using cosine-similarity matrix INLINEFORM0 , we generate the following equation as a measure for relation between sentences:- INLINEFORM1 ",
"Repeat last step until INLINEFORM0 .",
"Take top ranking sentences in INLINEFORM0 for summary.",
"Term Frequency(TF)/Bag of words is the count of how many times a word occurs in the given document. Inverse Document Frequency(IDF) is the number of times word occurs in complete corpus. Infrequent words through corpus will have higher weights, while weights for more frequent words will be depricated.",
"Underlying steps for TF/IDF summarization are:",
"Create a count vector INLINEFORM0 ",
"Build a tf-idf matrix INLINEFORM0 with element INLINEFORM1 as, INLINEFORM2 ",
"Here, INLINEFORM0 denotes term frequency of ith word in jth sentence, and INLINEFORM1 represents the IDF frequency.",
"Score each sentence, taking into consideration only nouns, we use NLTK POS-tagger for identifying nouns. INLINEFORM0 ",
"Applying positional weighing . INLINEFORM0 INLINEFORM1 ",
"Summarize using top ranking sentences."
],
[
"We proceed in the same way as we did for statistical models. All the pre-processing steps remain nearly same. We can make a little change by using lemmatizer instead of stemmer. Stemming involves removing the derivational affixes/end of words by heuristic analysis in hope to achieve base form. Lemmatization, on the other hand, involves firstly POS tagging BIBREF17 , and after morphological and vocabulary analysis, reducing the word to its base form. Stemmer output for `goes' is `goe', while lemmatized output with the verb passed as POS tag is `go'. Though lemmatization may have little more time overhead as compared to stemming, it necessarily provides better base word reductions. Since WordNet BIBREF18 and Glove both require dictionary look-ups, in order for them to work well, we need better base word mappings. Hence lemmatization is preferred.",
"Part of Speech(POS) Tagging: We tag the words using NLTK POS-Tagger.",
"Lemmatization: We use NTLK lemmatizer with POS tags passed as contexts.",
"We generated Similarity matrices in the case of Statistical Models. We will do the same here, but for sentence similarity measure we use the method devised by Dao. BIBREF19 The method is defined as:",
"Word Sense Disambiguation(WSD): We use the adapted version of Lesk algorithm BIBREF20 , as devised by Dao, to derive the sense for each word.",
"Sentence pair Similarity: For each pair of sentences, we create semantic similarity matrix INLINEFORM0 . Let INLINEFORM1 and INLINEFORM2 be two sentences of lengths INLINEFORM3 and INLINEFORM4 respectively. Then the resultant matrix INLINEFORM5 will be of size INLINEFORM6 , with element INLINEFORM7 denoting semantic similarity between sense/synset of word at position INLINEFORM8 in sentence INLINEFORM9 and sense/synset of word at position INLINEFORM10 in sentence INLINEFORM11 , which is calculated by path length similarity using is-a (hypernym/hyponym) hierarchies. It uses the idea that shorter the path length, higher the similarity. To calculate the path length, we proceed in following manner:-",
"For two words INLINEFORM0 and INLINEFORM1 , with synsets INLINEFORM2 and INLINEFORM3 respectively, INLINEFORM4 INLINEFORM5 ",
"We formulate the problem of capturing semantic similarity between sentences as the problem of computing a maximum total matching weight of a bipartite graph, where X and Y are two sets of disjoint nodes. We use the Hungarian method BIBREF21 to solve this problem. Finally we get bipartite matching matrix INLINEFORM0 with entry INLINEFORM1 denoting matching between INLINEFORM2 and INLINEFORM3 . To obtain the overall similarity, we use Dice coefficient, INLINEFORM4 ",
"with threshold set to INLINEFORM0 , and INLINEFORM1 , INLINEFORM2 denoting lengths of sentence INLINEFORM3 and INLINEFORM4 respectively.",
"We perform the previous step over all pairs to generate the similarity matrix INLINEFORM0 .",
"Glove Model provides us with a convenient method to represent words as vectors, using vectors representation for words, we generate vector representation for sentences. We work in the following order,",
"Represent each tokenized word INLINEFORM0 in its vector form < INLINEFORM1 >.",
"Represent each sentence into vector using following equation, INLINEFORM0 ",
"where INLINEFORM0 being frequency of INLINEFORM1 in INLINEFORM2 .",
"Calculate similarity between sentences using cosine distance between two sentence vectors.",
"Populate similarity matrix INLINEFORM0 using previous step.",
"Infersent is a state of the art supervised sentence encoding technique BIBREF4 . It outperformed another state-of-the-art sentence encoder SkipThought on several benchmarks, like the STS benchmark (http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). The model is trained on Stanford Natural Language Inference (SNLI) dataset BIBREF22 using seven architectures Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), forward and backward GRU with hidden states concatenated, Bi-directional LSTMs (BiLSTM) with min/max pooling, self-attentive network and (HCN's) Hierarchical convolutional networks. The network performances are task/corpus specific.",
"Steps to generate similarity matrix INLINEFORM0 are:",
"Encode each sentence to generate its vector representation < INLINEFORM0 >.",
"Calculate similarity between sentence pair using cosine distance.",
"Populate similarity matrix INLINEFORM0 using previous step."
],
[
"TF-IDF scores and TextRank allows us to directly rank sentences and choose INLINEFORM0 top sentences, where INLINEFORM1 is how many sentences user want in the summary. On the other hand, the similarity matrix based approach is used in case of all Semantic Models, and Similarity/correlation based Statistical models. To rank sentences from Similarity matrix, we can use following approaches:-",
"Ranking through Relevance score",
"For each sentence INLINEFORM0 in similarity matrix the Relevance Score is as:-",
" INLINEFORM0 ",
"We can now choose INLINEFORM0 top ranking sentences by RScores. Higher the RScore, higher the rank of sentence.",
"Hierarchical Clustering",
"Given a similarity matrix INLINEFORM0 , let INLINEFORM1 denote an individual element, then Hierarchical clustering is performed as follows:-",
"Initialize a empty list INLINEFORM0 .",
"Choose element with highest similarity value let it be INLINEFORM0 where, INLINEFORM1 ",
"Replace values in column and row INLINEFORM0 in following manner:-",
" INLINEFORM0 ",
" INLINEFORM0 ",
"Replace entries corresponding to column and row INLINEFORM0 by zeros.",
"Add INLINEFORM0 and INLINEFORM1 to INLINEFORM2 , if they are not already there.",
"Repeat steps 2-5 until single single non-zero element remains, for remaining non-zero element apply Step 5 and terminate.",
"We will have rank list INLINEFORM0 in the end.",
"We can now choose INLINEFORM0 top ranking sentences from INLINEFORM1 ."
],
[
"After generating summary from a particular model, our aim is to compute summaries through overlap of different models. Let us have INLINEFORM0 summaries from INLINEFORM1 different models. For INLINEFORM2 summarization model, let the INLINEFORM3 sentences contained be:-",
" INLINEFORM0 ",
"Now for our list of sentences INLINEFORM0 we define cWeight as weight obtained for each sentence using INLINEFORM1 models.",
" INLINEFORM0 ",
"Here, INLINEFORM0 is a function which returns 1 if sentence is in summary of INLINEFORM1 model, otherwise zero. INLINEFORM2 is weight assigned to each model without training, INLINEFORM3 "
],
[
"We here use machine learning based approach to further increase the quality of our summarization technique. The elemental concept is that we use training set of INLINEFORM0 domain specific documents, with gold standard/human-composed summaries, provided we fine tune our weights INLINEFORM1 for different models taking F1-score/F-measure. BIBREF23 as factor. INLINEFORM2 ",
"We proceed in the following manner:-",
"For each document in training set generate summary using each model independently, compute the INLINEFORM0 w.r.t. gold summary.",
"For each model, assign the weights using INLINEFORM0 ",
"Here, INLINEFORM0 denotes INLINEFORM1 for INLINEFORM2 model in INLINEFORM3 document.",
"We now obtain cWeight as we did previously, and formulate cumulative summary, capturing the consensus of different models. We hence used a supervised learning algorithm to capture the mean performances of different models over the training data to fine-tune our summary."
],
[
"As we discussed earlier, summarization models are field selective. Some models tend to perform remarkably better than others in certain fields. So, instead of assigning uniform weights to all models we can go by the following approach.",
"For each set of documents we train on, we generate document vector using bidirectional GRU ( BIBREF24 as described by Zichao Yang BIBREF25 for each document. We then generate complete corpus vector as follows:- INLINEFORM0 ",
"where, INLINEFORM0 is total training set size, INLINEFORM1 is number of features in document vector.",
"We save INLINEFORM0 and INLINEFORM1 corresponding to each corpus.",
"For each single document summarization task, we generate given texts document vector, perform nearest vector search over all stored INLINEFORM0 , apply weights corresponding to that corpus."
],
[
"We evaluate our approaches on 2004 DUC(Document Understanding Conferences) dataset(https://duc.nist.gov/). The Dataset has 5 Tasks in total. We work on Task 2. It (Task 2) contains 50 news documents cluster for multi-document summarization. Only 665-character summaries are provided for each cluster. For evaluation, we use ROGUE, an automatic summary evaluation metric. It was firstly used for DUC 2004 data-set. Now, it has become a benchmark for evaluation of automated summaries. ROUGE is a correlation metric for fixed-length summaries populated using n-gram co-occurrence. For comparison between model summary and to-be evaluated summary, separate scores for 1, 2, 3, and 4-gram matching are kept. We use ROUGE-2, a bi-gram based matching technique for our task.",
"In the Table 1, we try different model pairs with weights trained on corpus for Task 2. We have displayed mean ROUGE-2 scores for base Models. We have calculated final scores taking into consideration all normalizations, stemming, lemmatizing and clustering techniques, and the ones providing best results were used. We generally expected WordNet, Glove based semantic models to perform better given they better capture crux of the sentence and compute similarity using the same, but instead, they performed average. This is attributed to the fact they assigned high similarity scores to not so semantically related sentences. We also observe that combinations with TF/IDF and Similarity Matrices(Jaccard/Cosine) offer nearly same results. The InferSent based Summarizer performed exceptionally well. We initially used pre-trained features to generate sentence vectors through InferSent."
],
[
"We can see that using a mixture of Semantic and Statistical models offers an improvement over stand-alone models. Given better training data, results can be further improved. Using domain-specific labeled data can provide a further increase in performances of Glove and WordNet Models.",
"Some easy additions that can be worked on are:",
"Unnecessary parts of the sentence can be trimmed to improve summary further.",
"Using better algorithm to capture sentence vector through Glove Model can improve results.",
"Query specific summarizer can be implemented with little additions.",
"For generating summary through model overlaps, we can also try Graph-based methods or different Clustering techniques."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Approach",
"Prepossessing",
"Using Stastical Models",
"Using Semantic Models",
"Generating Summaries",
"Single Document Summarization",
"Multi-Document/Domain-Specific Summarization",
"Domain-Specific Single Document Summarization",
"Experiments",
"Conclusion/Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"2997b988bcd2db60a52c31e9fc1f123d561a4f8f",
"4718b310b7e5339478bc877c3aee46f5e3db0549",
"77c5725307254dc07d489ce0bae9a3e2bd02b8b5"
],
"answer": [
{
"evidence": [
"After generating summary from a particular model, our aim is to compute summaries through overlap of different models. Let us have INLINEFORM0 summaries from INLINEFORM1 different models. For INLINEFORM2 summarization model, let the INLINEFORM3 sentences contained be:-",
"Given a document INLINEFORM0 we tokenize it into sentences as < INLINEFORM1 >.",
"Now for our list of sentences INLINEFORM0 we define cWeight as weight obtained for each sentence using INLINEFORM1 models.",
"Here, INLINEFORM0 is a function which returns 1 if sentence is in summary of INLINEFORM1 model, otherwise zero. INLINEFORM2 is weight assigned to each model without training, INLINEFORM3"
],
"extractive_spans": [],
"free_form_answer": "They define cWeight as weight obtained for each sentence using all the models where the sentences is in the summary of predicted by each model.",
"highlighted_evidence": [
"For INLINEFORM2 summarization model, let the INLINEFORM3 sentences contained be:-\n\nINLINEFORM0\n\nNow for our list of sentences INLINEFORM0 we define cWeight as weight obtained for each sentence using INLINEFORM1 models.\n\nINLINEFORM0\n\nHere, INLINEFORM0 is a function which returns 1 if sentence is in summary of INLINEFORM1 model, otherwise zero. INLINEFORM2 is weight assigned to each model without training, INLINEFORM3"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There is a void of hybrid summarizers; there haven't been many studies made in the area.Wong BIBREF13 conducted some preliminary research but there isn't much there on benchmark tests to our knowledge. We use a mixture of statistical and semantic models, assign weights among them by training on field-specific corpora. As there is a significant variation in choices among different fields. We support our proposal with expectations that shortcomings posed by one model can be filled with positives from others. We deploy experimental analysis to test our proposition."
],
"extractive_spans": [
"by training on field-specific corpora"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a mixture of statistical and semantic models, assign weights among them by training on field-specific corpora."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For Statistical analysis we use Similarity matrices, word co-occurrence/ n-gram model, andTF/IDF matrix. For semantic analysis we use custom Glove based model, WordNet based Model and Facebook InferSent BIBREF4 based Model. For Multi-Document Summarization,after training on corpus, we assign weights among the different techniques .We store the sense vector for documents, along with weights, for future reference. For Single document summarization, firstly we calculate the sense vector for that document and calculate the nearest vector from the stored Vectors, we use the weights of the nearest vector. We will describe the flow for semantic and statistical models separately.",
"Now for our list of sentences INLINEFORM0 we define cWeight as weight obtained for each sentence using INLINEFORM1 models.",
"Here, INLINEFORM0 denotes INLINEFORM1 for INLINEFORM2 model in INLINEFORM3 document.",
"We now obtain cWeight as we did previously, and formulate cumulative summary, capturing the consensus of different models. We hence used a supervised learning algorithm to capture the mean performances of different models over the training data to fine-tune our summary."
],
"extractive_spans": [
"after training on corpus, we assign weights among the different techniques"
],
"free_form_answer": "",
"highlighted_evidence": [
"For Multi-Document Summarization,after training on corpus, we assign weights among the different techniques .We store the sense vector for documents, along with weights, for future reference. For Single document summarization, firstly we calculate the sense vector for that document and calculate the nearest vector from the stored Vectors, we use the weights of the nearest vector.",
"Now for our list of sentences INLINEFORM0 we define cWeight as weight obtained for each sentence using INLINEFORM1 models.",
"Here, INLINEFORM0 denotes INLINEFORM1 for INLINEFORM2 model in INLINEFORM3 document.\n\nWe now obtain cWeight as we did previously, and formulate cumulative summary, capturing the consensus of different models. We hence used a supervised learning algorithm to capture the mean performances of different models over the training data to fine-tune our summary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"d9135203a92ded14d260a7d551b7a447c8b7c910",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"36d686fffab38c03276d053835774d84b0cbcf3f",
"6169728355517ac2cbdb57af6f673b59ecf813be"
],
"answer": [
{
"evidence": [
"Infersent is a state of the art supervised sentence encoding technique BIBREF4 . It outperformed another state-of-the-art sentence encoder SkipThought on several benchmarks, like the STS benchmark (http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). The model is trained on Stanford Natural Language Inference (SNLI) dataset BIBREF22 using seven architectures Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), forward and backward GRU with hidden states concatenated, Bi-directional LSTMs (BiLSTM) with min/max pooling, self-attentive network and (HCN's) Hierarchical convolutional networks. The network performances are task/corpus specific."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Infersent is a state of the art supervised sentence encoding technique BIBREF4 ."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4eecc605f4fa5eaf7b1548c6e1a01897f3dc65d0",
"7cf2955d35c98da45ca6a8ec89940d6b759ff772",
"cdc78dc98db8019d9fcb76c97ca7dbe6c4d80218"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"In the Table 1, we try different model pairs with weights trained on corpus for Task 2. We have displayed mean ROUGE-2 scores for base Models. We have calculated final scores taking into consideration all normalizations, stemming, lemmatizing and clustering techniques, and the ones providing best results were used. We generally expected WordNet, Glove based semantic models to perform better given they better capture crux of the sentence and compute similarity using the same, but instead, they performed average. This is attributed to the fact they assigned high similarity scores to not so semantically related sentences. We also observe that combinations with TF/IDF and Similarity Matrices(Jaccard/Cosine) offer nearly same results. The InferSent based Summarizer performed exceptionally well. We initially used pre-trained features to generate sentence vectors through InferSent."
],
"extractive_spans": [],
"free_form_answer": "Combination of Jaccard/Cosine Similarity Matrix, TextRank and InferSent Based Model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"In the Table 1, we try different model pairs with weights trained on corpus for Task 2. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models."
],
"extractive_spans": [],
"free_form_answer": "Jaccard/Cosine Similarity Matrix+TextRank\n+InferSent Based Model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models."
],
"extractive_spans": [],
"free_form_answer": "Best result was obtained by using combination of: Jaccard/Cosine Similarity Matrix, TextRank and InferSent Based Model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average ROUGE-2 Scores for Different Combination of Models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"d9135203a92ded14d260a7d551b7a447c8b7c910",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How exactly do they weigh between different statistical models?",
"Do they compare against state-of-the-art summarization approaches?",
"What showed to be the best performing combination of semantic and statistical model on the summarization task in terms of ROUGE score?"
],
"question_id": [
"6a20a3220c4edad758b912e2d3e5b99b0b295d96",
"c2745e44ebe7dd57126b784ac065f0b7fc2630f1",
"d5dcc89a08924bed9772bc431090cbb52fb7836f"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Average ROUGE-2 Scores for Different Combination of Models.",
"Table 2: Average ROUGE-2 scores for base methods."
],
"file": [
"11-Table1-1.png",
"13-Table2-1.png"
]
} | [
"How exactly do they weigh between different statistical models?",
"What showed to be the best performing combination of semantic and statistical model on the summarization task in terms of ROUGE score?"
] | [
[
"1805.04579-Proposed Approach-0",
"1805.04579-Multi-Document/Domain-Specific Summarization-5",
"1805.04579-Prepossessing-2",
"1805.04579-Related Work-2",
"1805.04579-Multi-Document/Domain-Specific Summarization-4",
"1805.04579-Single Document Summarization-0",
"1805.04579-Single Document Summarization-2"
],
[
"1805.04579-Experiments-1",
"1805.04579-11-Table1-1.png"
]
] | [
"They define cWeight as weight obtained for each sentence using all the models where the sentences is in the summary of predicted by each model.",
"Best result was obtained by using combination of: Jaccard/Cosine Similarity Matrix, TextRank and InferSent Based Model"
] | 148 |
1908.10149 | Incremental Improvement of a Question Answering System by Re-ranking Answer Candidates using Machine Learning | We implement a method for re-ranking top-10 results of a state-of-the-art question answering (QA) system. The goal of our re-ranking approach is to improve the answer selection given the user question and the top-10 candidates. We focus on improving deployed QA systems that do not allow re-training or for which re-training comes at a high cost. Our re-ranking approach learns a similarity function using n-gram based features computed from the query, the answer and the initial system confidence. Our contributions are: (1) we generate a QA training corpus starting from 877 answers from the customer care domain of T-Mobile Austria, (2) we implement a state-of-the-art QA pipeline using neural sentence embeddings that encode queries in the same space as the answer index, and (3) we evaluate the QA pipeline and our re-ranking approach using a separately provided test set. The test set can be considered to be available after deployment of the system, e.g., based on feedback of users. Our results show that the system performance, in terms of top-n accuracy and the mean reciprocal rank, benefits from re-ranking using gradient boosted regression trees. On average, the mean reciprocal rank improves by 9.15%. | {
"paragraphs": [
[
"In this work, we examine the problem of incrementally improving deployed QA systems in an industrial setting. We consider the domain of customer care of a wireless network provider and focus on answering frequent questions (focussing on the long tail of the question distribution BIBREF0 ). In this setting, the most frequent topics are covered by a separate industry-standard chatbot based on hand-crafted rules by dialogue engineers. Our proposed process is based on the augmented cross-industry standard process for data mining BIBREF1 (augmented CRISP data mining cycle). In particular, we are interested in methods for improving a model after its deployment through re-ranking of the initial ranking results. In advance, we follow the steps of the CRISP cycle towards deployment for generating a state-of-the-art baseline QA model. First, we examine existing data (data understanding) and prepare a corpus for training (data preparation). Second, we implement and train a QA pipeline using state-of-the-art open source components (modelling). We perform an evaluation using different amounts of data and different pipeline configurations (evaluation), also to understand the nature of the data and the application (business understanding). Third, we investigate the effectiveness and efficiency of re-ranking in improving our QA pipeline after the deployment phase of CRISP. Adaptivity after deployment is modelled as (automatic) operationalisation step with external reflection based on, e.g., user feedback. This could be replaced by introspective meta-models that allow the system to enhance itself by metacognition BIBREF1 . The QA system and the re-ranking approach are evaluated using a separate test set that maps actual user queries from a chat-log to answers of the QA corpus. Sample queries from the evaluation set with one correct and one incorrect sample are shown in Table TABREF1 .",
"With this work, we want to answer the question whether a deployed QA system that is difficult to adapt and that provides a top-10 ranking of answer candidates, can be improved by an additional re-ranking step that corresponds to the operationalisation step of the augmented CRISP cycle. It is also important to know the potential gain and the limitations of such a method that works on top of an existing system. We hypothesise that our proposed re-ranking approach can effectively improve ranking-based QA systems."
],
[
"The broad field of QA includes research ranging from retrieval-based BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 to generative BIBREF6 , BIBREF7 , as well as, from closed-domain BIBREF8 , BIBREF9 to open-domain QA BIBREF7 , BIBREF10 , BIBREF11 , BIBREF12 . We focus on the notion of improving an already deployed system.",
"For QA dialogues based on structured knowledge representations, this can be achieved by maintaining and adapting the knowledgebase BIBREF13 , BIBREF14 , BIBREF15 . In addition, BIBREF1 proposes metacognition models for building self-reflective and adaptive AI systems, e.g., dialogue systems, that improve by introspection. Buck et al. present a method for reformulating user questions: their method automatically adapts user queries with the goal to improve the answer selection of an existing QA model BIBREF16 .",
"Other works suggest humans-in-the-loop for improving QA systems. Savenkov and Agichtein use crowdsourcing for re-ranking retrieved answer candidates in a real-time QA framework BIBREF17 . In Guardian, crowdworkers prepare a dialogue system based on a certain web API and, after deployment, manage actual conversations with users BIBREF18 . EVORUS learns to select answers from multiple chatbots via crowdsourcing BIBREF19 . The result is a chatbot ensemble excels the performance of each individual chatbot. Williams et al. present a dialogue architecture that continuously learns from user interaction and feedback BIBREF20 .",
"We propose a re-ranking algorithm similar to BIBREF17 : we train a similarity model using n-gram based features of QA pairs for improving the answer selection of a retrieval-based QA system."
],
[
"We implement our question answering system using state-of-the-art open source components. Our pipeline is based on the Rasa natural language understanding (NLU) framework BIBREF21 which offers two standard pipelines for text classification: spacy_sklearn and tensorflow_embedding. The main difference is that spacy_sklearn uses Spacy for feature extraction with pre-trained word embedding models and Scikit-learn BIBREF22 for text classification. In contrast, the tensorflow_embedding pipeline trains custom word embeddings for text similarity estimation using TensorFlow BIBREF23 as machine learning backend. Figure FIGREF5 shows the general structure of both pipelines. We train QA models using both pipelines with the pre-defined set of hyper-parameters. For tensorflow_embedding, we additionally monitor changes in system performance using different epoch configurations. Further, we compare the performances of pipelines with or without a spellchecker and investigate whether model training benefits from additional user examples by training models with the three different versions of our training corpus including no additional samples (kw), samples from 1 user (kw+1u) or samples from 2 users (kw+2u) (see section Corpora). All training conditions are summarized in Table TABREF4 . Next, we describe the implementation details of our QA system as shown in Figure FIGREF5 : the spellchecker module, the subsequent pre-processing and feature encoding, and the text classification. We include descriptions for both pipelines.",
"Spellchecker We address the problem of frequent spelling mistakes in user queries by implementing an automated spell-checking and correction module. It is based on a Python port of the SymSpell algorithm initialized with word frequencies for German. We apply the spellchecker as first component in our pipeline.",
"Pre-Processing and Feature Encoding. The spacy_sklearn pipeline uses Spacy for pre-processing and feature encoding. Pre-processing includes the generation of a Spacy document and tokenization using their German language model de_core_news_sm (v2.0.0). The feature encoding is obtained via the vector function of the Spacy document that returns the mean word embedding of all tokens in a query. For German, Spacy provides only a simple dense encoding of queries (no proper word embedding model). The pre-processing step of the tensorflow_embedding pipeline uses a simple whitespace tokenizer for token extraction. The tokens are used for the feature encoding step that is based on Scikit-learn's CountVectorizer. It returns a bag of words histogram with words being the tokens (1-grams).",
"Text Classification. The spacy_sklearn pipeline relies on Scikit-learn for text classification using a support vector classifier (SVC). The model confidences are used for ranking all answer candidates; the top-10 results are returned.",
"Text classification for tensorflow_embedding is done using TensorFlow with an implementation of the StarSpace algorithm BIBREF24 . This component learns (and later applies) one embedding model for user queries and one for the answer id. It minimizes the distance between embeddings of QA training samples. The distances between a query and all answer ids are used for ranking."
],
[
"In this work, we include two corpora: one for training the baseline system and another for evaluating the performance of the QA pipeline and our re-ranking approach. In the following, we describe the creation of the training corpus and the structure of the test corpus. Both corpora have been anonymised.",
"Training Corpus. The customer care department provides 877 answers to common user questions. Each answer is tagged with a variable amount of keywords or key-phrases ( INLINEFORM0 , INLINEFORM1 ), 3338 in total. We asked students to augment the training corpus with, in total, two additional natural example queries. This process can be scaled by crowdsourcing for an application in productive systems that might include more answers or that requires more sample question per answer or both. The full dataset contains, on average, INLINEFORM2 sample queries per answer totalling in 5092 queries overall. For model training, all questions (including keywords) are used as input with the corresponding answer as output. We generated three versions of the training corpus: keywords only (kw, INLINEFORM3 ), keywords with samples from 1 user (kw+1u, INLINEFORM4 ) and keywords with samples from 2 users (kw+2u, INLINEFORM5 ).",
"Evaluation Corpus.",
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
[
"We evaluate the baseline model using all training configurations in Table TABREF4 to find a well-performing baseline for our re-ranking experiment. We use the evaluation corpus as reference data and report the top-1 to top-10 accuracies and the mean reciprocal rank for the top-10 results (MRR@10) as performance metrics. For computing the top-n accuracy, we count all queries for which the QA pipeline contains a correct answer on rank 1 to n and divide the result by the number of test queries. The MRR is computed as the mean of reciprocal ranks over all test queries. The reciprocal rank for one query is defined as INLINEFORM0 : The RR is 1 if the correct answer is ranked first, INLINEFORM1 if it is at the second rank and so on. We set RR to zero, if the answer is not contained in the top-10 results.",
"Results. Figure FIGREF10 shows the accuracy and MRR values for all conditions. We only restrict tensorflow_embedding to the default number of epochs which is 300. At the corpus level, we can observe that the accuracy and the MRR increase when training with additional user annotations for all pipeline configurations. For example, the spacy_sklearn pipeline without spell-checking achieves a top-10 accuracy of INLINEFORM0 and a MRR of INLINEFORM1 when using the kw training corpus with keywords only. Both measures increase to INLINEFORM2 and INLINEFORM3 , respectively, when adding two natural queries for training. In some cases, adding only 1 user query results in slightly better scores. However, the overall trend is that more user annotations yield better results.",
"In addition, we observe performance improvements for pipelines that use our spell-checking component when compared to the default pipelines that do not make use of it: The spacy_sklearn kw+2u condition performs INLINEFORM0 better, the tensorflow_embedding kw+2u condition performs INLINEFORM1 better, in terms of top-10 accuracy. We can observe similar improvements for the majority of included metrics. Similar to the differentiation by corpus, we can find cases where spell-checking reduces the performance for a particular measure, against the overall trend.",
"Overall, the tensorflow_embedding pipelines perform considerably better than the spacy_sklearn pipeline irrespective of the remaining parameter configuration: the best performing methods are achieved by the tensorflow_embedding pipeline with spell-checking. Figure FIGREF11 sheds more light on this particular setting. It provides performance measures for all corpora and for different number of epochs used for model training. Pipelines that use 300 epochs for training range among the best for all corpora. When adding more natural user annotations, using 100 epochs achieves similar or better scores, in particular concerning the top-10 accuracy and the MRR. Re-ranking the top-10 results can only improve the performance in QA, if the correct answer is among the top-10 results. Therefore, we use the tensorflow_embedding pipeline with spellchecking, 100 epochs and the full training corpus as baseline for evaluating the re-ranking approach."
],
[
"Our re-ranking approach compares a user query with the top-10 results of the baseline QA system. In contrast to the initial ranking, our re-ranking takes the content of the answer candidates into account instead of encoding the user query only. Our algorithm compares the text of the recent user query to each result. We include the answer text and the confidence value of the baseline system for computing a similarity estimate. Finally, we re-rank the results by their similarity to the query (see Algorithm SECREF5 ).",
"a user query INLINEFORM0 ; the corresponding list of top-10 results INLINEFORM1 including an answer INLINEFORM2 and the baseline confidence INLINEFORM3 ; an updated ranking INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 sort R' by confidences c', descending INLINEFORM9 INLINEFORM10 Re-Ranking Algorithm",
"We consider a data-driven similarity function that compares linguistic features of the user query and answer candidates and also takes into account the confidence of the baseline QA system. This similarity estimate shall enhance the baseline by using an extended data and feature space, but without neglecting the learned patterns of the baseline system. The possible improvement in top-1 accuracy is limited by the top-10 accuracy of the baseline system ( INLINEFORM0 ), because our re-ranking cannot choose from the remaining answers. Figure FIGREF12 shows how the re-ranking model is connected to the deployed QA system: it requires access to its in- and outputs for the additional ranking step.",
"We consider the gradient boosted regression tree for learning a similarity function for re-ranking similar to BIBREF17 . The features for model training are extracted from pre-processed query-answer pairs. Pre-processing includes tokenization and stemming of query and answer and the extraction of uni-, bi- and tri-grams from both token sequences. We include three distance metrics as feature: the Jaccard distance, the cosine similarity, and the plain number of n-gram matches between n-grams of a query and an answer.",
"a train- and test split of the evaluation corpus INLINEFORM0 , each including QA-pairs as tuples INLINEFORM1 ; the pre-trained baseline QA model for initial ranking INLINEFORM2 and the untrained re-ranking model INLINEFORM3 . evaluation metrics. training of the re-ranking model INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 *R contains top-10 results INLINEFORM8 continue with next QA pair add positive sample INLINEFORM9 *confidence for INLINEFORM10 INLINEFORM11 INLINEFORM12 add negative sample INLINEFORM13 random INLINEFORM14 INLINEFORM15 INLINEFORM16 INLINEFORM17 INLINEFORM18 evaluation of the re-ranking model INLINEFORM19 INLINEFORM20 INLINEFORM21 *top-10 baseline ranking INLINEFORM22 *apply re-ranking INLINEFORM23 INLINEFORM24 Evaluation Procedure (per Data Split)"
],
[
"We compare our data-driven QA system with a version that re-ranks resulting top-10 candidates using the additional ranking model. We want to answer the question whether our re-ranking approach can improve the performance of the baseline QA pipeline after deployment. For that, we use the evaluation corpus ( INLINEFORM0 ) for training and evaluating our re-ranking method using 10-fold cross-validation, i.e., INLINEFORM1 of the data is used for training and INLINEFORM2 for testing with 10 different train-test splits.",
"The training and testing procedure per data split of the cross-validation is shown in Algorithm SECREF5 . For each sample query INLINEFORM0 in the train set INLINEFORM1 , we include the correct answer INLINEFORM2 and one randomly selected negative answer candidate INLINEFORM3 for a balanced model training. We skip a sample, if the correct answer is not contained in the top-10 results: we include INLINEFORM4 of the data (see top-10 accuracy of the baseline QA model in Figure FIGREF11 ). The baseline QA model INLINEFORM5 and the trained re-ranking method INLINEFORM6 are applied to all sample queries in the test set INLINEFORM7 . Considered performance metrics are computed using the re-ranked top-10 INLINEFORM8 . We repeat the cross-validation 5 times to reduce effects introduced by the random selection of negative samples. We report the average metrics from 10 cross-validation folds and the 5 repetitions of the evaluation procedure.",
"Results. The averaged cross-validation results of our evaluation, in terms of top-n accuracies and the MRR@10, are shown in Table TABREF15 : the top-1 to top-9 accuracies improve consistently. The relative improvement decreases from INLINEFORM0 for the top-1 accuracy to INLINEFORM1 for the top-9 accuracy. The top-10 accuracy stays constant, because the re-ranking cannot choose from outside the top-10 candidates. The MRR improves from INLINEFORM2 to INLINEFORM3 ( INLINEFORM4 )."
],
[
"Our results indicate that the accuracy of the described QA system benefits from our re-ranking approach. Hence, it can be applied to improve the performance of already deployed QA systems that provide a top-10 ranking with confidences as output. However, the performance gain is small, which might have several reasons. For example, we did not integrate spell-checking in our re-ranking method which proved to be effective in our baseline evaluation. Further, the re-ranking model is based on very simple features. It would be interesting to investigate the impact of more advanced features, or models, on the ranking performance (e.g., word embeddings BIBREF26 and deep neural networks for learning similarity functions BIBREF3 , BIBREF4 ). Nevertheless, as can be seen in examples 1, 2 and 4 in Table TABREF1 , high-ranked but incorrect answers are often meaningful with respect to the query: the setting in our evaluation is overcritical, because we count incorrect, but meaningful answers as negative result. A major limitation is that the re-ranking algorithm cannot choose answer candidates beyond the top-10 results. It would be interesting to classify whether an answer is present in the top-10 or not. If not, the algorithm could search outside the top-10 results. Such a meta-model can also be used to estimate weaknesses of the QA model: it can determine topics that regularly fail, for instance, to guide data labelling for a targeted improvement of the model, also known as active learning BIBREF27 , and in combination with techniques from semi-supervised learning BIBREF5 , BIBREF28 .",
"Data labelling and incremental model improvement can be scaled by crowdsourcing. Examples include the parallel supervision of re-ranking results and targeted model improvement as human oracles in an active learning setting. Results from crowd-supervised re-ranking allows us to train improved re-ranking models BIBREF17 , BIBREF19 , but also a meta-model that detects queries which are prone to error. The logs of a deployed chatbot, that contain actual user queries, can be efficiently analysed using such a meta-model to guide the sample selection for costly human data augmentation and creation. An example of a crowdsourcing approach that could be applied to our QA system and data, with search logs can be found in BIBREF0 ."
],
[
"We implemented a simple re-ranking method and showed that it can effectively improve the performance of QA systems after deployment. Our approach includes the top-10 answer candidates and confidences of the initial ranking for selecting better answers. Promising directions for future work include the investigation of more advanced ranking approaches for increasing the performance gain and continuous improvements through crowdsourcing and active learning."
]
],
"section_name": [
"Introduction",
"Related Work",
"Question Answering System",
"Corpora",
"Baseline Performance Evaluation",
"Re-Ranking Approach",
"Re-Ranking Performance Evaluation",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"3b56f7ff79d47fef670c80334173df7d115ba9f0",
"efaba19ac59f0f51e3b47c10565709313fe86a1f"
],
"answer": [
{
"evidence": [
"We implement our question answering system using state-of-the-art open source components. Our pipeline is based on the Rasa natural language understanding (NLU) framework BIBREF21 which offers two standard pipelines for text classification: spacy_sklearn and tensorflow_embedding. The main difference is that spacy_sklearn uses Spacy for feature extraction with pre-trained word embedding models and Scikit-learn BIBREF22 for text classification. In contrast, the tensorflow_embedding pipeline trains custom word embeddings for text similarity estimation using TensorFlow BIBREF23 as machine learning backend. Figure FIGREF5 shows the general structure of both pipelines. We train QA models using both pipelines with the pre-defined set of hyper-parameters. For tensorflow_embedding, we additionally monitor changes in system performance using different epoch configurations. Further, we compare the performances of pipelines with or without a spellchecker and investigate whether model training benefits from additional user examples by training models with the three different versions of our training corpus including no additional samples (kw), samples from 1 user (kw+1u) or samples from 2 users (kw+2u) (see section Corpora). All training conditions are summarized in Table TABREF4 . Next, we describe the implementation details of our QA system as shown in Figure FIGREF5 : the spellchecker module, the subsequent pre-processing and feature encoding, and the text classification. We include descriptions for both pipelines."
],
"extractive_spans": [
"We implement our question answering system using state-of-the-art open source components. "
],
"free_form_answer": "",
"highlighted_evidence": [
"We implement our question answering system using state-of-the-art open source components. Our pipeline is based on the Rasa natural language understanding (NLU) framework BIBREF21 which offers two standard pipelines for text classification: spacy_sklearn and tensorflow_embedding. The main difference is that spacy_sklearn uses Spacy for feature extraction with pre-trained word embedding models and Scikit-learn BIBREF22 for text classification. In contrast, the tensorflow_embedding pipeline trains custom word embeddings for text similarity estimation using TensorFlow BIBREF23 as machine learning backend. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We implement our question answering system using state-of-the-art open source components. Our pipeline is based on the Rasa natural language understanding (NLU) framework BIBREF21 which offers two standard pipelines for text classification: spacy_sklearn and tensorflow_embedding. The main difference is that spacy_sklearn uses Spacy for feature extraction with pre-trained word embedding models and Scikit-learn BIBREF22 for text classification. In contrast, the tensorflow_embedding pipeline trains custom word embeddings for text similarity estimation using TensorFlow BIBREF23 as machine learning backend. Figure FIGREF5 shows the general structure of both pipelines. We train QA models using both pipelines with the pre-defined set of hyper-parameters. For tensorflow_embedding, we additionally monitor changes in system performance using different epoch configurations. Further, we compare the performances of pipelines with or without a spellchecker and investigate whether model training benefits from additional user examples by training models with the three different versions of our training corpus including no additional samples (kw), samples from 1 user (kw+1u) or samples from 2 users (kw+2u) (see section Corpora). All training conditions are summarized in Table TABREF4 . Next, we describe the implementation details of our QA system as shown in Figure FIGREF5 : the spellchecker module, the subsequent pre-processing and feature encoding, and the text classification. We include descriptions for both pipelines."
],
"extractive_spans": [],
"free_form_answer": "Rasa natural language understanding framework",
"highlighted_evidence": [
"We implement our question answering system using state-of-the-art open source components. Our pipeline is based on the Rasa natural language understanding (NLU) framework BIBREF21 which offers two standard pipelines for text classification: spacy_sklearn and tensorflow_embedding."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"299fccad1c2e6eb46f2e1533db466061bb1a190f",
"5a9ee2c085f99250e42c53b73f1051899c781f2c",
"c5f9a1a2024d9703b689118867079c8a5b465f11"
],
"answer": [
{
"evidence": [
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" ",
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Our re-ranking approach compares a user query with the top-10 results of the baseline QA system. In contrast to the initial ranking, our re-ranking takes the content of the answer candidates into account instead of encoding the user query only. Our algorithm compares the text of the recent user query to each result. We include the answer text and the confidence value of the baseline system for computing a similarity estimate. Finally, we re-rank the results by their similarity to the query (see Algorithm SECREF5 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our re-ranking approach compares a user query with the top-10 results of the baseline QA system. In contrast to the initial ranking, our re-ranking takes the content of the answer candidates into account instead of encoding the user query only. Our algorithm compares the text of the recent user query to each result. We include the answer text and the confidence value of the baseline system for computing a similarity estimate. Finally, we re-rank the results by their similarity to the query (see Algorithm SECREF5 )."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"5124d5e44003be034475233d6df7f4e10b24b48f",
"833b52560bb0fb5d2fd3c0d494db49c0bde85388",
"bbce811a84fd088eaf06db7ba6c6132474da72c6"
],
"answer": [
{
"evidence": [
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
"extractive_spans": [],
"free_form_answer": "3084 real user requests assigned to suitable answers from the training corpus.",
"highlighted_evidence": [
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluation Corpus.",
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
"extractive_spans": [
"3084 real user requests from a chat-log of T-Mobile Austria"
],
"free_form_answer": "",
"highlighted_evidence": [
"Evaluation Corpus.\n\nThe performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three). The assignment was performed manually by domain experts of the wireless network provider. We use this corpus for estimating the baseline performance of the QA pipeline using different pipeline configurations and different versions of the training corpus. In addition, we use the corpus for evaluating our re-ranking approach per cross-validation: we regard the expert annotations as offline human feedback. The queries in this corpus contain a lot of spelling mistakes. We address this in our QA pipeline generation by implementing a custom spell-checking component."
],
"extractive_spans": [
"3084"
],
"free_form_answer": "",
"highlighted_evidence": [
"The performance of the implemented QA system and of our re-ranking approach is assessed using a separate test corpus. It includes 3084 real user requests from a chat-log of T-Mobile Austria, which are assigned to suitable answers from the training corpus (at most three)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What QA system was used in this work?",
"Is the re-ranking approach described in this paper a transductive learning technique?",
"How big is the test set used for evaluating the proposed re-ranking approach?"
],
"question_id": [
"d418bf6595b1b51a114f28ac8a6909c278838aeb",
"6d6b0628d8a942c57d7af1447a563021be79bc64",
"b21245212244ad7adf7d321420f2239a0f0fe56b"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1 Sample queries with a correct and an incorrect answer option according to our test set. We report the answers’ rank of the baseline model that we used for our re-ranking experiments.",
"Table 2 Considered configurations for QA pipeline training.",
"Fig. 1 The basic configuration of the QA pipeline, which is a part of our complete QA system architecture with the re-ranking algorithm.",
"Fig. 2 Performance metrics in terms of top-1 to top-10 accuracy and MRR@10 of both QA pipelines for different pipeline configurations and training corpora.",
"Fig. 4 Complete QA system architecture including the re-ranking model. The re-ranking model is trained using manually annotated data for generating a supervised/ideal ranking result for the top-10 answers from the QA system. Features are extracted from the user question and a particular answer candidate. At inference time, the re-ranking model is used to improve the initial top-10 ranking.",
"Table 3 Performance metrics of the baseline QA pipeline and our re-ranking method (n = 3084)."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure4-1.png",
"11-Table3-1.png"
]
} | [
"What QA system was used in this work?",
"How big is the test set used for evaluating the proposed re-ranking approach?"
] | [
[
"1908.10149-Question Answering System-0"
],
[
"1908.10149-Corpora-2",
"1908.10149-Corpora-3"
]
] | [
"Rasa natural language understanding framework",
"3084 real user requests assigned to suitable answers from the training corpus."
] | 149 |
1803.07828 | Expeditious Generation of Knowledge Graph Embeddings | Knowledge Graph Embedding methods aim at representing entities and relations in a knowledge base as points or vectors in a continuous vector space. Several approaches using embeddings have shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification. However, only a few methods can compute low-dimensional embeddings of very large knowledge bases without needing state-of-the-art computational resources. In this paper, we propose KG2Vec, a simple and fast approach to Knowledge Graph Embedding based on the skip-gram model. Instead of using a predefined scoring function, we learn it relying on Long Short-Term Memories. We show that our embeddings achieve results comparable with the most scalable approaches on knowledge graph completion as well as on a new metric. Yet, KG2Vec can embed large graphs in less time by processing more than 250 million triples in less than 7 hours on common hardware. | {
"paragraphs": [
[
"Recently, the number of public datasets in the Linked Data cloud has significantly grown to almost 10 thousands. At the time of writing, at least four of these datasets contain more than one billion triples each. This huge amount of available data has become a fertile ground for Machine Learning and Data Mining algorithms. Today, applications of machine-learning techniques comprise a broad variety of research areas related to Linked Data, such as Link Discovery, Named Entity Recognition, and Structured Question Answering. The field of Knowledge Graph Embedding (KGE) has emerged in the Machine Learning community during the last five years. The underlying concept of KGE is that in a knowledge base, each entity and relation can be regarded as a vector in a continuous space. The generated vector representations can be used by algorithms employing machine learning, deep learning, or statistical relational learning to accomplish a given task. Several KGE approaches have already shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Moreover, Distributional Semantics techniques (e.g., Word2Vec or Doc2Vec) are relatively new in the Semantic Web community. The RDF2Vec approaches BIBREF4 , BIBREF5 are examples of pioneering research and to date, they represent the only option for learning embeddings on a large knowledge graph without the need for state-of-the-art hardware. To this end, we devise the KG2Vec approach, which comprises skip-gram techniques for creating embeddings on large knowledge graphs in a feasible time but still maintaining the quality of state-of-the-art embeddings. Our evaluation shows that KG2Vec achieves a vector quality comparable to the most scalable approaches and can process more than 250 million triples in less than 7 hours on a machine with suboptimal performances."
],
[
"An early effort to automatically generate features from structured knowledge was proposed in BIBREF6 . RESCAL BIBREF7 is a relational-learning algorithm based on Tensor Factorization using Alternating Least-Squares which has showed to scale to large RDF datasets such as YAGO BIBREF8 and reach good results in the tasks of link prediction, entity resolution, or collective classification BIBREF9 . Manifold approaches which rely on translations have been implemented so far BIBREF10 , BIBREF11 , BIBREF12 , BIBREF2 , BIBREF13 , BIBREF0 . TransE is the first method where relationships are interpreted as translations operating on the low-dimensional embeddings of the entities BIBREF10 . On the other hand, TransH models a relation as a hyperplane together with a translation operation on it BIBREF11 . TransA explores embedding methods for entities and relations belonging to two different knowledge graphs finding the optimal loss function BIBREF12 , whilst PTransE relies on paths to build the final vectors BIBREF1 . The algorithms TransR and CTransR proposed in BIBREF2 aim at building entity and relation embeddings in separate entity space and relation spaces, so as to learn embeddings through projected translations in the relation space; an extension of this algorithm makes use of rules to learn embeddings BIBREF13 . An effort to jointly embed structured and unstructured data (such as text) was proposed in BIBREF14 . The idea behind the DistMult approach is to consider entities as low-dimensional vectors learned from a neural network and relations as bilinear and/or linear mapping functions BIBREF15 . TransG, a generative model address the issue of multiple relation semantics of a relation, has showed to go beyond state-of-the-art results BIBREF0 . ComplEx is based on latent factorization and, with the use of complex-valued embeddings, it facilitates composition and handles a large variety of binary relations BIBREF16 . The fastText algorithm was meant for word embeddings, however BIBREF17 showed that a simple bag-of-words can generate surprisingly good KGEs.",
"The field of KGE has considerably grown during the last two years, earning a spot also in the Semantic Web community. In 2016, BIBREF3 proposed HolE, which relies on holographic models of associative memory by employing circular correlation to create compositional representations. HolE can capture rich interactions by using correlation as the compositional operator but it simultaneously remains efficient to compute, easy to train, and scalable to large datasets. In the same year, BIBREF4 presented RDF2Vec which uses language modeling approaches for unsupervised feature extraction from sequences of words and adapts them to RDF graphs. After generating sequences by leveraging local information from graph substructures by random walks, RDF2Vec learns latent numerical representations of entities in RDF graphs. The algorithm has been extended in order to reduce the computational time and the biased regarded the random walking BIBREF5 . More recently, BIBREF18 exploited the Global Vectors algorithm to compute embeddings from the co-occurrence matrix of entities and relations without generating the random walks. In following research, the authors refer to their algorithm as KGloVe."
],
[
"This study addresses the following research questions:",
"Formally, let $t = (s,p,o)$ be a triple containing a subject, a predicate, and an object in a knowledge base $K$ . For any triple, $(s,p,o) \\subseteq E \\times R \\times (E \\cap L)$ , where $E$ is the set of all entities, $R$ is the set of all relations, and $L$ is the set of all literals (i.e., string or numerical values). A representation function $F$ defined as ",
"$$F : (E \\cap R \\cap L) \\rightarrow \\mathbb {R}^d$$ (Eq. 7) ",
"assigns a vector of dimensionality $d$ to an entity, a relation, or a literal. However, some approaches consider only the vector representations of entities or subjects (i.e, $\\lbrace s \\in E : \\exists (s, p, o) \\in K \\rbrace $ ). For instance, in approaches based on Tensor Factorization, given a relation, its subjects and objects are processed and transformed into sparse matrices; all the matrices are then combined into a tensor whose depth is the number of relations. For the final embedding, current approaches rely on dimensionality reduction to decrease the overall complexity BIBREF9 , BIBREF12 , BIBREF2 . The reduction is performed through an embedding map $\\Phi : \\mathbb {R}^d \\rightarrow \\mathbb {R}^k$ , which is a homomorphism that maps the initial vector space into a smaller, reduced space. The positive value $k < d$ is called the rank of the embedding. Note that each dimension of the reduced common space does not necessarily have an explicit connection with a particular relation. Dimensionality reduction methods include Principal Component Analysis techniques BIBREF9 and generative statistical models such as Latent Dirichlet Allocation BIBREF19 , BIBREF20 .",
"Existing KGE approaches based on the skip-gram model such as RDF2Vec BIBREF4 submit paths built using random walks to a Word2Vec algorithm. Instead, we preprocess the input knowledge base by converting each triple into a small sentence of three words. Our method is faster as it allows us to avoid the path generation step. The generated text corpus is thus processed by the skip-gram model as follows."
],
[
"We adapt the skip-gram model BIBREF21 to deal with our small sequences of length three. In this work, we only consider URIs and discard literals, therefore we compute a vector for each element $u \\in E \\cap R$ . Considering a triple as a sequence of three URIs $T = \\lbrace u_s, u_p, u_o$ }, the aim is to maximize the average log probability ",
"$$\\frac{1}{3} \\sum _{u \\in T} \\sum _{u^{\\prime } \\in T \\setminus u} \\log p(u | u^{\\prime })$$ (Eq. 9) ",
"which means, in other words, to adopt a context window of 2, since the sequence size is always $|T|=3$ . The probability above is theoretically defined as: ",
"$$p(u | u^{\\prime }) = \\frac{\\exp ( {v^O_{u}}^{\\top } v^I_{u^{\\prime }} )}{\\sum _{x \\in E \\cap R} \\exp ( {v^O_{x}}^{\\top } v^I_{u^{\\prime }} )}$$ (Eq. 10) ",
"where $v^I_x$ and $v^O_x$ are respectively the input and output vector representations of a URI $x$ . We imply a negative sampling of 5, i.e. 5 words are randomly selected to have an output of 0 and consequently update the weights."
],
[
"Several methods have been proposed to evaluate word embeddings. The most common ones are based on analogies BIBREF22 , BIBREF23 , where word vectors are summed up together, e.g.: ",
"$$v[\"queen\"] \\approx v[\"king\"] + v[\"woman\"] - v[\"man\"]$$ (Eq. 13) ",
"An analogy where the approximation above is satisfied within a certain threshold can thus predict hidden relationships among words, which in our environment means to predict new links among entities BIBREF4 . The analogy-based score function for a given triple $(\\bar{s},\\bar{p},\\bar{o})$ is defined as follows. ",
"$$score(\\bar{s},\\bar{p},\\bar{o}) = \\frac{1}{\\left|\\lbrace (s,\\bar{p},o) \\in K \\rbrace \\right|} \\sum _{(s,\\bar{p},o) \\in K} {\n{\\left\\lbrace \\begin{array}{ll}\n1 & \\text{if } \\left\\Vert v_{\\bar{s}} + v_o - v_s - v_{\\bar{o}} \\right\\Vert \\le \\epsilon \\\\\n0 & \\text{otherwise}\n\\end{array}\\right.}\n}$$ (Eq. 14) ",
"where $\\epsilon $ is an arbitrarily small positive value. In words, given a predicate $\\bar{p}$ , we select all triples where it occurs. For each triple, we compute the relation vector as the difference between the object and the subject vectors. We then count a match whenever the vector sum of subject $\\bar{s}$ and relation is close to object $\\bar{o}$ within a radius $\\epsilon $ . The score is equal to the rate of matches over the number of selected triples.",
"We evaluate the scoring function above against a neural network based on Long Short-Term Memories (LSTM). The neural network takes a sequence of embeddings as input, namely $v_s, v_p, v_o$ for a triple $(s,p,o) \\in K$ . A dense hidden layer of the same size of the embeddings is connected to a single output neuron with sigmoid activation, which returns a value between 0 and 1. The negative triples are generated using two strategies, i.e. for each triple in the training set (1) randomly extract a relation and its two nodes or (2) corrupt the subject or the object. We use the Adam optimizer and 100 epochs of training."
],
[
"As recently highlighted by several members of the ML and NLP communities, KGEs are rarely evaluated on downstream tasks different from link prediction (also known as knowledge base completion). Achieving high performances on link prediction does not necessarily mean that the generated embeddings are good, since the inference task is often carried out in combination with an external algorithm such as a neural network or a scoring function. The complexity is thus approach-dependent and distributed between the latent structure in the vector model and the parameters (if any) of the inference algorithm. For instance, a translational model such as TransE BIBREF10 would likely feature very complex embeddings, since in most approaches the inference function is a simple addition. On the other hand, we may find less structure in a tensor factorization model such as RESCAL BIBREF7 , as the inference is performed by a feed-forward neural network which extrapolates the hidden semantics layer by layer.",
"In this paper, we introduce two metrics inspired by The Identity of Indiscernibles BIBREF24 to gain insights over the distributional quality of the learned embeddings. The more characteristics two entities share, the more similar they are and so should be their vector representations. Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as: ",
"$$ \nNST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19) ",
"where $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space. This assumption is suggested by the human bias for which rdf:type and dct:subject would be predicates with a higher weight than the others. Although this does not happen, we compute it for a mere sake of comparison with the NST metric. The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
[
"We implemented KG2Vec in Python 2.7 using the Gensim and Keras libraries with Theano environment. Source code, datasets, and vectors obtained are available online. All experiments were carried out on an Ubuntu 16.04 server with 128 GB RAM and 40 CPUs.",
"The dataset used in the experiments are described in Table 1 . The AKSW-bib dataset – employed for the link prediction evaluation – was created using information from people and projects on the AKSW.org website and bibliographical data from Bibsonomy. We built a model on top of the English 2015-10 version of the DBpedia knowledge graph BIBREF25 ; Figure 1 shows a 3-dimensional plot of selected entities. For the English DBpedia 2016-04 dataset, we built two models. In the first, we set a threshold to embed only the entities occurring at least 5 times in the dataset; we chose this setting to be aligned to the related works' models. In the second model, all 36 million entities in DBpedia are associated a vector. More insights about the first model can be found in the next two subsections, while the resource consumption for creating the second model can be seen in Figure 3 ."
],
[
"In this study, we aim at generating embeddings at a high rate while preserving accuracy. In Table 1 , we already showed that our simple pipeline can achieve a rate of almost $11,000$ triples per second on a large dataset such as DBpedia 2016-04. In Table 2 , we compare KG2Vec with three other scalable approaches for embedding knowledge bases. We selected the best settings of RDF2Vec and KGloVe according to their respective articles, since both algorithms had already been successfully evaluated on DBpedia BIBREF4 , BIBREF18 . We also tried to compute fastText embeddings on our machine, however we had to halt the process after three days. As the goal of our investigation is efficiency, we discarded any other KGE approach that would have needed more than three days of computation to deliver the final model BIBREF18 .",
"RDF2Vec has shown to be the most expensive in terms of disk space consumed, as the created random walks amounted to $\\sim $ 300 GB of text. Moreover, we could not measure the runtime for the first phase of KGloVe, i.e. the calculation of the Personalized PageRank values of DBpedia entities. In fact, the authors used pre-computed entity ranks from BIBREF26 and the KGloVe source code does not feature a PageRank algorithm. We estimated the runtime comparing their hardware specs with ours. Despite being unable to reproduce any experiments from the other three approaches, we managed to evaluate their embeddings by downloading the pretrained models and creating a KG2Vec embedding model of the same DBpedia dataset there employed."
],
[
"For the link prediction task, we partition the dataset into training and test set with a ratio of 9:1. In Table 3 , we show preliminary results between the different strategies on the AKSW-bib dataset using KG2Vec embeddings. As can be seen, our LSTM-based scoring function significantly outperforms the analogy-based one in both settings. According to the Hits@10 accuracy we obtained, corrupting triples to generate negative examples is the better strategy. This first insight can foster new research on optimizing a scoring function for KGE approaches based on distributional semantics."
],
[
"Computing the NST and TCT distributional quality metrics on the entire DBpedia dataset is time-demanding, since for each entity, the model and the graph need to be queried for the $N$ nearest neighbours and their respective sets. However, we approximate the final value by tracing the partial values of NST and TCT over time. In other words, at each iteration $i$ , we compute the metrics over $\\tilde{E}_i = \\lbrace e_1, \\dots , e_i\\rbrace $ . Figure 2 shows the partial TCT value on the most important 10,000 entities for $N=\\lbrace 1,10\\rbrace $ according to the ranks computed by BIBREF26 . Here, KG2Vec maintains a higher index than the other two approaches, despite these are steadily increasing after the $\\sim 2,000$ th entity. We interpret the lower TCT for the top $2,000$ entities as noise produced by the fact that these nodes are hyperconnected to the rest of the graph, therefore it is hard for them to remain close to their type peers. In Figures 2 and 3 , the TCT and NST metrics respectively are computed on 10,000 random entities. In both cases, the values for the two settings of all approaches stabilize after around $1,000$ entities, however we clearly see that RDF2Vec embeddings achieve the highest distributional quality by type and category. The higher number of occurrences per entity in the huge corpus of random walks in RDF2Vec might be the reason of this result for rarer entities.",
"In Figure 3 , we show the CPU, Memory, and disk consumption for KG2Vec on the larger model of DBpedia 2016-04. All three subphases of the algorithm are visible in the plot. For 2.7 hours, tokens are counted; then, the learning proceeds for 7.7 hours; finally in the last 2.3 hours, the model is saved."
],
[
"We presented a fast approach for generating KGEs dubbed KG2Vec. We conclude that the skip-gram model, if trained directly on triples as small sentences of length three, significantly gains in runtime while preserving a decent vector quality. Moreover, the KG2Vec embeddings have shown higher distributional quality for the most important entities in the graph according to PageRank. As a future work, we plan to extend the link prediction evaluation to other benchmarks by using analogies and our LSTM-based scoring function over the embedding models of the approaches here compared."
]
],
"section_name": [
"Introduction",
"Related Work",
"KG2Vec",
"Adapting the skip-gram model",
"Scoring functions",
"Metrics",
"Evaluation",
"Runtime",
"Preliminary results on link prediction",
"Distributional quality",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"3641cbf2a8b90c1053aaefb7db23ef501bffa196",
"8132433c110b826230941f9be14befd651ba9f81",
"cc29159cae181a8f86e2afae6bcd31ad40e84a67"
],
"answer": [
{
"evidence": [
"In this paper, we introduce two metrics inspired by The Identity of Indiscernibles BIBREF24 to gain insights over the distributional quality of the learned embeddings. The more characteristics two entities share, the more similar they are and so should be their vector representations. Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:",
"$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)",
"where $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space. This assumption is suggested by the human bias for which rdf:type and dct:subject would be predicates with a higher weight than the others. Although this does not happen, we compute it for a mere sake of comparison with the NST metric. The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
"extractive_spans": [],
"free_form_answer": "They propose two new metrics. One, which they call the Neighbour Similarity Test, calculates how many shared characteristics there are between entities whose representations are neighbors in the embedding space. The second, which they call the Type and Category Test, is the same as the Neighbour Similarity Test, except it uses entity types and categories in the place of individual entity characteristics.",
"highlighted_evidence": [
"In this paper, we introduce two metrics inspired by The Identity of Indiscernibles BIBREF24 to gain insights over the distributional quality of the learned embeddings. The more characteristics two entities share, the more similar they are and so should be their vector representations. Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:\n\n$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)\n\nwhere $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.\n\nThe second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space. This assumption is suggested by the human bias for which rdf:type and dct:subject would be predicates with a higher weight than the others. Although this does not happen, we compute it for a mere sake of comparison with the NST metric. The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we introduce two metrics inspired by The Identity of Indiscernibles BIBREF24 to gain insights over the distributional quality of the learned embeddings. The more characteristics two entities share, the more similar they are and so should be their vector representations. Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:",
"$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)",
"where $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space. This assumption is suggested by the human bias for which rdf:type and dct:subject would be predicates with a higher weight than the others. Although this does not happen, we compute it for a mere sake of comparison with the NST metric. The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
"extractive_spans": [],
"free_form_answer": "Neighbour Similarity Test; Type and Category Test",
"highlighted_evidence": [
"Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:\n\n$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)\n\nwhere $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we introduce two metrics inspired by The Identity of Indiscernibles BIBREF24 to gain insights over the distributional quality of the learned embeddings. The more characteristics two entities share, the more similar they are and so should be their vector representations. Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:",
"$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)",
"where $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space. This assumption is suggested by the human bias for which rdf:type and dct:subject would be predicates with a higher weight than the others. Although this does not happen, we compute it for a mere sake of comparison with the NST metric. The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
"extractive_spans": [],
"free_form_answer": "Neighbour Similarity Test (NST) and Type and Category Test (TCT)",
"highlighted_evidence": [
"Considering the set of characteristics $C_K(s)=\\lbrace (p_1,o_1),\\dots ,(p_m,o_m)\\rbrace $ of a subject $s$ in a triple, we can define a metric that expresses the similarity among two entities $e_1,e_2$ as the Jaccard index between their sets of characteristics $C_K(e_1)$ and $C_K(e_2)$ . Given a set of entities $\\tilde{E}$ and their $N$ nearest neighbours in the vector space, the overall Neighbour Similarity Test (NST) metric is defined as:\n\n$$ NST(\\tilde{E},N,K) = \\frac{1}{N \\vert \\tilde{E} \\vert } \\sum _{e \\in \\tilde{E}} \\sum _{j=1}^N \\frac{\\vert C_K(e) \\cap C_K(n_j^{(e)}) \\vert }{\\vert C_K(e) \\cup C_K(n_j^{(e)}) \\vert }$$ (Eq. 19)\n\nwhere $n_j^{(e)}$ is the $j$ th nearest neighbour of $e$ in the vector space.",
"The second metric is the Type and Category Test (TCT), based on the assumption that two entities which share types and categories should be close in the vector space.",
"The TCT formula is equal to Equation 19 except for sets $C_K(e)$ , which are replaced by sets of types and categories $TC_K(e)$ ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08a77843a23b7ce3870829ca2e66177ccc043d30",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"4c0e246c38c60956b9987d0d53a0f850a287439b",
"60dbc28df2052fc89fbfc84dff541ff9358ddf7e",
"ee56d7cb23eed40dcb189b91f8321c8ba29184fd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"extractive_spans": [],
"free_form_answer": "RDF2Vec takes 123 minutes to generate random walks and an estimated 96 hours to train word2vec. KGloVe takes an estimated 12 hours to train GloVe. fastText takes an estimated 72 hours to train",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"extractive_spans": [],
"free_form_answer": "RDF2Vec: 123 minutes runtime with >96 hours training, FastText: 5 minutes with >72 hours training",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"extractive_spans": [],
"free_form_answer": "between 12 hours and 96 hours",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08a77843a23b7ce3870829ca2e66177ccc043d30",
"c7d4a630661cd719ea504dba56393f78278b296b",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"975533617bc96c9da667c932c921136ef01e03bb",
"9b10ee1728505009a569cbc5eed3eaefb9b9ea1e"
],
"answer": [
{
"evidence": [
"Existing KGE approaches based on the skip-gram model such as RDF2Vec BIBREF4 submit paths built using random walks to a Word2Vec algorithm. Instead, we preprocess the input knowledge base by converting each triple into a small sentence of three words. Our method is faster as it allows us to avoid the path generation step. The generated text corpus is thus processed by the skip-gram model as follows."
],
"extractive_spans": [
"a subject, a predicate, and an object in a knowledge base"
],
"free_form_answer": "",
"highlighted_evidence": [
"Existing KGE approaches based on the skip-gram model such as RDF2Vec BIBREF4 submit paths built using random walks to a Word2Vec algorithm. Instead, we preprocess the input knowledge base by converting each triple into a small sentence of three words."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We adapt the skip-gram model BIBREF21 to deal with our small sequences of length three. In this work, we only consider URIs and discard literals, therefore we compute a vector for each element $u \\in E \\cap R$ . Considering a triple as a sequence of three URIs $T = \\lbrace u_s, u_p, u_o$ }, the aim is to maximize the average log probability",
"$$\\frac{1}{3} \\sum _{u \\in T} \\sum _{u^{\\prime } \\in T \\setminus u} \\log p(u | u^{\\prime })$$ (Eq. 9)",
"which means, in other words, to adopt a context window of 2, since the sequence size is always $|T|=3$ . The probability above is theoretically defined as:"
],
"extractive_spans": [
"context window of 2"
],
"free_form_answer": "",
"highlighted_evidence": [
"Considering a triple as a sequence of three URIs $T = \\lbrace u_s, u_p, u_o$ }, the aim is to maximize the average log probability\n\n$$\\frac{1}{3} \\sum _{u \\in T} \\sum _{u^{\\prime } \\in T \\setminus u} \\log p(u | u^{\\prime })$$ (Eq. 9)\n\nwhich means, in other words, to adopt a context window of 2, since the sequence size is always $|T|=3$ ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08a77843a23b7ce3870829ca2e66177ccc043d30",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the new metric?",
"How long do other state-of-the-art models take to process the same amount of data?",
"What context is used when computing the embedding for an entity?"
],
"question_id": [
"4a201b8b9cc566b56aedb5ab45335f202bc41845",
"6a90135bd001be69a888076aff1b149b78adf443",
"1f40adc719d8ccda81e7e90525b577f5698b5aad"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction",
"link prediction"
],
"topic_background": [
"research",
"research",
"research"
]
} | {
"caption": [
"Fig. 1 A selection of DBpedia resources along with their vectors in 3 dimensions obtained using Principal Component Analysis. Blue points are resources, whilst red points are classes. As can be seen, resources follow the distributional hypothesis.",
"Table 1 Details and runtimes for the generation of KG2Vec embeddings on two datasets.",
"Table 2 Runtime comparison of the single phases. Those with (*) are estimated runtimes.",
"Table 3 Filtered Hits@10 values on link prediction on AKSW-bib using different strategies.",
"Fig. 2 Partial TCT value on DBpedia 2016-04 for the top 10,000 entities. 0 2000 4000 6000 8000 10000 Iteration",
"Fig. 4 Partial NST value on DBpedia 2016-04 for 10,000 random entities. 0 2 4 6 8 10 12 14",
"Fig. 5 CPU, Memory, and disk consumption for KG2Vec on the larger model of DBpedia 2016-04."
],
"file": [
"7-Figure1-1.png",
"9-Table1-1.png",
"10-Table2-1.png",
"11-Table3-1.png",
"12-Figure2-1.png",
"12-Figure4-1.png",
"12-Figure5-1.png"
]
} | [
"What is the new metric?",
"How long do other state-of-the-art models take to process the same amount of data?"
] | [
[
"1803.07828-Metrics-4",
"1803.07828-Metrics-3"
],
[
"1803.07828-10-Table2-1.png"
]
] | [
"Neighbour Similarity Test (NST) and Type and Category Test (TCT)",
"between 12 hours and 96 hours"
] | 150 |
1803.08419 | The Rapidly Changing Landscape of Conversational Agents | Conversational agents have become ubiquitous, ranging from goal-oriented systems for helping with reservations to chit-chat models found in modern virtual assistants. In this survey paper, we explore this fascinating field. We look at some of the pioneering work that defined the field and gradually move to the current state-of-the-art models. We look at statistical, neural, generative adversarial network based and reinforcement learning based approaches and how they evolved. Along the way we discuss various challenges that the field faces, lack of context in utterances, not having a good quantitative metric to compare models, lack of trust in agents because they do not have a consistent persona etc. We structure this paper in a way that answers these pertinent questions and discusses competing approaches to solve them. | {
"paragraphs": [
[
"One of the earliest goals of Artificial Intelligence (AI) has been to build machines that can converse with us. Whether in early AI literature or the current popular culture, conversational agents have captured our imagination like no other technology has. In-fact the ultimate test of whether true artificial intelligence has been achieved, the Turing test BIBREF0 proposed by Alan Turing the father of artificial intelligence in 1950, revolves around the concept of a good conversational agent. The test is deemed to have been passed if a conversational agent is able to fool human judges into believing that it is in fact a human being.",
"Starting with pattern matching programs like ELIZA developed at MIT in 1964 to the current commercial conversational agents and personal assistants (Siri, Allo, Alexa, Cortana et al) that all of us carry in our pockets, conversational agents have come a long way. In this paper we look at this incredible journey. We start by looking at early rule-based methods which consisted of hand engineered features, most of which were domain specific. However, in our view, the advent of neural networks that were capable of capturing long term dependencies in text and the creation of the sequence to sequence learning model BIBREF1 that was capable of handling utterances of varying length is what truly revolutionized the field. Since the sequence to sequence model was first used to build a neural conversational agent BIBREF2 in 2016 the field has exploded. With a multitude of new approaches being proposed in the last two years which significantly impact the quality of these conversational agents, we skew our paper towards the post 2016 era. Indeed one of the key features of this paper is that it surveys the exciting new developments in the domain of conversational agents.",
"Dialogue systems, also known as interactive conversational agents, virtual agents and sometimes chatterbots, are used in a wide set of applications ranging from technical support services to language learning tools and entertainment. Dialogue systems can be divided into goal-driven systems, such as technical support services, booking systems, and querying systems. On the other hand we have non-goal-driven systems which are also referred to as chit-chat models. There is no explicit purpose for interacting with these agents other than entertainment. Compared to goal oriented dialog systems where the universe is limited to an application, building open-ended chit-chat models is more challenging. Non-goal oriented agents are a good indication of the state of the art of artificial intelligence according to the Turing test. With no grounding in common sense and no sense of context these agents have to fall back on canned responses and resort to internet searches now. But as we discuss in section SECREF5 , new techniques are emerging to provide this much needed context to these agents.",
"The recent successes in the domain of Reinforcement Learning (RL) has also opened new avenues of applications in the conversational agent setting. We explore some of these approaches in section SECREF6 ",
"Another feature that has been traditionally lacking in conversation agents is a personality. O Vinayal et al BIBREF2 hypothesis that not having a consistent personality is one of the main reasons that is stopping us from passing the turing test. Conversational agents also lack emotional consistency in their responses. These features are vital if we want humans to trust conversational agents. In section SECREF7 we discuss state of the art approaches to overcome these problems.",
"Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail."
],
[
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the given conversational agent was heavily limited to the types of applications it can perform and its high success rate was more due to that instead of great natural language techniques (relative to recent times).",
"In 1995, two researchers (Ball et al, 1995 BIBREF4 ) at Microsoft developed a conversational assistant called Persona which was one of the first true personal assistant similar to what we have in recent times (like Siri, etc). It allowed users the maximum flexibility to express their requests in whatever syntax they found most natural and the interface was based on a broad-coverage NLP system unlike the system discussed in the previous paragraph. In this, a labelled semantic graph is generated from the speech input which encodes case frames or thematic roles. After this, a sequence of graph transformations is applied on it using the knowledge of interaction scenario and application domain. This results into a normalized application specific structure called as task graph which is then matched against the templates (in the application) which represent the normalized task graphs corresponding to all the possible user statements that the assistant understands and the action is then executed. The accuracy was not that good and they did not bother to calculate it. Also, due to the integrated nature of conversational interaction in Persona, the necessary knowledge must be provided to each component of the system. Although it had limitations, it provided a very usable linguistic foundation for conversational interaction.",
"The researchers thought that if they can create assistant models specific to the corresponding models, they can achieve better accuracy for those applications instead of creating a common unified personal assistant which at that time performed quite poorly. There was a surge in application-specific assistants like in-car intelligent personal assistant (Schillo et al, 1996 BIBREF5 ), spoken-language interface to execute military exercises (Stent et al, 1999 BIBREF6 ), etc. Since it was difficult to develop systems with high domain extensibility, the researchers came up with a distributed architecture for cooperative spoken dialogue agents (Lin et al, 1999 BIBREF7 ).",
"Under this architecture, different spoken dialogue agents handling different domains can be developed independently and cooperate with one another to respond to the user’s requests. While a user interface agent can access the correct spoken dialogue agent through a domain switching protocol, and carry over the dialogue state and history so as to keep the knowledge processed persistently and consistently across different domains. Figure FIGREF1 shows the agent society for spoken dialogue for tour information service.",
"If we define the false alarm rate by counting the utterances in which unnecessary domain-switching occurred and the detection rate by counting the utterances in which the desired domain-switching were accurately detected, then in this model, high detection rate was achieved at very low false alarm rate. For instance, for around a false alarm rate of 0.2, the model was able to achieve a detection rate of around 0.9 for the case of tag sequence search with language model search scheme."
],
[
"Next came the era of using machine learning methods in the area of conversation agents which totally revolutionized this field.",
"Maxine Eskenazi and her team initially wanted to build spoken dialog system for the less general sections of the population, such as the elderly and non-native speakers of English. They came up with Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information. Later, this was opened to the general public (Raux et al, 2005 BIBREF9 ). Their work is important in terms of the techniques they used.",
"The speech recognition was done using n-gram statistical model which is then passed to a robust parser based on an extended Context Free Grammar allowing the system to skip unknown words and perform partial parsing. They wrote the grammar based on a combination of their own intuition and a small scale Wizard-of-Oz experiment they ran. The grammar rules used to identify bus stops were generated automatically from the schedule database. After this, they trained a statistical language model on the artificial corpus. In order to make the parsing grammar robust enough to parse fairly ungrammatical, yet understandable sentences, it was kept as general as possible. On making it public, they initially achieved a task success rate of 43.3% for the whole corpus and 43.6 when excluding sessions that did not contain any system-directed speech.",
"After this they tried to increase the performance of the system (Raux et al, 2006 BIBREF10 ). They retrained their acoustic models by performing Baum-Welch optimization on the transcribed data (starting from their original models). Unfortunately, this only brought marginal improvement because the models (semi-continuous HMMs) and algorithms they were using were too simplistic for this task. They improved the turn-taking management abilities of the system by closely analysing the feedback they received. They added more specific strategies, aiming at dealing with problems like noisy environments, too loud or too long utterances, etc. They found that they were able to get a success rate of 79% for the complete dialogues (which was great).",
"The previous papers (like the ones which we discussed in the above paragraph) did not attempt to use data-driven techniques for the dialog agents because such data was not available in large amount at that time. But then there was a high increase in the collection of spoken dialog corpora which made it possible to use data-driven techniques to build and use models of task-oriented dialogs and possibly get good results. In the paper by Srinivas et al,2008 BIBREF11 , the authors proposed using data-driven techniques to build task structures for individual dialogs and use the dialog task structures for dialog act classification, task/subtask classification, task/subtask prediction and dialog act prediction.",
"For each utterance, they calculated features like n-grams of the words and their POS tags, dialog act and task/subtask label. Then they put those features in the binary MaxEnt classifier. For this, their model was able to achieve an error rate of 25.1% for the dialog act classification which was better than the best performing models at that time. Although, according to the modern standards, the results are not that great but the approach they suggested (of using data to build machine learning models) forms the basis of the techniques that are currently used in this area."
],
[
"The problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand crafted rules which was both expensive and required domain expertise. Two factors which when combined spell doom for scalbility. All of this changed in 2015 when Vinyals et al proposed an approach BIBREF2 inspired from the recent progress in machine translation BIBREF1 . Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first model which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand crafted features.",
"They achieved this by casting the conversation modelling task, as a task of predicting the next sequence given the previous sequence using recurrent networks. This simple approach truly changed the conversation agent landscape. Most of the state-of-the-art today is built on their success. In a nutshell the input utterance is input to an encoder network, which is a recurrent neural network (RNN) in this case, but as we will see Long Short Term Memory (LSTMs) BIBREF12 have since replaced RNNs as the standard for this task. The encoder summarizes the input utterance into a fixed length vector representation which is input to the decoder, which itself is again a RNN. The paper looks at this fixed vector as the thought vector - which hold the most important information of the input utterance. The Decoder netwroks takes this as input and output's an output utterance word-by-word until it generates an end-of-speech INLINEFORM0 token. This approach allows for variable length inputs and outputs. The network is jointly trained on two turn conversations. Figure FIGREF3 shows the sequence to sequence neural conversation model.",
"Even though most of the modern work in the field is built on this approach there is a significant drawback to this idea. This model can theoretically never solve the problem of modelling dialogues due to various simplifications, the most important of them being the objective function that is being optimized does not capture the actual objective achieved through human communication, which is typically longer term and based on exchange of information rather than next step prediction. It is important to see that optimizing an agent to generate text based on what it sees in the two-turn conversation dataset that it is trained on does not mean that the agent would be able to generalize to human level conversation across contexts. Nevertheless in absence of a better way to capture human communication this approach laid the foundation of most of the modern advances in the field. Another problem that plagues this paper and the field in general is Evaluation. As there can be multiple correct output utterances for a given input utterance there is no quantitative way to evaluate how well a model is performing. In this paper to show the efficacy of their model the authors publish snippets of conversations across different datasets. We discuss this general problem in evaluation later.",
"Iulian et al. build on this sequence-to-sequence based approach in their paper presented in AAAI 2016 BIBREF13 . Their work is inspired by the hierarchical recurrent encoder-decoder architecture (HRED) proposed by Sordoni et al. BIBREF14 . Their premise is that a dialogue can be seen as a sequence of utterances which, in turn, are sequences of tokens. Taking advantage of this built in hierarchy they model their system in the following fashion.",
"The encoder RNN maps each utterance to an utterance vector. The utterance vector is the hidden state obtained after the last token of the utterance has been processed. The higher-level context RNN keeps track of past utterances by processing iteratively each utterance vector. After processing utterance INLINEFORM0 , the hidden state of the context RNN represents a summary of the dialogue up to and including turn INLINEFORM1 , which is used to predict the next utterance INLINEFORM2 . The next utterance prediction is performed by means of a decoder RNN, which takes the hidden state of the context RNN and produces a probability distribution over the tokens in the next utterance. As seen in figure FIGREF4 ",
"The advantages of using a hierarchical representation are two-fold. First, the context RNN allows the model to represent a form of common ground between speakers, e.g. to represent topics and concepts shared between the speakers using a distributed vector representation. Second, because the number of computational steps between utterances is reduced. This makes the objective function more stable w.r.t. the model parameters, and helps propagate the training signal for first-order optimization methods.",
"Models like sequence-to-sequence and the hierarchical approaches have proven to be good baseline models. In the last couple of years there has been a major effort to build on top of these baselines to make conversational agents more robust BIBREF15 BIBREF16 .",
"Due to their large parameter space, the estimation of neural conversation models requires considerable amounts of dialogue data. Large online corpora are helpful for this. However several dialogue corpora, most notably those extracted from subtitles, do not include any explicit turn segmentation or speaker identification.The neural conversation model may therefore inadvertently learn responses that remain within the same dialogue turn instead of starting a new turn. Lison et al BIBREF17 overcome these limitations by introduce a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimized. The purpose of this model is to associate each ⟨context, response⟩ example pair to a numerical weight that reflects the intrinsic “quality” of each example. The instance weights are then included in the empirical loss to minimize when learning the parameters of the neural conversation model. The weights are themselves computed via a neural model learned from dialogue data. Approaches like BIBREF17 are helpful but data to train these neural conversational agents remains scarce especially in academia, we talk more about the scarcity of data in a future section."
],
[
"Though sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence BIBREF18 . These works argue that a language model is better suited to dialogue modeling, as it learns how the conversation evolves as information progresses.",
"Mei et al. BIBREF19 improve the coherence of such neural dialogue language models by developing a generative dynamic attention mechanism that allows each generated word to choose which related words it wants to align to in the increasing conversation history (including the previous words in the response being generated). They introduce a dynamic attention mechanism to a RNN language model in which the scope of attention increases as the recurrence operation progresses from the start through the end of the conversation. The dynamic attention model promotes coherence of the generated dialogue responses (continuations) by favoring the generation of words that have syntactic or semantic associations with salient words in the conversation history."
],
[
"Although these neural models are really powerful, so much so that they power most of the commercially available smart assistants and conversational agents. However these agents lack a sense of context and a grounding in common sense that their human interlocutors possess. This is especially evident when interacting with a commercial conversation agent, when more often that not the agent has to fall back to canned responses or resort to displaying Internet search results in response to an input utterance. One of the main goals of the research community, over the last year or so, has been to overcome this fundamental problem with conversation agents. A lot of different approaches have been proposed ranging from using knowledge graphs BIBREF20 to augment the agent's knowledge to using latest advancements in the field of online learning BIBREF21 . In this section we discuss some of these approaches.",
"The first approach we discuss is the Dynamic Knowledge Graph Network (DynoNet) proposed by He et al BIBREF20 , in which the dialogue state is modeled as a knowledge graph with an embedding for each node. To model both structured and open-ended context they model two agents, each with a private list of items with attributes, that must communicate to identify the unique shared item. They structure entities as a knowledge graph; as the dialogue proceeds, new nodes are added and new context is propagated on the graph. An attention-based mechanism over the node embeddings drives generation of new utterances. The model is best explained by the example used in the paper which is as follows: The knowledge graph represents entities and relations in the agent’s private KB, e.g., item-1’s company is google. As the conversation unfolds, utterances are embedded and incorporated into node embeddings of mentioned entities. For instance, in Figure FIGREF6 , “anyone went to columbia” updates the embedding of columbia. Next, each node recursively passes its embedding to neighboring nodes so that related entities (e.g., those in the same row or column) also receive information from the most recent utterance. In this example, jessica and josh both receive new context when columbia is mentioned. Finally, the utterance generator, an LSTM, produces the next utterance by attending to the node embeddings.",
"However Lee et al in BIBREF21 take a different approach to add knowledge to conversational agents. They proposes using a continuous learning based approach. They introduce a task-independent conversation model and an adaptive online algorithm for continual learning which together allow them to sequentially train a conversation model over multiple tasks without forgetting earlier tasks.",
"In a different approach, Ghazvininejad et al BIBREF22 propose a knowledge grounded approach which infuses the output utterance with factual information relevant to the conversational context. Their architecture is shown in figure FIGREF7 . They use an external collection of world facts which is a large collection of raw text entries (e.g., Foursquare, Wikipedia, or Amazon reviews) indexed by named entities as keys. Then, given a conversational history or source sequence S, they identify the “focus” in S, which is the text span (one or more entities) based on which they form a query to link to the facts. The query is then used to retrieve all contextually relevant facts. Finally, both conversation history and relevant facts are fed into a neural architecture that features distinct encoders for conversation history and facts. Another interesting facet of such a model is that new facts can be added and old facts updated by just updating the world facts dictionary without retraining the model from scratch, thus making the model more adaptive and robust.",
"Instead of just having a set of facts to augment the conversation, a richer way could be to use knowledge graphs or commonsense knowledge bases which consist of [entity-relation-entity] triples. Young et al explore this idea in BIBREF23 . For a given input utterance, they find the relevant assertions in the common sense knowledge base using simple n-gram matching. They then perform chunking on the relevant assertions and feed the individual token to a tri-LSTM encoder. The output of this encoder is weighted along with the input utterance and the output utterance is generated. They claim that such common sense conversation agents outperform a naive conversation agent.",
"Another interesting way to add knowledge to the conversation agents is to capture external knowledge for a given dialog using a search engine. In the paper by Long et al, 2017 BIBREF24 , the authors built a model to generate natural and informative responses for customer service oriented dialog incorporating external knowledge.",
"They get the external knowledge using a search engine. Then a knowledge enhanced sequence-to-sequence framework is designed to model multi-turn dialogs on external knowledge conditionally. For this purpose, their model extends the simple sequence-to-sequence model by augmenting the input with the knowledge vector so as to take account of the knowledge in the procedure of response generation into the decoder of the sequence-to-sequence model. Both the encoder and the decoder are composed of LSTM.",
"Their model scores an average human rating of 3.3919 out of 5 in comparison to the baseline which is 3.3638 out of 5. Hence, their model generates more informative responses. However, they found the external knowledge plays a negative role in the procedure of response generation when there is more noise in the information. Exploring how to obtain credible knowledge of a given dialog history can be a future generation of their model."
],
[
"After exploring the neural methods in a lot of detail, the researchers have also begun exploring, in the current decade, how to use the reinforcement learning methods in the dialogue and personal agents."
],
[
"One of the first main papers that thought of using reinforcement learning for this came in 2005 by English et al BIBREF25 . They used an on-policy Monte Carlo method and the objective function they used was a linear combination of the solution quality (S) and the dialog length (L), taking the form: o(S,I) = INLINEFORM0 - INLINEFORM1 .",
"At the end of each dialog the interaction was given a score based on the evaluation function and that score was used to update the dialog policy of both agents (that is, the conversants). The state-action history for each agent was iterated over separately and the score from the recent dialog was averaged in with the expected return from the existing policy. They chose not to include any discounting factor to the dialog score as they progressed back through the dialog history. The decision to equally weight each state-action pair in the dialog history was made because an action’s contribution to the dialog score is not dependent upon its proximity to the end of the task. In order to combat the problem of converging to an effective policy they divided up the agent training process into multiple epochs.",
"The average objective function score for the case of learned policies was 44.90. One of the main reasons for the low accuracy (which is also a limitation of this paper) was that there were a number of aspects of dialog that they had not modeled such as non-understandings, misunderstandings, and even parsing sentences into the action specification and generating sentences from the action specification. But the paper set the pavement of the reinforcement learning methods into the area of dialog and personal agents."
],
[
"Let’s have a look at KB-InfoBot (by Dhingra et al, 2017 BIBREF26 ): a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. In this paper, they replace the symbolic queries (which break the differentiability of the system and prevent end-to-end training of neural dialogue agents) with an induced ‘soft’ posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users.",
"In this, the authors used an RNN to allow the network to maintain an internal state of dialogue history. Specifically, they used a Gated Recurrent Unit followed by a fully-connected layer and softmax non-linearity to model the policy π over the actions. During training, the agent samples its actions from this policy to encourage exploration. Parameters of the neural components were trained using the REINFORCE algorithm. For end-to-end training they updated both the dialogue policy and the belief trackers using the reinforcement signal. While testing, the dialogue is regarded as a success if the user target is in top five results returned by the agent and the reward is accordingly calculated that helps the agent take the next action.",
"Their system returns a success rate of 0.66 for small knowledge bases and a great success rate of 0.83 for medium and large knowledge bases. As the user interacts with the agent, the collected data can be used to train the end-to-end agent which we see has a strong learning capability. Gradually, as more experience is collected, the system can switch from Reinforcement Learning-Soft to the personalized end-to-end agent. Effective implementation of this requires such personalized end-to-end agents to learn quickly which should be explored in the future.",
"However, the system has a few limitations. The accuracy is not enough for using for the practical applications. The agent suffers from the cold start issue. In the case of end-to-end learning, they found that for a moderately sized knowledge base, the agent almost always fails if starting from random initialization."
],
[
"Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning as we saw in the paper in the above section. This is especially problematic for on-line learning with real users.",
"In the paper by Su et al, 2017 BIBREF27 , they proposed a sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. Just for a heads up, actor-critic algorithms are the algorithms that have an actor stores the policy according to which the action is taken by the agent and a critic that critiques the actions chosen by the actor (that is, the rewards obtained after the action are sent to the critic using which it calculates value functions).",
"To speed up the learning process, they presented two sample-efficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER). Both models employ off-policy learning with experience replay to improve sample-efficiency. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence.",
"To mitigate the cold start issue, a corpus of demonstration data was utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, they demonstrated a practical approach to learn deep RL-based dialogue policies and also demonstrated their effectiveness in a task-oriented information seeking domain.",
"We can see in the figure FIGREF11 that the success rate reaches at around 95% for the case of policy trained with corpus data and using reinforcement learning which is impressive. Also, they train very quickly. For instance, for training just around 500-1000 dialogues, eNACER has a success rate of around 95% and TRACER has a success rate of around 92%. However, the authors noted that performance falls off rather rapidly in noise as the uncertainty estimates are not handled well by neural networks architectures. This can also be a topic for future research."
],
[
"Recently, generative adversarial networks are being explored and how they can be used in the dialog agents. Although generative adversarial networks are a topic in itself to explore. However, the paper mentioned below used uses reinforcement learning along with generative adversarial network so we cover it here inside the reinforcement learning methods. They can be used by the applications to generate dialogues similar to humans.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues.",
"The key idea of the system is to encourage the generator to generate utterances that are indistinguishable from human generated dialogues. The policy gradient methods are used to achieve such a goal, in which the score of current utterances being human-generated ones assigned by the discriminator is used as a reward for the generator, which is trained to maximize the expected reward of generated utterances using the REINFORCE algorithm.",
"Their model achieved a machine vs random accuracy score of 0.952 out of 1. However, on applying the same training paradigm to machine translation in preliminary experiments, the authors did not find a clear performance boost. They thought that it may be because the adversarial training strategy is more beneficial to tasks in which there is a big discrepancy between the distributions of the generated sequences and the reference target sequences (that is, the adversarial approach may be more beneficial on tasks in which entropy of the targets is high). In the future, this relationship can be further explored."
],
[
"A lack of a coherent personality in conversational agents that most of these models propose has been identified as one of the primary reasons that these agents have not been able to pass the Turing test BIBREF0 BIBREF2 . Aside from such academic motivations, making conversational agents more like their human interlocutors which posses both a persona and are capable of parsing emotions is of great practical and commercial use. Consequently in the last couple of years different approaches have been tried to achieve this goal.",
"Li et al BIBREF29 address the challenge of consistency and how to endow data-driven systems with the coherent “persona” needed to model human-like behavior. They consider a persona to be composite of elements of identity (background facts or user profile), language behavior, and interaction style. They also account for a persona to be adaptive since an agent may need to present different facets to different human interlocutors depending on the interaction. Ultimately these personas are incorporated into the model as embeddings. Adding a persona not only improves the human interaction but also improves BLeU score and perplexity over the baseline sequence to sequence models. The model represents each individual speaker as a vector or embedding, which encodes speaker-specific information (e.g.dialect, register, age, gender, personal information) that influences the content and style of her responses. Most importantly these traits do not need to be explicitly annotated, which would be really tedious and limit the applications of the model. Instead the model manages to cluster users along some of these traits (e.g. age, country of residence) based on the responses alone. The model first encodes message INLINEFORM0 into a vector representation INLINEFORM1 using the source LSTM. Then for each step in the target side, hidden units are obtained by combining the representation produced by the target LSTM at the previous time step, the word representations at the current time step, and the speaker embedding INLINEFORM2 . In this way, speaker information is encoded and injected into the hidden layer at each time step and thus helps predict personalized responses throughout the generation process. The process described here is visualizes in figure FIGREF13 below.",
"Building on works like this the Emotional Chatting Machine model proposed by Zhou et al BIBREF30 is a model which generates responses that are not only grammatically consistent but also emotionally consistent. To achieve this their approach models the high-level abstraction of emotion expressions by embedding emotion categories. They also capture the change of implicit internal emotion states and use explicit emotion expressions with an external emotion vocabulary.",
"Although they did not evaluate their model on some standard metric, they showed that their model can generate responses appropriate not only in content but also in emotion. In the future, instead of specifying an emotion class, the model should decide the most appropriate emotion category for the response. However, this may be challenging since such a task depends on the topic, context or the mood of the user.",
"The goal of capturing emotions and having consistent personalities for a conversational agent is an important one. The field is still nascent but advances in the domain will have far reaching consequences for conversational models in general. People tend to trust agents that are emotionally consistent, and in the long term trust is what will decide the fate of large scale adoption of conversational agents."
],
[
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems.",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue.",
"The metrics that take into account the context can also be considered. Such metrics can come in the form of an evaluation model that is learned from data. This model can be either a discriminative model that attempts to distinguish between model and human responses or a model that uses data collected from the human survey in order to provide human-like scores to proposed responses."
],
[
"In this survey paper we explored the exciting and rapidly changing field of conversational agents. We talked about the early rule-based methods that depended on hand-engineered features. These methods laid the ground work for the current models. However these models were expensive to create and the features depended on the domain that the conversational agent was created for. It was hard to modify these models for a new domain. As computation power increased, and we developed neural networks that were able to capture long range dependencies (RNNs,GRUs,LSTMs) the field moved towards neural models for building these agents. Sequence to sequence model created in 2015 was capable of handling utterances of variable lengths, the application of sequence to sequence to conversation agents truly revolutionized the domain. After this advancement the field has literally exploded with numerous application in the last couple of years. The results have been impressive enough to find their way into commercial applications such that these agents have become truly ubiquitous. We attempt to present a broad view of these advancements with a focus on the main challenges encountered by the conversational agents and how these new approaches are trying to mitigate them."
]
],
"section_name": [
"Introduction",
"Early Techniques",
"Machine Learning Methods",
"Sequence to Sequence approaches for dialogue modelling",
"Language Model based approaches for dialogue modelling",
"Knowledge augmented models",
"Reinforcement Learning based models",
"Initial reinforcement methods",
"End-to-End Reinforcement Learning of Dialogue Agents for Information Access",
"Actor-Critic Algorithm",
"Using Generative Adversarial Network",
"Approaches to Human-ize agents",
"Evaluation methods",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1ac3ebcf2b3bf93ba00aa223fd33bef6ed493a80",
"8bf54cd3b3961f6274c0ac935831c32c4e28e9db",
"d5636a22c955dfd4226efab5af79a44f71d84adf"
],
"answer": [
{
"evidence": [
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems.",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"extractive_spans": [
"perplexity and BLEU score are not good enough and correlate very weakly with human judgments",
"word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses",
"metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality"
],
"free_form_answer": "",
"highlighted_evidence": [
"The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments.",
"It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems.",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"extractive_spans": [],
"free_form_answer": "The metrics correlate very weakly with human judgements, word-overlap metrics require too many ground-truth reposnses and embedding-based metrics are insufficiently complex for modeling sentence-level compositionality in dialogue",
"highlighted_evidence": [
"The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. ",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail.",
"Even though most of the modern work in the field is built on this approach there is a significant drawback to this idea. This model can theoretically never solve the problem of modelling dialogues due to various simplifications, the most important of them being the objective function that is being optimized does not capture the actual objective achieved through human communication, which is typically longer term and based on exchange of information rather than next step prediction. It is important to see that optimizing an agent to generate text based on what it sees in the two-turn conversation dataset that it is trained on does not mean that the agent would be able to generalize to human level conversation across contexts. Nevertheless in absence of a better way to capture human communication this approach laid the foundation of most of the modern advances in the field. Another problem that plagues this paper and the field in general is Evaluation. As there can be multiple correct output utterances for a given input utterance there is no quantitative way to evaluate how well a model is performing. In this paper to show the efficacy of their model the authors publish snippets of conversations across different datasets. We discuss this general problem in evaluation later.",
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems.",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"extractive_spans": [
"As there can be multiple correct output utterances for a given input utterance there is no quantitative way to evaluate how well a model is performing.",
"The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. ",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses."
],
"free_form_answer": "",
"highlighted_evidence": [
"Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail.",
" As there can be multiple correct output utterances for a given input utterance there is no quantitative way to evaluate how well a model is performing. In this paper to show the efficacy of their model the authors publish snippets of conversations across different datasets. We discuss this general problem in evaluation later.",
"The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems.",
"According to them, the metrics (like Kiros et al, 2015 BIBREF32 ) that are based on distributed sentence representations hold the most promise for the future. It is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response due to the high diversity of dialogue responses. Similarly, the metrics that are embedding-based consist of basic averages of vectors obtained through distributional semantics and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"855e8b2e20c2ae2da307e7acec078bdcfb812a40",
"c733bcd79d8b2fc894bd36abb600eef226fdd5a3",
"ff12b3d43e4a9d54ca4f83ec9b7caccedece692f"
],
"answer": [
{
"evidence": [
"Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail."
],
"extractive_spans": [
"BLeU",
"perplexity"
],
"free_form_answer": "",
"highlighted_evidence": [
"Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems."
],
"extractive_spans": [
" perplexity and BLEU score"
],
"free_form_answer": "",
"highlighted_evidence": [
"The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail."
],
"extractive_spans": [
"BLeU ",
"perplexity "
],
"free_form_answer": "",
"highlighted_evidence": [
"Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLeU and perplexity borrowed from machine translation. In section SECREF8 we discuss this problem in detail."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"02947ab66fc131e731e4ec6c0d0687d103164b0d",
"45ec5a6ebd77e5578e8d2655f5638bde77555219",
"bc4869034d134b842598fa0a66e9a76f5d346a27"
],
"answer": [
{
"evidence": [
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 BIBREF31 , the authors discuss about how not to evaluate the dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and provide recommendations for the future development of better automatic evaluation metrics for dialogue systems."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Evaluating conversational agents is an open research problem in the field. With the inclusion of emotion component in the modern conversation agents, evaluating such models has become even more complex.The current evaluation methods like perplexity and BLEU score are not good enough and correlate very weakly with human judgments."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In this survey paper we explored the exciting and rapidly changing field of conversational agents. We talked about the early rule-based methods that depended on hand-engineered features. These methods laid the ground work for the current models. However these models were expensive to create and the features depended on the domain that the conversational agent was created for. It was hard to modify these models for a new domain. As computation power increased, and we developed neural networks that were able to capture long range dependencies (RNNs,GRUs,LSTMs) the field moved towards neural models for building these agents. Sequence to sequence model created in 2015 was capable of handling utterances of variable lengths, the application of sequence to sequence to conversation agents truly revolutionized the domain. After this advancement the field has literally exploded with numerous application in the last couple of years. The results have been impressive enough to find their way into commercial applications such that these agents have become truly ubiquitous. We attempt to present a broad view of these advancements with a focus on the main challenges encountered by the conversational agents and how these new approaches are trying to mitigate them."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We attempt to present a broad view of these advancements with a focus on the main challenges encountered by the conversational agents and how these new approaches are trying to mitigate them."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"62d06e5f0b538272a591ce820e65d087caea43f3",
"76928737e257a012b320695c2420f3e46a3e398c",
"9596864035bd64d6bb75e3a060f1a2d40acdb04f"
],
"answer": [
{
"evidence": [
"Reinforcement Learning based models",
"After exploring the neural methods in a lot of detail, the researchers have also begun exploring, in the current decade, how to use the reinforcement learning methods in the dialogue and personal agents.",
"Initial reinforcement methods",
"One of the first main papers that thought of using reinforcement learning for this came in 2005 by English et al BIBREF25 . They used an on-policy Monte Carlo method and the objective function they used was a linear combination of the solution quality (S) and the dialog length (L), taking the form: o(S,I) = INLINEFORM0 - INLINEFORM1 .",
"End-to-End Reinforcement Learning of Dialogue Agents for Information Access",
"Let’s have a look at KB-InfoBot (by Dhingra et al, 2017 BIBREF26 ): a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. In this paper, they replace the symbolic queries (which break the differentiability of the system and prevent end-to-end training of neural dialogue agents) with an induced ‘soft’ posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users.",
"In the paper by Su et al, 2017 BIBREF27 , they proposed a sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. Just for a heads up, actor-critic algorithms are the algorithms that have an actor stores the policy according to which the action is taken by the agent and a critic that critiques the actions chosen by the actor (that is, the rewards obtained after the action are sent to the critic using which it calculates value functions).",
"To speed up the learning process, they presented two sample-efficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER). Both models employ off-policy learning with experience replay to improve sample-efficiency. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues."
],
"extractive_spans": [
"adversarial training for open-domain dialogue generation ",
"trust region actor-critic with experience replay ",
"episodic natural actor-critic with experience replay",
"multi-turn dialogue agent",
"on-policy Monte Carlo method "
],
"free_form_answer": "",
"highlighted_evidence": [
"Reinforcement Learning based models\nAfter exploring the neural methods in a lot of detail, the researchers have also begun exploring, in the current decade, how to use the reinforcement learning methods in the dialogue and personal agents.\n\nInitial reinforcement methods\nOne of the first main papers that thought of using reinforcement learning for this came in 2005 by English et al BIBREF25 . ",
"End-to-End Reinforcement Learning of Dialogue Agents for Information Access\nLet’s have a look at KB-InfoBot (by Dhingra et al, 2017 BIBREF26 ): a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries.",
"In the paper by Su et al, 2017 BIBREF27 , they proposed a sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. ",
"To speed up the learning process, they presented two sample-efficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER).",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Recently, generative adversarial networks are being explored and how they can be used in the dialog agents. Although generative adversarial networks are a topic in itself to explore. However, the paper mentioned below used uses reinforcement learning along with generative adversarial network so we cover it here inside the reinforcement learning methods. They can be used by the applications to generate dialogues similar to humans.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues.",
"The key idea of the system is to encourage the generator to generate utterances that are indistinguishable from human generated dialogues. The policy gradient methods are used to achieve such a goal, in which the score of current utterances being human-generated ones assigned by the discriminator is used as a reward for the generator, which is trained to maximize the expected reward of generated utterances using the REINFORCE algorithm."
],
"extractive_spans": [
"the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances.",
"The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones",
"The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines."
],
"free_form_answer": "",
"highlighted_evidence": [
"Recently, generative adversarial networks are being explored and how they can be used in the dialog agents. Although generative adversarial networks are a topic in itself to explore. However, the paper mentioned below used uses reinforcement learning along with generative adversarial network so we cover it here inside the reinforcement learning methods. They can be used by the applications to generate dialogues similar to humans.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues.",
"The key idea of the system is to encourage the generator to generate utterances that are indistinguishable from human generated dialogues. The policy gradient methods are used to achieve such a goal, in which the score of current utterances being human-generated ones assigned by the discriminator is used as a reward for the generator, which is trained to maximize the expected reward of generated utterances using the REINFORCE algorithm."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Recently, generative adversarial networks are being explored and how they can be used in the dialog agents. Although generative adversarial networks are a topic in itself to explore. However, the paper mentioned below used uses reinforcement learning along with generative adversarial network so we cover it here inside the reinforcement learning methods. They can be used by the applications to generate dialogues similar to humans.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues."
],
"extractive_spans": [
"authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated",
"task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, the paper mentioned below used uses reinforcement learning along with generative adversarial network so we cover it here inside the reinforcement learning methods.",
"In the paper by Li et al, 2017 BIBREF28 , the authors proposed using adversarial training for open-domain dialogue generation such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is considered as a reinforcement learning problem where two systems get jointly trained: a generative model to produce response sequences, and a discriminator (similar to the human evaluator in the Turing test) that distinguishes between the human-generated dialogues and the machine-generated ones. The generative model defines the policy that generates a response given the dialog history and the discriminative model is a binary classifier that takes a sequence of dialog utterances as inputs and outputs whether the input is generated by the humans or machines. The outputs from the discriminator are then used as rewards for the generative model pushing the system to generate dialogues that mostly resemble human dialogues."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1b043f24c7bdf5ee5ac92502ef86af4aa90a6278",
"a0839752b09fb130659f6102aa978fb370f24336",
"eb5d20e453330b336c7062529346a8318c7ecb29"
],
"answer": [
{
"evidence": [
"Sequence to Sequence approaches for dialogue modelling",
"The problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand crafted rules which was both expensive and required domain expertise. Two factors which when combined spell doom for scalbility. All of this changed in 2015 when Vinyals et al proposed an approach BIBREF2 inspired from the recent progress in machine translation BIBREF1 . Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first model which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand crafted features.",
"Language Model based approaches for dialogue modelling",
"Though sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence BIBREF18 . These works argue that a language model is better suited to dialogue modeling, as it learns how the conversation evolves as information progresses."
],
"extractive_spans": [
"Sequence to Sequence approaches for dialogue modelling",
"Language Model based approaches for dialogue modelling"
],
"free_form_answer": "",
"highlighted_evidence": [
"Sequence to Sequence approaches for dialogue modelling\nThe problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand crafted rules which was both expensive and required domain expertise. Two factors which when combined spell doom for scalbility. All of this changed in 2015 when Vinyals et al proposed an approach BIBREF2 inspired from the recent progress in machine translation BIBREF1 . Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first model which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand crafted features.",
"Language Model based approaches for dialogue modelling\nThough sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence BIBREF18 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Sequence to Sequence approaches for dialogue modelling",
"The problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand crafted rules which was both expensive and required domain expertise. Two factors which when combined spell doom for scalbility. All of this changed in 2015 when Vinyals et al proposed an approach BIBREF2 inspired from the recent progress in machine translation BIBREF1 . Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first model which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand crafted features.",
"Language Model based approaches for dialogue modelling",
"Though sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence BIBREF18 . These works argue that a language model is better suited to dialogue modeling, as it learns how the conversation evolves as information progresses."
],
"extractive_spans": [
"Sequence to Sequence approaches",
"Language Model based approaches"
],
"free_form_answer": "",
"highlighted_evidence": [
"Sequence to Sequence approaches for dialogue modelling\nThe problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand crafted rules which was both expensive and required domain expertise. Two factors which when combined spell doom for scalbility. All of this changed in 2015 when Vinyals et al proposed an approach BIBREF2 inspired from the recent progress in machine translation BIBREF1 . Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first model which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand crafted features.",
"Language Model based approaches for dialogue modelling\nThough sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence BIBREF18 . These works argue that a language model is better suited to dialogue modeling, as it learns how the conversation evolves as information progresses."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Sequence to Sequence approaches for dialogue modelling",
"Language Model based approaches for dialogue modelling"
],
"extractive_spans": [
"Sequence to Sequence approaches",
"Language Model "
],
"free_form_answer": "",
"highlighted_evidence": [
"Sequence to Sequence approaches for dialogue modelling",
"Language Model based approaches for dialogue modelling"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2caf527285c79e5d69ace710709b502f3715d6bc",
"d66de509a19f8d99d276ec5efbb7d5c2911e55be"
],
"answer": [
{
"evidence": [
"The speech recognition was done using n-gram statistical model which is then passed to a robust parser based on an extended Context Free Grammar allowing the system to skip unknown words and perform partial parsing. They wrote the grammar based on a combination of their own intuition and a small scale Wizard-of-Oz experiment they ran. The grammar rules used to identify bus stops were generated automatically from the schedule database. After this, they trained a statistical language model on the artificial corpus. In order to make the parsing grammar robust enough to parse fairly ungrammatical, yet understandable sentences, it was kept as general as possible. On making it public, they initially achieved a task success rate of 43.3% for the whole corpus and 43.6 when excluding sessions that did not contain any system-directed speech.",
"After this they tried to increase the performance of the system (Raux et al, 2006 BIBREF10 ). They retrained their acoustic models by performing Baum-Welch optimization on the transcribed data (starting from their original models). Unfortunately, this only brought marginal improvement because the models (semi-continuous HMMs) and algorithms they were using were too simplistic for this task. They improved the turn-taking management abilities of the system by closely analysing the feedback they received. They added more specific strategies, aiming at dealing with problems like noisy environments, too loud or too long utterances, etc. They found that they were able to get a success rate of 79% for the complete dialogues (which was great)."
],
"extractive_spans": [
"semi-continuous HMMs"
],
"free_form_answer": "",
"highlighted_evidence": [
"The speech recognition was done using n-gram statistical model which is then passed to a robust parser based on an extended Context Free Grammar allowing the system to skip unknown words and perform partial parsing. ",
"Unfortunately, this only brought marginal improvement because the models (semi-continuous HMMs) and algorithms they were using were too simplistic for this task. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Maxine Eskenazi and her team initially wanted to build spoken dialog system for the less general sections of the population, such as the elderly and non-native speakers of English. They came up with Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information. Later, this was opened to the general public (Raux et al, 2005 BIBREF9 ). Their work is important in terms of the techniques they used.",
"The speech recognition was done using n-gram statistical model which is then passed to a robust parser based on an extended Context Free Grammar allowing the system to skip unknown words and perform partial parsing. They wrote the grammar based on a combination of their own intuition and a small scale Wizard-of-Oz experiment they ran. The grammar rules used to identify bus stops were generated automatically from the schedule database. After this, they trained a statistical language model on the artificial corpus. In order to make the parsing grammar robust enough to parse fairly ungrammatical, yet understandable sentences, it was kept as general as possible. On making it public, they initially achieved a task success rate of 43.3% for the whole corpus and 43.6 when excluding sessions that did not contain any system-directed speech."
],
"extractive_spans": [
"The speech recognition was done using n-gram statistical model",
"The grammar rules used to identify bus stops were generated automatically from the schedule database",
"they trained a statistical language model on the artificial corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Maxine Eskenazi and her team initially wanted to build spoken dialog system for the less general sections of the population, such as the elderly and non-native speakers of English. They came up with Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information. Later, this was opened to the general public (Raux et al, 2005 BIBREF9 ). Their work is important in terms of the techniques they used.\n\nThe speech recognition was done using n-gram statistical model which is then passed to a robust parser based on an extended Context Free Grammar allowing the system to skip unknown words and perform partial parsing. They wrote the grammar based on a combination of their own intuition and a small scale Wizard-of-Oz experiment they ran. The grammar rules used to identify bus stops were generated automatically from the schedule database. After this, they trained a statistical language model on the artificial corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"413dc8e7f4ba99d27e509d4b1fcf917f839ee09b",
"d3faa85553a8d857d8f653de5fee8c9506a01d19"
],
"answer": [
{
"evidence": [
"Early Techniques",
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the given conversational agent was heavily limited to the types of applications it can perform and its high success rate was more due to that instead of great natural language techniques (relative to recent times)."
],
"extractive_spans": [
"spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries."
],
"free_form_answer": "",
"highlighted_evidence": [
"Early Techniques\nInitially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the given conversational agent was heavily limited to the types of applications it can perform and its high success rate was more due to that instead of great natural language techniques (relative to recent times).",
"In 1995, two researchers (Ball et al, 1995 BIBREF4 ) at Microsoft developed a conversational assistant called Persona which was one of the first true personal assistant similar to what we have in recent times (like Siri, etc). It allowed users the maximum flexibility to express their requests in whatever syntax they found most natural and the interface was based on a broad-coverage NLP system unlike the system discussed in the previous paragraph. In this, a labelled semantic graph is generated from the speech input which encodes case frames or thematic roles. After this, a sequence of graph transformations is applied on it using the knowledge of interaction scenario and application domain. This results into a normalized application specific structure called as task graph which is then matched against the templates (in the application) which represent the normalized task graphs corresponding to all the possible user statements that the assistant understands and the action is then executed. The accuracy was not that good and they did not bother to calculate it. Also, due to the integrated nature of conversational interaction in Persona, the necessary knowledge must be provided to each component of the system. Although it had limitations, it provided a very usable linguistic foundation for conversational interaction.",
"Maxine Eskenazi and her team initially wanted to build spoken dialog system for the less general sections of the population, such as the elderly and non-native speakers of English. They came up with Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information. Later, this was opened to the general public (Raux et al, 2005 BIBREF9 ). Their work is important in terms of the techniques they used."
],
"extractive_spans": [
"allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries",
"conversational assistant called Persona which was one of the first true personal assistant similar to what we have in recent times (like Siri, etc)",
"Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information"
],
"free_form_answer": "",
"highlighted_evidence": [
"In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries.",
"In 1995, two researchers (Ball et al, 1995 BIBREF4 ) at Microsoft developed a conversational assistant called Persona which was one of the first true personal assistant similar to what we have in recent times (like Siri, etc).",
"They came up with Let’s Go project (Raux et al, 2003 BIBREF8 ) that was designed to provide Pittsburgh area bus information."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"92e225f283a7d2ee55babccece2acef650384cb6",
"de2bc7af71902893f8a1a2c2a118228fbe1eb1bc"
],
"answer": [
{
"evidence": [
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the given conversational agent was heavily limited to the types of applications it can perform and its high success rate was more due to that instead of great natural language techniques (relative to recent times)."
],
"extractive_spans": [
"ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 )"
],
"free_form_answer": "",
"highlighted_evidence": [
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Early Techniques",
"Initially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the given conversational agent was heavily limited to the types of applications it can perform and its high success rate was more due to that instead of great natural language techniques (relative to recent times)."
],
"extractive_spans": [
" ESPRIT SUNDIAL project"
],
"free_form_answer": "",
"highlighted_evidence": [
"Early Techniques\nInitially, the interactive dialogue systems were based on and limited to speaker independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993, there came the ESPRIT SUNDIAL project (Peckham et al, 1993 BIBREF3 ) which was aimed at allowing spontaneous conversational inquiries over the telephone for the train timetable and flight enquiries. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"What are the limitations of the currently used quantitative metrics? e.g. why are they not 'good'?",
"What metrics are typically used to compare models?",
"Is there a benchmark to compare the different approaches?",
"What GAN and RL approaches are used?",
"What type of neural models are used?",
"What type of statistical models were used initially?",
"What was the proposed use of conversational agents in pioneering work?",
"What work pioneered the field of conversational agents?"
],
"question_id": [
"f92c344e9b1a986754277fd0f08a47dc3e5f9feb",
"b10388e343868ca8e5c7c601ebb903f52e756e61",
"e8cdeb3a081d51cc143c7090a54c82d393f1a2ca",
"833d3ae7613500f2867ed8b33d233d71781014e7",
"a1a0365bf6968cbdfd1072cf3923c26250bc955c",
"64f7337970e8d1989b2e1f7106d86f73c4a3d0af",
"8fdb4f521d3ba4179f8ccc4c28ba399aab6c3550",
"a0d45b71feb74774cfdc0d5c6e23cd41bc6bc1f2"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1. Agent society for spoken dialogue for tour information service [8]",
"Figure 2. sequence to sequence framework for modelling conversation [3]",
"Figure 3. Hierarchical approach to dialogue modelling. A context RNN summarizes the utterances until that point from the encoder. The decoder produces output utterances based on the hidden state of the context RNN instead of the encoder RNN [14]",
"Figure 4. Example demonstrating how DynoNet augments the conversation [21]",
"Figure 5. The neural architecture of the knowledge grounded model which uses a set of external world facts to augment the output utterance generated bt the model [23]",
"Figure 6. The success rate of TRACER for a random policy, policy trained with corpus data (NN:SL) and further improved via RL (NN:SL+RL) respectively in user simulation under various semantic error rates [8]",
"Figure 7. Visualization of how the persona is integrated in a sequence to sequence style conversational agent [30]"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png",
"10-Figure6-1.png",
"12-Figure7-1.png"
]
} | [
"What are the limitations of the currently used quantitative metrics? e.g. why are they not 'good'?"
] | [
[
"1803.08419-Evaluation methods-0",
"1803.08419-Introduction-5",
"1803.08419-Sequence to Sequence approaches for dialogue modelling-2",
"1803.08419-Evaluation methods-1"
]
] | [
"The metrics correlate very weakly with human judgements, word-overlap metrics require too many ground-truth reposnses and embedding-based metrics are insufficiently complex for modeling sentence-level compositionality in dialogue"
] | 151 |
1908.08917 | A Lost Croatian Cybernetic Machine Translation Program | We are exploring the historical significance of research in the field of machine translation conducted by Bulcsu Laszlo, a Croatian linguist who was a pioneer in machine translation in Yugoslavia during the 1950s. We are focused on two important seminal papers from 1959 and 1962 written by members of his research group, as well as their legacy in establishing a Croatian machine translation program based around the Faculty of Humanities and Social Sciences of the University of Zagreb in the late 1950s and early 1960s. We are exploring their work in connection with the beginnings of machine translation in the USA and USSR, motivated by the Cold War and the intelligence needs of the period. We also present the approach to machine translation advocated by the Croatian group in Yugoslavia, which differs from the usual logical approaches of the period, and Laszlo's advocacy of cybernetic methods, which would be adopted as canonical by the mainstream AI community only decades later.
"paragraphs": [
[
"In this paper, we are exploring the historical significance of Croatian machine translation research group. The group was active in 1950s, and it was conducted by Bulcsu Laszlo, Croatian linguist, who was a pioneer in machine translation during the 1950s in Yugoslavia.",
"To put the research of the Croatian group in the right context, we have to explore the origin of the idea of machine translation. The idea of machine translation is an old one, and its origin is commonly connected with the work of Rene Descartes, i.e. to his idea of universal language, as described in his letter to Mersenne from 20.xi.1629 BIBREF0. Descartes describes universal language as a simplified version of the language which will serve as an “interlanguage” for translation. That is, if we want to translate from English to Croatian, we will firstly translate from English to an “interlanguage”, and then from the “interlanguage” to Croatian. As described later in this paper, this idea had been implemented in the machine translation process, firstly in the Indonesian-to-Russian machine translation system created by Andreev, Kulagina and Melchuk from the early 1960s.",
"In modern times, the idea of machine translation was put forth by the philosopher and logician Yehoshua Bar-Hillel (most notably in BIBREF1 and BIBREF2), whose papers were studied by the Croatian group. Perhaps the most important unrealized point of contact between machine translation and cybernetics happened in the winter of 1950/51. In that period, Bar-Hillel met Rudolf Carnap in Chicago, who introduced to him the (new) idea of cybernetics. Also, Carnap gave him the contact details of his former teaching assistant, Walter Pitts, who was at that moment with Norbert Wiener at MIT and who was supposed to introduce him to Wiener, but the meeting never took place BIBREF3. Nevertheless, Bar-Hillel was to stay at MIT where he, inspired by cybernetics, would go to organize the first machine translation conference in the world in 1952 BIBREF3.",
"The idea of machine translation was a tempting idea in the 1950s. The main military interest in machine translation as an intelligence gathering tool (translation of scientific papers, daily press, technical reports, and everything the intelligence services could get their hands on) was sparked by the Soviet advance in nuclear technology, and would later be compounded by the success of Vostok 1 (termed by the USA as a “strategic surprise”). In the nuclear age, being able to read and understand what the other side was working on was of crucial importance BIBREF4. Machine translation was quickly absorbed in the program of the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (where Artificial Intelligence as a field was born), as one of the five core fields of artificial intelligence (later to be known as natural language processing). One other field was included here, the “nerve nets” as they were known back then, today commonly known as artificial neural networks. What is also essential for our discussion is that the earliest programming language for artificial intelligence, Lisp, was invented in 1958 by John McCarthy BIBREF5. But let us take a closer look at the history of machine translation. In the USA, the first major wave of government and military funding for machine translation came in 1954, and the period of abundancy lasted until 1964, when the National Research Council established the Automatic Language Processing Advisory Committee (ALPAC), which was to assess the results of the ten years of intense funding. The findings were very negative, and funding was almost gone BIBREF4, hence the ALPAC report became the catalyst for the first “AI Winter”.",
"One of the first recorded attempts of producing a machine translation system in the USSR was in 1954 BIBREF6, and the attempt was applauded by the Communist party of the Soviet Union, by the USSR Committee for Science and Technology and the USSR Academy of Sciences. The source does not specify how this first system worked, but it does delineate that the major figures of machine translation of the time were N. Andreev of the Leningrad State University, O. Kulagina and I. Melchuk of the Steklov Mathematical Institute. There is information on an Indonesian-to-Russian machine translation system by Andreev, Kulagina and Melchuk from the early 1960s, but it is reported that the system was ultimately a failure, in the same way early USA systems were. The system had statistical elements set forth by Andreev, but the bulk was logical and knowledge-heavy processing put forth by Kulagina and Melchuk. The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6.",
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.",
"Andreev's approach was in a sense \"external\". The modelling would be statistical, but its purpose would not be to mimic the stochasticity of the human thought process, but rather to produce a working machine translation system. Kulagina and Melchuk disagreed with this approach as they thought that more of what is presently called \"philosophical logic\" was needed to model the human thought process at the symbolic level, and according to them, the formalization of the human thought process was a prerequisite for developing a machine translation system (cf. BIBREF6). We could speculate that sub-symbolic processing would have been acceptable too, since that approach is also rooted in philosophical logic as a way of formalizing human cognitive functions and is also \"internal\" in the same sense symbolic approaches are.",
"There were many other centers for research in machine translation: Gorkovsky University (Omsk), 1st Moscow Institute for Foreign Languages, Computing Centre of the Armenian SSR and at the Institute for Automatics and Telemechanics of the Georgian SSR BIBREF7. It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal."
],
[
"In Yugoslavia, organized effort in machine translation started in 1959, but the first individual effort was made by Vladimir Matković from the Institute for Telecommunications in Zagreb in 1957 in his PhD thesis on entropy in the Croatian language BIBREF10. The main research group in machine translation was formed in 1958, at the Circle for Young Linguists in Zagreb, initiated by a young linguist Bulcsu Laszlo, who graduated in Russian language, Southern Slavic languages and English language and literature at the University of Zagreb in 1952. The majority of the group members came from different departments of the Faculty of Humanities and Social Sciences of the University of Zagreb, with several individuals from other institutions. The members from the Faculty of Humanities and Social Sciences were: Svetozar Petrović (Department of Comparative Literature), Stjepan Babić (Department of Serbo-Croatian Language and Literature), Krunoslav Pranjić (Department of Serbo-Croatian Language and Literature), Željko Bujas (Department of English Language and Literature), Malik Mulić (Department of Russian Language and Literature) and Bulcsu Laszlo (Department of Comparative Slavistics). The members of the research group from outside the Faculty of Humanities and Social Sciences were: Božidar Finka (Institute for Language of the Yugoslav Academy of Sciences and Arts), Vladimir Vranić (Center for Numerical Research of the Yugoslav Academy of Sciences and Arts), Vladimir Matković (Institute for Telecommunications), Vladimir Muljević (Institute for Regulatory and Signal Devices) BIBREF10.",
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort). Another interesting moment, which they delineated in BIBREF11, is that the group soon discovered that some experimental work had already been done in 1957 at the Institute of Telecommunications (today a part of the Faculty of Electrical Engineering and Computing at the University of Zagreb) by Vladimir Matković. Because of this, they decided to include him in the research group of the Faculty of Humanities and Social Sciences at the University of Zagreb. The work done by Matković was documented in his doctoral dissertation but remained unpublished until 1959.",
"The Russian machine translation pioneer Andreev expressed hope that the Yugoslav (Croatian) research group could create a prototype, but sadly, due to the lack of federal funding, this never happened BIBREF10. Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. Appealing probably to the political circles of the time, Laszlo and Petrović note that, although it sounds strange, research in computational linguistics is mainly a top-priority military effort in other countries BIBREF11. There is a quote from BIBREF10 which perhaps best delineates the optimism and energy that the researchers in Zagreb had:",
"\"[...] The process of translation has to mechanicalized as soon as possible, and this is only possible if a competent, fast and inexhaustible machine which could inherit the translation task is created, even if just schematic. The machine needs to think for us. If machines help humans in physical tasks, why would they not help them in mental tasks with their mechanical memory and automated logic\" (p. 118)."
],
[
"Laszlo and Petrović BIBREF11 considered cybernetics (as described in BIBREF13 by Wiener, who invented the term “cybernetics”) to be the best approach for machine translation in the long run. The question is whether Laszlo's idea of cybernetics would drive the research of the group towards artificial neural networks. Laszlo and his group do not go into neural network details (bear in mind that this is 1959, the time of Rosenblatt), but the following passage offers a strong suggestion about the idea they had (bearing in mind that Wiener relates McCulloch and Pitts' ideas in his book): \"Cybernetics is the scientific discipline which studies analogies between machines and living organisms\" (BIBREF11, p. 107). They fully commit to the idea two pages later (BIBREF11, p. 109): \"An important analogy is the one between the functioning of the machine and that of the human nervous system\". This could be taken to mean a simple computer brain analogy in the spirit of BIBREF14 and later BIBREF15, but Laszlo and Petrović specifically said that thinking of cybernetics as the \"theory of electronic computers\" (as they are made) is wrong BIBREF11, since the emphasis should be on modelling analogical processes. There is a very interesting quote from BIBREF11, where Laszlo and Petrović note that \"today, there is a significant effort in the world to make fully automated machine translation possible; to achieve this, logicians and linguists are making efforts on ever more sophisticated problems\". This seems to suggest that they were aware of the efforts of logicians (such as Bar Hillel, and to some degree Pitts, since Wiener specifically mentions logicians-turned-cyberneticists in his book BIBREF13), but still concluded that a cybernetic approach would probably be a better choice.",
"Laszlo and Petrović BIBREF11 argued that, in order to trim the search space, the words would have to be coded so as to retain their information value but to rid the representations of needless redundancies. This was based on previous calculations of language entropy by Matković, and Matković's idea was simple: conduct a statistical analysis to determine the most frequent letters and assign them the shortest binary code. So A would get 101, while F would get 11010011 BIBREF11. Building on that, Laszlo suggested that, when making an efficient machine translation system, one has to take into account not just the letter frequencies but also the redundancies of some of the letters in a word BIBREF16. This suggests that the strategy would be as follows: first make a thesaurus, and pick a representative for each meaning, then stem or lemmatize the words, then remove the needless letters from words (i.e. letters that carry little information, such as vowels, but being careful not to equate two different words), and then encode the words in binary strings, using the letter frequencies. After that, the texts are ready for translation, but unfortunately, the translation method is never explicated. Nevertheless, it is hinted that it should be \"cybernetic\", which, along with what we have presented earlier, would most probably mean artificial neural networks. This is highlighted by the following passage (BIBREF11, p. 117):",
"\"A man who spends 50 years in a lively and multifaceted mental activity hears a billion and a half words. For a machine to have an ability comparable to such an intellectual, not just in terms of speed but also in terms of quality, it has to have a memory and a language sense of the same capacity, and for that - which is paramount - it has to have in-built conduits for concept association and the ability to logically reason and verify, in a word, the ability to learn fast.\"",
"Unfortunately, this idea of using machine learning was never fully developed, and the Croatian group followed the Soviet approach(es) closely. Pranjić BIBREF17 analyses and extrapolates five basic ideas in the Soviet Machine Translation program, which were the basis for the Croatian approach:",
"Separation of the dictionary from the MT algorithm",
"Separation of the understanding and generation modules of the MT algorithms",
"All words need to be lemmatized",
"The word lemma should be the key of the dictionary, but other forms of the word must be placed as a list in the value next to the key",
"Use context to determine the meaning of polysemous words.",
"The dictionary that was mentioned before is, in fact, the intermediary language, and all the necessary knowledge should be placed in this dictionary, the keys should ideally be just abstract codes, and everything else would reside and be accessible as values next to the keys BIBREF12. Petrović, when discussing the translation of poetry BIBREF18, noted that ideally, machine translation should be from one language to another, without the use of an intermediate language of meanings.",
"Finka and Laszlo envisioned three main data preparation tasks that are needed before prototype development could commence BIBREF10. The first task is to compile a dictionary of words sorted from the end of the word to the beginning. This would enable the development of what is now called stemming and lemmatization modules: a knowledge base with suffixes so they can be trimmed, but also a systematic way to find the base of the word (lemmatization) (p. 121). The second task would be to make a word frequency table. This would enable focusing on a few thousand most frequent words and dropping the rest. This is currently a good industrial practice for building efficient natural language processing systems, and in 1962, it was a computational necessity. The last task was to create a good thesaurus, but such a thesaurus where every data point has a \"meaning\" as the key, and words (synonyms) as values. The prototype would then operate on these meanings when they become substituted for words.",
"But what are those meanings? The algorithm to be used was a simple statistical alignment algorithm (in hopes of capturing semantics) described in BIBREF12 on a short Croatian sentence \"čovjek [noun-subject] puši [verb-predicate] lulu [noun-objective]\" (A man is smoking a pipe). The first step would be to parse and lemmatize. Nouns in Croatian have seven cases just in the singular, with different suffixes, for example:",
"ČOVJEK - Nominative singular",
"ČOVJEKA - Genitive singular",
"ČOVJEKU - Dative singular",
"ČOVJEKA - Accusative singular",
"ČOVJEČE - Vocative singular",
"ČOVJEKU - Locative singular",
"ČOVJEKOM - Instrumental singular",
"Although morphologically transparent, the lemma in the mentioned case would be “ČOVJEK-”; there is a voice change in the Vocative case, so for the purpose of translation, “ČOVJE-” would be the “lemma”. The other two lemmas are PUš- and LUL-.",
"The thesaurus would have multiple entries for each lemma, and they would be ordered by descending frequency (if the group actually made a prototype, they would have realized that this simple frequency count was not enough to avoid only the first meaning to be used). The dictionary entry for ČOVJE- (using modern JSON notation) is:",
"\"ČOVJE-\": \"mankind\": 193.5: \"LITTLENESS\", 690.2: \"AGENT\", \"man\": 554.4: \"REPRESENTATION\", 372.1: \"MANKIND\", 372.3: \"MANKIND\" ..., ...",
"The meaning of the numbers used is never explained, but they would probably be used for cross-referencing word categories.",
"After all the lemmas comprising the sentence have been looked up in this dictionary, the next step is to keep only the inner values and discard the inner keys, thus collapsing the list, so that the example above would become:",
"\"COVJE-\": 193.5: \"LITTLENESS\", 690.2: \"AGENT\", 554.4: \"REPRESENTATION\", 372.1: \"MANKIND\", 372.3: \"MANKIND\" ...",
"Next, the most frequently occurring meaning would be kept, but only if it grammatically fits the final sentence. One can extrapolate that it is tacitly assumed that the grammatical structure of the source language matches the target language, and to do this, a kind of categorical grammar similar to Lambek calculus BIBREF19 would have to be used. It seems that the Croatian group was not aware of the paper by Lambek (but only of Bar-Hillel's papers), so they did not elaborate this part.",
"Finka BIBREF20 notes that Matković, in his dissertation from 1957, considered the use of bigrams and trigrams to “help model the word context”. It is not clear whether Finka means character bigrams, which was computationally feasible at the time, or word bigrams, which was not feasible, but the suggestion of modelling the word context does point in this direction. Even though the beginnings of using character bigrams can be traced back to Claude Shannon BIBREF21, using character-level bigrams in natural language processing was studied extensively only by Gilbert and Moore BIBREF22. It can be argued, that in a sense, Matković predated these results, but his research and ideas were not known in the west, and he was not cited. The successful use of word bigrams in text classification had to wait until BIBREF23. The long time it took to get from character to words was mainly due to computational limitations, but Matković's ideas are not to be dismissed lightly on account of computational complexity, since the idea of using word bigrams was being explored by the Croatian group–perhaps the reason for considering such an idea was the lack of a computer and the underestimation of the memory requirements. The whole process described above is illustrated in Fig. 1.",
"",
"Several remarks are in order. First, the group seemed to think that encodings would be needed, but it seems that entropy-based encodings and calculations added no real benefits (i.e. added no benefit that would not be offset by the cost of calculating the codes). In addition, Finka and Laszlo BIBREF10 seem to place great emphasis on lemmatization instead of stemming, which, if they had constructed a prototype, they would have noticed it to be very hard to tackle with the technology of the age. Nevertheless, the idea of proper lemmatization would probably be replaced with moderately precise hard-coded stemming, made with the help of the \"inverse dictionary\", which Finka and Laszlo proposed as one of the key tasks in their 1962 paper. This paper also highlights the need for a frequency count and taking only the most frequent words, which is an approach that later became widely used in the natural language processing community. Sentential alignment coupled with part-of-speech tagging was correctly identified as one of the key aspects of machine translation, but its complexity was severely underestimated by the group. One might argue that these two modules are actually everything that is needed for a successful machine translation system, which shows the complexity of the task.",
"As noted earlier, the group had no computer available to build a prototype, and subsequently, they have underestimated the complexity of determining sentential alignment. Sentential alignment seems rather trivial from a theoretical standpoint, but it could be argued that machine translation can be reduced to sentential alignment. This reduction vividly suggests the full complexity of sentential alignment. But the complexity of alignment was not evident at the time, and only several decades after the Croatian group's dissolution, in the late 1990s, did the group centered around Tillmann and Ney start to experiment with statistical models using (non-trivial) alignment modules, and producing state-of-the-art results (cf. BIBREF24) and BIBREF25. However, this was statistical learning, and it would take another two decades for sentential alignment to be implemented in cybernetic models, by then known under a new name, deep learning. Alignment was implemented in deep neural networks by BIBREF26 and BIBREF27, but a better approach, called attention, which is a trainable alignment module, was being developed in parallel, starting with the seminal paper on attention in computer vision by BIBREF28."
],
[
"At this point, we are leaving the historical analysis behind to speculate on what the group might have discovered if they had had access to a computer. First of all, did the Croatian group have a concrete idea for tackling alignment? Not really. However, an approach can be read between the lines of primarily BIBREF16 and BIBREF17. In BIBREF17, Pranić addresses the Soviet model by Andreev, looking at it as if it was composed of two modules – an understanding module and a generation module. Following the footsteps of Andreev, their interaction should be over an idealized language. Laszlo BIBREF16 notes that such an idealized language should be encoded by keeping the entropy in mind. He literally calls for using entropy to eliminate redundancy while translating to an artificial language, and as Mulić notes BIBREF7, Andreev's idea (which should be followed) was to use an artificial language as an intermediary language, which has all the essential structures of all the languages one wishes to translate.",
"The step which was needed here was to eliminate the notion of structure alignment and just seek sentential alignment. This, in theory, can be done by using only entropy. A simple alignment could be made by using word entropies in both languages and aligning the words by decreasing entropy. This would work better for translating into a language with no articles. A better approach, which was not beyond the thinking of the group since it was already proposed by Matković in his dissertation from 1957 BIBREF20, would be to use word bigrams and align them. It is worth mentioning that, although the idea of machine translation in the 1950s in Croatia did not have a significant influence on development of the field, it shows that Croatian linguists had contemporary views and necessary competencies for its development. But, unfortunately, the development of machine translation in Croatia had been stopped because of the previously discussed circumstances. In 1964, Laszlo went to the USA, where he spent the next seven years, and after returning to Croatia, he was active as a university professor, but because of disagreement with the ruling political option regarding Croatian language issues, he published very rarely and was mainly focused on other linguistic issues in that period, but his work was a major influence on the later development of computational linguistics in Croatia."
]
],
"section_name": [
"Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR",
"The formation of the Croatian group in Zagreb",
"Contributions of the Croatian group",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"137e12a3d7a11ea21636cadccb21911c42e7bb6e",
"9c150f08c71b0bf844872d373e0e9661bb27752f",
"e3ea1c1273691b205f4c5ee9b210e9349e4bc7d7"
],
"answer": [
{
"evidence": [
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort). Another interesting moment, which they delineated in BIBREF11, is that the group soon discovered that some experimental work had already been done in 1957 at the Institute of Telecommunications (today a part of the Faculty of Electrical Engineering and Computing at the University of Zagreb) by Vladimir Matković. Because of this, they decided to include him in the research group of the Faculty of Humanities and Social Sciences at the University of Zagreb. The work done by Matković was documented in his doctoral dissertation but remained unpublished until 1959."
],
"extractive_spans": [
"lagging only a couple of years behind the research of the superpowers"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort). Another interesting moment, which they delineated in BIBREF11, is that the group soon discovered that some experimental work had already been done in 1957 at the Institute of Telecommunications (today a part of the Faculty of Electrical Engineering and Computing at the University of Zagreb) by Vladimir Matković. Because of this, they decided to include him in the research group of the Faculty of Humanities and Social Sciences at the University of Zagreb. The work done by Matković was documented in his doctoral dissertation but remained unpublished until 1959.",
"The Russian machine translation pioneer Andreev expressed hope that the Yugoslav (Croatian) research group could create a prototype, but sadly, due to the lack of federal funding, this never happened BIBREF10. Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. Appealing probably to the political circles of the time, Laszlo and Petrović note that, although it sounds strange, research in computational linguistics is mainly a top-priority military effort in other countries BIBREF11. There is a quote from BIBREF10 which perhaps best delineates the optimism and energy that the researchers in Zagreb had:",
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.",
"The idea of machine translation was a tempting idea in the 1950s. The main military interest in machine translation as an intelligence gathering tool (translation of scientific papers, daily press, technical reports, and everything the intelligence services could get their hands on) was sparked by the Soviet advance in nuclear technology, and would later be compounded by the success of Vostok 1 (termed by the USA as a “strategic surprise”). In the nuclear age, being able to read and understand what the other side was working on was of crucial importance BIBREF4. Machine translation was quickly absorbed in the program of the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (where Artificial Intelligence as a field was born), as one of the five core fields of artificial intelligence (later to be known as natural language processing). One other field was included here, the “nerve nets” as they were known back then, today commonly known as artificial neural networks. What is also essential for our discussion is that the earliest programming language for artificial intelligence, Lisp, was invented in 1958 by John McCarthy BIBREF5. But let us take a closer look at the history of machine translation. In the USA, the first major wave of government and military funding for machine translation came in 1954, and the period of abundancy lasted until 1964, when the National Research Council established the Automatic Language Processing Advisory Committee (ALPAC), which was to assess the results of the ten years of intense funding. The findings were very negative, and funding was almost gone BIBREF4, hence the ALPAC report became the catalyst for the first “AI Winter”."
],
"extractive_spans": [],
"free_form_answer": "Author of this research noted the USA prototype effort from 1954 and research papers in 1955as well as USSR effort from 1955. ",
"highlighted_evidence": [
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort).",
"Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. ",
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. ",
"Machine translation was quickly absorbed in the program of the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (where Artificial Intelligence as a field was born), as one of the five core fields of artificial intelligence (later to be known as natural language processing)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR",
"There were many other centers for research in machine translation: Gorkovsky University (Omsk), 1st Moscow Institute for Foreign Languages, Computing Centre of the Armenian SSR and at the Institute for Automatics and Telemechanics of the Georgian SSR BIBREF7. It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal."
],
"extractive_spans": [
"It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal."
],
"free_form_answer": "",
"highlighted_evidence": [
"Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR",
"It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"752265adb8327c7d1b6c8a0408e483ce6b0d6593",
"b871c6dee8d5b10a74d32a1419ee21fb983c3c02",
"efc4623b10ca4ebd219f5447f1f1e8e1db7aaec8"
],
"answer": [
{
"evidence": [
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.",
"The step which was needed here was to eliminate the notion of structure alignment and just seek sentential alignment. This, in theory, can be done by using only entropy. A simple alignment could be made by using word entropies in both languages and aligning the words by decreasing entropy. This would work better for translating into a language with no articles. A better approach, which was not beyond the thinking of the group since it was already proposed by Matković in his dissertation from 1957 BIBREF20, would be to use word bigrams and align them. It is worth mentioning that, although the idea of machine translation in the 1950s in Croatia did not have a significant influence on development of the field, it shows that Croatian linguists had contemporary views and necessary competencies for its development. But, unfortunately, the development of machine translation in Croatia had been stopped because of the previously discussed circumstances. In 1964, Laszlo went to the USA, where he spent the next seven years, and after returning to Croatia, he was active as a university professor, but because of disagreement with the ruling political option regarding Croatian language issues, he published very rarely and was mainly focused on other linguistic issues in that period, but his work was a major influence on the later development of computational linguistics in Croatia."
],
"extractive_spans": [
"the lack of funding"
],
"free_form_answer": "",
"highlighted_evidence": [
"It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.",
"But, unfortunately, the development of machine translation in Croatia had been stopped because of the previously discussed circumstances. In 1964, Laszlo went to the USA, where he spent the next seven years, and after returning to Croatia, he was active as a university professor, but because of disagreement with the ruling political option regarding Croatian language issues, he published very rarely and was mainly focused on other linguistic issues in that period, but his work was a major influence on the later development of computational linguistics in Croatia."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort). Another interesting moment, which they delineated in BIBREF11, is that the group soon discovered that some experimental work had already been done in 1957 at the Institute of Telecommunications (today a part of the Faculty of Electrical Engineering and Computing at the University of Zagreb) by Vladimir Matković. Because of this, they decided to include him in the research group of the Faculty of Humanities and Social Sciences at the University of Zagreb. The work done by Matković was documented in his doctoral dissertation but remained unpublished until 1959."
],
"extractive_spans": [
" poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers"
],
"free_form_answer": "",
"highlighted_evidence": [
"Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Russian machine translation pioneer Andreev expressed hope that the Yugoslav (Croatian) research group could create a prototype, but sadly, due to the lack of federal funding, this never happened BIBREF10. Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. Appealing probably to the political circles of the time, Laszlo and Petrović note that, although it sounds strange, research in computational linguistics is mainly a top-priority military effort in other countries BIBREF11. There is a quote from BIBREF10 which perhaps best delineates the optimism and energy that the researchers in Zagreb had:"
],
"extractive_spans": [
"the lack of federal funding",
"Laszlo’s group had to manage without an actual computer"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Russian machine translation pioneer Andreev expressed hope that the Yugoslav (Croatian) research group could create a prototype, but sadly, due to the lack of federal funding, this never happened BIBREF10. Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. Appealing probably to the political circles of the time, Laszlo and Petrović note that, although it sounds strange, research in computational linguistics is mainly a top-priority military effort in other countries BIBREF11. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"507b5cfc293e2fc288a52842589ed5f32b265833",
"8c2e20d85f02e0b398b3c7bd4bf6388a142d15d4"
],
"answer": [
{
"evidence": [
"Finka and Laszlo envisioned three main data preparation tasks that are needed before prototype development could commence BIBREF10. The first task is to compile a dictionary of words sorted from the end of the word to the beginning. This would enable the development of what is now called stemming and lemmatization modules: a knowledge base with suffixes so they can be trimmed, but also a systematic way to find the base of the word (lemmatization) (p. 121). The second task would be to make a word frequency table. This would enable focusing on a few thousand most frequent words and dropping the rest. This is currently a good industrial practice for building efficient natural language processing systems, and in 1962, it was a computational necessity. The last task was to create a good thesaurus, but such a thesaurus where every data point has a \"meaning\" as the key, and words (synonyms) as values. The prototype would then operate on these meanings when they become substituted for words."
],
"extractive_spans": [
"compile a dictionary of words sorted from the end of the word to the beginning",
"make a word frequency table",
"create a good thesaurus"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first task is to compile a dictionary of words sorted from the end of the word to the beginning. This would enable the development of what is now called stemming and lemmatization modules: a knowledge base with suffixes so they can be trimmed, but also a systematic way to find the base of the word (lemmatization) (p. 121). The second task would be to make a word frequency table. This would enable focusing on a few thousand most frequent words and dropping the rest. This is currently a good industrial practice for building efficient natural language processing systems, and in 1962, it was a computational necessity. The last task was to create a good thesaurus, but such a thesaurus where every data point has a \"meaning\" as the key, and words (synonyms) as values. ",
"The first task is to compile a dictionary of words sorted from the end of the word to the beginning. This would enable the development of what is now called stemming and lemmatization modules: a knowledge base with suffixes so they can be trimmed, but also a systematic way to find the base of the word (lemmatization) (p. 121). The second task would be to make a word frequency table. This would enable focusing on a few thousand most frequent words and dropping the rest. This is currently a good industrial practice for building efficient natural language processing systems, and in 1962, it was a computational necessity. The last task was to create a good thesaurus, but such a thesaurus where every data point has a \"meaning\" as the key, and words (synonyms) as values. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Separation of the dictionary from the MT algorithm",
"Separation of the understanding and generation modules of the MT algorithms",
"All words need to be lemmatized",
"The word lemma should be the key of the dictionary, but other forms of the word must be placed as a list in the value next to the key",
"Use context to determine the meaning of polysemous words."
],
"extractive_spans": [
"Separation of the dictionary from the MT algorithm",
"Separation of the understanding and generation modules of the MT algorithms",
"All words need to be lemmatized",
"The word lemma should be the key of the dictionary,",
"Use context to determine the meaning of polysemous words."
],
"free_form_answer": "",
"highlighted_evidence": [
"Separation of the dictionary from the MT algorithm\n\nSeparation of the understanding and generation modules of the MT algorithms\n\nAll words need to be lemmatized\n\nThe word lemma should be the key of the dictionary, but other forms of the word must be placed as a list in the value next to the key\n\nUse context to determine the meaning of polysemous words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"06c3f590c1639d8ddd18b738dbcb88a83a19505e",
"28d03283867578919f8b6d83664be78e5290523a",
"db2b1012b1ed03d40dbf6bde5f6a6bb46125cf26"
],
"answer": [
{
"evidence": [
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding."
],
"extractive_spans": [
"They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype.",
"Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian.",
"Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards.",
" Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7."
],
"free_form_answer": "",
"highlighted_evidence": [
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"One of the first recorded attempts of producing a machine translation system in the USSR was in 1954 BIBREF6, and the attempt was applauded by the Communist party of the Soviet Union, by the USSR Committee for Science and Technology and the USSR Academy of Sciences. The source does not specify how this first system worked, but it does delineate that the major figures of machine translation of the time were N. Andreev of the Leningrad State University, O. Kulagina and I. Melchuk of the Steklov Mathematical Institute. There is information on an Indonesian-to-Russian machine translation system by Andreev, Kulagina and Melchuk from the early 1960s, but it is reported that the system was ultimately a failure, in the same way early USA systems were. The system had statistical elements set forth by Andreev, but the bulk was logical and knowledge-heavy processing put forth by Kulagina and Melchuk. The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6."
],
"extractive_spans": [
"to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages"
],
"free_form_answer": "",
"highlighted_evidence": [
" The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"One of the first recorded attempts of producing a machine translation system in the USSR was in 1954 BIBREF6, and the attempt was applauded by the Communist party of the Soviet Union, by the USSR Committee for Science and Technology and the USSR Academy of Sciences. The source does not specify how this first system worked, but it does delineate that the major figures of machine translation of the time were N. Andreev of the Leningrad State University, O. Kulagina and I. Melchuk of the Steklov Mathematical Institute. There is information on an Indonesian-to-Russian machine translation system by Andreev, Kulagina and Melchuk from the early 1960s, but it is reported that the system was ultimately a failure, in the same way early USA systems were. The system had statistical elements set forth by Andreev, but the bulk was logical and knowledge-heavy processing put forth by Kulagina and Melchuk. The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6.",
"In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time algorithms over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they have built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding."
],
"extractive_spans": [
"The idea was to have a logical intermediate language"
],
"free_form_answer": "",
"highlighted_evidence": [
"The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6.",
"The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"00f44375587100ea64690b8a236ccefbc74eb06f",
"530b6023b8fded9308e8aea335366486f9a1f997"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How does this research compare to research going on in the US and USSR at this time?",
"What is the reason this research was not adopted in the 1960s?",
"What is included in the cybernetic methods mentioned?",
"What were the usual logical approaches of the time period?",
"What language was this research published in?"
],
"question_id": [
"89414ef7fcb2709c47827f30a556f543b9a9e6e0",
"faffcc6ef27c1441e6528f924e320368430d8da3",
"afad388a0141bdda5ca9586803ac53d5f10f41f6",
"baaa6ad7148b785429a20f38786cd03ab9a2646e",
"de346decb1fbca8746b72c78ea9d1208902f5e0a"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"expert machine translation",
"expert machine translation",
"expert machine translation",
"expert machine translation",
"expert machine translation"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1."
],
"file": [
"9-Figure1-1.png"
]
} | [
"How does this research compare to research going on in the US and USSR at this time?"
] | [
[
"1908.08917-Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR-7",
"1908.08917-Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR-5",
"1908.08917-The formation of the Croatian group in Zagreb-1",
"1908.08917-Beginnings of Machine Translation and Artificial Intelligence in the USA and USSR-3",
"1908.08917-The formation of the Croatian group in Zagreb-2"
]
] | [
"Author of this research noted the USA prototype effort from 1954 and research papers in 1955as well as USSR effort from 1955. "
] | 152 |
1804.07445 | Sentence Simplification with Memory-Augmented Neural Networks | Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments. | {
"paragraphs": [
[
"The goal of sentence simplification is to compose complex sentences into simpler ones so that they are more comprehensible and accessible, while still retaining the original information content and meaning. Sentence simplification has a number of practical applications. On one hand, it provides reading aids for people with limited language proficiency BIBREF1 , BIBREF2 , or for patients with linguistic and cognitive disabilities BIBREF3 . On the other hand, it can improve the performance of other NLP tasks BIBREF4 , BIBREF5 , BIBREF6 . Prior work has explored monolingual machine translation (MT) approaches, utilizing corpora of simplified texts, e.g., Simple English Wikipedia (SEW), and making use of statistical MT models, such as phrase-based MT (PBMT) BIBREF7 , BIBREF8 , BIBREF9 , tree-based MT (TBMT) BIBREF10 , BIBREF11 , or syntax-based MT (SBMT) BIBREF12 .",
"Inspired by the success of neural MT BIBREF13 , BIBREF14 , recent work has started exploring neural simplification with sequence to sequence (Seq2seq) models, also referred to as encoder-decoder models. Nisioi et al. Nisioi:17 implemented a standard LSTM-based Seq2seq model and found that they outperform PBMT, SBMT, and unsupervised lexical simplification approaches. Zhang and Lapata BIBREF15 viewed the encoder-decoder model as an agent and employed a deep reinforcement learning framework in which the reward has three components capturing key aspects of the target output: simplicity, relevance, and fluency. The common practice for Seq2seq models is to use recurrent neural networks (RNNs) with Long Short-Term Memory BIBREF16 or Gated Recurrent Unit BIBREF17 for the encoder and decoder BIBREF18 , BIBREF15 . These architectures were designed to be capable of memorizing long-term dependencies across sequences. Nevertheless, their memory is typically small and might not be enough for the simplification task, where one is confronted with long and complicated sentences. In this study, we go beyond the conventional LSTM/GRU-based Seq2seq models and propose to use a memory-augmented RNN architecture called Neural Semantic Encoders (NSE). This architecture has been shown to be effective in a wide range of NLP tasks BIBREF0 . The contribution of this paper is twofold:",
"(1) First, we present a novel simplification model which is, to the best of our knowledge, the first model that use memory-augmented RNN for the task. We investigate the effectiveness of neural Seq2seq models when different neural architectures for the encoder are considered. Our experiments reveal that the NseLstm model that uses an NSE as the encoder and an LSTM as the decoder performed the best among these models, improving over strong simplification systems. (2) Second, we perform an extensive evaluation of various approaches proposed in the literature on different datasets. Results of both automatic and human evaluation show that our approach is remarkably effective for the task, significantly reducing the reading difficulty of the input, while preserving grammaticality and the original meaning. We further discuss some advantages and disadvantages of these approaches."
],
[
"Our approach is based on an attention-based Seq2seq model BIBREF19 (Figure FIGREF1 ). Given a complex source sentence INLINEFORM0 , the model learns to generate its simplified version INLINEFORM1 . The encoder reads through INLINEFORM2 and computes a sequence of hidden states INLINEFORM3 :",
" INLINEFORM0 ,",
"where INLINEFORM0 is a non-linear activation function (e.g., LSTM), INLINEFORM1 is the hidden state at time INLINEFORM2 . Each time the model generates a target word INLINEFORM3 , the decoder looks at a set of positions in the source sentence where the most relevant information is located. Specifically, another non-linear activation function INLINEFORM4 is used for the decoder where the hidden state INLINEFORM5 at time INLINEFORM6 is computed by:",
" INLINEFORM0 .",
"Here, the context vector INLINEFORM0 is computed as a weighted sum of the hidden vectors INLINEFORM1 :",
" INLINEFORM0 , INLINEFORM1 ,",
"where INLINEFORM0 is the dot product of two vectors. Generation is conditioned on INLINEFORM1 and all the previously generated target words INLINEFORM2 :",
" INLINEFORM0 ,",
" INLINEFORM0 ,",
"where INLINEFORM0 is some non-linear function. The training objective is to minimize the cross-entropy loss of the training source-target pairs."
],
[
"An RNN allows us to compute a hidden state INLINEFORM0 of each word summarizing the preceding words INLINEFORM1 , but not considering the following words INLINEFORM2 that might also be useful for simplification. An alternative approach is to use a bidirectional-RNN BIBREF20 . Here, we propose to use Neural Semantic Encoders BIBREF21 . During each encoding time step INLINEFORM3 , we compute a memory matrix INLINEFORM4 where INLINEFORM5 is the dimensionality of the word vectors. This matrix is initialized with the word vectors and is refined over time through NSE's functions to gain a better understanding of the input sequence. Concretely, NSE sequentially reads the tokens INLINEFORM6 with its read function:",
" INLINEFORM0 ,",
"where INLINEFORM0 is an LSTM, INLINEFORM1 is the hidden state at time INLINEFORM2 . Then, a compose function is used to compose INLINEFORM3 with relevant information retrieved from the memory at the previous time step, INLINEFORM4 :",
" INLINEFORM0 ,",
"where INLINEFORM0 is a multi-layer perceptron with one hidden layer, INLINEFORM1 is the output vector, and INLINEFORM2 is a linear combination of the memory slots of INLINEFORM3 , weighted by INLINEFORM4 :",
" INLINEFORM0 , INLINEFORM1 ",
"Here, INLINEFORM0 is the INLINEFORM1 row of the memory matrix at time INLINEFORM2 , INLINEFORM3 . Next, a write function is used to map INLINEFORM4 to the encoder output space:",
" INLINEFORM0 ,",
"where INLINEFORM0 is an LSTM, INLINEFORM1 is the hidden state at time INLINEFORM2 . Finally, the memory is updated accordingly. The retrieved memory content pointed by INLINEFORM3 is erased and the new content is added:",
" INLINEFORM0 ",
"NSE gives us unrestricted access to the entire source sequence stored in the memory. As such, the encoder may attend to relevant words when encoding each word. The sequence INLINEFORM0 is then used as the sequence INLINEFORM1 in Section SECREF2 ."
],
[
"We differ from the approach of Zhang et al. Zhang:17 in the sense that we implement both a greedy strategy and a beam-search strategy to generate the target sentence. Whereas the greedy decoder always chooses the simplification candidate with the highest log-probability, the beam-search decoder keeps a fixed number (beam) of the highest scoring candidates at each time step. We report the best simplification among the outputs based on automatic evaluation measures."
],
[
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
[
"We implemented two attention-based Seq2seq models, namely: (1) LstmLstm: the encoder is implemented by two LSTM layers; (2) NseLstm: the encoder is implemented by NSE. The decoder in both cases is implemented by two LSTM layers. The computations for a single model are run on an NVIDIA Titan-X GPU. For all experiments, our models have 300-dimensional hidden states and 300-dimensional word embeddings. Parameters were initialized from a uniform distribution [-0.1, 0.1). We used the same hyperparameters across all datasets. Word embeddings were initialized either randomly or with Glove vectors BIBREF24 pre-trained on Common Crawl data (840B tokens), and fine-tuned during training. We used a vocabulary size of 20K for Newsela, and 30K for WikiSmall and WikiLarge. Our models were trained with a maximum number of 40 epochs using Adam optimizer BIBREF25 with step size INLINEFORM0 for LstmLstm, and INLINEFORM1 for NseLstm, the exponential decay rates INLINEFORM2 . The batch size is set to 32. We used dropout BIBREF26 for regularization with a dropout rate of 0.3. For beam search, we experimented with beam sizes of 5 and 10. Following BIBREF27 , we replaced each out-of-vocabulary token INLINEFORM3 with the source word INLINEFORM4 with the highest alignment score INLINEFORM5 , i.e., INLINEFORM6 .",
"Our models were tuned on the development sets, either with BLEU BIBREF28 that scores the output by counting INLINEFORM0 -gram matches with the reference, or SARI BIBREF12 that compares the output against both the reference and the input sentence. Both measures are commonly used to automatically evaluate the quality of simplification output. We noticed that SARI should be used with caution when tuning neural Seq2seq simplification models. Since SARI depends on the differences between a system's output and the input sentence, large differences may yield very good SARI even though the output is ungrammatical. Thus, when tuning with SARI, we ignored epochs in which the BLEU score of the output is too low, using a threshold INLINEFORM1 . We set INLINEFORM2 to 22 on Newsela, 33 on WikiSmall, and 77 on WikiLarge."
],
[
"We compared our models, either tuned with BLEU (-B) or SARI (-S), against systems reported in BIBREF15 , namely Dress, a deep reinforcement learning model, Dress-Ls, a combination of Dress and a lexical simplification model BIBREF15 , Pbmt-R, a PBMT model with dissimilarity-based re-ranking BIBREF9 , Hybrid, a hybrid semantic-based model that combines a simplification model and a monolingual MT model BIBREF29 , and Sbmt-Sari, a SBMT model with simplification-specific components. BIBREF12 ."
],
[
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
[
"The results of the automatic evaluation are displayed in Table TABREF15 . We first discuss the results on Newsela that contains high-quality simplifications composed by professional editors. In terms of BLEU, all neural models achieved much higher scores than Pbmt-R and Hybrid. NseLstm-B scored highest with a BLEU score of 26.31. With regard to SARI, NseLstm-S scored best among neural models (29.58) and came close to the performance of Hybrid (30.00). This indicates that NSE offers an effective means to better encode complex sentences for sentence simplification.",
"On WikiSmall, Hybrid – the current state-of-the-art – achieved best BLEU (53.94) and SARI (30.46) scores. Among neural models, NseLstm-B yielded the highest BLEU score (53.42), while NseLstm-S performed best on SARI (29.75). On WikiLarge, again, NseLstm-B had the highest BLEU score of 92.02. Sbmt-Sari – that was trained on a huge corpus of 106M sentence pairs and 2B words – scored highest on SARI with 39.96, followed by Dress-Ls (37.27), Dress (37.08), and NseLstm-S (36.88)."
],
[
"The results of human judgments are displayed in Table TABREF16 . On Newsela, NseLstm-B scored highest on Fluency. Pbmt-R was significantly better than all other systems on Adequacy while LstmLstm-S performed best on Simplicity. NseLstm-B did very well on both Adequacy and Simplicity, and was best in terms of Average. Example model outputs on Newsela are provided in Table TABREF18 .",
"On WikiSmall, NseLstm-B performed best on both Fluency and Adequacy. On WikiLarge, LstmLstm-B achieved the highest Fluency score while NseLstm-B received the highest Adequacy score. In terms of Simplicity and Average, NseLstm-S outperformed all other systems on both WikiSmall and WikiLarge.",
"As shown in Table TABREF16 , neural models often outperformed traditional systems (Pbmt-R, Hybrid, Sbmt-Sari) on Fluency. This is not surprising given the recent success of neural Seq2seq models in language modeling and neural machine translation BIBREF30 , BIBREF27 . On the downside, our manual inspection reveals that neural models learn to perform copying very well in terms of rewrite operations (e.g., copying, deletion, reordering, substitution), often outputting the same or parts of the input sentence.",
"Finally, as can be seen in Table TABREF16 , Reference scored lower on Adequacy compared to Fluency and Simplicity on Newsela. On Wikipedia-based datasets, Reference obtained high Adequacy scores but much lower Simplicity scores compared to Newsela. This supports the assertion by previous work BIBREF22 that SEW has a large proportion of inadequate simplifications."
],
[
"Table TABREF20 shows the correlations between the scores assigned by humans and the automatic evaluation measures. There is a positive significant correlation between Fluency and Adequacy (0.69), but a negative significant correlation between Adequacy and Simplicity (-0.64). BLEU correlates well with Fluency (0.63) and Adequacy (0.90) while SARI correlates well with Simplicity (0.73). BLEU and SARI show a negative significant correlation (-0.54). The results reflect the challenge of managing the trade-off between Fluency, Adequacy and Simplicity in sentence simplification."
],
[
"In this paper, we explore neural Seq2seq models for sentence simplification. We propose to use an architecture with augmented memory capacities which we believe is suitable for the task, where one is confronted with long and complex sentences. Results of both automatic and human evaluation on different datasets show that our model is capable of significantly reducing the reading difficulty of the input, while performing well in terms of grammaticality and meaning preservation."
],
[
"We would like to thank Emily Druhl, Jesse Lingeman, and the UMass BioNLP team for their help with this work. We also thank Xingxing Zhang, Sergiu Nisioi for valuable discussions. The authors would like to acknowledge the reviewers for their thoughtful comments and suggestions. "
]
],
"section_name": [
"Introduction",
"Attention-based Encoder-Decoder Model",
"Neural Semantic Encoders",
"Decoding",
"Datasets",
"Models and Training Details",
"Comparing Systems",
"Evaluation",
"Automatic Evaluation Measures",
"Human Judgments",
"Correlations",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"812961236d33bd03a772f609457521df32801db9",
"8a7b881c35efb459484b77710c53c085777c936d",
"d7ca50571e67e38e2899b4d0724b210bba79a780"
],
"answer": [
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a98bdb099739ebe15f8db679201dbc024a6064fc",
"f6482e64d8905251c1405c1784e7751e13150e1b"
],
"answer": [
{
"evidence": [
"We compared our models, either tuned with BLEU (-B) or SARI (-S), against systems reported in BIBREF15 , namely Dress, a deep reinforcement learning model, Dress-Ls, a combination of Dress and a lexical simplification model BIBREF15 , Pbmt-R, a PBMT model with dissimilarity-based re-ranking BIBREF9 , Hybrid, a hybrid semantic-based model that combines a simplification model and a monolingual MT model BIBREF29 , and Sbmt-Sari, a SBMT model with simplification-specific components. BIBREF12 ."
],
"extractive_spans": [
"Dress",
"Dress-Ls",
"Pbmt-R",
"Hybrid",
" Sbmt-Sari"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our models, either tuned with BLEU (-B) or SARI (-S), against systems reported in BIBREF15 , namely Dress, a deep reinforcement learning model, Dress-Ls, a combination of Dress and a lexical simplification model BIBREF15 , Pbmt-R, a PBMT model with dissimilarity-based re-ranking BIBREF9 , Hybrid, a hybrid semantic-based model that combines a simplification model and a monolingual MT model BIBREF29 , and Sbmt-Sari, a SBMT model with simplification-specific components. BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compared our models, either tuned with BLEU (-B) or SARI (-S), against systems reported in BIBREF15 , namely Dress, a deep reinforcement learning model, Dress-Ls, a combination of Dress and a lexical simplification model BIBREF15 , Pbmt-R, a PBMT model with dissimilarity-based re-ranking BIBREF9 , Hybrid, a hybrid semantic-based model that combines a simplification model and a monolingual MT model BIBREF29 , and Sbmt-Sari, a SBMT model with simplification-specific components. BIBREF12 ."
],
"extractive_spans": [
"Dress",
" Dress-Ls",
"Pbmt-R",
"Hybrid",
"Sbmt-Sari"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our models, either tuned with BLEU (-B) or SARI (-S), against systems reported in BIBREF15 , namely Dress, a deep reinforcement learning model, Dress-Ls, a combination of Dress and a lexical simplification model BIBREF15 , Pbmt-R, a PBMT model with dissimilarity-based re-ranking BIBREF9 , Hybrid, a hybrid semantic-based model that combines a simplification model and a monolingual MT model BIBREF29 , and Sbmt-Sari, a SBMT model with simplification-specific components. BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0a7557846d6cc51e31fa0ed48820a8b7e15d8a7f",
"58cecc7bfa023fed22302f01bbac666bb7768390",
"72348c2f99e5f536f0dde9136ee591c102aaedf1"
],
"answer": [
{
"evidence": [
"The results of the automatic evaluation are displayed in Table TABREF15 . We first discuss the results on Newsela that contains high-quality simplifications composed by professional editors. In terms of BLEU, all neural models achieved much higher scores than Pbmt-R and Hybrid. NseLstm-B scored highest with a BLEU score of 26.31. With regard to SARI, NseLstm-S scored best among neural models (29.58) and came close to the performance of Hybrid (30.00). This indicates that NSE offers an effective means to better encode complex sentences for sentence simplification."
],
"extractive_spans": [
"BLEU",
"SARI"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results of the automatic evaluation are displayed in Table TABREF15 . We first discuss the results on Newsela that contains high-quality simplifications composed by professional editors. In terms of BLEU, all neural models achieved much higher scores than Pbmt-R and Hybrid. NseLstm-B scored highest with a BLEU score of 26.31. With regard to SARI, NseLstm-S scored best among neural models (29.58) and came close to the performance of Hybrid (30.00). This indicates that NSE offers an effective means to better encode complex sentences for sentence simplification."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our models were tuned on the development sets, either with BLEU BIBREF28 that scores the output by counting INLINEFORM0 -gram matches with the reference, or SARI BIBREF12 that compares the output against both the reference and the input sentence. Both measures are commonly used to automatically evaluate the quality of simplification output. We noticed that SARI should be used with caution when tuning neural Seq2seq simplification models. Since SARI depends on the differences between a system's output and the input sentence, large differences may yield very good SARI even though the output is ungrammatical. Thus, when tuning with SARI, we ignored epochs in which the BLEU score of the output is too low, using a threshold INLINEFORM1 . We set INLINEFORM2 to 22 on Newsela, 33 on WikiSmall, and 77 on WikiLarge."
],
"extractive_spans": [
"BLEU ",
"SARI "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our models were tuned on the development sets, either with BLEU BIBREF28 that scores the output by counting INLINEFORM0 -gram matches with the reference, or SARI BIBREF12 that compares the output against both the reference and the input sentence."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [
"BLEU",
"SARI"
],
"free_form_answer": "",
"highlighted_evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0115ae5128eb8a881ab7f9f21dd496fc534f77d0",
"5f636820951342ffeca7f2be6edc2f909065b20b",
"dfc7abf38fb3d8034373902e6a99bd116b1b5e12"
],
"answer": [
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [],
"free_form_answer": "Rate simplifications with respect to Fluency, Adequacy, and Simplicity, using a five point Likert scale.",
"highlighted_evidence": [
" We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [
"We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We measured BLEU, and SARI at corpus-level following BIBREF15 . In addition, we also evaluated system output by eliciting human judgments. Specifically, we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above. We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"extractive_spans": [],
"free_form_answer": "By fluency, adequacy, and simplicity using a five point Likert scale.",
"highlighted_evidence": [
"We then asked three volunteers to rate simplifications with respect to Fluency (the extent to which the output is grammatical English), Adequacy (the extent to which the output has the same meaning as the input sentence), and Simplicity (the extent to which the output is simpler than the input sentence) using a five point Likert scale."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"d3778aa78ba32211948fc29db9c69ebdc86f1e4c",
"d7c9fd69d5eb84595267c74ce0ceef033bc22f7c"
],
"answer": [
{
"evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
"extractive_spans": [
"Newsela BIBREF22",
"WikiSmall BIBREF10",
"WikiLarge BIBREF15"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels. We used the split of the data in BIBREF15 , i.e., 94,208/1,129/1,077 pairs for train/dev/test. (2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW. The dataset has 88,837/205/100 pairs for train/dev/test. (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . The dataset has 296,402/2,000/359 pairs for train/dev/test. Table TABREF7 provides statistics on the training sets."
],
"extractive_spans": [
"Newsela",
"WikiSmall",
"WikiLarge"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following BIBREF15 , we experiment on three simplification datasets, namely: (1) Newsela BIBREF22 , a high-quality simplification corpus of news articles composed by Newsela professional editors for children at multiple grade levels.",
"(2) WikiSmall BIBREF10 , which contains aligned complex-simple sentence pairs from English Wikipedia (EW) and SEW.",
" (3) WikiLarge BIBREF15 , a larger corpus in which the training set is a mixture of three Wikipedia datasets in BIBREF10 , BIBREF11 , BIBREF23 , and the development and test sests are complex sentences taken from WikiSmall, each has 8 simplifications written by Amazon Mechanical Turk workers BIBREF12 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what language was the data in?",
"what was the baseline?",
"which automatic metrics were used in evaluation?",
"how do humans judge the simplified sentences?",
"what datasets were used?"
],
"question_id": [
"0bde3ecfdd7c4a9af23f53da2cda6cd7a8398220",
"f7ee48dd32c666ef83a4ae4aa06bcde85dd8ec4b",
"051034cc94f2c02d3041575c53f969b3311c9ea1",
"511e46b5aa8e1ee9e7dc890f47fa15ef94d4a0af",
"6b4006a90aeaaff8914052d72d28851a9c0c0146"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Attention-based encoder-decoder model. The model may attend to relevant positions in the source sentence while decoding the simplification, e.g., to generate the target word won the model may attend to the words received, nominated and Prize in the source sentence.",
"Table 1: Statistics for the training sets: the vocabulary size (vocab size), and the average number of tokens per sentence (#tokens/sent) of the source (src) and target (tgt) language.",
"Table 2: Model performance using automatic evaluation measures (BLEU and SARI).",
"Table 3: Average human ratings (Fluency (F), Adequacy (A), Simplicity (S), and Average (Avg.)).",
"Table 4: Example model outputs on Newsela. Substitutions are shown in bold.",
"Table 5: Pearson correlation between the scores assigned by humans and the automatic evaluation measures. Scores marked ∗∗ are significant at p < 0.01."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"how do humans judge the simplified sentences?"
] | [
[
"1804.07445-Evaluation-0"
]
] | [
"By fluency, adequacy, and simplicity using a five point Likert scale."
] | 153 |
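
Illustration for entry 1804.07445 above: the "Neural Semantic Encoders" and "Attention-based Encoder-Decoder Model" sections of that row describe an encoder that, at each step, reads the current token with an LSTM, attends over a token-level memory matrix, composes the retrieved slot with the read vector through a one-hidden-layer MLP, writes the result back with a second LSTM, and then erases/updates the attended memory; the resulting output sequence feeds a dot-product-attention decoder. The sketch below is a minimal NumPy rendering of that read–compose–write–update cycle under simplifying assumptions: the read/write LSTMs and the compose MLP are stubbed with single tanh-affine maps, and all names and dimensions (f_read, f_compose, f_write, W_*, d, T) are illustrative, not the authors' NMT-Keras implementation.

```python
# Minimal sketch of one pass of the NSE encoder described in entry 1804.07445,
# followed by a dot-product attention context vector for the decoder side.
# Simplifying assumptions: the read/write LSTMs and the compose MLP are stubbed
# with single tanh-affine maps; names and dimensions are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5                                  # embedding size, sentence length (toy values)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

W_read = rng.normal(size=(d, d))
W_comp = rng.normal(size=(d, 2 * d))
W_write = rng.normal(size=(d, d))
f_read = lambda x: np.tanh(W_read @ x)                              # stand-in for the read LSTM
f_compose = lambda o, m: np.tanh(W_comp @ np.concatenate([o, m]))   # stand-in for the compose MLP
f_write = lambda c: np.tanh(W_write @ c)                            # stand-in for the write LSTM

X = rng.normal(size=(T, d))                  # embedded source tokens x_1..x_T
M = X.copy()                                 # memory initialised with the word vectors
H = []                                       # encoder outputs

for t in range(T):
    o_t = f_read(X[t])                       # read the current token
    z_t = softmax(M @ o_t)                   # attention weights over memory slots
    m_t = z_t @ M                            # retrieved memory content
    c_t = f_compose(o_t, m_t)                # compose read vector with retrieval
    h_t = f_write(c_t)                       # map back to the encoder output space
    M = M * (1.0 - z_t)[:, None] + np.outer(z_t, h_t)   # erase attended content, add new content
    H.append(h_t)
H = np.stack(H)                              # (T, d); replaces the plain LSTM state sequence

# Dot-product attention on the decoder side (one step, with a toy decoder state):
s = rng.normal(size=d)                       # previous decoder hidden state
alpha = softmax(H @ s)                       # alignment weights over source positions
context = alpha @ H                          # weighted sum of encoder outputs
print(H.shape, context.shape)                # (5, 8) (8,)
```

In the actual systems, H is consumed by an LSTM decoder trained end to end with cross-entropy; this sketch only makes the memory bookkeeping and the attention weighting concrete.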
1910.03355 | An Interactive Machine Translation Framework for Modernizing Historical Documents | Due to the nature of human language, historical documents are hard to comprehend by contemporary people. This limits their accessibility to scholars specialized in the time period in which the documents were written. Modernization aims at breaking this language barrier by generating a new version of a historical document, written in the modern version of the document's original language. However, while it is able to increase the document's comprehension, modernization is still far from producing an error-free version. In this work, we propose a collaborative framework in which a scholar can work together with the machine to generate the new version. We tested our approach on a simulated environment, achieving significant reductions of the human effort needed to produce the modernized version of the document. | {
"paragraphs": [
[
"In recent years, awareness of the importance of preserving our cultural heritage has increased. Historical documents are an important part of that heritage. In order to preserve them, there is an increased need in creating digital text versions which can be search and automatically processed BIBREF0. However, their linguistic properties create additional difficulties: due to the lack of a spelling convention, orthography changes depending on the time period and author. Furthermore, human language evolves with the passage of time, increasing the difficulty of the document's comprehension. Thus, historical documents are mostly accessible to scholars specialized in the time period in which each document was written.",
"Modernization tackles the language barrier in order to increase the accessibility of historical documents. To achieve this, it generates a new version of a historical document in the modern version of the language in which the document was originally written (fi:Shakespeare shows an example of modernizing a document). However, while modernization has been successful in order to increase the comprehension of historical documents BIBREF1, BIBREF2, it is still far from creating error-free modern versions. Therefore, this task still needs to be carried out by scholars.",
"Interactive machine translation (IMT) fosters human–computer collaborations to generate error-free translations in a productive way BIBREF4, BIBREF5. In this work, we proposed to apply one of these protocols to historical documents modernization. We strive for creating an error-free modern version of a historical document, decreasing the human effort needed to achieve this goal.",
"The rest of this document is structured as follows: se:work introduces the related work. Then, in se:IMT we present our protocol. se:exp describes the experiments conducted in order to assess our proposal. The results of those experiments are presented and discussed in se:res. Finally, in se:conc, conclusions are drawn."
],
[
"While the lack of a spelling convention has been extensively researched for years BIBREF6, BIBREF7, BIBREF8, modernization of historical documents is a younger field. BIBREF1 organized a shared task in order to translate historical text to contemporary language. The main goal of this shared task was to tackle the spelling problem. However, they also approached document modernization using a set of rules. BIBREF9 proposed a modernization approach based on statistical machine translation (SMT). A neural machine translation (NMT) approach was proposed by BIBREF2. Finally, BIBREF10 extracted parallel phrases from an original parallel corpus and used them as an additional training data for their NMT approach.",
"Despise the promising results achieved in last years, machine translation (MT) is still far from producing high-quality translations BIBREF11. Therefore, a human agent has to supervise these translation in a post-editing stage. IMT was introduced with the goal of combining the knowledge of a human translator and the efficiency of an MT system. Although many protocols have been proposed in recent years BIBREF12, BIBREF13, BIBREF14, BIBREF15, the prefix-based remains as one of the most successful approaches BIBREF5, BIBREF16, BIBREF17. In this approach, the user corrects the leftmost wrong word from the translation hypothesis, inherently validating a correct prefix. With each new correction, the system generates a suffix that completes the prefix to produce a new translation."
],
[
"Classical IMT approaches relay on the statistical formalization of the MT problem. Given a source sentence $\\mathbf {x}$, SMT aims at finding its most likely translation $\\hat{\\mathbf {y}}$ BIBREF18:",
"For years, the prevailing approach to compute this expression have been phrase-based models BIBREF19. These models rely on a log-linear combination of different models BIBREF20: namely, phrase-based alignment models, reordering models and language models; among others BIBREF21, BIBREF22. However, more recently, this approach has shifted into neural models (see se:NMT)."
],
[
"Prefix-based IMT proposed a user–computer collaboration that starts with the system proposing an initial translation $\\mathbf {y}$ of length $I$. Then, the user corrects the leftmost wrong word $y_i$, inherently validating all preceding words. These words form a validated prefix $\\tilde{\\mathbf {y}}_p$, that includes the corrected word $\\tilde{y}_i$. The system reacts to this user feedback, generating a suffix $\\hat{\\mathbf {y}}_s$ that completes $\\tilde{\\mathbf {y}}_p$ to obtain a new translation of $\\mathbf {x}:\\hat{\\mathbf {y}}~=~\\tilde{\\mathbf {y}}_p\\,\\hat{\\mathbf {y}}_s$. This process is repeated until the user accepts the complete system suggestion. fi:IMT illustrates this protocol.",
"BIBREF5 formalized the suffix generation as follows:",
"which can be straightforwardly rewritten as:",
"This equation is very similar to eq:SMT: at each iteration, the process consists in a regular search in the translations space but constrained by the prefix $\\tilde{\\mathbf {y}}_p$."
],
[
"In NMT, eq:SMT is modeled by a neural network with parameters $\\mathbf {\\Theta }$:",
"This neural network usually follows an encoder-decoder architecture, featuring recurrent networks BIBREF23, BIBREF24, convolutional networks BIBREF25 or attention mechanisms BIBREF26. Model parameters are jointly estimated on large parallel corpora, using stochastic gradient descent BIBREF27, BIBREF28. At decoding time, the system obtains the most likely translation using a beam search method."
],
[
"The prefix-based IMT protocol (see se:PBIMT) can be naturally included into NMT systems since sentences are generated from left to right. In order to take into account the user's feedback and generate compatible hypothesis, the search space must be constraint. Given a prefix $\\tilde{\\mathbf {y}}_p$, only a single path accounts for it. The branching of the search process starts once this path has been covered. Introducing the validated prefix $\\tilde{\\mathbf {y}}_p$, eq:NMT becomes:",
"which implies a search over the space of translations, but constrained by the validated prefix $\\tilde{\\mathbf {y}}_p$ BIBREF15."
],
[
"In this section, we present our experimental conditions, including translation systems, corpora and evaluation metrics."
],
[
"SMT systems were trained with Moses BIBREF29, following the standard procedure: we estimated a 5-gram language model—smoothed with the improved KneserNey method—using SRILM BIBREF30, and optimized the weights of the log-linear model with MERT BIBREF31.",
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations.",
"Statistical IMT systems were implemented following the procedure of word graph exploration and generation of a best suffix for a given prefix described by BIBREF5. Neural IMT systems were built using the interactive branch of NMT-Keras."
],
[
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor.",
"ta:corp presents the corpora statistics."
],
[
"In order to measure the gains in human effort reduction, we made use of the following metrics:",
"BIBREF37: measures the number of words edited by the user, normalized by the number of words in the final translation.",
"BIBREF5: measures the number of mouse actions made by the user, normalized by the number of characters in the final translation.",
"Additionally, to evaluate the quality of the modernization and the difficulty of each task, we made use of the following well-known metrics:",
"BiLingual Evaluation Understudy (BLEU) BIBREF38: computes the geometric average of the modified n-gram precision, multiplied by a brevity factor that penalizes short sentences.",
"Translation Error Rate (TER) BIBREF39: computes the number of word edit operations (insertion, substitution, deletion and swapping), normalized by the number of words in the final translation.",
"We used sacreBLEU BIBREF40 for ensuring consistent BLEU scores. For determining whether two systems presented statistically significant differences, we applied approximate randomization tests BIBREF41, with $10,000$ repetitions and using a $p$-value of $0.05$."
],
[
"Due to the high costs of an evaluation involving human agents, we carried out an automatic evaluation with simulated users whose desired modernizations correspond to the reference sentences.",
"At each iteration, the user corrects the leftmost wrong word from the system's hypothesis. With this correction, a new prefix is validated. The associated cost of this correction is of one mouse action and one word stroke. The system, then, reacts to this feedback, generating a new suffix that completes the prefix to conform a new hypothesis. This process is repeated until hypothesis and reference are the same."
],
[
"ta:quality presents the quality of the modernization. Both SMT and NMT approaches were able to significantly improved the baseline. That is, the modernized documents are easier to comprehend by a contemporary reader than the original documents. An exception to this is El Conde Lucanor. The SMT approach yielded significant improvements in terms of TER, but was worse in terms of BLEU. Moreover, the NMT approach yielded worst results in terms of both BLEU and TER. Most likely, this results are due to having used the systems trained with El Quijote for modernizing El Conde Lucanor (see se:corp).",
"When comparing the SMT and NMT approaches, we observe that SMT yielded the best results in all cases. This behavior was already perceived by BIBREF2 and is, most likely, due to the small size of the training corpora—a well-known problem in NMT. However, while the goal of modernization is making historical documents as easier to comprehend by contemporary people as possible, our goal is different. In this work, our goal is to obtain an error-free modern copy of a historical document. To achieve this, we proposed an interactive collaboration between a human expert and our modernizing system, in order to reduce the effort needed to generate such copy. ta:effort presents the experimental results.",
"Both SMT and NMT approaches yielded significant reductions of the human effort needed to modernize the Dutch Bible (up to 48 points in terms of WSR and 8 in terms of MAR) and El Quijote (up to 7 points in terms of WSR and 1 of MAR). For El Conde Lucanor, however, both approaches resulted in an increased of the effort need to generate an error-free modern version. This behavior was to be expected since the modernization quality for El Conde Lucanor was very low. Therefore, the system consistently generated wrong suffixes, resulting in the user having to make more corrections.",
"Regarding the performance of both approaches, SMT achieved the highest effort reduction. This was reasonably expected since its modernization quality was better. However, in past neural IMT works BIBREF15, the neural IMT approach was able to yield further improvements despite having a lower translation quality than its SMT counterpart. Most likely, the reason of this is that, due to the small training corpora, the neural model was not able to reach its best performance, Nonetheless, we should address this in a future work."
],
[
"fi:exIMT shows an example of modernizing a sentence from El Quijote with the interactive SMT approach. While the system's initial suggestion contains five errors, with the IMT protocol, the user only needs to make three corrections. With each correction, the system is able to improve its suggestions, reducing the total effort needed to achieve an error-free modernization. Note that this example has been chosen for illustrative purposes of a correct functioning of the system. The average sentences from El Quijote are longer, and there are times in which the system fails to take the human knowledge into account, resulting in an increase of the number of corrections. Nonetheless, as seen in se:res, overall the system is able to significantly decrease the human effort.",
"fi:exINMT contains an example of modernizing the same sentence as in fi:exIMT, using the interactive NMT approach. This is an example in which the system fails to take into account the user's corrections, resulting in an increase of the human effort. It is specially worth noting the introduction of non-existing words such as durdos and duradas. This problem was probably caused by an incorrect segmentation of a word, via the byte pair encoding process, and should be address in a future work. Nonetheless, as seen in se:res, overall the system is able to significantly decrease the human effort."
],
[
"In this work, we proposed a collaborative user–computer approach to create an error-free modern version of a historical document. We tested this proposal on a simulated environment, achieving significant reductions of the human effort. We built our modernization protocol based on both SMT and NMT approaches to prefix-based IMT. Although both systems yielded significant improvements for two data sets out of three, the SMT approach yielded the best results—both in terms of the human reduction and in the modernization quality of the initial system.",
"As a future work, we want to further research the behavior of the neural systems. For that, we would like to explore techniques for enriching the training corpus with additional data, and the incorrect generation of words due to subwords. We would also like to develop new protocols based on successful IMT approaches. Finally, we should test our proposal with real users to obtain actual measures of the effort reduction."
],
[
"The research leading to these results has received funding from the European Union through Programa Operativo del Fondo Europeo de Desarrollo Regional (FEDER) from Comunitat Valencia (2014–2020) under project Sistemas de frabricación inteligentes para la indústria 4.0 (grant agreement IDIFEDER/2018/025); and from Ministerio de Economía y Competitividad (MINECO) under project MISMIS-FAKEnHATE (grant agreement PGC2018-096212-B-C31). We gratefully acknowledge the support of NVIDIA Corporation with the donation of a GPU used for part of this research."
]
],
"section_name": [
"Introduction",
"Related Work",
"Interactive Machine Translation",
"Interactive Machine Translation ::: Prefix-based Interactive Machine Translation",
"Interactive Machine Translation ::: Neural Machine Translation",
"Interactive Machine Translation ::: Prefix-based Interactive Neural Machine Translation",
"Experiments",
"Experiments ::: MT Systems",
"Experiments ::: Corpora",
"Experiments ::: Metrics",
"Experiments ::: User Simulation",
"Results",
"Results ::: Qualitative Analysis",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"02d18cf62b5d8ee2d450dee4a2e0ec5b41cbdba1",
"043a6e27c166e47ae3e1c280c1e3445b96293889"
],
"answer": [
{
"evidence": [
"ta:quality presents the quality of the modernization. Both SMT and NMT approaches were able to significantly improved the baseline. That is, the modernized documents are easier to comprehend by a contemporary reader than the original documents. An exception to this is El Conde Lucanor. The SMT approach yielded significant improvements in terms of TER, but was worse in terms of BLEU. Moreover, the NMT approach yielded worst results in terms of both BLEU and TER. Most likely, this results are due to having used the systems trained with El Quijote for modernizing El Conde Lucanor (see se:corp).",
"When comparing the SMT and NMT approaches, we observe that SMT yielded the best results in all cases. This behavior was already perceived by BIBREF2 and is, most likely, due to the small size of the training corpora—a well-known problem in NMT. However, while the goal of modernization is making historical documents as easier to comprehend by contemporary people as possible, our goal is different. In this work, our goal is to obtain an error-free modern copy of a historical document. To achieve this, we proposed an interactive collaboration between a human expert and our modernizing system, in order to reduce the effort needed to generate such copy. ta:effort presents the experimental results.",
"FLOAT SELECTED: Table 2: Modernization quality. Baseline system corresponds to considering the original document as the modernized version. SMT and NMT are the SMT and NMT approaches respectively. † indicates statistically significant differences between the SMT/NMT system and the baseline. ‡ indicates statistically significance between the NMT and SMT systems. Best results are denoted in bold."
],
"extractive_spans": [],
"free_form_answer": "Baseline system corresponds to considering the original document as the modernized version. They used two approaches SMT and NMT and compared to the baseline, SMT showed best results.",
"highlighted_evidence": [
"Both SMT and NMT approaches were able to significantly improved the baseline.",
"When comparing the SMT and NMT approaches, we observe that SMT yielded the best results in all cases.",
"FLOAT SELECTED: Table 2: Modernization quality. Baseline system corresponds to considering the original document as the modernized version. SMT and NMT are the SMT and NMT approaches respectively. † indicates statistically significant differences between the SMT/NMT system and the baseline. ‡ indicates statistically significance between the NMT and SMT systems. Best results are denoted in bold."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Despise the promising results achieved in last years, machine translation (MT) is still far from producing high-quality translations BIBREF11. Therefore, a human agent has to supervise these translation in a post-editing stage. IMT was introduced with the goal of combining the knowledge of a human translator and the efficiency of an MT system. Although many protocols have been proposed in recent years BIBREF12, BIBREF13, BIBREF14, BIBREF15, the prefix-based remains as one of the most successful approaches BIBREF5, BIBREF16, BIBREF17. In this approach, the user corrects the leftmost wrong word from the translation hypothesis, inherently validating a correct prefix. With each new correction, the system generates a suffix that completes the prefix to produce a new translation."
],
"extractive_spans": [
"prefix-based "
],
"free_form_answer": "",
"highlighted_evidence": [
"Despise the promising results achieved in last years, machine translation (MT) is still far from producing high-quality translations BIBREF11. Therefore, a human agent has to supervise these translation in a post-editing stage. IMT was introduced with the goal of combining the knowledge of a human translator and the efficiency of an MT system. Although many protocols have been proposed in recent years BIBREF12, BIBREF13, BIBREF14, BIBREF15, the prefix-based remains as one of the most successful approaches BIBREF5, BIBREF16, BIBREF17. In this approach, the user corrects the leftmost wrong word from the translation hypothesis, inherently validating a correct prefix. With each new correction, the system generates a suffix that completes the prefix to produce a new translation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1fc7cb570063d0dc7dff72e96a30cc9783167841",
"46c6160c14c53dad412721e96f738a336d31858f",
"8c0bbf3889bbb7491f6fb3684fc202209cb242fe"
],
"answer": [
{
"evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"extractive_spans": [],
"free_form_answer": "Modern and historical versions of literature like the Bible and a Spanish novel.",
"highlighted_evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"extractive_spans": [
"Dutch Bible BIBREF1",
"El Quijote BIBREF2",
" El Conde Lucanor BIBREF2"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"extractive_spans": [
"Dutch Bible",
"El Quijote"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.\n\nWe selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"013f22ce3df8760240f2b3cb3ed73fcf23e58f0c",
"4a0d452c811ef76c09f9ec0ed3ba6b6da7971b5d",
"bff36b3cf37506ad446b9a59267ff4e6410a5283"
],
"answer": [
{
"evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3194d801458187bd8c3816cefd652e19fb4112c2",
"9fbd4336991cb69d6fa1334539d2f76556320776",
"c87ca14c68704f2a073ea82f631e701dfcacb4c3"
],
"answer": [
{
"evidence": [
"Classical IMT approaches relay on the statistical formalization of the MT problem. Given a source sentence $\\mathbf {x}$, SMT aims at finding its most likely translation $\\hat{\\mathbf {y}}$ BIBREF18:",
"For years, the prevailing approach to compute this expression have been phrase-based models BIBREF19. These models rely on a log-linear combination of different models BIBREF20: namely, phrase-based alignment models, reordering models and language models; among others BIBREF21, BIBREF22. However, more recently, this approach has shifted into neural models (see se:NMT).",
"Prefix-based IMT proposed a user–computer collaboration that starts with the system proposing an initial translation $\\mathbf {y}$ of length $I$. Then, the user corrects the leftmost wrong word $y_i$, inherently validating all preceding words. These words form a validated prefix $\\tilde{\\mathbf {y}}_p$, that includes the corrected word $\\tilde{y}_i$. The system reacts to this user feedback, generating a suffix $\\hat{\\mathbf {y}}_s$ that completes $\\tilde{\\mathbf {y}}_p$ to obtain a new translation of $\\mathbf {x}:\\hat{\\mathbf {y}}~=~\\tilde{\\mathbf {y}}_p\\,\\hat{\\mathbf {y}}_s$. This process is repeated until the user accepts the complete system suggestion. fi:IMT illustrates this protocol.",
"Interactive Machine Translation ::: Neural Machine Translation",
"In NMT, eq:SMT is modeled by a neural network with parameters $\\mathbf {\\Theta }$:",
"This neural network usually follows an encoder-decoder architecture, featuring recurrent networks BIBREF23, BIBREF24, convolutional networks BIBREF25 or attention mechanisms BIBREF26. Model parameters are jointly estimated on large parallel corpora, using stochastic gradient descent BIBREF27, BIBREF28. At decoding time, the system obtains the most likely translation using a beam search method.",
"Interactive Machine Translation ::: Prefix-based Interactive Neural Machine Translation",
"The prefix-based IMT protocol (see se:PBIMT) can be naturally included into NMT systems since sentences are generated from left to right. In order to take into account the user's feedback and generate compatible hypothesis, the search space must be constraint. Given a prefix $\\tilde{\\mathbf {y}}_p$, only a single path accounts for it. The branching of the search process starts once this path has been covered. Introducing the validated prefix $\\tilde{\\mathbf {y}}_p$, eq:NMT becomes:",
"which implies a search over the space of translations, but constrained by the validated prefix $\\tilde{\\mathbf {y}}_p$ BIBREF15."
],
"extractive_spans": [
"Classical IMT approaches",
"Prefix-based IMT ",
"Neural Machine Translation",
"Prefix-based Interactive Neural Machine Translation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Classical IMT approaches relay on the statistical formalization of the MT problem. Given a source sentence $\\mathbf {x}$, SMT aims at finding its most likely translation $\\hat{\\mathbf {y}}$ BIBREF18:\n\nFor years, the prevailing approach to compute this expression have been phrase-based models BIBREF19. These models rely on a log-linear combination of different models BIBREF20: namely, phrase-based alignment models, reordering models and language models; among others BIBREF21, BIBREF22. However, more recently, this approach has shifted into neural models (see se:NMT).",
"Prefix-based IMT proposed a user–computer collaboration that starts with the system proposing an initial translation $\\mathbf {y}$ of length $I$. Then, the user corrects the leftmost wrong word $y_i$, inherently validating all preceding words. These words form a validated prefix $\\tilde{\\mathbf {y}}_p$, that includes the corrected word $\\tilde{y}_i$. The system reacts to this user feedback, generating a suffix $\\hat{\\mathbf {y}}_s$ that completes $\\tilde{\\mathbf {y}}_p$ to obtain a new translation of $\\mathbf {x}:\\hat{\\mathbf {y}}~=~\\tilde{\\mathbf {y}}_p\\,\\hat{\\mathbf {y}}_s$. This process is repeated until the user accepts the complete system suggestion. fi:IMT illustrates this protocol.",
"Interactive Machine Translation ::: Neural Machine Translation\nIn NMT, eq:SMT is modeled by a neural network with parameters $\\mathbf {\\Theta }$:\n\nThis neural network usually follows an encoder-decoder architecture, featuring recurrent networks BIBREF23, BIBREF24, convolutional networks BIBREF25 or attention mechanisms BIBREF26. Model parameters are jointly estimated on large parallel corpora, using stochastic gradient descent BIBREF27, BIBREF28. At decoding time, the system obtains the most likely translation using a beam search method.",
"Interactive Machine Translation ::: Prefix-based Interactive Neural Machine Translation\nThe prefix-based IMT protocol (see se:PBIMT) can be naturally included into NMT systems since sentences are generated from left to right. In order to take into account the user's feedback and generate compatible hypothesis, the search space must be constraint. Given a prefix $\\tilde{\\mathbf {y}}_p$, only a single path accounts for it. The branching of the search process starts once this path has been covered. Introducing the validated prefix $\\tilde{\\mathbf {y}}_p$, eq:NMT becomes:\n\nwhich implies a search over the space of translations, but constrained by the validated prefix $\\tilde{\\mathbf {y}}_p$ BIBREF15."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"SMT systems were trained with Moses BIBREF29, following the standard procedure: we estimated a 5-gram language model—smoothed with the improved KneserNey method—using SRILM BIBREF30, and optimized the weights of the log-linear model with MERT BIBREF31.",
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations.",
"Statistical IMT systems were implemented following the procedure of word graph exploration and generation of a best suffix for a given prefix described by BIBREF5. Neural IMT systems were built using the interactive branch of NMT-Keras."
],
"extractive_spans": [
"NMT systems using NMT-Keras",
"SMT systems were trained with Moses",
"Statistical IMT systems"
],
"free_form_answer": "",
"highlighted_evidence": [
"SMT systems were trained with Moses BIBREF29, following the standard procedure: we estimated a 5-gram language model—smoothed with the improved KneserNey method—using SRILM BIBREF30, and optimized the weights of the log-linear model with MERT BIBREF31.\n\nWe built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations.\n\nStatistical IMT systems were implemented following the procedure of word graph exploration and generation of a best suffix for a given prefix described by BIBREF5. Neural IMT systems were built using the interactive branch of NMT-Keras."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"SMT systems were trained with Moses BIBREF29, following the standard procedure: we estimated a 5-gram language model—smoothed with the improved KneserNey method—using SRILM BIBREF30, and optimized the weights of the log-linear model with MERT BIBREF31.",
"We built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"extractive_spans": [],
"free_form_answer": "classification for SMT and neural methods for NMT",
"highlighted_evidence": [
"SMT systems were trained with Moses BIBREF29, following the standard procedure: we estimated a 5-gram language model—smoothed with the improved KneserNey method—using SRILM BIBREF30, and optimized the weights of the log-linear model with MERT BIBREF31.\n\nWe built our NMT systems using NMT-Keras BIBREF32. We used long short-term memory units BIBREF33, with all model dimensions set to 512. We trained the system using Adam BIBREF34 with a fixed learning rate of $0.0002$ and a batch size of 60. We applied label smoothing of $0.1$ BIBREF35. At inference time, we used beam search with a beam size of 6. We applied joint byte pair encoding to all corpora BIBREF36, using $32,000$ merge operations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"53da5e2511f9841201b8cbce5389aabc70c8b109",
"a141e12836ac5899d2557a77fbf02523e7ff8c75"
],
"answer": [
{
"evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"extractive_spans": [],
"free_form_answer": "Dutch and Spanish",
"highlighted_evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.\n\nWe selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010. Except for the 2010 version, which is missing the last books, all versions contain the same texts. Moreover, since the authors mentioned that the translation from this last version is not very reliable and, considering that Dutch has not evolved significantly between 1637 and 1657, we decided to only use the 1637 version—considering this as the original document—and the 1888 version—considering 19$^{\\mathrm {th}}$ century Dutch as modern Dutch.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version. Due to the small size of the corpus, we decided to use it only as a test. Additionally, unable to find a suitable training corpus, we used the systems built for El Quijote—despite the original documents belonging to different time periods—in order to modernize El Conde Lucanor."
],
"extractive_spans": [
"Dutch",
"Spanish"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first corpus used in our experimental session was the Dutch Bible BIBREF1. This corpus consists in a collection of different versions of the Dutch Bible: a version from 1637, another from 1657, another from 1888 and another from 2010.",
"We selected El Quijote BIBREF2 as our second corpus. This corpus contains the famous 17$^{\\mathrm {th}}$ century Spanish novel by Miguel de Cervantes, and its correspondent 21$^{\\mathrm {st}}$ century version. Finally, we used El Conde Lucanor BIBREF2 as a third corpus. This data set contains the original 14$^{\\mathrm {th}}$ century Spanish novel by Don Juan Manuel, and its correspondent 21$^{\\mathrm {st}}$ century version."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What previous approaches are presented for comparison?",
"What kind of data is used to train the model?",
"Does proposed approach use neural networks?",
"What machine learning techniques are used in the model architecture?",
"What language(s) is the model tested on?"
],
"question_id": [
"eccbbe3684d0cf6b794cb4eef379bb1c8bcc33bf",
"a3705b53c6710b41154c65327b7bbec175bdfae7",
"b62b7ec5128219f04be41854247d5af992797937",
"e8fa4303b36a47a5c87f862458442941bbdff7d9",
"51e9f446d987219bc069222731dfc1081957ce1f"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"text generation",
"text generation",
"text generation",
"text generation",
"text generation"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: Example of modernizing a historical document. The original text is a fragment from Hamlet. The modernized version of the Sonnet was obtained from (Crowther, 2003).",
"Fig. 2: Single iteration of prefix-based IMT. The user corrects the leftmost wrong word ası́, introducing the word me at position 5. Then, the system generates a new hypothesis that takes into account the inherently validated prefix (¡Bendito sea Dios, que me).",
"Table 1: Corpora statistics. |S| stands for number of sentences, |T | for number of tokens and |V | for size of the vocabulary. Monolingual refers to the monolingual data used to create the synthetic data. M denotes million and K thousand.",
"Table 2: Modernization quality. Baseline system corresponds to considering the original document as the modernized version. SMT and NMT are the SMT and NMT approaches respectively. † indicates statistically significant differences between the SMT/NMT system and the baseline. ‡ indicates statistically significance between the NMT and SMT systems. Best results are denoted in bold.",
"Table 3: IMT results. SMT and NMT are the IMT approaches based on SMT and NMT respectively. † indicates statistically significant differences between the SMT/NMT system and the baseline. ‡ indicates statistically significance between the NMT and SMT systems. Best results are denoted in bold.",
"Fig. 3: IMT session to modernize a sentence from El Quijote. At the initial iteration (IT-0), the system suggests an initial modernization. Then, at iteration 1, the user corrects the leftmost wrong",
"Fig. 4: Neural IMT session to modernize the same sentence from El Quijote as in Fig. 3. At the initial iteration (IT-0), the system suggests an initial modernization. Then, at iteration 1, the user corrects the leftmost wrong word (Durmamos). Taking this user feedback into account, the system suggests a new hypothesis. Similarly, at iteration 2, the user corrects the leftmost wrong word (de). The session ends when the user accepts the last modernization suggested by the system."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"What previous approaches are presented for comparison?",
"What kind of data is used to train the model?",
"What machine learning techniques are used in the model architecture?",
"What language(s) is the model tested on?"
] | [
[
"1910.03355-7-Table2-1.png",
"1910.03355-Results-1",
"1910.03355-Related Work-1",
"1910.03355-Results-0"
],
[
"1910.03355-Experiments ::: Corpora-1",
"1910.03355-Experiments ::: Corpora-0"
],
[
"1910.03355-Interactive Machine Translation ::: Neural Machine Translation-0",
"1910.03355-Interactive Machine Translation-1",
"1910.03355-Experiments ::: MT Systems-1",
"1910.03355-Interactive Machine Translation ::: Prefix-based Interactive Machine Translation-0",
"1910.03355-Interactive Machine Translation ::: Prefix-based Interactive Neural Machine Translation-1",
"1910.03355-Experiments ::: MT Systems-0",
"1910.03355-Experiments ::: MT Systems-2",
"1910.03355-Interactive Machine Translation ::: Neural Machine Translation-1",
"1910.03355-Interactive Machine Translation ::: Prefix-based Interactive Neural Machine Translation-0",
"1910.03355-Interactive Machine Translation-0"
],
[
"1910.03355-Experiments ::: Corpora-1",
"1910.03355-Experiments ::: Corpora-0"
]
] | [
"Baseline system corresponds to considering the original document as the modernized version. They used two approaches SMT and NMT and compared to the baseline, SMT showed best results.",
"Modern and historical versions of literature like the Bible and a Spanish novel.",
"classification for SMT and neural methods for NMT",
"Dutch and Spanish"
] | 154 |
1603.09381 | Clinical Information Extraction via Convolutional Neural Network | We report an implementation of a clinical information extraction tool that leverages a deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports. Our approach uses context words, their part-of-speech tags, and shape information as features. We then employ a temporal (1D) convolutional neural network to learn hidden feature representations. Finally, we use a Multilayer Perceptron (MLP) to predict event spans. The empirical evaluation demonstrates that our approach significantly outperforms the baselines. | {
"paragraphs": [
[
"In the past few years, there has been much interest in applying neural network based deep learning techniques to solve all kinds of natural language processing (NLP) tasks. From low level tasks such as language modeling, POS tagging, named entity recognition, and semantic role labeling BIBREF0 , BIBREF1 , to high level tasks such as machine translation, information retrieval, semantic analysis BIBREF2 , BIBREF3 , BIBREF4 and sentence relation modeling tasks such as paraphrase identification and question answering BIBREF5 , BIBREF6 , BIBREF7 . Deep representation learning has demonstrated its importance for these tasks. All the tasks get performance improvement via learning either word level representations or sentence level representations.",
"In this work, we brought deep representation learning technologies to the clinical domain. Specifically, we focus on clinical information extraction, using clinical notes and pathology reports from the Mayo Clinic. Our system will identify event expressions consisting of the following components:",
"The input of our system consists of raw clinical notes or pathology reports like below:",
"And output annotations over the text that capture the key information such as event mentions and attributes. Table TABREF7 illustrates the output of clinical information extraction in details.",
"To solve this task, the major challenge is how to precisely identify the spans (character offsets) of the event expressions from raw clinical notes. Traditional machine learning approaches usually build a supervised classifier with features generated by the Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) . For example, BluLab system BIBREF8 extracted morphological(lemma), lexical(token), and syntactic(part-of-speech) features encoded from cTAKES. Although using the domain specific information extraction tools can improve the performance, learning how to use it well for clinical domain feature engineering is still very time-consuming. In short, a simple and effective method that only leverage basic NLP modules and achieves high extraction performance is desired to save costs.",
"To address this challenge, we propose a deep neural networks based method, especially convolution neural network BIBREF0 , to learn hidden feature representations directly from raw clinical notes. More specifically, one method first extract a window of surrounding words for the candidate word. Then, we attach each word with their part-of-speech tag and shape information as extra features. Then our system deploys a temporal convolution neural network to learn hidden feature representations. Finally, our system uses Multilayer Perceptron (MLP) to predict event spans. Note that we use the same model to predict event attributes."
],
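The window-plus-features setup described in this introduction can be sketched compactly. The snippet below is only an illustration, not the authors' code: the window size, the example sentence, and the helper name `context_window` are assumptions, and the POS tags come from NLTK's default tagger rather than the paper's exact configuration.

```python
# Minimal sketch of extracting a context window plus POS-tag features for a
# candidate word. Window size, sentence, and helper names are illustrative.
import nltk  # requires the "averaged_perceptron_tagger" resource to be downloaded


def context_window(items, index, size=2, pad="<PAD>"):
    """Return the item at `index` together with `size` neighbours on each side."""
    padded = [pad] * size + list(items) + [pad] * size
    return padded[index:index + 2 * size + 1]


tokens = ["The", "patient", "denies", "chest", "pain", "."]
pos_tags = [tag for _, tag in nltk.pos_tag(tokens)]      # e.g. DT, NN, VBZ, ...

candidate = 2                                            # the word "denies"
window_words = context_window(tokens, candidate)
window_tags = context_window(pos_tags, candidate)
print(list(zip(window_words, window_tags)))
```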
[
"The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes. Choosing the proper tokenizer is quite important for span identification. After several experiments, we found \"RegexpTokenizer\" can match our needs. This tokenizer can generate spans for each token via sophisticated regular expression like below,"
],
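The exact regular expression used by the authors is not reproduced above, so the following is only a hedged sketch of how NLTK's RegexpTokenizer can return character spans for each token; the pattern shown here is an assumed stand-in for illustration.

```python
# Illustrative use of NLTK's RegexpTokenizer to obtain (begin, end) character
# offsets for every token. The pattern below is an assumption, not the
# authors' actual expression.
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r"\w+|\$[\d\.]+|\S")

text = "Pt c/o chest pain since 03/2014."
spans = list(tokenizer.span_tokenize(text))       # e.g. [(0, 2), (3, 4), ...]
tokens = [text[begin:end] for begin, end in spans]
print(list(zip(tokens, spans)))
```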
[
"Event span identification is the task of extracting character offsets of the expression in raw clinical notes. This subtask is quite important due to the fact that the event span identification accuracy will affect the accuracy of attribute identification. We first run our neural network classifier to identify event spans. Then, given each span, our system tries to identify attribute values."
],
[
"The way we use temporal convlution neural network for event span and attribute classification is similar with the approach proposed by BIBREF0 . Generally speaking, we can consider a word as represented by INLINEFORM0 discrete features INLINEFORM1 , where INLINEFORM2 is the dictionary for the INLINEFORM3 feature. In our scenario, we just use three features such as token mention, pos tag and word shape. Note that word shape features are used to represent the abstract letter pattern of the word by mapping lower-case letters to “x”, upper-case to “X”, numbers to “d”, and retaining punctuation. We associate to each feature a lookup table. Given a word, a feature vector is then obtained by concatenating all lookup table outputs. Then a clinical snippet is transformed into a word embedding matrix. The matrix can be fed to further 1-dimension convolutional neural network and max pooling layers. Below we will briefly introduce core concepts of Convoluational Neural Network (CNN).",
"Temporal Convolution applies one-dimensional convolution over the input sequence. The one-dimensional convolution is an operation between a vector of weights INLINEFORM0 and a vector of inputs viewed as a sequence INLINEFORM1 . The vector INLINEFORM2 is the filter of the convolution. Concretely, we think of INLINEFORM3 as the input sentence and INLINEFORM4 as a single feature value associated with the INLINEFORM5 -th word in the sentence. The idea behind the one-dimensional convolution is to take the dot product of the vector INLINEFORM6 with each INLINEFORM7 -gram in the sentence INLINEFORM8 to obtain another sequence INLINEFORM9 : DISPLAYFORM0 ",
"Usually, INLINEFORM0 is not a single value, but a INLINEFORM1 -dimensional word vector so that INLINEFORM2 . There exist two types of 1d convolution operations. One was introduced by BIBREF9 and also known as Time Delay Neural Networks (TDNNs). The other one was introduced by BIBREF0 . In TDNN, weights INLINEFORM3 form a matrix. Each row of INLINEFORM4 is convolved with the corresponding row of INLINEFORM5 . In BIBREF0 architecture, a sequence of length INLINEFORM6 is represented as: DISPLAYFORM0 ",
"where INLINEFORM0 is the concatenation operation. In general, let INLINEFORM1 refer to the concatenation of words INLINEFORM2 . A convolution operation involves a filter INLINEFORM3 , which is applied to a window of INLINEFORM4 words to produce the new feature. For example, a feature INLINEFORM5 is generated from a window of words INLINEFORM6 by: DISPLAYFORM0 ",
"where INLINEFORM0 is a bias term and INLINEFORM1 is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of words in the sequence INLINEFORM2 to produce the feature map: DISPLAYFORM0 ",
"where INLINEFORM0 .",
"We also employ dropout on the penultimate layer with a constraint on INLINEFORM0 -norms of the weight vector. Dropout prevents co-adaptation of hidden units by randomly dropping out a proportion INLINEFORM1 of the hidden units during forward-backpropagation. That is, given the penultimate layer INLINEFORM2 , instead of using: DISPLAYFORM0 ",
"for output unit INLINEFORM0 in forward propagation, dropout uses: DISPLAYFORM0 ",
"where INLINEFORM0 is the element-wise multiplication operator and INLINEFORM1 is a masking vector of Bernoulli random variables with probability INLINEFORM2 of being 1. Gradients are backpropagated only through the unmasked units. At test step, the learned weight vectors are scaled by INLINEFORM3 such that INLINEFORM4 , and INLINEFORM5 is used to score unseen sentences. We additionally constrain INLINEFORM6 -norms of the weight vectors by re-scaling INLINEFORM7 to have INLINEFORM8 whenever INLINEFORM9 after a gradient descent step."
],
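The operations described in this section lend themselves to a compact toy sketch: the word-shape mapping, the narrow one-dimensional convolution that produces the feature map, max-over-time pooling, and a Bernoulli dropout mask. The shapes, random data, and function names below are assumptions for illustration, not the authors' Lasagne implementation.

```python
import numpy as np


def word_shape(token):
    """Map lower-case letters to x, upper-case to X, digits to d; keep punctuation."""
    return "".join(
        "x" if ch.islower() else "X" if ch.isupper() else "d" if ch.isdigit() else ch
        for ch in token
    )


def conv1d_feature_map(X, w, b, f=np.tanh):
    """Narrow 1D convolution: X is an (n, k) embedding matrix, w an (h, k) filter.

    Returns c of length n - h + 1 with c_i = f(w . X[i:i+h] + b).
    """
    n, _ = X.shape
    h = w.shape[0]
    return np.array([f(np.sum(w * X[i:i + h]) + b) for i in range(n - h + 1)])


def max_over_time(c):
    """Max pooling over the feature map keeps the strongest activation."""
    return c.max()


def dropout_mask(shape, p=0.5, seed=0):
    """Bernoulli mask with probability p of keeping each unit (training time)."""
    return np.random.default_rng(seed).binomial(1, p, size=shape)


# Toy usage: 6 words with 4-dimensional embeddings and one filter of width 2.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))
w = rng.standard_normal((2, 4))
c = conv1d_feature_map(X, w, b=0.1)
print(word_shape("Mayo-2016"), c.shape, max_over_time(c), dropout_mask(4))
```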
[
"We use the Clinical TempEval corpus as the evaluation dataset. This corpus was based on a set of 600 clinical notes and pathology reports from cancer patients at the Mayo Clinic. These notes were manually de-identified by the Mayo Clinic to replace names, locations, etc. with generic placeholders, but time expression were not altered. The notes were then manually annotated with times, events and temporal relations in clinical notes. These annotations include time expression types, event attributes and an increased focus on temporal relations. The event, time and temporal relation annotations were distributed separately from the text using the Anafora standoff format. Table TABREF19 shows the number of documents, event expressions in the training, development and testing portions of the 2016 THYME data."
],
[
"All of the tasks were evaluated using the standard metrics of precision(P), recall(R) and INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 is the set of items predicted by the system and INLINEFORM1 is the set of items manually annotated by the humans. Applying these metrics of the tasks only requires a definition of what is considered an \"item\" for each task. For evaluating the spans of event expressions, items were tuples of character offsets. Thus, system only received credit for identifying events with exactly the same character offsets as the manually annotated ones. For evaluating the attributes of event expression types, items were tuples of (begin, end, value) where begin and end are character offsets and value is the value that was given to the relevant attribute. Thus, systems only received credit for an event attribute if they both found an event with correct character offsets and then assigned the correct value for that attribute BIBREF10 ."
],
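A small sketch of the set-based scoring described above: the tuple layouts follow the text ((begin, end) for spans, (begin, end, value) for attributes), while the function name and toy data are assumptions.

```python
def precision_recall_f1(system_items, gold_items):
    """Set-based P/R/F1: an item counts only if it matches a gold item exactly."""
    system_items, gold_items = set(system_items), set(gold_items)
    tp = len(system_items & gold_items)
    precision = tp / len(system_items) if system_items else 0.0
    recall = tp / len(gold_items) if gold_items else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1


# Event spans are (begin, end) character offsets; attributes add the value.
gold_spans = {(10, 18), (42, 47)}
pred_spans = {(10, 18), (40, 47)}
print(precision_recall_f1(pred_spans, gold_spans))   # (0.5, 0.5, 0.5)
```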
[
"We want to maximize the likelihood of the correct class. This is equivalent to minimizing the negative log-likelihood (NLL). More specifically, the label INLINEFORM0 given the inputs INLINEFORM1 is predicted by a softmax classifier that takes the hidden state INLINEFORM2 as input: DISPLAYFORM0 ",
"After that, the objective function is the negative log-likelihood of the true class labels INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 is the number of training examples and the superscript INLINEFORM1 indicates the INLINEFORM2 th example.",
"We use Lasagne deep learning framework. We first initialize our word representations using publicly available 300-dimensional Glove word vectors . We deploy CNN model with kernel width of 2, a filter size of 300, sequence length is INLINEFORM0 , number filters is INLINEFORM1 , stride is 1, pool size is INLINEFORM2 , cnn activation function is tangent, MLP activation function is sigmoid. MLP hidden dimension is 50. We initialize CNN weights using a uniform distribution. Finally, by stacking a softmax function on top, we can get normalized log-probabilities. Training is done through stochastic gradient descent over shuffled mini-batches with the AdaGrad update rule BIBREF11 . The learning rate is set to 0.05. The mini-batch size is 100. The model parameters were regularized with a per-minibatch L2 regularization strength of INLINEFORM3 ."
],
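For concreteness, the configuration described above (kernel width 2, 300 filters, hyperbolic tangent, MLP hidden size 50 with sigmoid, softmax output, AdaGrad with learning rate 0.05 and mini-batches of 100) can be approximated by the rough PyTorch sketch below. The authors implemented their model in Lasagne; the vocabulary size, number of classes, dropout placement, and weight-decay value here are assumptions rather than their exact setup.

```python
# Rough PyTorch re-expression of the described setup; names and sizes are assumptions.
import torch
import torch.nn as nn


class EventSpanCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, n_filters=300,
                 kernel_width=2, mlp_hidden=50, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)      # initialized from GloVe in practice
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_width)
        self.dropout = nn.Dropout(0.5)
        self.mlp = nn.Sequential(nn.Linear(n_filters, mlp_hidden), nn.Sigmoid(),
                                 nn.Linear(mlp_hidden, n_classes))

    def forward(self, token_ids):                         # (batch, window)
        x = self.emb(token_ids).transpose(1, 2)           # (batch, emb_dim, window)
        h = torch.tanh(self.conv(x))                      # (batch, n_filters, window - 1)
        pooled = h.max(dim=2).values                      # max-over-time pooling
        return self.mlp(self.dropout(pooled))             # unnormalized class scores


model = EventSpanCNN()
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.05, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()                         # softmax + NLL in one step

batch = torch.randint(0, 20000, (100, 5))                 # mini-batch of 100 token windows
labels = torch.randint(0, 2, (100,))
optimizer.zero_grad()
loss = criterion(model(batch), labels)
loss.backward()
optimizer.step()
```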
[
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask."
],
[
"In this paper, we introduced a new clinical information extraction system that only leverage deep neural networks to identify event spans and their attributes from raw clinical notes. We trained deep neural networks based classifiers to extract clinical event spans. Our method attached each word to their part-of-speech tag and shape information as extra features. We then hire temporal convolution neural network to learn hidden feature representations. The entire experimental results demonstrate that our approach consistently outperforms the existing baseline methods on standard evaluation datasets.",
"Our research proved that we can get competitive results without the help of a domain specific feature extraction toolkit, such as cTAKES. Also we only leverage basic natural language processing modules such as tokenization and part-of-speech tagging. With the help of deep representation learning, we can dramatically reduce the cost of clinical information extraction system development."
]
],
"section_name": [
"Introduction",
"Constructing High Quality Training Dataset",
"Neural Network Classifier",
"Temporal Convolutional Neural Network",
"Dataset",
"Evaluation Metrics",
"Hyperparameters and Training Details",
"Results and Discussions",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"017be6803fe7623dc69833a7e95c9176f6703609",
"61f6b5eb2ba54debf9ee1475d052f1ec0c51eb6b",
"77de0c2b2a92ef27926b31fab5f77831042cc470"
],
"answer": [
{
"evidence": [
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 3) Best proposed result has F1 score of 0.844, 0.813, 0.870, 0.842, 0.844 compared to 0.855, 0.789, 0.852, 0.792, 0.833 on span, modality, degree, polarity and type respectively.",
"highlighted_evidence": [
"Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5"
],
"extractive_spans": [],
"free_form_answer": "Their average F1 score is higher than that of baseline by 0.0234 ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"All of the tasks were evaluated using the standard metrics of precision(P), recall(R) and INLINEFORM0 : DISPLAYFORM0",
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask.",
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5",
"FLOAT SELECTED: Table 4: Phase 2: DocTimeRel"
],
"extractive_spans": [],
"free_form_answer": "on event expression tasks average by 2.3% with respect to F1; on phase 2 subtask by 11.3% with respect to recall",
"highlighted_evidence": [
"All of the tasks were evaluated using the standard metrics of precision(P), recall(R) and INLINEFORM0 : DISPLAYFORM0",
"Table TABREF28 shows results on the event expression tasks.",
"Table TABREF29 shows results on the phase 2 subtask.",
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5",
"FLOAT SELECTED: Table 4: Phase 2: DocTimeRel"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"241e39f9ccf9929fc7c62229bed255225b985ceb",
"b1125036f0c6c4cfec6d9956dd0da59365f2dd8d",
"e1bfbda5659174f57fe8693a94ee0f428f714f92"
],
"answer": [
{
"evidence": [
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask.",
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5",
"FLOAT SELECTED: Table 4: Phase 2: DocTimeRel"
],
"extractive_spans": [],
"free_form_answer": "memorization, median report, max report",
"highlighted_evidence": [
"Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. ",
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5",
"FLOAT SELECTED: Table 4: Phase 2: DocTimeRel"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask."
],
"extractive_spans": [
"memorization baseline"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask."
],
"extractive_spans": [
"memorization"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"ae478237cbd79f85ce4c9eab30dc006ac24ac0f8",
"c82baa8eec2723303c61a0136f75e3ee82a37d0a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5"
],
"extractive_spans": [],
"free_form_answer": "Their average F1 score was 0.874 on span detection; 08115 on contextual modality detection; 0.8695 on degree detection; 0.839 on polarity detection; 0.844 on type detection",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF28 shows results on the event expression tasks. Our initial submits RUN 4 and 5 outperformed the memorization baseline on every metric on every task. The precision of event span identification is close to the max report. However, our system got lower recall. One of the main reason is that our training objective function is accuracy-oriented. Table TABREF29 shows results on the phase 2 subtask."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 3) Best proposed result has F1 score of 0.844, 0.813, 0.870, 0.842, 0.844 on span, modality, degree, polarity and type respectively.",
"highlighted_evidence": [
"Table TABREF28 shows results on the event expression tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6a6585ca796f79010eb0dbf7a0b4465a0ef555c8",
"7836fc518267e6a9f6271437b88afcd56101285e",
"eacfa1b8e0001ea45cb5a349cf79249c34e49a49"
],
"answer": [
{
"evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset. This corpus was based on a set of 600 clinical notes and pathology reports from cancer patients at the Mayo Clinic. These notes were manually de-identified by the Mayo Clinic to replace names, locations, etc. with generic placeholders, but time expression were not altered. The notes were then manually annotated with times, events and temporal relations in clinical notes. These annotations include time expression types, event attributes and an increased focus on temporal relations. The event, time and temporal relation annotations were distributed separately from the text using the Anafora standoff format. Table TABREF19 shows the number of documents, event expressions in the training, development and testing portions of the 2016 THYME data."
],
"extractive_spans": [
"Clinical TempEval corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset. This corpus was based on a set of 600 clinical notes and pathology reports from cancer patients at the Mayo Clinic. These notes were manually de-identified by the Mayo Clinic to replace names, locations, etc. with generic placeholders, but time expression were not altered. The notes were then manually annotated with times, events and temporal relations in clinical notes. These annotations include time expression types, event attributes and an increased focus on temporal relations. The event, time and temporal relation annotations were distributed separately from the text using the Anafora standoff format. Table TABREF19 shows the number of documents, event expressions in the training, development and testing portions of the 2016 THYME data."
],
"extractive_spans": [
"Clinical TempEval corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset. This corpus was based on a set of 600 clinical notes and pathology reports from cancer patients at the Mayo Clinic. These notes were manually de-identified by the Mayo Clinic to replace names, locations, etc. with generic placeholders, but time expression were not altered. The notes were then manually annotated with times, events and temporal relations in clinical notes. These annotations include time expression types, event attributes and an increased focus on temporal relations. The event, time and temporal relation annotations were distributed separately from the text using the Anafora standoff format. Table TABREF19 shows the number of documents, event expressions in the training, development and testing portions of the 2016 THYME data."
],
"extractive_spans": [
"Clinical TempEval corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the Clinical TempEval corpus as the evaluation dataset. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5503ccf4ca48d180001bb12da6753cd546a42aa8",
"655467f355c7be1af8bba09a06bf070ee8aa7a9d"
],
"answer": [
{
"evidence": [
"The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes. Choosing the proper tokenizer is quite important for span identification. After several experiments, we found \"RegexpTokenizer\" can match our needs. This tokenizer can generate spans for each token via sophisticated regular expression like below,"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (We then use ”PerceptronTagger” as our part-ofspeech tagger due to its fast tagging speed) PerceptronTagger.",
"highlighted_evidence": [
"like below,"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes. Choosing the proper tokenizer is quite important for span identification. After several experiments, we found \"RegexpTokenizer\" can match our needs. This tokenizer can generate spans for each token via sophisticated regular expression like below,"
],
"extractive_spans": [],
"free_form_answer": "Using NLTK POS tagger",
"highlighted_evidence": [
"The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"By how much did their model outperform baselines?",
"Which baselines did they compare against?",
"What was their performance on this task?",
"What dataset did they use to evaluate?",
"How did they obtain part-of-speech tags?"
],
"question_id": [
"13fb28e8b7f34fe600b29fb842deef75608c1478",
"d5bce5da746a075421c80abe10c97ad11a96c6cd",
"930733efb3b97e1634b4dcd77123d4d5731e8807",
"11f9c207476af75a9272105e646df02594059c3f",
"b32de10d84b808886d7a91ab0c423d4fc751384c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: An example of information extraction from clinical note.",
"Table 2: Number of documents, event expressions in the training, development and testing portions of the THYME data",
"Table 4: Phase 2: DocTimeRel",
"Table 3: System performance comparison. Note that Run4 means the window size is 4, Run5 means the window size is 5"
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Table4-1.png",
"5-Table3-1.png"
]
} | [
"By how much did their model outperform baselines?",
"Which baselines did they compare against?",
"What was their performance on this task?",
"How did they obtain part-of-speech tags?"
] | [
[
"1603.09381-Results and Discussions-0",
"1603.09381-4-Table4-1.png",
"1603.09381-5-Table3-1.png"
],
[
"1603.09381-Results and Discussions-0",
"1603.09381-5-Table3-1.png",
"1603.09381-4-Table4-1.png"
],
[
"1603.09381-Results and Discussions-0",
"1603.09381-5-Table3-1.png"
],
[
"1603.09381-Constructing High Quality Training Dataset-0"
]
] | [
"on event expression tasks average by 2.3% with respect to F1; on phase 2 subtask by 11.3% with respect to recall",
"memorization, median report, max report",
"Answer with content missing: (Table 3) Best proposed result has F1 score of 0.844, 0.813, 0.870, 0.842, 0.844 on span, modality, degree, polarity and type respectively.",
"Using NLTK POS tagger"
] | 155 |
1708.05482 | A Question Answering Approach to Emotion Cause Extraction | Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01% in F-measure. | {
"paragraphs": [
[
"With the rapid growth of social network platforms, more and more people tend to share their experiences and emotions online.[2]Corresponding Author: [email protected] Emotion analysis of online text becomes a new challenge in Natural Language Processing (NLP). In recent years, studies in emotion analysis largely focus on emotion classification including detection of writers' emotions BIBREF0 as well as readers' emotions BIBREF1 . There are also some information extraction tasks defined in emotion analysis BIBREF2 , BIBREF3 , such as extracting the feeler of an emotion BIBREF4 . These methods assume that emotion expressions are already observed. Sometimes, however, we care more about the stimuli, or the cause of an emotion. For instance, Samsung wants to know why people love or hate Note 7 rather than the distribution of different emotions.",
"Ex.1 我的手机昨天丢了,我现在很难过。",
"Ex.1 Because I lost my phone yesterday, I feel sad now.",
"In an example shown above, “sad” is an emotion word, and the cause of “sad” is “I lost my phone”. The emotion cause extraction task aims to identify the reason behind an emotion expression. It is a more difficult task compared to emotion classification since it requires a deep understanding of the text that conveys an emotions.",
"Existing approaches to emotion cause extraction mostly rely on methods typically used in information extraction, such as rule based template matching, sequence labeling and classification based methods. Most of them use linguistic rules or lexicon features, but do not consider the semantic information and ignore the relation between the emotion word and emotion cause. In this paper, we present a new method for emotion cause extraction. We consider emotion cause extraction as a question answering (QA) task. Given a text containing the description of an event which [id=lq]may or may not cause a certain emotion, we take [id=lq]an emotion word [id=lq]in context, such as “sad”, as a query. The question to the QA system is: “Does the described event cause the emotion of sadness?”. The [id=lq]expected answer [id=lq]is either “yes” or “no”. (see Figure FIGREF1 ). We build our QA system based on a deep memory network. The memory network has two inputs: a piece of text, [id=lq]referred to as a story in QA systems, and a query. The [id=lq]story is represented using a sequence of word embeddings.",
"[id=lq]A recurrent structure is implemented to mine the deep relation between a query and a text. It measure[id=lq]s the [id=lq]importance of each word in the text by [id=lq]an attention mechanism. Based on the [id=lq]learned attention result, the network maps the text into a low dimensional vector space. This vector is [id=lq]then used to generate an answer. Existing memory network based approaches to QA use weighted sum of attentions to jointly consider short text segments stored in memory. However, they do not explicitly model [id=lq]sequential information in the context. In this paper, we propose a new deep memory network architecture to model the context of each word simultaneously by multiple memory slots which capture sequential information using convolutional operations BIBREF5 , and achieves the state-of-the-art performance compared to existing methods which use manual rules, common sense knowledge bases or other machine learning models.",
"The rest of the paper is organized as follows. Section SECREF2 gives a review of related works on emotion analysis. Section SECREF3 presents our proposed deep memory network based model for emotion cause extraction. Section SECREF4 discusses evaluation results. Finally, Section SECREF5 concludes the work and outlines the future directions."
],
[
"Identifying emotion categories in text is one of the key tasks in NLP BIBREF6 . Going one step further, emotion cause extraction can reveal important information about what causes a certain emotion and why there is an emotion change. In this section, we introduce related work on emotion analysis including emotion cause extraction.",
"In emotion analysis, we first need to determine the taxonomy of emotions. Researchers have proposed a list of primary emotions BIBREF7 , BIBREF8 , BIBREF9 . In this study, we adopt Ekman's emotion classification scheme BIBREF8 , which identifies six primary emotions, namely happiness, sadness, fear, anger, disgust and surprise, as known as the “Big6” scheme in the W3C Emotion Markup Language. This emotion classification scheme is agreed upon by most previous works in Chinese emotion analysis.",
"Existing work in emotion analysis mostly focuses on emotion classification BIBREF10 , BIBREF11 and emotion information extraction BIBREF12 . xu2012coarse used a coarse to fine method to classify emotions in Chinese blogs. gao2013joint proposed a joint model to co-train a polarity classifier and an emotion classifier. beck2014joint proposed a Multi-task Gaussian-process based method for emotion classification. chang2015linguistic used linguistic templates to predict reader's emotions. das2010finding used an unsupervised method to extract emotion feelers from Bengali blogs. There are other studies which focused on joint learning of sentiments BIBREF13 , BIBREF14 or emotions in tweets or blogs BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , and emotion lexicon construction BIBREF20 , BIBREF21 , BIBREF22 . However, the aforementioned work all focused on analysis of emotion expressions rather than emotion causes.",
"lee2010text first proposed a task on emotion cause extraction. They manually constructed a corpus from the Academia Sinica Balanced Chinese Corpus. Based on this corpus, chen2010emotion proposed a rule based method to detect emotion causes based on manually define linguistic rules. Some studies BIBREF23 , BIBREF24 , BIBREF25 extended the rule based method to informal text in Weibo text (Chinese tweets).",
"Other than rule based methods, russo2011emocause proposed a crowdsourcing method to construct a common-sense knowledge base which is related to emotion causes. But it is challenging to extend the common-sense knowledge base automatically. ghazi2015detecting used Conditional Random Fields (CRFs) to extract emotion causes. However, it requires emotion cause and emotion keywords to be in the same sentence. More recently, gui2016event proposed a multi-kernel based method to extract emotion causes through learning from a manually annotated emotion cause dataset.",
"[id=lq]Most existing work does not consider the relation between an emotion word and the cause of such an emotion, or they simply use the emotion word as a feature in their model learning. Since emotion cause extraction requires an understanding of a given piece of text in order to correctly identify the relation between the description of an event which causes an emotion and the expression of that emotion, it can essentially be considered as a QA task. In our work, we choose the memory network, which is designed to model the relation between a story and a query for QA systems BIBREF26 , BIBREF27 . Apart from its application in QA, memory network has also achieved great successes in other NLP tasks, such as machine translation BIBREF28 , sentiment analysis BIBREF29 or summarization BIBREF30 . To the best of our knowledge, this is the first work which uses memory network for emotion cause extraction."
],
[
"In this section, we will first define our task. [id=lq]Then, a brief introduction of memory network will be given, including its basic learning structure of memory network and deep architecture. Last, our modified deep memory network for emotion cause extraction will be presented."
],
[
"The formal definition of emotion cause extraction is given in BIBREF31 . In this task, a given document, which [id=lq]is a passage about an emotion event, contains an emotion word INLINEFORM0 and the cause of the event. The document is manually segmented in the clause level. For each clause INLINEFORM1 consisting of INLINEFORM2 words, the goal [id=lq]is to identify which clause contains the emotion cause. [id=lq]For data representation, we can map each word into a low dimensional embedding space, a.k.a word vector BIBREF32 . All the word vectors are stacked in a word embedding matrix INLINEFORM3 , where INLINEFORM4 is the dimension of word vector and INLINEFORM5 is the vocabulary size.",
"For example, the sentence, “I lost my phone yesterday, I feel so sad now.” shown in Figure 1, consists of two clauses. The first clause contains the emotion cause while the second clause [id=lq]expresses the emotion of sadness. [id=lq]Current methods to emotion cause extraction cannot handle complex sentence structures where the expression of an emotion and its cause are not adjacent. We envision that the memory network can [id=lq]better model the relation between [id=lq]a emotion word and [id=lq]its emotion causes in such complex sentence structures. In our approach, we only select the clause with the highest probability to be [id=lq] thean emotion cause in each document."
],
[
"We first present a basic memory network model for emotion cause extraction (shown in Figure 2). Given a clause INLINEFORM0 , and an emotion word, we [id=lq]first obtain the emotion word's representation in an embedding space[id=lq], denoted by INLINEFORM1 . For the clause, [id=lq]let the embedding representations of the words be denoted by INLINEFORM2 . Here, both INLINEFORM3 and INLINEFORM4 [id=lq]are defined in INLINEFORM5 . Then, we use the inner product to evaluate the correlation between each word [id=lq] INLINEFORM6 in a clause and the emotion word, denoted as INLINEFORM7 : DISPLAYFORM0 ",
"We then normalize the value of INLINEFORM0 to INLINEFORM1 using a softmax function, denoted by INLINEFORM2 [id=lq]as: DISPLAYFORM0 ",
"where INLINEFORM0 is the length of the clause. [id=lq] INLINEFORM1 also serves as the size of the memory. Obviously, INLINEFORM2 and INLINEFORM3 . [id=lq] INLINEFORM4 can serve as an attention weight to measure the importance of each word in our model.",
"Then, a sum over the word embedding INLINEFORM0 , weighted by the attention vector form the output of the memory network for the prediction of INLINEFORM1 : DISPLAYFORM0 ",
"The final prediction is an output from a softmax function, denoted as INLINEFORM0 : DISPLAYFORM0 ",
"Usually, INLINEFORM0 is a INLINEFORM1 weight matrix and INLINEFORM2 is the transposition. Since the answer in our task is a simple “yes” or “no”, we use a INLINEFORM3 matrix for INLINEFORM4 . As the distance between a clause and an emotion words is a very important feature according to BIBREF31 , we simply add this distance into the softmax function as an additional feature in our work.",
"The basic model can be extended to deep architecture consisting of multiple layers to handle INLINEFORM0 hop operations. The network is stacked as [id=lq]follows:",
"For hop 1, the query is INLINEFORM0 and the prediction vector is INLINEFORM1 ;",
"For hop INLINEFORM0 , the query is the prediction vector of the previous hop and the prediction vector is INLINEFORM1 ;",
"The output vector is at the top of the network. It is a softmax function on the prediction vector from hop INLINEFORM0 : INLINEFORM1 .",
"The illustration of a deep memory network with three layers is shown in Figure 3. Since [id=lq]a memory network models the emotion cause at a fine-grained level, each word has a corresponding weight to measure its importance in this task. Comparing [id=lq]to previous approaches [id=lq]in emotion cause extraction which are [id=lq]mostly based [id=lq]on manually defined rules or linguistic features, [id=lq]a memory network is a more principled way to identify the emotion cause from text. However, the basic [id=lq]memory network model [id=lq]does not capture the sequential information in context which is important in emotion cause extraction."
],
[
"It is often the case that the meaning of a word is determined by its context, such as the previous word and the following word. [id=lq]Also, negations and emotion transitions are context sensitive. However, the memory network described in Section SECREF3 has only one memory slot with size INLINEFORM0 to represent a clause, where INLINEFORM1 is the dimension of a word embedding and INLINEFORM2 is the length of a clause. It means that when the memory network models a clause, it only considers each word separately.",
"In order to capture [id=lq]context information for clauses, we propose a new architecture which contains more memory slot to model the context with a convolutional operation. The basic architecture of Convolutional Multiple-Slot Memory Network (in short: ConvMS-Memnet) is shown in Figure 4.",
"Considering the text length is usually short in the dataset used here for emotion cause extraction, we set the size of the convolutional kernel to 3. That is, the weight of word INLINEFORM0 [id=lq]in the INLINEFORM1 -th position considers both the previous word INLINEFORM2 and the following word INLINEFORM3 by a convolutional operation: DISPLAYFORM0 ",
"For the first and the last word in a clause, we use zero padding, INLINEFORM0 , where INLINEFORM1 is the length of a clause. Then, the attention [id=lq]weightsignal for each word position in the clause is [id=lq]now defined as: DISPLAYFORM0 ",
"Note that we obtain the attention for each position rather than each word. It means that the corresponding attention for the INLINEFORM0 -th word in the previous convolutional slot should be INLINEFORM1 . Hence, there are three prediction output vectors, namely, INLINEFORM2 , INLINEFORM3 , INLINEFORM4 : DISPLAYFORM0 ",
"At last, we concatenate the three vectors as INLINEFORM0 for the prediction by a softmax function: DISPLAYFORM0 ",
"Here, the size of INLINEFORM0 is INLINEFORM1 . Since the prediction vector is a concatenation of three outputs. We implement a concatenation operation rather than averaging or other operations because the parameters in different memory slots can be updated [id=lq]respectively in this way by back propagation. The concatenation of three output vectors forms a sequence-level feature which can be used in the training. Such a feature is important especially [id=lq]when the size of annotated training data is small.",
"For deep architecture with multiple layer[id=lq]s training, the network is more [id=lq]complex (shown in Figure 5).",
"For the first layer, the query is an embedding of the emotion word, INLINEFORM0 .",
"In the next layer, there are three input queries since the previous layer has three outputs: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 . So, for the INLINEFORM3 -th layer ( INLINEFORM4 ), we need to re-define the weight function (5) as:",
"In the last layer, [id=lq]the concatenation of the three prediction vectors form the final prediction vector to generate the answer.",
"For model training, we use stochastic gradient descent and back propagation to optimize the loss function. Word embeddings are learned using a skip-gram model. The size of the word embedding is 20 since the vocabulary size in our dataset is small. The dropout is set to 0.4."
],
[
"We first presents the experimental settings and then report the results in this section."
],
[
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause.",
"",
"[id=lq]Details of the corpus are shown in Table 1. The metrics we used in evaluation follows lee2010text. It is commonly accepted so that we can compare our results with others. If a proposed emotion cause clause covers the annotated answer, the word sequence is considered correct. The precision, recall, and F-measure are defined by INLINEFORM0 ",
"In the experiments, we randomly select 90% of the dataset as training data and 10% as testing data. In order to obtain statistically credible results, we evaluate our method and baseline methods 25 times with different train/test splits."
],
[
"We compare with the following baseline methods:",
"RB (Rule based method): The rule based method proposed in BIBREF33 .",
"CB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . We use the Chinese Emotion Cognition Lexicon BIBREF35 as the common-sense knowledge base. The lexicon contains more than 5,000 kinds of emotion stimulation and their corresponding reflection words.",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 . It takes rules and facts in a knowledge base as features for classifier training. We train a SVM using features extracted from the rules defined in BIBREF33 and the Chinese Emotion Cognition Lexicon BIBREF35 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31 ",
"Word2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.",
"Multi-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. We use the best performance reported in their paper.",
"CNN: The convolutional neural network for sentence classification BIBREF5 .",
"Memnet: The deep memory network described in Section SECREF3 . Word embeddings are pre-trained by skip-grams. The number of hops is set to 3.",
"ConvMS-Memnet: The convolutional multiple-slot deep memory network we proposed in Section SECREF13 . Word embeddings are pre-trained by skip-grams. The number of hops is 3 in our experiments.",
"Table 2 shows the evaluation results. The rule based RB gives fairly high precision but with low recall. CB, the common-sense based method, achieves the highest recall. Yet, its precision is the worst. RB+CB, the combination of RB and CB gives higher the F-measure But, the improvement of 1.27% is only marginal compared to RB.",
"For machine learning methods, RB+CB+ML uses both rules and common-sense knowledge as features to train a machine learning classifier. It achieves F-measure of 0.5597, outperforming RB+CB. Both SVM and word2vec are word feature based methods and they have similar performance. For word2vec, even though word representations are obtained from the SINA news raw corpus, it still performs worse than SVM trained using n-gram features only. The multi-kernel method BIBREF31 is the best performer among the baselines because it considers context information in a structured way. It models text by its syntactic tree and also considers an emotion lexicon. Their work shows that the structure information is important for the emotion cause extraction task.",
"Naively applying the original deep memory network or convolutional network for emotion cause extraction outperforms all the baselines except the convolutional multi-kernel method. However, using our proposed ConvMS-Memnet architecture, we manage to boost the performance by 11.54% in precision, 4.84% in recall and 8.24% in F-measure respectively when compared to Memnet. The improvement is very significant with INLINEFORM0 -value less than 0.01 in INLINEFORM1 -test. The ConvMS-Memnet also outperforms the previous best-performing method, multi-kernel, by 3.01% in F-measure. It shows that by effectively capturing context information, ConvMS-Memnet is able to identify the emotion cause better compared to other methods."
],
[
"To gain better insights into our proposed ConvMS-Memnet, we conduct further experiments to understand the impact on performance by using: 1) pre-trained or randomly initialized word embedding; 2) multiple hops; 3) attention visualizations; 4) more training epochs.",
"In our ConvMS-Memnet, we use pre-trained word embedding as the input. The embedding maps each word into a lower dimensional real-value vector as its representation. Words sharing similar meanings should have similar representations. It enables our model to deal with synonyms more effectively. The question is, “can we train the network without using pre-trained word embeddings?\". We initialize word vectors randomly, and use an embedding matrix to update the word vectors in the training of the network simultaneously. Comparison results are shown in Table 3. It can be observed that pre-trained word embedding gives 2.59% higher F-measure compared to random initialization. This is partly due to the limited size of our training data. Hence using word embedding trained from other much larger corpus gives better results.",
"It is widely acknowledged that computational models using deep architecture with multiple layers have better ability to learn data representations with multiple levels of abstractions. In this section, we evaluate the power of multiple hops in this task. We set the number of hops from 1 to 9 with 1 standing for the simplest single layer network shown in Figure 4. The more hops are stacked, the more complicated the model is. Results are shown in Table 4. The single layer network has achieved a competitive performance. With the increasing number of hops, the performance improves. However, when the number of hops is larger than 3, the performance decreases due to overfitting. Since the dataset for this task is small, more parameters will lead to overfitting. As such, we choose 3 hops in our final model since it gives the best performance in our experiments.",
"Essentially, memory network aims to measure the weight of each word in the clause with respect to the emotion word. The question is, will the model really focus on the words which describe the emotion cause? We choose one example to show the attention results in Table 5:",
"Ex.2 家人/family 的/'s 坚持/insistence 更/more 让/makes 人/people 感动/touched",
"In this example, the cause of the emotion “touched” is “insistence”. We show in Table 5 the distribution of word-level attention weights in different hops of memory network training. We can observe that in the first two hops, the highest attention weights centered on the word “more\". However, from the third hop onwards, the highest attention weight moves to the word sub-sequence centred on the word “insistence”. This shows that our model is effective in identifying the most important keyword relating to the emotion cause. Also, better results are obtained using deep memory network trained with at least 3 hops. This is consistent with what we observed in Section UID45 .",
"In order to evaluate the quality of keywords extracted by memory networks, we define a new metric on the keyword level of emotion cause extraction. The keyword is defined as the word which obtains the highest attention weight in the identified clause. If the keywords extracted by our algorithm is located within the boundary of annotation, it is treated as correct. Thus, we can obtain the precision, recall, and F-measure by comparing the proposed keywords with the correct keywords by: INLINEFORM0 ",
"Since the reference methods do not focus on the keywords level, we only compare the performance of Memnet and ConvMS-Memnet in Table 6. It can be observed that our proposed ConvMS-Memnet outperforms Memnet by 5.6% in F-measure. It shows that by capturing context features, ConvMS-Memnet is able to identify the word level emotion cause better compare to Memnet.",
"In our model, the training epochs are set to 20. In this section, we examine the testing error using a case study. Due to the page length limit, we only choose one example from the corpus. The text below has four clauses:",
"Ex.3 45天,对于失去儿子的他们是多么的漫长,宝贝回家了,这个春节是多么幸福。",
"Ex.3 45 days, it is long time for the parents who lost their baby. If the baby comes back home, they would become so happy in this Spring Festival.",
"In this example, the cause of emotion “happy” is described in the third clause.",
"We show in Table 7 the probability of each clause containing an emotion cause in different training epochs. It is interesting to see that our model is able to detect the correct clause with only 5 epochs. With the increasing number of training epochs, the probability associated with the correct clause increases further while the probabilities of incorrect clauses decrease generally."
],
[
"We have shown in Section UID47 a simple example consisting of only four clauses from which our model can identify the clause containing the emotion cause correctly. We notice that for some complex text passages which contain long distance dependency relations, negations or emotion transitions, our model may have a difficulty in detecting the correct clause containing the emotion causes. It is a challenging task to properly model the discourse relations among clauses. In the future, we will explore different network architecture with consideration of various discourse relations possibly through transfer learning of larger annotated data available for other tasks.",
"Another shortcoming of our model is that, the answer generated from our model is simply “yes” or “no”. The main reason is that the size of the annotated corpus is too small to train a model which can output natural language answers in full sentences. Ideally, we would like to develop a model which can directly give the cause of an emotion expressed in text. However, since the manual annotation of data is too expensive for this task, we need to explore feasible ways to automatically collect annotate data for emotion cause detection. We also need to study effective evaluation mechanisms for such QA systems."
],
[
"In this [id=lq]work, we [id=lq]treat emotion cause extraction as a QA task and propose a new model based on deep memory networks for identifying [id=lq]the emotion causes for an emotion expressed in text. [id=lq]The key property of this approach is the use of context information in the learning process which is ignored in the original memory network. Our new [id=lq]memory network architecture is able [id=lq]to store context in different memory slots to capture context information [id=lq]in proper sequence by convolutional operation. Our model achieves the state-of-the-art performance on a dataset for emotion cause detection when compared to a number of competitive baselines. In the future, we will explore effective ways [id=lq]to model discourse relations among clauses and develop a QA system which can directly output the cause of emotions as answers."
],
[
"This work was supported by the National Natural Science Foundation of China 61370165, U1636103, 61632011, 61528302, National 863 Program of China 2015AA015405, Shenzhen Foundational Research Funding JCYJ20150625142543470, JCYJ20170307150024907 and Guangdong Provincial Engineering Technology Research Center for Data Science 2016KF09."
]
],
"section_name": [
"Introduction",
"Related Work",
"Our Approach",
"Task Definition",
"Memory Network",
"Convolutional Multiple-Slot Deep Memory Network",
"Experiments and Evaluation",
"Experimental Setup and Dataset",
"Evaluation and Comparison",
"More Insights into the ConvMS-Memnet",
"Limitations",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"06ab48e11191f95951f24427d07479a4b1ec15cb",
"5662bf7a7872f9ee9c8e04e1930f02477eb0532e",
"7c0233059d9a29e66b5be4c99cc6b53f57d0e635"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"extractive_spans": [],
"free_form_answer": "0.6955",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 2 shows the evaluation results. The rule based RB gives fairly high precision but with low recall. CB, the common-sense based method, achieves the highest recall. Yet, its precision is the worst. RB+CB, the combination of RB and CB gives higher the F-measure But, the improvement of 1.27% is only marginal compared to RB.",
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"extractive_spans": [],
"free_form_answer": "0.6955",
"highlighted_evidence": [
"Table 2 shows the evaluation results. ",
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"extractive_spans": [],
"free_form_answer": "69.55",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison with existing methods."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"317f81267d7147028eb81378003fb9865f1200c6",
"aeb38f6c44f2949cdfa9cae6f9a585956e4c28e3",
"fabe532bb169318253cf5522492cfc79053919c9"
],
"answer": [
{
"evidence": [
"We compare with the following baseline methods:",
"RB (Rule based method): The rule based method proposed in BIBREF33 .",
"CB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . We use the Chinese Emotion Cognition Lexicon BIBREF35 as the common-sense knowledge base. The lexicon contains more than 5,000 kinds of emotion stimulation and their corresponding reflection words.",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 . It takes rules and facts in a knowledge base as features for classifier training. We train a SVM using features extracted from the rules defined in BIBREF33 and the Chinese Emotion Cognition Lexicon BIBREF35 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31",
"Word2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.",
"Multi-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. We use the best performance reported in their paper.",
"CNN: The convolutional neural network for sentence classification BIBREF5 .",
"Memnet: The deep memory network described in Section SECREF3 . Word embeddings are pre-trained by skip-grams. The number of hops is set to 3."
],
"extractive_spans": [
"RB (Rule based method)",
"CB (Common-sense based method)",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base)",
"SVM",
"Word2vec",
"Multi-kernel",
"CNN",
"Memnet"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with the following baseline methods:\n\nRB (Rule based method): The rule based method proposed in BIBREF33 .\n\nCB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . We use the Chinese Emotion Cognition Lexicon BIBREF35 as the common-sense knowledge base. The lexicon contains more than 5,000 kinds of emotion stimulation and their corresponding reflection words.\n\nRB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 . It takes rules and facts in a knowledge base as features for classifier training. We train a SVM using features extracted from the rules defined in BIBREF33 and the Chinese Emotion Cognition Lexicon BIBREF35 .\n\nSVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31\n\nWord2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.\n\nMulti-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. We use the best performance reported in their paper.\n\nCNN: The convolutional neural network for sentence classification BIBREF5 .\n\nMemnet: The deep memory network described in Section SECREF3 . Word embeddings are pre-trained by skip-grams. The number of hops is set to 3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluation and Comparison",
"We compare with the following baseline methods:",
"RB (Rule based method): The rule based method proposed in BIBREF33 .",
"CB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . We use the Chinese Emotion Cognition Lexicon BIBREF35 as the common-sense knowledge base. The lexicon contains more than 5,000 kinds of emotion stimulation and their corresponding reflection words.",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 . It takes rules and facts in a knowledge base as features for classifier training. We train a SVM using features extracted from the rules defined in BIBREF33 and the Chinese Emotion Cognition Lexicon BIBREF35 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31",
"Word2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.",
"Multi-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. We use the best performance reported in their paper.",
"CNN: The convolutional neural network for sentence classification BIBREF5 ."
],
"extractive_spans": [
"RB (Rule based method)",
"CB (Common-sense based method)",
"RB+CB+ML",
"SVM",
"Word2vec",
"Multi-kernel",
"CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Evaluation and Comparison\nWe compare with the following baseline methods:\n\nRB (Rule based method): The rule based method proposed in BIBREF33 .\n\nCB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 .",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31\n\nWord2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.\n\nMulti-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. ",
"CNN: The convolutional neural network for sentence classification BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"RB (Rule based method): The rule based method proposed in BIBREF33 .",
"CB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . We use the Chinese Emotion Cognition Lexicon BIBREF35 as the common-sense knowledge base. The lexicon contains more than 5,000 kinds of emotion stimulation and their corresponding reflection words.",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 . It takes rules and facts in a knowledge base as features for classifier training. We train a SVM using features extracted from the rules defined in BIBREF33 and the Chinese Emotion Cognition Lexicon BIBREF35 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. It is a baseline previously used in BIBREF24 , BIBREF31",
"Word2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.",
"Multi-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause. We use the best performance reported in their paper.",
"CNN: The convolutional neural network for sentence classification BIBREF5 ."
],
"extractive_spans": [
"RB (Rule based method)",
"CB (Common-sense based method)",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base)",
"SVM classifier using the unigram, bigram and trigram features",
"SVM classifier using word representations learned by Word2vec",
"multi-kernel method BIBREF31",
" convolutional neural network for sentence classification BIBREF5"
],
"free_form_answer": "",
"highlighted_evidence": [
"RB (Rule based method): The rule based method proposed in BIBREF33 .\n\nCB (Common-sense based method): This is the knowledge based method proposed by BIBREF34 . ",
"RB+CB+ML (Machine learning method trained from rule-based features and facts from a common-sense knowledge base): This methods was previously proposed for emotion cause classification in BIBREF36 .",
"SVM: This is a SVM classifier using the unigram, bigram and trigram features. ",
"Word2vec: This is a SVM classifier using word representations learned by Word2vec BIBREF32 as features.",
"Multi-kernel: This is the state-of-the-art method using the multi-kernel method BIBREF31 to identify the emotion cause.",
"CNN: The convolutional neural network for sentence classification BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0c78953c4b1b021c21b0280503e931e36688271b",
"375f4af2e9a27cff0ad86627d9f6ca87212b4713",
"dc56cfd24a9cbc2e711120a867870e4be2f80da5"
],
"answer": [
{
"evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause."
],
"extractive_spans": [
"simplified Chinese emotion cause corpus BIBREF31"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause."
],
"extractive_spans": [
"a simplified Chinese emotion cause corpus BIBREF31"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause."
],
"extractive_spans": [
"Chinese emotion cause corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31 , the only publicly available dataset on this task to the best of our knowledge."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2bb38dcfab0dcc2f7541a334a2ad84ee34b272fb",
"cdc2fa54e952ba32949ca756aafcc82906f4b6fa"
],
"answer": [
{
"evidence": [
"Usually, INLINEFORM0 is a INLINEFORM1 weight matrix and INLINEFORM2 is the transposition. Since the answer in our task is a simple “yes” or “no”, we use a INLINEFORM3 matrix for INLINEFORM4 . As the distance between a clause and an emotion words is a very important feature according to BIBREF31 , we simply add this distance into the softmax function as an additional feature in our work."
],
"extractive_spans": [],
"free_form_answer": "the distance between a clause and an emotion words",
"highlighted_evidence": [
"As the distance between a clause and an emotion words is a very important feature according to BIBREF31 , we simply add this distance into the softmax function as an additional feature in our work."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"078b8c6aad06ca5f4870a54e3949bae011d9f6e5",
"5a419edaa8d756d471b0e4ed70700c392298cdee"
],
"answer": [
{
"evidence": [
"Note that we obtain the attention for each position rather than each word. It means that the corresponding attention for the INLINEFORM0 -th word in the previous convolutional slot should be INLINEFORM1 . Hence, there are three prediction output vectors, namely, INLINEFORM2 , INLINEFORM3 , INLINEFORM4 : DISPLAYFORM0",
"At last, we concatenate the three vectors as INLINEFORM0 for the prediction by a softmax function: DISPLAYFORM0",
"Here, the size of INLINEFORM0 is INLINEFORM1 . Since the prediction vector is a concatenation of three outputs. We implement a concatenation operation rather than averaging or other operations because the parameters in different memory slots can be updated [id=lq]respectively in this way by back propagation. The concatenation of three output vectors forms a sequence-level feature which can be used in the training. Such a feature is important especially [id=lq]when the size of annotated training data is small."
],
"extractive_spans": [],
"free_form_answer": "Concatenation of three prediction output vectors",
"highlighted_evidence": [
"Hence, there are three prediction output vectors, namely, INLINEFORM2 , INLINEFORM3 , INLINEFORM4 : DISPLAYFORM0\n\nAt last, we concatenate the three vectors as INLINEFORM0 for the prediction by a softmax function: DISPLAYFORM0\n\nHere, the size of INLINEFORM0 is INLINEFORM1 . ",
"The concatenation of three output vectors forms a sequence-level feature which can be used in the training. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Here, the size of INLINEFORM0 is INLINEFORM1 . Since the prediction vector is a concatenation of three outputs. We implement a concatenation operation rather than averaging or other operations because the parameters in different memory slots can be updated [id=lq]respectively in this way by back propagation. The concatenation of three output vectors forms a sequence-level feature which can be used in the training. Such a feature is important especially [id=lq]when the size of annotated training data is small."
],
"extractive_spans": [
"concatenation of three output vectors"
],
"free_form_answer": "",
"highlighted_evidence": [
"The concatenation of three output vectors forms a sequence-level feature which can be used in the training."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what was their system's f1 score?",
"what were the baselines?",
"what emotion cause dataset was used?",
"what lexical features are extracted?",
"what word level sequences features are extracted?"
],
"question_id": [
"9ea3669528c2b295f21770cb7f70d0c4b4389223",
"9863f5765ba70f7ff336a580346ef70205abbbd8",
"ced63053eb631c78a4ddd8c85ec0f3323a631a54",
"f13a5b6a67a9b10fde68e8b33792879b8146102c",
"67c16ba64fe27838b1034d15194c07a9c98cdebe"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example of emotion cause extraction based on the QA framework.",
"Figure 2: A single layer memory network.",
"Figure 3: Deep memory network with three computational layers (hops).",
"Figure 4: A single layer ConvMS-Memnet.",
"Figure 5: ConvMS-Memnet with three computational layers (hops).",
"Table 2: Comparison with existing methods.",
"Table 1: Details of the dataset.",
"Table 3: Comparison of using pre-trained or randomly initialized word embedding.",
"Table 4: Performance with different number of hops in ConvMS-Memnet.",
"Table 5: The distribution of attention in different hops.",
"Table 6: Comparison of word level emotion cause extraction.",
"Table 7: The probability of a clause containing the emotion cause in different iterations in the multipleslot memory network."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Figure5-1.png",
"6-Table2-1.png",
"6-Table1-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Table7-1.png"
]
} | [
"what was their system's f1 score?",
"what lexical features are extracted?",
"what word level sequences features are extracted?"
] | [
[
"1708.05482-6-Table2-1.png",
"1708.05482-Evaluation and Comparison-10"
],
[
"1708.05482-Memory Network-5"
],
[
"1708.05482-Convolutional Multiple-Slot Deep Memory Network-6"
]
] | [
"69.55",
"the distance between a clause and an emotion words",
"Concatenation of three prediction output vectors"
] | 156 |
1707.05589 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset. | {
"paragraphs": [
[
"The scientific process by which the deep learning research community operates is guided by empirical studies that evaluate the relative quality of models. Complicating matters, the measured performance of a model depends not only on its architecture (and data), but it can strongly depend on hyperparameter values that affect learning, regularisation, and capacity. This hyperparameter dependence is an often inadequately controlled source of variation in experiments, which creates a risk that empirically unsound claims will be reported.",
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 . We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with fine grain control over regularisation and learning hyperparameters.",
"Once hyperparameters have been properly controlled for, we find that LSTMs outperform the more recent models, contra the published claims. Our result is therefore a demonstration that replication failures can happen due to poorly controlled hyperparameter variation, and this paper joins other recent papers in warning of the under-acknowledged existence of replication failure in deep learning BIBREF2 , BIBREF3 . However, we do show that careful controls are possible, albeit at considerable computational cost.",
"Several remarks can be made in light of these results. First, as (conditional) language models serve as the central building block of many tasks, including machine translation, there is little reason to expect that the problem of unreliable evaluation is unique to the tasks discussed here. However, in machine translation, carefully controlling for hyperparameter effects would be substantially more expensive because standard datasets are much larger. Second, the research community should strive for more consensus about appropriate experimental methodology that balances costs of careful experimentation with the risks associated with false claims. Finally, more attention should be paid to hyperparameter sensitivity. Models that introduce many new hyperparameters or which perform well only in narrow ranges of hyperparameter settings should be identified as such as part of standard publication practice."
],
[
"Our focus is on three recurrent architectures:",
"Our aim is strictly to do better model comparisons for these architectures and we thus refrain from including techniques that are known to push perplexities even lower, but which are believed to be largely orthogonal to the question of the relative merits of these recurrent cells. In parallel work with a remarkable overlap with ours, BIBREF5 demonstrate the utility of adding a Neural Cache BIBREF6 . Building on their work, BIBREF7 show that Dynamic Evaluation BIBREF8 contributes similarly to the final perplexity.",
"As pictured in Fig. FIGREF1 , our models with LSTM or NAS cells have all the standard components: an input embedding lookup table, recurrent cells stacked as layers with additive skip connections combining outputs of all layers to ease optimisation. There is an optional down-projection whose presence is governed by a hyperparameter from this combined output to a smaller space which reduces the number of output embedding parameters. Unless otherwise noted, input and output embeddings are shared, see BIBREF9 and BIBREF10 .",
"Dropout is applied to feedforward connections denoted by dashed arrows in the figure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states where the same mask is used for all time steps in the sequence.",
"RHN based models are typically conceived of as a single horizontal “highway” to emphasise how the recurrent state is processed through time. In Fig. FIGREF1 , we choose to draw their schema in a way that makes the differences from LSTMs immediately apparent. In a nutshell, the RHN state is passed from the topmost layer to the lowest layer of the next time step. In contrast, each LSTM layer has its own recurrent connection and state.",
"The same dropout variants are applied to all three model types, with the exception of intra-layer dropout which does not apply to RHNs since only the recurrent state is passed between the layers. For the recurrent states, all architectures use either variational dropout BIBREF11 or recurrent dropout BIBREF12 , unless explicitly noted otherwise."
],
[
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 . Following common practice, we use the first 90 million characters for training, and the remaining 10 million evenly split between validation and test."
],
[
"When training word level models we follow common practice and use a batch size of 64, truncated backpropagation with 35 time steps, and we feed the final states from the previous batch as the initial state of the subsequent one. At the beginning of training and test time, the model starts with a zero state. To bias the model towards being able to easily start from such a state at test time, during training, with probability 0.01 a constant zero state is provided as the initial state.",
"Optimisation is performed by Adam BIBREF17 with INLINEFORM0 but otherwise default parameters ( INLINEFORM1 , INLINEFORM2 ). Setting INLINEFORM3 so turns off the exponential moving average for the estimates of the means of the gradients and brings Adam very close to RMSProp without momentum, but due to Adam's bias correction, larger learning rates can be used.",
"Batch size is set to 64. The learning rate is multiplied by 0.1 whenever validation performance does not improve ever during 30 consecutive checkpoints. These checkpoints are performed after every 100 and 200 optimization steps for Penn Treebank and Wikitext-2, respectively.",
"For character level models (i.e. Enwik8), the differences are: truncated backpropagation is performed with 50 time steps. Adam's parameters are INLINEFORM0 , INLINEFORM1 . Batch size is 128. Checkpoints are only every 400 optimisation steps and embeddings are not shared."
],
[
"For evaluation, the checkpoint with the best validation perplexity found by the tuner is loaded and the model is applied to the test set with a batch size of 1. For the word based datasets, using the training batch size makes results worse by 0.3 PPL while Enwik8 is practically unaffected due to its evaluation and training sets being much larger. Preliminary experiments indicate that MC averaging would bring a small improvement of about 0.4 in perplexity and 0.005 in bits per character, similar to the results of BIBREF11 , while being a 1000 times more expensive which is prohibitive on larger datasets. Therefore, throughout we use the mean-field approximation for dropout at test time."
],
[
"Hyperparameters are optimised by Google Vizier BIBREF19 , a black-box hyperparameter tuner based on batched GP bandits using the expected improvement acquisition function BIBREF20 . Tuners of this nature are generally more efficient than grid search when the number of hyperparameters is small. To keep the problem tractable, we restrict the set of hyperparameters to learning rate, input embedding ratio, input dropout, state dropout, output dropout, weight decay. For deep LSTMs, there is an extra hyperparameter to tune: intra-layer dropout. Even with this small set, thousands of evaluations are required to reach convergence.",
"Motivated by recent results from BIBREF21 , we compare models on the basis of the total number of trainable parameters as opposed to the number of hidden units. The tuner is given control over the presence and size of the down-projection, and thus over the tradeoff between the number of embedding vs. recurrent cell parameters. Consequently, the cells' hidden size and the embedding size is determined by the actual parameter budget, depth and the input embedding ratio hyperparameter.",
"For Enwik8 there are relatively few parameters in the embeddings since the vocabulary size is only 205. Here we choose not to share embeddings and to omit the down-projection unconditionally."
],
[
"We tested LSTMs of various depths and an RHN of depth 5 with parameter budgets of 10 and 24 million matching the sizes of the Medium and Large LSTMs by BIBREF18 . The results are summarised in Table TABREF9 .",
"Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 ."
],
[
"Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table TABREF14 , we report numbers for both approaches. All our results are well below the previous state of the are for models without dynamic evaluation or caching. That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model.",
"Shallow LSTMs do especially well here. Deeper models have gradually degrading perplexity, with RHNs lagging all of them by a significant margin. NAS is not quite up there with the LSTM suggesting its architecture might have overfitted to Penn Treebank, but data for deeper variants would be necessary to draw this conclusion."
],
[
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task."
],
[
"On two of the three datasets, we improved previous results substantially by careful model specification and hyperparameter optimisation, but the improvement for RHNs is much smaller compared to that for LSTMs. While it cannot be ruled out that our particular setup somehow favours LSTMs, we believe it is more likely that this effect arises due to the original RHN experimental condition having been tuned more extensively (this is nearly unavoidable during model development).",
"Naturally, NAS benefitted only to a limited degree from our tuning, since the numbers of BIBREF1 were already produced by employing similar regularisation methods and a grid search. The small edge can be attributed to the suboptimality of grid search (see Section SECREF23 ).",
"In summary, the three recurrent cell architectures are closely matched on all three datasets, with minuscule differences on Enwik8 where regularisation matters the least. These results support the claims of BIBREF21 , that capacities of various cells are very similar and their apparent differences result from trainability and regularisation. While comparing three similar architectures cannot prove this point, the inclusion of NAS certainly gives it more credence. This way we have two of the best human designed and one machine optimised cell that was the top performer among thousands of candidates."
],
[
"Down-projection was found to be very beneficial by the tuner for some depth/budget combinations. On Penn Treebank, it improved results by about 2–5 perplexity points at depths 1 and 2 at 10M, and depth 1 at 24M, possibly by equipping the recurrent cells with more capacity. The very same models benefited from down-projection on Wikitext-2, but even more so with gaps of about 10–18 points which is readily explained by the larger vocabulary size.",
"We further measured the contribution of other features of the models in a series of experiments. See Table TABREF22 . To limit the number of resource used, in these experiments only individual features were evaluated (not their combinations) on Penn Treebank at the best depth for each architecture (LSTM or RHN) and parameter budget (10M or 24M) as determined above.",
"First, we untied input and output embeddings which made perplexities worse by about 6 points across the board which is consistent with the results of BIBREF9 .",
"Second, without variational dropout the RHN models suffer quite a bit since there remains no dropout at all in between the layers. The deep LSTM also sees a similar loss of perplexity as having intra-layer dropout does not in itself provide enough regularisation.",
"Third, we were also interested in how recurrent dropout BIBREF12 would perform in lieu of variational dropout. Dropout masks were shared between time steps in both methods, and our results indicate no consistent advantage to either of them."
],
[
"With a large number of hyperparameter combinations evaluated, the question of how much the tuner overfits arises. There are multiple sources of noise in play,",
"non-deterministic ordering of floating-point operations in optimised linear algebra routines,",
"different initialisation seeds,",
"the validation and test sets being finite samples from a infinite population.",
"To assess the severity of these issues, we conducted the following experiment: models with the best hyperparameter settings for Penn Treebank and Wikitext-2 were retrained from scratch with various initialisation seeds and the validation and test scores were recorded. If during tuning, a model just got a lucky run due to a combination of UID19 and UID20 , then retraining with the same hyperparameters but with different seeds would fail to reproduce the same good results.",
"There are a few notable things about the results. First, in our environment (Tensorflow with a single GPU) even with the same seed as the one used by the tuner, the effect of UID19 is almost as large as that of UID19 and UID20 combined. Second, the variance induced by UID19 and UID20 together is roughly equivalent to an absolute difference of 0.4 in perplexity on Penn Treebank and 0.5 on Wikitext-2. Third, the validation perplexities of the best checkpoints are about one standard deviation lower than the sample mean of the reruns, so the tuner could fit the noise only to a limited degree.",
"Because we treat our corpora as a single sequence, test set contents are not i.i.d., and we cannot apply techniques such as the bootstrap to assess UID21 . Instead, we looked at the gap between validation and test scores as a proxy and observed that it is very stable, contributing variance of 0.12–0.3 perplexity to the final results on Penn Treebank and Wikitext-2, respectively.",
"We have not explicitly dealt with the unknown uncertainty remaining in the Gaussian Process that may affect model comparisons, apart from running it until apparent convergence. All in all, our findings suggest that a gap in perplexity of 1.0 is a statistically robust difference between models trained in this way on these datasets. The distribution of results was approximately normal with roughly the same variance for all models, so we still report numbers in a tabular form instead of plotting the distribution of results, for example in a violin plot BIBREF26 ."
],
[
"To further verify that the best hyperparameter setting found by the tuner is not a fluke, we plotted the validation loss against the hyperparameter settings. Fig. FIGREF24 shows one such typical plot, for a 4-layer LSTM. We manually restricted the ranges around the best hyperparameter values to around 15–25% of the entire tuneable range, and observed that the vast majority of settings in that neighbourhood produced perplexities within 3.0 of the best value. Widening the ranges further leads to quickly deteriorating results.",
"Satisfied that the hyperparameter surface is well behaved, we considered whether the same results could have possibly been achieved with a simple grid search. Omitting input embedding ratio because the tuner found having a down-projection suboptimal almost non-conditionally for this model, there remain six hyperparameters to tune. If there were 5 possible values on the grid for each hyperparameter (with one value in every 20% interval), then we would need INLINEFORM0 , nearly 8000 trials to get within 3.0 of the best perplexity achieved by the tuner in about 1500 trials."
],
[
"Normally, LSTMs have two independent gates controlling the retention of cell state and the admission of updates (Eq. EQREF26 ). A minor variant which reduces the number of parameters at the loss of some flexibility is to tie the input and forget gates as in Eq. . A possible middle ground that keeps the number of parameters the same but ensures that values of the cell state INLINEFORM0 remain in INLINEFORM1 is to cap the input gate as in Eq. . DISPLAYFORM0 ",
" Where the equations are based on the formulation of BIBREF27 . All LSTM models in this paper use the third variant, except those titled “Untied gates” and “Tied gates” in Table TABREF22 corresponding to Eq. EQREF26 and , respectively.",
"The results show that LSTMs are insensitive to these changes and the results vary only slightly even though more hidden units are allocated to the tied version to fill its parameter budget. Finally, the numbers suggest that deep LSTMs benefit from bounded cell states."
],
[
"During the transitional period when deep neural language models began to supplant their shallower predecessors, effect sizes tended to be large, and robust conclusions about the value of the modelling innovations could be made, even in the presence of poorly controlled “hyperparameter noise.” However, now that the neural revolution is in full swing, researchers must often compare competing deep architectures. In this regime, effect sizes tend to be much smaller, and more methodological care is required to produce reliable results. Furthermore, with so much work carried out in parallel by a growing research community, the costs of faulty conclusions are increased.",
"Although we can draw attention to this problem, this paper does not offer a practical methodological solution beyond establishing reliable baselines that can be the benchmarks for subsequent work. Still, we demonstrate how, with a huge amount of computation, noise levels of various origins can be carefully estimated and models meaningfully compared. This apparent tradeoff between the amount of computation and the reliability of results seems to lie at the heart of the matter. Solutions to the methodological challenges must therefore make model evaluation cheaper by, for instance, reducing the number of hyperparameters and the sensitivity of models to them, employing better hyperparameter optimisation strategies, or by defining “leagues” with predefined computational budgets for a single model representing different points on the tradeoff curve."
]
],
"section_name": [
"Introduction",
"Models",
"Datasets",
"Training details",
"Evaluation",
"Hyperparameter Tuning",
"Penn Treebank",
"Wikitext-2",
"Enwik8",
"Analysis",
"The Effect of Individual Features",
"Model Selection",
"Sensitivity",
"Tying LSTM gates",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"081ff5f95b69fccd56201c6c7914cad15d72ce53",
"34be1536f6c845811f241c0d3f75c3fbeab83add",
"60cda0dbf33886b2e576c9057358366aebd9a689"
],
"answer": [
{
"evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 . We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with fine grain control over regularisation and learning hyperparameters.",
"Our aim is strictly to do better model comparisons for these architectures and we thus refrain from including techniques that are known to push perplexities even lower, but which are believed to be largely orthogonal to the question of the relative merits of these recurrent cells. In parallel work with a remarkable overlap with ours, BIBREF5 demonstrate the utility of adding a Neural Cache BIBREF6 . Building on their work, BIBREF7 show that Dynamic Evaluation BIBREF8 contributes similarly to the final perplexity."
],
"extractive_spans": [
"Recurrent Highway Networks",
"NAS",
"BIBREF5"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 ",
"In parallel work with a remarkable overlap with ours, BIBREF5 demonstrate the utility of adding a Neural Cache BIBREF6 "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 .",
"Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table TABREF14 , we report numbers for both approaches. All our results are well below the previous state of the are for models without dynamic evaluation or caching. That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model.",
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task."
],
"extractive_spans": [
"BIBREF1",
"Neural Cache BIBREF6",
"BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 .",
"That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model.",
"This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 . We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with fine grain control over regularisation and learning hyperparameters.",
"Once hyperparameters have been properly controlled for, we find that LSTMs outperform the more recent models, contra the published claims. Our result is therefore a demonstration that replication failures can happen due to poorly controlled hyperparameter variation, and this paper joins other recent papers in warning of the under-acknowledged existence of replication failure in deep learning BIBREF2 , BIBREF3 . However, we do show that careful controls are possible, albeit at considerable computational cost."
],
"extractive_spans": [
"Recurrent Highway Networks",
"NAS "
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1",
"Once hyperparameters have been properly controlled for, we find that LSTMs outperform the more recent models, contra the published claims."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"795d808894cfe4a988bd10100b465a834611c3f7",
"7cc5b36b8b95c691557dbbe9f2b151b1eaeee36f",
"c137bd40ac6b059f4c030ecd95f7db00beed8644"
],
"answer": [
{
"evidence": [
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task.",
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 . Following common practice, we use the first 90 million characters for training, and the remaining 10 million evenly split between validation and test."
],
"extractive_spans": [
"slightly off the state of the art"
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art.",
"The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.",
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task."
],
"extractive_spans": [],
"free_form_answer": "1.30 and 1.31",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.",
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 . Following common practice, we use the first 90 million characters for training, and the remaining 10 million evenly split between validation and test.",
"FLOAT SELECTED: Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.",
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of BIBREF0 was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task."
],
"extractive_spans": [],
"free_form_answer": "1.30 BPC is their best result",
"highlighted_evidence": [
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 .",
"FLOAT SELECTED: Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.",
"In contrast to the previous datasets, our numbers on this task (reported in BPC, following convetion) are slightly off the state of the art."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"12420dee27cb307ade5c6c3830a6e036bfcbbf11",
"a0641ad66e3845aab4e912e7872c38897e86bbfb"
],
"answer": [
{
"evidence": [
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 . Following common practice, we use the first 90 million characters for training, and the remaining 10 million evenly split between validation and test.",
"We tested LSTMs of various depths and an RHN of depth 5 with parameter budgets of 10 and 24 million matching the sizes of the Medium and Large LSTMs by BIBREF18 . The results are summarised in Table TABREF9 .",
"Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 .",
"Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table TABREF14 , we report numbers for both approaches. All our results are well below the previous state of the are for models without dynamic evaluation or caching. That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model."
],
"extractive_spans": [],
"free_form_answer": "58.3 perplexity in PTB, and 65.9 perplexity in Wikitext-2",
"highlighted_evidence": [
"We compare models on three datasets. The smallest of them is the Penn Treebank corpus by BIBREF13 with preprocessing from BIBREF14 . We also include another word level corpus: Wikitext-2 by BIBREF15 . It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset BIBREF16 ",
"Penn Treebank\nWe tested LSTMs of various depths and an RHN of depth 5 with parameter budgets of 10 and 24 million matching the sizes of the Medium and Large LSTMs by BIBREF18 . The results are summarised in Table TABREF9 .\n\nNotably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 .",
"Wikitext-2\nWikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table TABREF14 , we report numbers for both approaches. All our results are well below the previous state of the are for models without dynamic evaluation or caching. That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in BIBREF1 .",
"Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table TABREF14 , we report numbers for both approaches. All our results are well below the previous state of the are for models without dynamic evaluation or caching. That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model."
],
"extractive_spans": [
"At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4",
"our best result, exp(4.188)"
],
"free_form_answer": "",
"highlighted_evidence": [
"At 24M, all depths obtain very similar results, reaching exp(4.065) [fixed,zerofill,precision=1] at depth 4.",
"That said, our best result, exp(4.188) [fixed,zerofill,precision=1] compares favourably even to the Neural Cache BIBREF6 whose innovations are fairly orthogonal to the base model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b6cc731ee5973740ec4c2c292cfec8dc5ddd883d",
"f9498c94e458ddbab7c6ae9a24a3cd3e8cadf606"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Dropout is applied to feedforward connections denoted by dashed arrows in the figure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states where the same mask is used for all time steps in the sequence.",
"The same dropout variants are applied to all three model types, with the exception of intra-layer dropout which does not apply to RHNs since only the recurrent state is passed between the layers. For the recurrent states, all architectures use either variational dropout BIBREF11 or recurrent dropout BIBREF12 , unless explicitly noted otherwise."
],
"extractive_spans": [
"dropout",
"variational dropout",
"recurrent dropout"
],
"free_form_answer": "",
"highlighted_evidence": [
"Dropout is applied to feedforward connections denoted by dashed arrows in the figure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states where the same mask is used for all time steps in the sequence.",
"The same dropout variants are applied to all three model types, with the exception of intra-layer dropout which does not apply to RHNs since only the recurrent state is passed between the layers. For the recurrent states, all architectures use either variational dropout BIBREF11 or recurrent dropout BIBREF12 , unless explicitly noted otherwise."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"190d7e87c35123512eadc3458a64fe1918cc8326",
"1cd633dd6885760994a9c66c094cbb47707dd9b1",
"4c9ceec0b67856650ddc64fc81573f9939d8febc"
],
"answer": [
{
"evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 . We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with fine grain control over regularisation and learning hyperparameters."
],
"extractive_spans": [
"LSTMs",
"Recurrent Highway Networks",
"NAS"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks BIBREF0 and NAS BIBREF1 "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our focus is on three recurrent architectures:"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Architecture section missing) The Long Short-Term Memory, Recurrent Highway Network and NAS",
"highlighted_evidence": [
"Our focus is on three recurrent architectures:"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Validation and test set perplexities on Penn Treebank for models with different numbers of parameters and depths. All results except those from Zaremba are with shared input and output embeddings. VD stands for Variational Dropout from Gal & Ghahramani (2016). †: parallel work."
],
"extractive_spans": [],
"free_form_answer": "LSTM, RHN and NAS.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Validation and test set perplexities on Penn Treebank for models with different numbers of parameters and depths. All results except those from Zaremba are with shared input and output embeddings. VD stands for Variational Dropout from Gal & Ghahramani (2016). †: parallel work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what are the recent models they compare with?",
"what were their results on the hutter prize dataset?",
"what was their newly established state of the art results?",
"what regularisation methods did they look at?",
"what architectures were reevaluated?"
],
"question_id": [
"58a3cfbbf209174fcffe44ce99840c758b448364",
"6c6e06f7bfb6d30003fd3801fdaf34649ef1b8f4",
"b6e97d1b1565732b1b3f1d74e6d2800dd21be37a",
"4f8b078b9f60be30520fd32a3d8601ab3babb5c0",
"54517cded8267ea6c9a3f3cf9c37a8d24b3f7c2c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Recurrent networks with optional down-projection, per-step and per-sequence dropout (dashed and solid lines).",
"Table 1: Validation and test set perplexities on Penn Treebank for models with different numbers of parameters and depths. All results except those from Zaremba are with shared input and output embeddings. VD stands for Variational Dropout from Gal & Ghahramani (2016). †: parallel work.",
"Table 2: Validation and test set perplexities on Wikitext-2. All results are with shared input and output embeddings. †: parallel work.",
"Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.",
"Table 4: Validation and test set perplexities on Penn Treebank for variants of our best LSTM and RHN models of two sizes.",
"Figure 2: Average per-word negative log-likelihoods of hyperparameter combinations in the neighbourhood of the best solution for a 4-layer LSTM with 24M weights on the Penn Treebank dataset."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Figure2-1.png"
]
} | [
"what were their results on the hutter prize dataset?",
"what was their newly established state of the art results?",
"what architectures were reevaluated?"
] | [
[
"1707.05589-Enwik8-0",
"1707.05589-Datasets-0",
"1707.05589-6-Table3-1.png"
],
[
"1707.05589-Penn Treebank-1",
"1707.05589-Datasets-0",
"1707.05589-Wikitext-2-0",
"1707.05589-Penn Treebank-0"
],
[
"1707.05589-Models-0",
"1707.05589-Introduction-1",
"1707.05589-4-Table1-1.png"
]
] | [
"1.30 BPC is their best result",
"58.3 perplexity in PTB, and 65.9 perplexity in Wikitext-2",
"LSTM, RHN and NAS."
] | 157 |
1910.14589 | Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness | We share a French-English parallel corpus of Foursquare restaurant reviews (this https URL), and define a new task to encourage research on Neural Machine Translation robustness and domain adaptation, in a real-world scenario where better-quality MT would be greatly beneficial. We discuss the challenges of such user-generated content, and train good baseline models that build upon the latest techniques for MT robustness. We also perform an extensive evaluation (automatic and human) that shows significant improvements over existing online systems. Finally, we propose task-specific metrics based on sentiment analysis or translation accuracy of domain-specific polysemous words. | {
"paragraphs": [
[
"Very detailed information about social venues such as restaurants is available from user-generated reviews in applications like Google Maps, TripAdvisor or Foursquare. Most of these reviews are written in the local language and are not directly exploitable by foreign visitors: an analysis of the Foursquare database shows that, in Paris, only 49% of the restaurants have at least one review in English. It can be much worse for other cities and languages (e.g., only 1% of Seoul restaurants for a French-only speaker).",
"Machine Translation of such user-generated content can improve the situation and make the data available for direct display or for downstream NLP tasks (e.g., cross-lingual information retrieval, sentiment analysis, spam or fake review detection), provided its quality is sufficient.",
"We asked professionals to translate 11.5k French Foursquare reviews (18k sentences) to English. We believe that this resource will be valuable to the community for training and evaluating MT systems addressing challenges posed by user-generated content, which we discuss in detail in this paper.",
"We conduct extensive experiments and combine techniques that seek to solve these challenges (e.g., factored case, noise generation, domain adaptation with tags) on top of a strong Transformer baseline. In addition to BLEU evaluation and human evaluation, we use targeted metrics that measure how well polysemous words are translated, or how well sentiments expressed in the original review can still be recovered from its translation."
],
[
"Translating restaurant reviews written by casual customers presents several difficulties for NMT, in particular robustness to non-standard language and adaptation to a specific style or domain (see Section SECREF7 for details).",
"Concerning robustness to noisy user generated content, BIBREF0 stress differences with traditional domain adaptation problems, and propose a typology of errors, many of which we also detected in the Foursquare data. They also released a dataset (MTNT), whose sources were selected from a social media (Reddit) on the basis of being especially noisy (see Appendix for a comparison with Foursquare). These sources were then translated by humans to produce a parallel corpus that can be used to engineer more robust NMT systems and to evaluate them. This corpus was the basis of the WMT 2019 Robustness Task BIBREF1, in which BIBREF2 ranked first. We use the same set of robustness and domain adaptation techniques, which we study more in depth and apply to our review translation task.",
"BIBREF3, BIBREF4 and BIBREF5 propose to improve robustness by training models on data-augmented corpora, containing noisy sources obtained by random word or character deletions, insertions, substitutions or swaps. Recently, BIBREF6 proposed to use a similar technique along with noise generation through replacement of a clean source by one obtained by back-translation.",
"We employ several well-known domain adaptation techniques: back-translation of large monolingual corpora close to the domain BIBREF7, BIBREF8, fine-tuning with in-domain parallel data BIBREF9, BIBREF10, BIBREF11, domain tags for knowledge transfer between domains BIBREF12, BIBREF2.",
"Addressing the technical issues of robustness and adaptation of an NMT system is decisive for real-world deployment, but evaluation is also critical. This aspect is stressed by BIBREF13 (NMT of curated hotel descriptions), who point out that automatic metrics like BLEU tend to neglect semantic differences that have a small textual footprint, but may be seriously misleading in practice, for instance by interpreting available parking as if it meant free parking. To mitigate this, we conduct additional evaluations of our models: human evaluation, translation accuracy of polysemous words, and indirect evaluation with sentiment analysis."
],
[
"We present a new task of restaurant review translation, which combines domain adaptation and robustness challenges."
],
[
"We sampled 11.5k French reviews from Foursquare, mostly in the food category, split them into 18k sentences, and grouped them into train, valid and test sets (see Table TABREF6). The French reviews contain on average 1.5 sentences and 17.9 words. Then, we hired eight professional translators to translate them to English. Two of them created the training set by post-editing (PE) the outputs of baseline NMT systems. The other six translated the valid and test sets from scratch. They were asked to translate (or post-edit) the reviews sentence-by-sentence (to avoid any alignment problem), but they could see the full context. We manually filtered the test set to remove translations that were not satisfactory. The full reviews and additional metadata (e.g., location and type of the restaurant) are also available as part of this resource, to encourage research on contextual machine translation.",
"Foursquare-HT was translated from scratch by the same translators who post-edited Foursquare-PE. While we did not use it in this work, it can be used as extra training or development data. We also release a human translation of the French-language test set (668 sentences) of the Aspect-Based Sentiment Analysis task at SemEval 2016 BIBREF14."
],
[
"",
"",
"Translating restaurant reviews presents two main difficulties compared to common tasks in MT. First, the reviews are written in a casual style, close to spoken language. Some liberty is taken w.r.t. spelling, grammar, and punctuation. Slang is also very frequent. MT should be robust to these variations. Second, they generally are reactions, by clients of a restaurant, about its food quality, service or atmosphere, with specific words relating to these aspects or sentiments. These require some degree of domain adaptation. The table above illustrates these issues, with outputs from an online MT system. Examples of full reviews from Foursquare-PE along with metadata are shown in Appendix.",
"Examples 1 and 2 fall into the robustness category: 1 is an extreme form of SMS-like, quasi-phonetic, language (et quand j'ai vu ça); 2 is a literal transcription of a long-vowel phonetic stress (trop $\\rightarrow $ trooop). Example 3 falls into the domain category: in a restaurant context, cadre typically refers to the setting. Examples 4 and 5 involve both robustness and domain adaptation: pété un cable is a non-compositional slang expression and garçon is not a boy in this domain; nickel is slang for great, très is missing an accent, and pâtes is misspelled as pattes, which is another French word.",
"Regarding robustness, we found many of the same errors listed by BIBREF0 as noise in social media text: SMS language (é qd g vu sa), typos and phonetic spelling (pattes), repeated letters (trooop, merciiii), slang (nickel, bof, mdr), missing or wrong accents (tres), emoticons (`:-)') and emojis, missing punctuation, wrong or non-standard capitalization (lowercase proper names, capitalized words for emphasis). Regarding domain aspects, there are polysemous words with typical specific meaning carte $\\rightarrow $ map, menu; cadre $\\rightarrow $ frame, executive, setting), idiomatic expressions (à tomber par terre $\\rightarrow $ to die for), and venue-related named entities (La Boîte à Sardines)."
],
[
"We propose solutions for dealing with non-standard case, emoticons, emojis and other issues."
],
[
"We segment our training data into subwords with BPE BIBREF15, implemented in SentencePiece BIBREF16. BPE can deal with rare or unseen words by splitting them into more frequent subwords, but cannot deal with unseen characters. While this is not a problem in most tasks, Foursquare contains many emojis, and sometimes symbols in other scripts (e.g., Arabic). Unicode now defines around 3k emojis, most of which are likely to be out-of-vocabulary.",
"We replace rare characters on both sides of the training corpus by a placeholder (<x>). A model trained on this data is typically able to copy the placeholder at the correct position. Then, at inference time, we replace the output tokens <x> by the rare source-side characters, in the same order. This approach is similar to that of BIBREF18, who used the attention mechanism to replace UNK symbols with the aligned word in the source. BIBREF2 used the same technique to deal with emojis in the WMT robustness task."
],
[
"As shown in Table TABREF11, capital letters are another source of confusion. HONTE and honte are considered as two different words. The former is out-of-vocabulary and is split very aggressively by BPE. This causes the MT model to hallucinate."
],
[
"A solution is to lowercase the input, both at training and at test time. However, when doing so, some information may be lost (e.g., named entities, acronyms, emphasis) which may result in lower translation quality."
],
[
"BIBREF13 do factored machine translation BIBREF19, BIBREF20 where a word and its case are split in two different features. For instance, HONTE becomes honte + upper.",
"We implement this with two embedding matrices, one for words and one for case, and represent a token as the sum of the embeddings of its factors. For the target side, we follow BIBREF20 and have two softmax operations. We first predict the word in its lowercase form and then predict its case. The embeddings of the case and word are then summed and used as input for the next decoder step."
],
[
"BIBREF2 propose another approach, inline casing, which does not require any change in the model. We insert the case as a regular token into the sequence right after the word. Special tokens <U>, <L> and <T> (upper, lower and title) are used for this purpose and appended to the vocabulary. Contrary to the previous solution, there is only one embedding matrix and one softmax.",
"In practice, words are assumed to be lowercase by default and the <L> tokens are dropped to keep the factored sequences as short as possible. “Best fries EVER\" becomes “best <T> _f ries _ever <U>\". Like BIBREF2, we force SentencePiece to split mixed-case words like MacDonalds into single-case subwords (Mac and Donalds)."
],
[
"Another solution that we experiment with (see Section SECREF6) is to inject noise on the source side of the training data by changing random source words to upper (5% chance), title (10%) or lower case (20%)."
],
[
"One way to make an NMT system more robust is to train it with some of the most common errors that can be found in the in-domain data. Like BIBREF2, we detect the errors that occur naturally in the in-domain data and then apply them to our training corpus, while respecting their natural distribution. We call this “natural noise generation” in opposition to what is done in BIBREF3, BIBREF4, BIBREF6 or in Section SECREF10, where the noise is more synthetic."
],
[
"We compile a general-purpose French lexicon as a transducer, implemented to be traversed with extended edit distance flags, similar to BIBREF21. Whenever a word is not found in the lexicon (which means that it is a potential spelling mistake), we look for a French word in the lexicon within a maximum edit distance of 2, with the following set of edit operations:",
"",
"We apply the transducer to the French monolingual Foursquare data (close to 1M sentences) to detect and count noisy variants of known French words. This step produces a dictionary mapping the correct spelling to the list of observed errors and their respective frequencies.",
"In addition to automatically extracted spelling errors, we extract a set of common abbreviations from BIBREF22 and we manually identify a list of common errors in French:",
""
],
[
"With this dictionary, describing the real error distribution in Foursquare text, we take our large out-of-domain training corpus, and randomly replace source-side words with one of their variants (rules 1 to 6), while respecting the frequency of this variant in the real data. We also manually define regular expressions to randomly apply rules 7 to 11 (e.g., \"er \"$\\rightarrow $\"é \").",
"We obtain a noisy parallel corpus (which we use instead of the “clean” training data), where about 30% of all source sentences have been modified, as shown below:",
"",
""
],
[
"To adapt our models to the restaurant review domain we apply the following types of techniques: back-translation of in-domain English data, fine-tuning with small amounts of in-domain parallel data, and domain tags."
],
[
"Back-translation (BT) is a popular technique for domain adaptation when large amounts of in-domain monolingual data are available BIBREF7, BIBREF8. While our in-domain parallel corpus is small (12k pairs), Foursquare contains millions of English-language reviews. Thus, we train an NMT model in the reverse direction (EN$\\rightarrow $FR) and translate all the Foursquare English reviews to French. This gives a large synthetic parallel corpus.",
"This in-domain data is concatenated to the out-of-domain parallel data and used for training.",
"BIBREF8 show that doing back-translation with sampling instead of beam search brings large improvements due to increased diversity. Following this work, we test several settings:",
"",
"We use a temperature of $T=\\frac{1}{0.9}$ to avoid the extremely noisy output obtained with $T=1$ and strike a balance between quality and diversity."
],
[
"When small amounts of in-domain parallel data are available, fine-tuning (FT) is often the preferred solution for domain adaptation BIBREF9, BIBREF10. It consists in training a model on out-of-domain data, and then continuing its training for a few epochs on the in-domain data only."
],
[
"BIBREF12 propose a technique for multi-domain NMT, which consists in inserting a token in each source sequence specifying its domain. The system can learn the particularities of multiple domains (e.g., polysemous words that have a different meaning depending on the domain), which we can control at test time by manually setting the tag. BIBREF23 also use tags to control politeness in the model's output.",
"As our corpus (see Section SECREF28) is not clearly divided into domains, we apply the same technique as BIBREF12 but use corpus tags (each sub-corpus has its own tag: TED, Paracrawl, etc.) which we add to each source sequence. Like in BIBREF2, the Foursquare post-edited and back-translated data also get their own tags (PE and BT). Figure FIGREF27 gives an example where using the PE corpus tag at test time helps the model pick a more adequate translation."
],
[
"After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table TABREF31). UGC does not include Common Crawl (which contains many misaligned sentences and caused hallucinations), but it includes OpenSubtitles BIBREF24 (spoken-language, possibly closer to Foursquare). We observed an improvement of more than 1 BLEU on newstest2014 when switching to UGC, and almost 6 BLEU on Foursquare-valid."
],
[
"We use langid.py BIBREF25 to filter sentence pairs from UGC. We also remove duplicate sentence pairs, and lines longer than 175 words or with a length ratio greater than $1.5$ (see Table TABREF31). Then we apply SentencePiece and our rare character handling strategy (Section SECREF8). We use a joined BPE model of size 32k, trained on the concatenation of both sides of the corpus, and set SentencePiece's vocabulary threshold to 100. Finally, unless stated otherwise, we always use the inline casing approach (see Section SECREF10)."
],
[
"For all experiments, we use the Transformer Big BIBREF26 as implemented in Fairseq, with the hyperparameters of BIBREF27. Training is done on 8 GPUs, with accumulated gradients over 10 batches BIBREF27, and a max batch size of 3500 tokens per GPU. We train for 20 epochs, while saving a checkpoint every 2500 updates ($\\approx \\frac{2}{5}$ epoch on UGC) and average the 5 best checkpoints according to their perplexity on a validation set (a held-out subset of UGC).",
"For fine-tuning, we use a fixed learning rate, and a total batch size of 3500 tokens (training on a single GPU without delayed updates). To avoid overfitting on Foursquare-PE, we do early stopping according to perplexity on Foursquare-valid. For each fine-tuned model we test all 16 combinations of dropout in $\\lbrace 0.1,0.2,0.3,0.4\\rbrace $ and learning rate in $\\lbrace 1, 2, 5, 10\\rbrace \\times 10^{-5}$. We keep the model with the best perplexity on Foursquare-valid."
],
[
"During our work, we used BLEU BIBREF28 on newstest[2012, 2013] to ensure that our models stayed good on a more general domain, and on Foursquare-valid to measure performance on the Foursquare domain.",
"For sake of brevity, we only give the final BLEU scores on newstest2014 and Foursquare-test. Scores on Foursquare-valid, and MTNT-test (for comparison with BIBREF0, BIBREF2) are given in Appendix. We evaluate “detokenized” MT outputs against raw references using SacreBLEU BIBREF29.",
"In addition to BLEU, we do an indirect evaluation on an Aspect-Based Sentiment Analysis (ABSA) task, a human evaluation, and a task-related evaluation based on polysemous words."
],
[
"Table TABREF41 compares the case handling techniques presented in Section SECREF10. To better evaluate the robustness of our models to changes of case, we built 3 synthetic test sets from Foursquare-test, with the same target, but all source words in upper, lower or title case.",
"Inline and factored case perform equally well, significantly better than the default (cased) model, especially on all-uppercase inputs. Lowercasing the source is a good option, but gives a slightly lower score on regular Foursquare-test. Finally, synthetic case noise added to the source gives surprisingly good results. It could also be combined with factored or inline case."
],
[
"Table TABREF44 compares the baseline “inline case” model with the same model augmented with natural noise (Section SECREF17). Performance is the same on Foursquare-test, but significantly better on newstest2014 artificially augmented with Foursquare-like noise."
],
[
"Table TABREF46 shows the results of the back-translation (BT) techniques. Surprisingly, BT with beam search (BT-B) deteriorates BLEU scores on Foursquare-test, while BT with sampling gives a consistent improvement. BLEU scores on newstest2014 are not significantly impacted, suggesting that BT can be used for domain adaptation without hurting quality on other domains.",
"Table TABREF47 compares the domain adaptation techniques presented in Section SECREF5. We observe that:",
"",
"Concatenating the small Foursquare-PE corpus to the 50M general domain corpus does not help much, unless using corpus tags.",
"Foursquare-PE + tags is not as good as fine-tuning with Foursquare-PE. However, fine-tuned models get slightly worse results on news.",
"Back-translation combined with tags gives a large boost. The BT tag should not be used at test time, as it degrades results.",
"Using no tag at test time works fine, even though all training sentences had tags.",
"As shown in Table TABREF54, these techniques can be combined to achieve the best results. The natural noise does not have a significant effect on BLEU scores. Back-translation combined with fine-tuning gives the best performance on Foursquare (+4.5 BLEU vs UGC). However, using tags instead of fine-tuning strikes a better balance between general domain and in-domain performance."
],
[
"In this section we propose two metrics that target specific aspects of translation adequacy: translation accuracy of domain-specific polysemous words and Aspect-Based Sentiment Analysis performance on MT outputs."
],
[
"We propose to count polysemous words specific to our domain, similarly to BIBREF31, to measure the degree of domain adaptation. TER between the translation hypotheses and the post-edited references in Foursquare-PE reveals the most common substitutions (e.g., “card” is often replaced with “menu”, suggesting that “card” is a common mistranslation of the polysemous word “carte”). We filter this list manually to only keep words that are polysemous and that have a high frequency in the test set. Table TABREF58 gives the 3 most frequent ones.",
"Table TABREF59 shows the accuracy of our models when translating these words. We see that the domain-adapted model is better at translating domain-specific polysemous words."
],
[
"We also measure adequacy by how well the translation preserves the polarity of the sentence regarding various aspects. To evaluate this, we perform an indirect evaluation on the SemEval 2016 Aspect-Based Sentiment Analysis (ABSA) task BIBREF14. We use our internal ABSA systems trained on English or French SemEval 2016 data. The evaluation is done on the SemEval 2016 French test set: either the original version (ABSA French), or its translation (ABSA English). As shown in Table TABREF61, translations obtained with domain-adapted models lead to significantly better scores on ABSA than the generic models."
],
[
"We conduct a human evaluation to confirm the observations with BLEU and to overcome some of the limitations of this metric.",
"We select 4 MT models for evaluation (see Table TABREF63) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test.",
"We took the first 300 test sentences to create 6 tasks of 50 sentences each. Then we asked bilingual colleagues to rank the output of 4 models by their translation quality. They were asked to do one or more of these tasks. The judge did not know about the list of models, nor the model that produced any given translation. We got 12 answers. The inter-judge Kappa coefficient ranged from 0.29 to 0.63, with an average of 0.47, which is a good value given the difficulty of the task. Table TABREF63 gives the results of the evaluation, which confirm our observations with BLEU.",
"We also did a larger-scale monolingual evaluation using Amazon Mechanical Turk (see Appendix), which lead to similar conclusions."
],
[
"",
"We presented a new parallel corpus of user reviews of restaurants, which we think will be valuable to the community. We proposed combinations of multiple techniques for robustness and domain adaptation, which address particular challenges of this new task. We also performed an extensive evaluation to measure the improvements brought by these techniques.",
"According to BLEU, the best single technique for domain adaptation is fine-tuning. Corpus tags also achieve good results, without degrading performance on a general domain. Back-translation helps, but only with sampling or tags. The robustness techniques (natural noise, factored case, rare character placeholder) do not improve BLEU.",
"While our models are promising, they still show serious errors when applied to user-generated content: missing negations, hallucinations, unrecognized named entities, insensitivity to context. This suggests that this task is far from solved.",
"We hope that this corpus, our natural noise dictionary, model outputs and human rankings will help better understand and address these problems. We also plan to investigate these problems on lower resource languages, where we expect the task to be even harder."
]
],
"section_name": [
"Introduction",
"Related work",
"Task description",
"Task description ::: Corpus description",
"Task description ::: Challenges",
"Robustness to noise",
"Robustness to noise ::: Rare character placeholder",
"Robustness to noise ::: Capital letters",
"Robustness to noise ::: Capital letters ::: Lowercasing",
"Robustness to noise ::: Capital letters ::: Factored translation",
"Robustness to noise ::: Capital letters ::: Inline casing",
"Robustness to noise ::: Capital letters ::: Synthetic case noise",
"Robustness to noise ::: Natural noise",
"Robustness to noise ::: Natural noise ::: Detecting errors",
"Robustness to noise ::: Natural noise ::: Generating errors",
"Domain Adaptation",
"Domain Adaptation ::: Back-translation",
"Domain Adaptation ::: Fine-tuning",
"Domain Adaptation ::: Corpus tags",
"Experiments ::: Training data",
"Experiments ::: Pre-processing",
"Experiments ::: Model and settings",
"Experiments ::: Evaluation methodology",
"Experiments ::: BLEU evaluation ::: Capital letters",
"Experiments ::: BLEU evaluation ::: Natural noise",
"Experiments ::: BLEU evaluation ::: Domain adaptation",
"Experiments ::: Targeted evaluation",
"Experiments ::: Targeted evaluation ::: Translation of polysemous words",
"Experiments ::: Targeted evaluation ::: Indirect evaluation with sentiment analysis",
"Experiments ::: Human Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"269271a55a9ec9cc379dd79fc067f1070b503d7a",
"841fec6909c356ee8a65ba2f839caacbbb8386a6"
],
"answer": [
{
"evidence": [
"For all experiments, we use the Transformer Big BIBREF26 as implemented in Fairseq, with the hyperparameters of BIBREF27. Training is done on 8 GPUs, with accumulated gradients over 10 batches BIBREF27, and a max batch size of 3500 tokens per GPU. We train for 20 epochs, while saving a checkpoint every 2500 updates ($\\approx \\frac{2}{5}$ epoch on UGC) and average the 5 best checkpoints according to their perplexity on a validation set (a held-out subset of UGC)."
],
"extractive_spans": [
" Transformer Big BIBREF26"
],
"free_form_answer": "",
"highlighted_evidence": [
"For all experiments, we use the Transformer Big BIBREF26 as implemented in Fairseq, with the hyperparameters of BIBREF27."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For all experiments, we use the Transformer Big BIBREF26 as implemented in Fairseq, with the hyperparameters of BIBREF27. Training is done on 8 GPUs, with accumulated gradients over 10 batches BIBREF27, and a max batch size of 3500 tokens per GPU. We train for 20 epochs, while saving a checkpoint every 2500 updates ($\\approx \\frac{2}{5}$ epoch on UGC) and average the 5 best checkpoints according to their perplexity on a validation set (a held-out subset of UGC)."
],
"extractive_spans": [
"Transformer Big"
],
"free_form_answer": "",
"highlighted_evidence": [
"For all experiments, we use the Transformer Big BIBREF26 as implemented in Fairseq, with the hyperparameters of BIBREF27. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2b77504a6c0baf0c594416dcddb760d4e84587b7",
"6a048f848c794f5cbe375e027a15d40addb204ec",
"a31a273e65c45f373a1ddd50d3603e555b61738e"
],
"answer": [
{
"evidence": [
"After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table TABREF31). UGC does not include Common Crawl (which contains many misaligned sentences and caused hallucinations), but it includes OpenSubtitles BIBREF24 (spoken-language, possibly closer to Foursquare). We observed an improvement of more than 1 BLEU on newstest2014 when switching to UGC, and almost 6 BLEU on Foursquare-valid."
],
"extractive_spans": [
"WMT 2014",
" UGC (User Generated Content)"
],
"free_form_answer": "",
"highlighted_evidence": [
"After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table TABREF31). UGC does not include Common Crawl (which contains many"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We sampled 11.5k French reviews from Foursquare, mostly in the food category, split them into 18k sentences, and grouped them into train, valid and test sets (see Table TABREF6). The French reviews contain on average 1.5 sentences and 17.9 words. Then, we hired eight professional translators to translate them to English. Two of them created the training set by post-editing (PE) the outputs of baseline NMT systems. The other six translated the valid and test sets from scratch. They were asked to translate (or post-edit) the reviews sentence-by-sentence (to avoid any alignment problem), but they could see the full context. We manually filtered the test set to remove translations that were not satisfactory. The full reviews and additional metadata (e.g., location and type of the restaurant) are also available as part of this resource, to encourage research on contextual machine translation."
],
"extractive_spans": [
"11.5k French reviews from Foursquare"
],
"free_form_answer": "",
"highlighted_evidence": [
"QUESTION (2 / 5): WHAT DATASET WAS USED?",
"We sampled 11.5k French reviews from Foursquare, mostly in the food category, split them into 18k sentences, and grouped them into train, valid and test sets (see Table TABREF6). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table TABREF31). UGC does not include Common Crawl (which contains many misaligned sentences and caused hallucinations), but it includes OpenSubtitles BIBREF24 (spoken-language, possibly closer to Foursquare). We observed an improvement of more than 1 BLEU on newstest2014 when switching to UGC, and almost 6 BLEU on Foursquare-valid."
],
"extractive_spans": [
"WMT 2014",
"UGC (User Generated Content)"
],
"free_form_answer": "",
"highlighted_evidence": [
"After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table TABREF31)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9577d28927b6d719f26dd25d1cc813429b6afb0d",
"bba3c745e783ad4cbab5096d4e20f29b2c26556e",
"f6d08dae9ec955ebc7cb54b3dcee5b2f45650d75"
],
"answer": [
{
"evidence": [
"We took the first 300 test sentences to create 6 tasks of 50 sentences each. Then we asked bilingual colleagues to rank the output of 4 models by their translation quality. They were asked to do one or more of these tasks. The judge did not know about the list of models, nor the model that produced any given translation. We got 12 answers. The inter-judge Kappa coefficient ranged from 0.29 to 0.63, with an average of 0.47, which is a good value given the difficulty of the task. Table TABREF63 gives the results of the evaluation, which confirm our observations with BLEU."
],
"extractive_spans": [
" translation quality."
],
"free_form_answer": "",
"highlighted_evidence": [
" Then we asked bilingual colleagues to rank the output of 4 models by their translation quality"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We select 4 MT models for evaluation (see Table TABREF63) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test."
],
"extractive_spans": [],
"free_form_answer": "The outputs are ranked by human evaluators, the wins, ties and losses are counted, then the Wilcoxon signed-rank test is applied.",
"highlighted_evidence": [
"We select 4 MT models for evaluation (see Table TABREF63) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We select 4 MT models for evaluation (see Table TABREF63) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test."
],
"extractive_spans": [
"number of wins, ties and losses, and apply the Wilcoxon signed-rank test"
],
"free_form_answer": "",
"highlighted_evidence": [
"We select 4 MT models for evaluation (see Table TABREF63) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7f59d51dee8ea52775b35a89b82bb09c9a730ec1",
"fc70435a68c69598dd030092c4fda88607fdaae8"
],
"answer": [
{
"evidence": [
"In addition to BLEU, we do an indirect evaluation on an Aspect-Based Sentiment Analysis (ABSA) task, a human evaluation, and a task-related evaluation based on polysemous words.",
"During our work, we used BLEU BIBREF28 on newstest[2012, 2013] to ensure that our models stayed good on a more general domain, and on Foursquare-valid to measure performance on the Foursquare domain."
],
"extractive_spans": [
"BLEU BIBREF28",
"indirect evaluation on an Aspect-Based Sentiment Analysis (ABSA) task",
" task-related evaluation based on polysemous words"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to BLEU, we do an indirect evaluation on an Aspect-Based Sentiment Analysis (ABSA) task, a human evaluation, and a task-related evaluation based on polysemous words.",
"During our work, we used BLEU BIBREF28 on newstest[2012, 2013] to ensure that our models stayed good on a more general domain, and on Foursquare-valid to measure performance on the Foursquare domain."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct extensive experiments and combine techniques that seek to solve these challenges (e.g., factored case, noise generation, domain adaptation with tags) on top of a strong Transformer baseline. In addition to BLEU evaluation and human evaluation, we use targeted metrics that measure how well polysemous words are translated, or how well sentiments expressed in the original review can still be recovered from its translation.",
"FLOAT SELECTED: Table 10: Number of correct translations for difficult polysemous words in Foursquare-test by different models. The first row is the number of source sentences that contain this word. Other domain-adapted models (e.g., “UGC + FT” or “UGC ⊕ BT”) also get ≈ 99% accuracy.",
"FLOAT SELECTED: Table 11: Indirect evaluation with Aspect-Based Sentiment Analysis (accuracy in %). ABSA French: ABSA model trained on French data and applied to the SemEval 2016 French test set; ABSA English: trained on English data and applied to human translations of the test set; ABSA English on MT outputs: applied to MT outputs instead of human translations."
],
"extractive_spans": [],
"free_form_answer": "BLEU, accuracy",
"highlighted_evidence": [
"In addition to BLEU evaluation and human evaluation, we use targeted metrics that measure how well polysemous words are translated, or how well sentiments expressed in the original review can still be recovered from its translation.",
"FLOAT SELECTED: Table 10: Number of correct translations for difficult polysemous words in Foursquare-test by different models. The first row is the number of source sentences that contain this word. Other domain-adapted models (e.g., “UGC + FT” or “UGC ⊕ BT”) also get ≈ 99% accuracy.",
"FLOAT SELECTED: Table 11: Indirect evaluation with Aspect-Based Sentiment Analysis (accuracy in %). ABSA French: ABSA model trained on French data and applied to the SemEval 2016 French test set; ABSA English: trained on English data and applied to human translations of the test set; ABSA English on MT outputs: applied to MT outputs instead of human translations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"25a59241eb78b08332751b3025ce5d2ffe6a946a",
"c24f2f5b1d82240b8c0edbe9428bc2bd9351b3ad",
"dffe91cb8f68548440191979c19f1a9a8603ad10"
],
"answer": [
{
"evidence": [
"As shown in Table TABREF54, these techniques can be combined to achieve the best results. The natural noise does not have a significant effect on BLEU scores. Back-translation combined with fine-tuning gives the best performance on Foursquare (+4.5 BLEU vs UGC). However, using tags instead of fine-tuning strikes a better balance between general domain and in-domain performance.",
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"extractive_spans": [],
"free_form_answer": "Existing online systems compared in this work are Google Translate (Feb 2019) and DeepL (Feb 2019).",
"highlighted_evidence": [
"As shown in Table TABREF54, these techniques can be combined to achieve the best results.",
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"extractive_spans": [],
"free_form_answer": "Google Translate",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"extractive_spans": [],
"free_form_answer": "Google Translate, DeepL",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what baseline models are trained?",
"what dataset was used?",
"what are the human evaluation metrics?",
"what automatic evaluation is performed?",
"what are the existing online systems?"
],
"question_id": [
"803babb71e1bdaf507847d6c712585f4128e9f47",
"5fd112980d0dd7f7ce30e6273fe6e7b230b13225",
"eaae11ffd4ff955de2cd6389b888f5fd2c660a32",
"290ebf0d1c49b67a6d1858366be751d89086a78b",
"806fefe0e331ddb3c17245d6a9fa7433798e367f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Foursquare splits. Foursquare-PE is the training set. Foursquare-HT is not used in this work.",
"Table 2: Capital letters break NMT. BPE segmentation and translation of capitalized or lowercase input.",
"Figure 1: Example of ambiguous source sentence, where using corpus tags helps the model pick a more adequate translation.",
"Table 3: Size of the WMT and UGC training corpora (after filtering).",
"Table 4: Robustness to capital letters (see Section 4.2). Foursquare-test’s source side has been set to upper, lower or title case. The first column is case sensitive BLEU on Foursquare-test. “LC to cased” always gets the same scores because it is invariant to source case.",
"Table 5: Baseline model with or without natural noise (see Section 4.3). Noised news is the same type of noise, artificially applied to newstest2014.",
"Table 6: Comparison of different back-translation schemes (see Section 5.1). ⊕ denotes the concatenation of several training corpora.",
"Table 8: Combination of several robustness or domain adaptation techniques. At test time, we don’t use any tag on news, and use the PE tag on Foursquaretest (when applicable). BT: back-translation. PE: Foursquare-PE. FT: fine-tuning with Foursquare-PE. ⊕: concatenation.",
"Table 7: Domain adaptation with Foursquare-PE finetuning (FT) or corpus tags. The “tag” column represents the corpus tag used at test time (if any).",
"Table 9: French polysemous words found in Foursquare, and translation candidates in English. The most frequent meanings in Foursquare are underlined.",
"Table 10: Number of correct translations for difficult polysemous words in Foursquare-test by different models. The first row is the number of source sentences that contain this word. Other domain-adapted models (e.g., “UGC + FT” or “UGC ⊕ BT”) also get ≈ 99% accuracy.",
"Table 12: In-house human evaluation (“ ” means better with p ≤ 0.05). The 4 models Baseline, GT, Tags and Tags + noise correspond respectively to rows 2 (UGC with inline case), 3 (Google Translate), 6 (Combination of BT, PE and tags) and 8 (Same as 6 with natural noise) in Table 8.",
"Table 11: Indirect evaluation with Aspect-Based Sentiment Analysis (accuracy in %). ABSA French: ABSA model trained on French data and applied to the SemEval 2016 French test set; ABSA English: trained on English data and applied to human translations of the test set; ABSA English on MT outputs: applied to MT outputs instead of human translations.",
"Table 13: Noise comparison between Foursquare-test and MTNT-test (Michel and Neubig, 2018). Emojis, all-uppercase words (not counting acronyms) and spelling + grammar mistakes (according to MS Word) per 100 tokens.",
"Table 15: Large-scale Human Evaluation on Amazon Mechanical Turk (“ ” means p ≤ 0.01). The 4 models Baseline, GT, Tags and Tags + noise correspond respectively to rows 2 (UGC with inline case), 3 (Google Translate), 6 (Combination of BT, PE and tags) and 8 (Same as 6 with natural noise) in Table 8.",
"Table 14: Comparison of our models against the winner of the WMT 2019 Robustness Task on the MTNT test set (similar robustness challenges but different domain). We also give cased BLEU of our models on Foursquare-valid. Results on Foursquare-test are shown in the paper.",
"Table 17: Examples of challenging examples from Foursquare-PE. We show the full reviews with sentence delimiters (<s>) and metadata. The words that contain typos or that could cause trouble to a regular NMT model are shown in bold red.",
"Table 20: Examples of sentences from Foursquare-test with polysemous words (in bold red), where domain adaptation helps (with Foursquare-PE fine-tuning and back-translation).",
"Table 18: Examples of sentences from Foursquare-test with capitalized words, where default (cased) MT gets the translation wrong and inline case helps.",
"Table 19: Examples of sentences from Foursquare-test with noisy spelling (in bold red), where training with source-side natural noise helps.",
"Table 21: Examples of bad translations by our best model (Noise ⊕ BT ⊕ PE + tags). All examples are from Foursquare-test, except for the last one, which is from SemEval."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"5-Figure1-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table8-1.png",
"7-Table7-1.png",
"7-Table9-1.png",
"8-Table10-1.png",
"8-Table12-1.png",
"8-Table11-1.png",
"10-Table13-1.png",
"10-Table15-1.png",
"10-Table14-1.png",
"11-Table17-1.png",
"12-Table20-1.png",
"12-Table18-1.png",
"12-Table19-1.png",
"13-Table21-1.png"
]
} | [
"what are the human evaluation metrics?",
"what automatic evaluation is performed?",
"what are the existing online systems?"
] | [
[
"1910.14589-Experiments ::: Human Evaluation-2",
"1910.14589-Experiments ::: Human Evaluation-1"
],
[
"1910.14589-8-Table10-1.png",
"1910.14589-8-Table11-1.png",
"1910.14589-Experiments ::: Evaluation methodology-0",
"1910.14589-Introduction-3",
"1910.14589-Experiments ::: Evaluation methodology-2"
],
[
"1910.14589-7-Table8-1.png",
"1910.14589-Experiments ::: BLEU evaluation ::: Domain adaptation-7"
]
] | [
"The outputs are ranked by human evaluators, the wins, ties and losses are counted, then the Wilcoxon signed-rank test is applied.",
"BLEU, accuracy",
"Google Translate, DeepL"
] | 158 |
1801.05617 | Automatic Detection of Cyberbullying in Social Media Text | While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a training corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most for this particular task. Experiments on a holdout test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1-score of 64% and 61% for English and Dutch respectively, and considerably outperforms baseline systems based on keywords and word unigrams. | {
"paragraphs": [
[
"Web 2.0 has had a substantial impact on communication and relationships in today's society. Children and teenagers go online more frequently, at younger ages, and in more diverse ways (e.g. smartphones, laptops and tablets). Although most of teenagers' Internet use is harmless and the benefits of digital communication are evident, the freedom and anonymity experienced online makes young people vulnerable, with cyberbullying being one of the major threats BIBREF0 , BIBREF1 , BIBREF2 .",
"Bullying is not a new phenomenon, and cyberbullying has manifested itself as soon as digital technologies have become primary communication tools. On the positive side, social media like blogs, social networking sites (e.g. Facebook) and instant messaging platforms (e.g. WhatsApp) make it possible to communicate with anyone and at any time. Moreover, they are a place where people engage in social interaction, offering the possibility to establish new relationships and maintain existing friendships BIBREF3 , BIBREF4 . On the negative side however, social media increase the risk of children being confronted with threatening situations including grooming or sexually transgressive behaviour, signals of depression and suicidal thoughts, and cyberbullying. Users are reachable 24/7 and are often able to remain anonymous if desired: this makes social media a convenient way for bullies to target their victims outside the school yard.",
"With regard to cyberbullying, a number of national and international initiatives have been launched over the past few years to increase children's online safety. Examples include KiVa, a Finnish cyberbullying prevention programme, the `Non au harcèlement' campaign in France, Belgian governmental initiatives and helplines (e.g. clicksafe.be, veiligonline.be, mediawijs.be) that provide information about online safety, and so on.",
"In spite of these efforts, a lot of undesirable and hurtful content remains online. BIBREF1 analysed a body of quantitative research on cyberbullying and observed cybervictimisation rates among teenagers between 20% and 40%. BIBREF5 focused on 12 to 17 year olds living in the United States and found that no less than 72% of them had encountered cyberbullying at least once within the year preceding the questionnaire. BIBREF6 surveyed 9 to 26 year olds in the United States, Canada, the United Kingdom and Australia, and found that 29% of the respondents had ever been victimised online. A study among 2,000 Flemish secondary school students (age 12 to 18) revealed that 11% of them had been bullied online at least once in the six months preceding the survey BIBREF7 . Finally, the 2014 large-scale EU Kids Online Report BIBREF8 published that 20% of 11 to 16 year olds had been exposed to hate messages online. In addition, youngsters were 12% more likely to be exposed to cyberbullying as compared to 2010, clearly demonstrating that cyberbullying is a growing problem.",
"The prevalence of cybervictimisation depends on the conceptualisation used in describing cyberbullying, but also on research variables such as location and the number and age span of its participants. Nevertheless, the above-mentioned studies demonstrate that online platforms are increasingly used for bullying, which is a cause for concern given its impact. As shown by BIBREF9 , BIBREF10 , BIBREF11 , cyberbullying can have a negative impact on the victim's self-esteem, academic achievement and emotional well-being. BIBREF12 found that self-reported effects of cyberbullying include negative effects on school grades, feelings like sadness, anger, fear, and depression and in extreme cases, cyberbullying could even lead to self-harm and suicidal thoughts.",
"The above studies demonstrate that cyberbullying is a serious problem the consequences of which can be dramatic. Successful early detection of cyberbullying attempts is therefore of key importance to youngsters' mental well-being. However, the amount of information on the Web makes it practically unfeasible for moderators to monitor all user-generated content manually. To tackle this problem, intelligent systems are required that process this information in a fast way and automatically signal potential threats. This way, moderators can respond quickly and prevent threatening situations from escalating. According to recent research, teenagers are generally in favour of such automatic monitoring, provided that effective follow-up strategies are formulated, and that privacy and autonomy are guaranteed BIBREF13 .",
"Parental control tools (e.g. NetNanny) already block unsuited or undesirable content and some social networks make use of keyword-based moderation tools (i.e., using lists of profane and insulting words to flag harmful content). However, such approaches typically fail to detect implicit or subtle forms of cyberbullying in which no explicit vocabulary is used. There is therefore a need for intelligent and self-learning systems that can go beyond keyword spotting and hence improve recall of cyberbullying detection.",
"The ultimate goal of this sort of research is to develop models which could improve manual monitoring for cyberbullying on social networks. We explore the automatic detection of textual signals of cyberbullying, in which it is approached as a complex phenomenon that can be realised in various ways (see Section SECREF15 for a detailed overview). While a lot of the related research focuses on the detection of cyberbullying `attacks', the present study takes into account a broader range of textual signals of cyberbullying, including posts written by bullies, as well as by victims and bystanders.",
"We propose a machine learning method to cyberbullying detection by making use of a linear SVM classifier BIBREF14 , BIBREF15 exploiting a varied set of features. To the best of our knowledge, this is the first approach to the annotation of fine-grained text categories related to cyberbullying and the detection of signals of cyberbullying events. It is also the first elaborate research on automatic cyberbullying detection on Dutch social media. For the present experiments, we focus on an English and Dutch ASKfm corpus, but the methodology adopted is language and genre independent, provided there is annotated data available.",
"The remainder of this paper is structured as follows: the next section presents a theoretic overview and gives an overview of the state of the art in cyberbullying detection, whereas Section SECREF3 describes the corpus. Next, we present the experimental setup and discuss our experimental results. Finally, Section SECREF6 concludes this paper and provides perspectives for further research."
],
[
"Cyberbullying is a widely covered topic in the realm of social sciences and psychology. A fair amount of research has been done on the definition and prevalence of the phenomenon BIBREF16 , BIBREF0 , BIBREF17 , the identification of different forms of cyberbullying BIBREF18 , BIBREF19 , BIBREF20 , and its consequences BIBREF9 , BIBREF12 , BIBREF21 . In contrast to the efforts made in defining and measuring cyberbullying, the number of studies that focus on its annotation and automatic detection, is limited BIBREF22 . Nevertheless, some important advances have been made in the domain over the past few years."
],
[
"Many social and psychological studies have worked towards a definition of cyberbullying. A common starting point for conceptualising cyberbullying are definitions of traditional (or offline) bullying. Seminal work has been published by BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , who describe bullying based on three main criteria, including i) intention (i.e., a bully intends to inflict harm on the victim), ii) repetition (i.e., bullying acts take place repeatedly over time) and iii) a power imbalance between the bully and the victim (i.e., a more powerful bully attacks a less powerful victim). With respect to cyberbullying, a number of definitions are based on the above-mentioned criteria. A popular definition is that of BIBREF21 which describes cyberbullying as “an aggressive, intentional act carried out by a group or individual, using electronic forms of contact, repeatedly and over time, against a victim who cannot easily defend him or herself”.",
"Nevertheless, some studies have underlined the differences between offline and online bullying, and have therefore questioned the relevance of the three criteria to the latter. Besides theoretical objections, a number of practical limitations have been observed. Firstly, while BIBREF23 claims intention to be inherent to traditional bullying, this is much harder to ascertain in an online environment. Online conversations lack the signals of a face-to-face interaction like intonation, facial expressions and gestures, which makes them more ambiguous than real-life conversations. The receiver may therefore get the wrong impression that they are being offended or ridiculed BIBREF19 . Another criterion for bullying that might not hold in online situations, is the power imbalance between bully and victim. Although this can be evident in real life (e.g. the bully is larger, stronger, older than the victim), it is hard to conceptualise or measure in an online environment. It may be related to technological skills, anonymity or the inability of the victim to get away from the bullying BIBREF27 , BIBREF17 , BIBREF28 . Empowering for the bully are also inherent characteristics of the Web: once defamatory or confidential information about a person is made public through the Internet, it is hard, if not impossible, to remove.",
"Finally, while arguing that repetition is a criterion to distinguish cyberbullying from single acts of aggression, BIBREF23 himself states that such a single aggressive action can be considered bullying under certain circumstances, although it is not entirely clear what these circumstances involve. Accordingly, BIBREF27 claim that repetition in cyberbullying is problematic to operationalise, as it is unclear what the consequences are of a single derogatory message on a public page. A single act of aggression or humiliation may result in continued distress and humiliation for the victim if it is shared or liked by multiple perpetrators or read by a large audience. BIBREF29 compare this with a `snowball effect': one post may be repeated or distributed by other people so that it becomes out of the control of the initial bully and has larger effects than was originally intended.",
"Given these arguments, a number of less `strict' definitions of cyberbullying were postulated by among others BIBREF6 , BIBREF5 , BIBREF1 , where a power imbalance and repetition are not deemed necessary conditions for cyberbullying.",
"The above paragraphs demonstrate that defining cyberbullying is far from trivial, and varying prevalence rates (cf. Section SECREF1 ) confirm that a univocal definition of the phenomenon is still lacking in the literature BIBREF1 . Based on existing conceptualisations, we define cyberbullying as content that is published online by an individual and that is aggressive or hurtful against a victim. Based on this definition, an annotation scheme was developed (see BIBREF30 ) to signal textual characteristics of cyberbullying, including posts from bullies, as well as reactions by victims and bystanders."
],
[
"As mentioned earlier, although research on cyberbullying detection is more limited than social studies on the phenomenon, some important advances have been made in recent years. In what follows, we present a brief overview of the most important natural language processing approaches to cyberbullying detection.",
"Although some studies have investigated the effectiveness of rule-based modelling BIBREF31 , the dominant approach to cyberbullying detection involves machine learning. Most machine learning approaches are based on supervised BIBREF32 , BIBREF33 , BIBREF34 or semi-supervised learning BIBREF35 . The former involves the construction of a classifier based on labeled training data, whereas semi-supervised approaches rely on classifiers that are built from a training corpus containing a small set of labeled and a large set of unlabelled instances (a method that is often used to handle data sparsity). As cyberbullying detection essentially involves the distinction between bullying and non-bullying posts, the problem is generally approached as a binary classification task where the positive class is represented by instances containing (textual) cyberbullying, while the negative class includes instances containing non-cyberbullying or `innocent' text.",
"A key challenge in cyberbullying research is the availability of suitable data, which is necessary to develop models that characterise cyberbullying. In recent years, only a few datasets have become publicly available for this particular task, such as the training sets provided in the context of the CAW 2.0 workshop and more recently, the Twitter Bullying Traces dataset BIBREF36 . As a result, several studies have worked with the former or have constructed their own corpus from social media websites that are prone to bullying content, such as YouTube BIBREF32 , BIBREF33 , Formspring BIBREF33 , and ASKfm BIBREF37 (the latter two are social networking sites where users can send each other questions or respond to them). Despite the bottleneck of data availability, existing approaches to cyberbullying detection have shown its potential, and the relevance of automatic text analysis techniques to ensure child safety online has been recognised BIBREF38 , BIBREF39 .",
"Among the first studies on cyberbullying detection are BIBREF34 , BIBREF31 , BIBREF33 , who explored the predictive power of INLINEFORM0 -grams (with and without tf-idf weighting), part-of-speech information (e.g. first and second pronouns), and sentiment information based on profanity lexicons for this task. Similar features were also exploited for the detection of cyberbullying events and fine-grained text categories related to cyberbullying BIBREF37 , BIBREF40 . More recent studies have demonstrated the added value of combining such content-based features with user-based information, such as including users' activities on a social network (i.e., the number of posts), their age, gender, location, number of friends and followers, and so on BIBREF32 , BIBREF35 , BIBREF41 . Moreover, semantic features have been explored to further improve classification performance of the task. To this end, topic model information BIBREF42 , as well as semantic relations between INLINEFORM1 -grams (according to a Word2Vec model BIBREF43 ) have been integrated.",
"As mentioned earlier, data collection remains a bottleneck in cyberbullying research. Although cyberbullying has been recognised as a serious problem (cf. Section SECREF1 ), real-world examples are often hard to find in public platforms. Naturally, the vast majority of communications do not contain traces of verbal aggression or transgressive behaviour. When constructing a corpus for machine learning purposes, this results in imbalanced datasets, meaning that one class (e.g. cyberbullying posts) is much less represented in the corpus than the other (e.g. non-cyberbullying posts). To tackle this problem, several studies have adopted resampling techniques BIBREF35 , BIBREF41 , BIBREF31 that create synthetic minority class examples or reduce the number of negative class examples (i.e., minority class oversampling and majority class undersampling BIBREF44 ).",
"Table TABREF9 presents a number of recent studies on cyberbullying detection, providing insight into the state of the art in cyberbullying research and the contribution of the current research to the domain.",
"The studies discussed in this section have demonstrated the feasibility of automatic cyberbullying detection in social media data by making use of a varied set of features. Most of them have, however, focussed on cyberbullying `attacks', or posts written by a bully. Moreover, it is not entirely clear if different forms of cyberbullying have been taken into account (e.g. sexual intimidation or harassment, or psychological threats), in addition to derogatory language or insults.",
"In the research described in this paper, cyberbullying is considered a complex phenomenon consisting of different forms of harmful behaviour online, which are described in more detail in our annotation scheme BIBREF30 . Purposing to facilitate manual monitoring efforts on social networks, we develop a system that automatically detects signals of cyberbullying, including attacks from bullies, as well as victim and bystander reactions. Similarly, BIBREF42 investigated bullying traces posted by different author roles (accuser, bully, reporter, victim). However, they collected tweets by using specific keywords (i.e., bully, bullied and bullying). As a result, their corpus contains many reports or testimonials of a cyberbullying incident (example 1), instead of actual signals that cyberbullying is going on. Moreover, their method implies that cyberbullying-related content devoid of such keywords will not be part of the training corpus.",
"`Some tweens got violent on the n train, the one boy got off after blows 2 the chest... Saw him cryin as he walkd away :( bullying not cool' BIBREF42 ",
"For this research, English and Dutch social media data were annotated for different forms of cyberbullying, based on the actors involved in a cyberbullying incident. After preliminary experiments for Dutch BIBREF37 , BIBREF40 , we currently explore the viability of detecting cyberbullying-related posts in Dutch and English social media. To this end, binary classification experiments are performed exploiting a rich feature set and optimised hyperparameters.",
"font=footnotesize,sc,justification=centering,labelsep=period"
],
[
"To be able to build representative models for cyberbullying, a suitable dataset is required. This section describes the construction of two corpora, English and Dutch, containing social media posts that are manually annotated for cyberbullying according to our fine-grained annotation scheme. This allows us to develop a detection system covering different forms and participants (or roles) involved in a cyberbullying event."
],
[
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user's profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software in April and October, 2013. After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
[
"Cyberbullying has been a widely covered research topic recently and studies have shed light on direct and indirect types of cyberbullying, implicit and explicit forms, verbal and non-verbal cyberbullying, and so on. This is important from a sociolinguistic point of view, but knowing what cyberbullying involves is also crucial to build models for automatic cyberbullying detection. In the following paragraphs, we present our data annotation guidelines BIBREF30 and focus on different types and roles related to the phenomenon."
],
[
"Cyberbullying research is mainly centered around the conceptualisation, occurrence and prevention of the phenomenon BIBREF16 , BIBREF0 , BIBREF17 . Additionally, different forms of cyberbullying have been identified BIBREF18 , BIBREF12 , BIBREF20 and compared with forms of traditional or offline bullying BIBREF19 . Like traditional bullying, direct and indirect forms of cyberbullying have been identified. Direct cyberbullying refers to actions in which the victim is directly involved (e.g. sending a virus-infected file, excluding someone from an online group, insulting and threatening), whereas indirect cyberbullying can take place without awareness of the victim (e.g. outing or publishing confidential information, spreading gossip, creating a hate page on social networking sites) BIBREF19 .",
"The present annotation scheme describes some specific textual categories related to cyberbullying, including threats, insults, defensive statements from a victim, encouragements to the harasser, etc. (see Section SECREF15 for a complete overview). All of these forms were inspired by social studies on cyberbullying BIBREF7 , BIBREF19 and manual inspection of cyberbullying examples."
],
[
"Similarly to traditional bullying, cyberbullying involves a number of participants that adopt well-defined roles. Researchers have identified several roles in (cyber)bullying interactions. Although traditional studies on bullying have mainly concentrated on bullies and victims BIBREF46 , the importance of bystanders in a bullying episode has been acknowledged BIBREF47 , BIBREF48 . Bystanders can support the victim and mitigate the negative effects caused by the bullying BIBREF48 , especially on social networking sites, where they hold higher intentions to help the victim than in real life conversations BIBREF49 . While BIBREF46 distinguish four different bystanders, BIBREF50 distinguish three main types: i) bystanders who participate in the bullying, ii) who help or support the victim and iii) those who ignore the bullying. Given that passive bystanders are hard to recognise in online text, only the former two are included in our annotation scheme."
],
[
"To operationalise the task of automatic cyberbullying detection, we developed and tested a fine-grained annotation scheme and applied it to our corpora. While a detailed overview of the guidelines is presented in our technical report BIBREF30 , we briefly present the categories and main annotation steps below.",
"Threat/Blackmail: expressions containing physical or psychological threats or indications of blackmail.",
"Insult: expressions meant to hurt or offend the victim.",
"General insult: general expressions containing abusive, degrading or offensive language that are meant to insult the addressee.",
"Attacking relatives: insulting expressions towards relatives or friends of the victim.",
"Discrimination: expressions of unjust or prejudicial treatment of the victim. Two types of discrimination are distinguished (i.e., sexism and racism). Other forms of discrimination should be categorised as general insults.",
"Curse/Exclusion: expressions of a wish that some form of adversity or misfortune will befall the victim and expressions that exclude the victim from a conversation or a social group.",
"Defamation: expressions that reveal confident or defamatory information about the victim to a large public.",
"Sexual Talk: expressions with a sexual meaning or connotation. A distinction is made between innocent sexual talk and sexual harassment.",
"Defense: expressions in support of the victim, expressed by the victim himself or by a bystander.",
"Bystander defense: expressions by which a bystander shows support for the victim or discourages the harasser from continuing his actions.",
"Victim defense: assertive or powerless reactions from the victim.",
"Encouragement to the harasser: expressions in support of the harasser.",
"Other: expressions that contain any other form of cyberbullying-related behaviour than the ones described here.",
"Based on the literature on role-allocation in cyberbullying episodes BIBREF51 , BIBREF50 , four roles are distinguished, including victim, bully, and two types of bystanders.",
"Harasser or Bully: person who initiates the bullying.",
"Victim: person who is harassed.",
"Bystander-defender: person who helps the victim and discourages the harasser from continuing his actions.",
"Bystander-assistant: person who does not initiate, but helps or encourages the harasser.",
"Essentially, the annotation scheme describes two levels of annotation. Firstly, the annotators were asked to indicate, at the post level, whether the post under investigation was related to cyberbullying. If the post was considered a signal of cyberbullying, annotators identified the author's role. Secondly, at the subsentence level, the annotators were tasked with the identification of a number of fine-grained text categories related to cyberbullying. More concretely, they identified all text spans corresponding to one of the categories described in the annotation scheme. To provide the annotators with some context, all posts were presented within their original conversation when possible. All annotations were done using the Brat rapid annotation tool BIBREF52 , some examples of which are presented in Table TABREF33 .",
"font=footnotesize,sc,justification=centering,labelsep=period"
],
[
"The English and Dutch corpora were independently annotated for cyberbullying by trained linguists. All were Dutch native speakers and English second-language speakers. To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus. Inter-rater agreement for Dutch (2 raters) is calculated using Cohen's Kappa BIBREF53 . Fleiss' Kappa BIBREF54 is used for the English corpus ( INLINEFORM0 2 raters). Kappa scores for the identification of cyberbullying are INLINEFORM1 = 0.69 (Dutch) and INLINEFORM2 = 0.59 (English).",
"As shown in Table TABREF35 , inter-annotator agreement for the identification of the more fine-grained categories for English varies from fair to substantial BIBREF55 , except for defamation, which appears to be more difficult to recognise. No encouragements to the harasser were present in this subset of the corpus. For Dutch, the inter-annotator agreement is fair to substantial, except for curse and defamation. Analysis revealed that one of both annotators often annotated the latter as an insult, and in some cases even did not consider it as cyberbullying-related.",
"In short, the inter-rater reliability study shows that the annotation of cyberbullying is not trivial and that more fine-grained categories like defamation, curse and encouragements are sometimes hard to recognise. It appears that defamations were sometimes hard to distinguish from insults, whereas curses and exclusions were sometimes considered insults or threats. The analysis further reveals that encouragements to the harasser are subject to interpretation. Some are straightforward (e.g. `I agree we should send her hate'), whereas others are subject to the annotator's judgement and interpretation (e.g. `hahaha', `LOL')."
],
[
"In this paper, we explore the feasibility of automatically recognising signals of cyberbullying. A crucial difference with state-of-the-art approaches to cyberbullying detection is that we aim to model bullying attacks, as well as reactions from victims and bystanders (i.e., all under one binary label `signals of cyberbullying'), since these could likewise indicate that cyberbullying is going on. The experiments described in this paper focus on the detection of such posts, which are signals of a potential cyberbullying event to be further investigated by human moderators.",
"The English and Dutch corpus contain 113,698 and 78,387 posts, respectively. As shown in Table TABREF36 , the experimental corpus features a heavily imbalanced class distribution with the large majority of posts not being part of cyberbullying. In classification, this class imbalance can lead to decreased performance. We apply cost-sensitive SVM as a possible hyperparameter in optimisation to counter this. The cost-sensitive SVM reweighs the penalty parameter INLINEFORM0 of the error term by the inverse class-ratio. This means that misclassifications of the minority positive class are penalised more than classification errors on the majority negative class. Other pre-processing methods to handle data imbalance in classification include feature filtering metrics and data resampling BIBREF56 . These methods were omitted as they were found to be too computationally expensive given our high-dimensional dataset.",
"For the automatic detection of cyberbullying, we performed binary classification experiments using a linear kernel support vector machine (SVM) implemented in LIBLINEAR BIBREF57 by making use of Scikit-learn BIBREF58 , a machine learning library for Python. The motivation behind this is twofold: i) support vector machines (SVMs) have proven to work well for tasks similar to the ones under investigation BIBREF38 and ii) LIBLINEAR allows fast training of large-scale data which allow for a linear mapping (which was confirmed after a series of preliminary experiments using LIBSVM with linear, RBF and polynomial kernels).",
"The classifier was optimised for feature type (cf. Section SECREF38 ) and hyperparameter combinations (cf. Table TABREF37 ). Model selection was done using 10-fold cross validation in grid search over all possible feature types (i.e., groups of similar features, like different orders of INLINEFORM0 -gram bag-of-words features) and hyperparameter configurations. The best performing hyperparameters are selected by F INLINEFORM1 -score on the positive class. The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The holdout represents a random sample ( INLINEFORM2 ) of all data. The folds were randomly stratified splits over the hold-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task.",
"Feature selection over all individual features was not performed because of the large feature space (NL: 795,072 and EN: 871,296 individual features). BIBREF59 , among other researchers, demonstrated the importance of joint optimisation, where feature selection and hyperparameter optimisation are performed simultaneously, since the techniques mutually influence each other.",
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
[
"As pre-processing, we applied tokenisation, PoS-tagging and lemmatisation to the data using the LeTs Preprocess Toolkit BIBREF60 . In supervised learning, a machine learning algorithm takes a set of training instances (of which the label is known) and seeks to build a model that generates a desired prediction for an unseen instance. To enable the model construction, all instances are represented as a vector of features (i.e., inherent characteristics of the data) that contain information that is potentially useful to distinguish cyberbullying from non-cyberbullying content.",
"We experimentally tested whether cyberbullying events can be recognised automatically by lexical markers in a post. To this end, all posts were represented by a number of information sources (or features) including lexical features like bags-of-words, sentiment lexicon features and topic model features, which are described in more detail below. Prior to feature extraction, some data cleaning steps were executed, such as the replacement of hyperlinks and @-replies, removal of superfluous white spaces, and the replacement of abbreviations by their full form (based on an existing mapping dictionary ). Additionally, tokenisation was applied before INLINEFORM0 -gram extraction and sentiment lexicon matching, and stemming was applied prior to extracting topic model features.",
"After pre-processing of the corpus, the following feature types were extracted:",
"Word INLINEFORM0 -gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.",
"Character INLINEFORM0 -gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). Character INLINEFORM1 -grams provide some abstraction from the word level and provide robustness to the spelling variation that characterises social media data.",
"Term lists: one binary feature derived for each one out of six lists, indicating the presence of an item from the list in a post: proper names, `allness' indicators (e.g. always, everybody), diminishers (e.g. slightly, relatively), intensifiers (e.g. absolutely, amazingly), negation words and aggressive language and profanity words. Person alternation is a binary feature indicating whether the combination of a first and second person pronoun occurs in order to capture interpersonal intent.",
"Subjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons. For Dutch, we made use of the Duoman BIBREF61 and Pattern BIBREF62 lexicons. For English, we included the Hu and Liu opinion lexicon BIBREF63 , the MPQA lexicon BIBREF64 , General Inquirer Sentiment Lexicon BIBREF65 , AFINN BIBREF66 , and MSOL BIBREF67 . For both languages, we included the relative frequency of all 68 psychometric categories in the Linguistic Inquiry and Word Count (LIWC) dictionary for English BIBREF68 and Dutch BIBREF69 .",
"Topic model features: by making use of the Gensim topic modelling library BIBREF70 , several LDA BIBREF71 and LSI BIBREF72 topic models with varying granularity ( INLINEFORM0 = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses). The topic models were based on a background corpus (EN: INLINEFORM1 tokens, NL: INLINEFORM2 tokens) scraped with the BootCAT BIBREF73 web-corpus toolkit. BootCaT collects ASKfm user profiles using lists of manually determined seed words that are characteristic of the cyberbullying categories.",
"When applied to the training data, this resulted in INLINEFORM0 and INLINEFORM1 features for English and Dutch, respectively."
],
[
"In this section, we present the results of our experiments on the automatic detection of cyberbullying-related posts in an English (EN) and Dutch (NL) corpus of ASKfm posts. Ten-fold cross-validation was performed in exhaustive grid-search over different feature type and hyperparameter combinations (see Section SECREF4 ). The unoptimised word INLINEFORM0 -gram-based classifier and keyword-matching system serve as baselines for comparison. Precision, Recall and F INLINEFORM1 performance metrics were calculated on the positive class (i.e., `binary averaging'). We also report Area Under the ROC curve (AUC) scores, a performance metric that is more robust to data imbalance than precision, recall and micro-averaged F-score BIBREF74 .",
"Table TABREF45 gives us an indication of which feature type combinations score best and hence contribute most to this task. A total of 31 feature type combinations, each with 28 different hyperparameter sets have been tested. Table TABREF45 shows the results for the three best scoring systems by included feature types with optimised hyperparameters. The maximum attained F INLINEFORM0 -score in cross-validation is 64.26% for English and 61.20% for Dutch and shows that the classifier benefits from a variety of feature types. The results on the holdout test set show that the trained systems generalise well on unseen data, indicating little under- or overfitting. The simple keyword-matching baseline system has the lowest performance for both languages even though it obtains high recall for English, suggesting that profane language characterises many cyberbullying-related posts. Feature group and hyperparameter optimisation provides a considerable performance increase over the unoptimised word INLINEFORM1 -gram baseline system. The top-scoring systems for each language do not differ a lot in performance, except the best system for Dutch, which trades recall for precision when compared to the runner-ups.",
"Table TABREF47 presents the scores of the (hyperparameter-optimised) single feature type systems, to gain insight into the performance of these feature types when used individually. Analysis of the combined and single feature type sets reveals that word INLINEFORM0 -grams, character INLINEFORM1 -grams, and subjectivity lexicons prove to be strong features for this task. In effect, adding character INLINEFORM2 -grams always improved classification performance for both languages. They likely provide robustness to lexical variation in social media text, as compared to word INLINEFORM3 -grams. While subjectivity lexicons appear to be discriminative features, term lists perform badly on their own as well as in combinations for both languages. This shows once again (cf. profanity baseline) that cyberbullying detection requires more sophisticated information sources than profanity lists. Topic models seem to do badly for both languages on their own, but in combination, they improve Dutch performance consistently. A possible explanation for their varying performance in both languages would be that the topic models trained on the Dutch background corpus are of better quality than the English ones. In effect, a random selection of background corpus texts reveals that the English scrape contains more noisy data (i.e., low word-count posts and non-English posts) than the Dutch data.",
"A shallow qualitative analysis of the classification output provided insight into some of the classification mistakes.",
"Table TABREF52 gives an overview of the error rates per cyberbullying category of the best performing and baseline systems. This could give an indication of which types of bullying the current system has trouble classifying. All categories are always considered positive for cyberbullying (i.e., the error rate equals the false negative rate), except for Sexual and Insult which can also be negative (in case of harmless sexual talk and `socially acceptable' insulting language like `hi bitches, in for a movie?' the corresponding category was indicated, but the post itself was not annotated as cyberbullying) and Not cyberbullying, which is always negative. Error rates often being lowest for the profanity baseline confirms that it performs particularly well in terms of recall (at the expense of precision, see Table TABREF47 ) When looking at the best system for both languages, we see that Defense is the hardest category to correctly classify. This should not be a surprise as the category comprises defensive posts from bystanders and victims, which contain less aggressive language than cyberbullying attacks and are often shorter in length than the latter. Assertive defensive posts (i.e., a subcategory of Defense) that attack the bully) are, however, more often correctly classified. There are not enough instances of Encouragement for either language in the holdout to be representative. In both languages, threats, curses and incidences of sexual harassment are most easily recognisable, showing (far) lower error rates than the categories Defamation, Defense, Encouragements to the harasser, and Insult.",
"Qualitative error analysis of the English and Dutch predictions reveals that false positives often contain aggressive language directed at a second person, often denoting personal flaws or containing sexual and profanity words. We see that misclassifications are often short posts containing just a few words and that false negatives often lack explicit verbal signs of cyberbullying (e.g. insulting or profane words) or are ironic (examples 2 and 3). Additionally, we see that cyberbullying posts containing misspellings or grammatical errors and incomplete words are also hard to recognise as such (examples 4 and 5). The Dutch and English data are overall similar with respect to qualitative properties of classification errors.",
"In short, the experiments show that our classifier clearly outperforms both a keyword-based and word INLINEFORM0 -gram baseline. However, analysis of the classifier output reveals that false negatives often lack explicit clues that cyberbullying is going on, indicating that our system might benefit from irony recognition and integrating world knowledge to capture such implicit realisations of cyberbullying.",
"Given that we present the first elaborate research on detecting signals of cyberbullying regardless of the author role instead of bully posts alone, crude comparison with the state of the art would be irrelevant. We observe, however, that our classifier obtains competitive results compared to BIBREF32 , BIBREF33 , BIBREF35 , BIBREF34 , BIBREF37 ."
],
[
"The goal of the current research was to investigate the automatic detection of cyberbullying-related posts on social media. Given the information overload on the web, manual monitoring for cyberbullying has become unfeasible. Automatic detection of signals of cyberbullying would enhance moderation and allow to respond quickly when necessary.",
"Cyberbullying research has often focused on detecting cyberbullying `attacks', hence overlooking posts written by victims and bystanders. However, these posts could just as well indicate that cyberbullying is going on. The main contribution of this paper is that it presents a system for detecting signals of cyberbullying on social media, including posts from bullies, victims and bystanders. A manually annotated cyberbullying dataset was created for two languages, which will be made available for public scientific use. Moreover, while a fair amount of research has been done on cyberbullying detection for English, we believe this is one of the first papers that focus on Dutch as well.",
"A set of binary classification experiments were conducted to explore the feasibility of automatic cyberbullying detection on social media. In addition, we sought to determine which information sources contribute to this task. Two classifiers were trained on English and Dutch ASKfm data and evaluated on a holdout test of the same genre. Our experiments reveal that the current approach is a promising strategy for detecting signals of cyberbullying in social media data automatically. After feature selection and hyperparameter optimisation, the classifiers achieved an F INLINEFORM0 -score of 64.32% and 58.72% for English and Dutch, respectively. The systems hereby significantly outperformed a keyword and an (unoptimised) INLINEFORM1 -gram baseline. Analysis of the results revealed that false positives often include implicit cyberbullying or offenses through irony, the challenge of which will constitute an important area for future work.",
"Another interesting direction for future work would be the detection of fine-grained cyberbullying-related categories such as threats, curses and expressions of racism and hate. When applied in a cascaded model, the system could find severe cases of cyberbullying with high precision. This would be particularly interesting for monitoring purposes, since it would allow to prioritise signals of bullying that are in urgent need for manual inspection and follow-up.",
"Finally, future work will focus on the detection of participants (or roles) typically involved in cyberbullying. This would allow to analyse the context of a cyberbullying incident and hence evaluate its severity. When applied as moderation support on online platforms, such a system would allow to provide feedback in function of the recipient (i.e., a bully, victim, or bystander)."
],
[
"The work presented in this paper was carried out in the framework of the AMiCA IWT SBO-project 120007 project, funded by the government Flanders Innovation & Entrepreneurship (VLAIO) agency."
]
],
"section_name": [
"Introduction",
"Related Research",
"A Definition of Cyberbullying",
"Detecting and Preventing Cyberbullying",
"Data Collection and Annotation",
"Data Collection",
"Data Annotation",
"Types of Cyberbullying",
"Roles in Cyberbullying",
"Annotation Guidelines",
"Annotation Statistics",
"Experimental Setup",
"Pre-processing and Feature Engineering",
"Results",
"Conclusions and Future Research",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"1aa6175a8a3a2580f0eb01f548026debd8afe6ef",
"215ddc312acf99a51c5ed8b7a14144c3c2b9da76",
"89b357854ba93fe95b74dbb94da51501544a7f01"
],
"answer": [
{
"evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"extractive_spans": [
"an unoptimised linear-kernel SVM",
"a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms"
],
"free_form_answer": "",
"highlighted_evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"extractive_spans": [
"unoptimised linear-kernel SVM",
"keyword-based system"
],
"free_form_answer": "",
"highlighted_evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"extractive_spans": [],
"free_form_answer": "Linear-kernel SVM based on word n-grams, vocabulary-based classifier.",
"highlighted_evidence": [
"The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word INLINEFORM0 -grams only and, ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists composed by aggressive language and profanity terms."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"886ac593e7b715c5a3cca4f692b55aa237f626fd",
"9bb4bdd560f76cf854f1a2548207ecc64b38ce15",
"cd709b9ed52f45e9d34cf7a742533fe1959e653f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Inter-annotator agreement on the fine-grained categories related to cyberbullying."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Inter-annotator agreement on the fine-grained categories related to cyberbullying."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The English and Dutch corpora were independently annotated for cyberbullying by trained linguists. All were Dutch native speakers and English second-language speakers. To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus. Inter-rater agreement for Dutch (2 raters) is calculated using Cohen's Kappa BIBREF53 . Fleiss' Kappa BIBREF54 is used for the English corpus ( INLINEFORM0 2 raters). Kappa scores for the identification of cyberbullying are INLINEFORM1 = 0.69 (Dutch) and INLINEFORM2 = 0.59 (English)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The English and Dutch corpora were independently annotated for cyberbullying by trained linguists. All were Dutch native speakers and English second-language speakers. To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus. Inter-rater agreement for Dutch (2 raters) is calculated using Cohen's Kappa BIBREF53 . Fleiss' Kappa BIBREF54 is used for the English corpus ( INLINEFORM0 2 raters). Kappa scores for the identification of cyberbullying are INLINEFORM1 = 0.69 (Dutch) and INLINEFORM2 = 0.59 (English)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The English and Dutch corpora were independently annotated for cyberbullying by trained linguists. All were Dutch native speakers and English second-language speakers. To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus. Inter-rater agreement for Dutch (2 raters) is calculated using Cohen's Kappa BIBREF53 . Fleiss' Kappa BIBREF54 is used for the English corpus ( INLINEFORM0 2 raters). Kappa scores for the identification of cyberbullying are INLINEFORM1 = 0.69 (Dutch) and INLINEFORM2 = 0.59 (English)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"0c8738564ed8113010154ba84fd778bf08260928",
"0fbe2b44a4333539273df5b36ee8b8ec2883c43a",
"26408e7b7f3fc7ae7e9a1ebe88b2248db765b2a8"
],
"answer": [
{
"evidence": [
"The English and Dutch corpus contain 113,698 and 78,387 posts, respectively. As shown in Table TABREF36 , the experimental corpus features a heavily imbalanced class distribution with the large majority of posts not being part of cyberbullying. In classification, this class imbalance can lead to decreased performance. We apply cost-sensitive SVM as a possible hyperparameter in optimisation to counter this. The cost-sensitive SVM reweighs the penalty parameter INLINEFORM0 of the error term by the inverse class-ratio. This means that misclassifications of the minority positive class are penalised more than classification errors on the majority negative class. Other pre-processing methods to handle data imbalance in classification include feature filtering metrics and data resampling BIBREF56 . These methods were omitted as they were found to be too computationally expensive given our high-dimensional dataset.",
"The classifier was optimised for feature type (cf. Section SECREF38 ) and hyperparameter combinations (cf. Table TABREF37 ). Model selection was done using 10-fold cross validation in grid search over all possible feature types (i.e., groups of similar features, like different orders of INLINEFORM0 -gram bag-of-words features) and hyperparameter configurations. The best performing hyperparameters are selected by F INLINEFORM1 -score on the positive class. The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The holdout represents a random sample ( INLINEFORM2 ) of all data. The folds were randomly stratified splits over the hold-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task."
],
"extractive_spans": [],
"free_form_answer": "Random 10 percent out of 78381 posts.",
"highlighted_evidence": [
"The English and Dutch corpus contain 113,698 and 78,387 posts, respectively.",
"The holdout represents a random sample ( INLINEFORM2 ) of all data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The classifier was optimised for feature type (cf. Section SECREF38 ) and hyperparameter combinations (cf. Table TABREF37 ). Model selection was done using 10-fold cross validation in grid search over all possible feature types (i.e., groups of similar features, like different orders of INLINEFORM0 -gram bag-of-words features) and hyperparameter configurations. The best performing hyperparameters are selected by F INLINEFORM1 -score on the positive class. The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The holdout represents a random sample ( INLINEFORM2 ) of all data. The folds were randomly stratified splits over the hold-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task."
],
"extractive_spans": [
"sample ( INLINEFORM2 ) of all data"
],
"free_form_answer": "",
"highlighted_evidence": [
"The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The holdout represents a random sample ( INLINEFORM2 ) of all data. The folds were randomly stratified splits over the hold-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user's profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software in April and October, 2013. After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"extractive_spans": [],
"free_form_answer": "78387",
"highlighted_evidence": [
"After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"743924d4225e502c48a989e143bfbd959fafad48",
"a50f2e36c1d1fc8c3beb5a2a42f41b2129d5511b"
],
"answer": [
{
"evidence": [
"The English and Dutch corpus contain 113,698 and 78,387 posts, respectively. As shown in Table TABREF36 , the experimental corpus features a heavily imbalanced class distribution with the large majority of posts not being part of cyberbullying. In classification, this class imbalance can lead to decreased performance. We apply cost-sensitive SVM as a possible hyperparameter in optimisation to counter this. The cost-sensitive SVM reweighs the penalty parameter INLINEFORM0 of the error term by the inverse class-ratio. This means that misclassifications of the minority positive class are penalised more than classification errors on the majority negative class. Other pre-processing methods to handle data imbalance in classification include feature filtering metrics and data resampling BIBREF56 . These methods were omitted as they were found to be too computationally expensive given our high-dimensional dataset.",
"The classifier was optimised for feature type (cf. Section SECREF38 ) and hyperparameter combinations (cf. Table TABREF37 ). Model selection was done using 10-fold cross validation in grid search over all possible feature types (i.e., groups of similar features, like different orders of INLINEFORM0 -gram bag-of-words features) and hyperparameter configurations. The best performing hyperparameters are selected by F INLINEFORM1 -score on the positive class. The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The holdout represents a random sample ( INLINEFORM2 ) of all data. The folds were randomly stratified splits over the hold-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task."
],
"extractive_spans": [],
"free_form_answer": "Random 90 percent out of 113698 posts.",
"highlighted_evidence": [
"The English and Dutch corpus contain 113,698 and 78,387 posts, respectively.",
"The holdout represents a random sample ( INLINEFORM2 ) of all data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user's profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software in April and October, 2013. After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"extractive_spans": [],
"free_form_answer": "113698",
"highlighted_evidence": [
"After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"24889a2b4c0852482930d3c8862a05f7450c4265",
"c5698a950c3aa5de41800c7493647091413ab59b"
],
"answer": [
{
"evidence": [
"After pre-processing of the corpus, the following feature types were extracted:",
"Word INLINEFORM0 -gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.",
"Character INLINEFORM0 -gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). Character INLINEFORM1 -grams provide some abstraction from the word level and provide robustness to the spelling variation that characterises social media data.",
"Term lists: one binary feature derived for each one out of six lists, indicating the presence of an item from the list in a post: proper names, `allness' indicators (e.g. always, everybody), diminishers (e.g. slightly, relatively), intensifiers (e.g. absolutely, amazingly), negation words and aggressive language and profanity words. Person alternation is a binary feature indicating whether the combination of a first and second person pronoun occurs in order to capture interpersonal intent.",
"Subjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons. For Dutch, we made use of the Duoman BIBREF61 and Pattern BIBREF62 lexicons. For English, we included the Hu and Liu opinion lexicon BIBREF63 , the MPQA lexicon BIBREF64 , General Inquirer Sentiment Lexicon BIBREF65 , AFINN BIBREF66 , and MSOL BIBREF67 . For both languages, we included the relative frequency of all 68 psychometric categories in the Linguistic Inquiry and Word Count (LIWC) dictionary for English BIBREF68 and Dutch BIBREF69 .",
"Topic model features: by making use of the Gensim topic modelling library BIBREF70 , several LDA BIBREF71 and LSI BIBREF72 topic models with varying granularity ( INLINEFORM0 = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses). The topic models were based on a background corpus (EN: INLINEFORM1 tokens, NL: INLINEFORM2 tokens) scraped with the BootCAT BIBREF73 web-corpus toolkit. BootCaT collects ASKfm user profiles using lists of manually determined seed words that are characteristic of the cyberbullying categories."
],
"extractive_spans": [
"Word INLINEFORM0 -gram bag-of-words",
"Character INLINEFORM0 -gram bag-of-words",
"Term lists",
"Subjectivity lexicon features",
"Topic model features"
],
"free_form_answer": "",
"highlighted_evidence": [
"After pre-processing of the corpus, the following feature types were extracted:\n\nWord INLINEFORM0 -gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.\n\nCharacter INLINEFORM0 -gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). ",
"Term lists: one binary feature derived for each one out of six lists, indicating the presence of an item from the list in a post: proper names, `allness' indicators (e.g. always, everybody), diminishers (e.g. slightly, relatively), intensifiers (e.g. absolutely, amazingly), negation words and aggressive language and profanity words. ",
"Subjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons.",
"Topic model features: by making use of the Gensim topic modelling library BIBREF70 , several LDA BIBREF71 and LSI BIBREF72 topic models with varying granularity ( INLINEFORM0 = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After pre-processing of the corpus, the following feature types were extracted:",
"Word INLINEFORM0 -gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.",
"Character INLINEFORM0 -gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). Character INLINEFORM1 -grams provide some abstraction from the word level and provide robustness to the spelling variation that characterises social media data.",
"Term lists: one binary feature derived for each one out of six lists, indicating the presence of an item from the list in a post: proper names, `allness' indicators (e.g. always, everybody), diminishers (e.g. slightly, relatively), intensifiers (e.g. absolutely, amazingly), negation words and aggressive language and profanity words. Person alternation is a binary feature indicating whether the combination of a first and second person pronoun occurs in order to capture interpersonal intent.",
"Subjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons. For Dutch, we made use of the Duoman BIBREF61 and Pattern BIBREF62 lexicons. For English, we included the Hu and Liu opinion lexicon BIBREF63 , the MPQA lexicon BIBREF64 , General Inquirer Sentiment Lexicon BIBREF65 , AFINN BIBREF66 , and MSOL BIBREF67 . For both languages, we included the relative frequency of all 68 psychometric categories in the Linguistic Inquiry and Word Count (LIWC) dictionary for English BIBREF68 and Dutch BIBREF69 .",
"Topic model features: by making use of the Gensim topic modelling library BIBREF70 , several LDA BIBREF71 and LSI BIBREF72 topic models with varying granularity ( INLINEFORM0 = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses). The topic models were based on a background corpus (EN: INLINEFORM1 tokens, NL: INLINEFORM2 tokens) scraped with the BootCAT BIBREF73 web-corpus toolkit. BootCaT collects ASKfm user profiles using lists of manually determined seed words that are characteristic of the cyberbullying categories."
],
"extractive_spans": [
"Topic model features",
"Subjectivity lexicon features",
"Term lists",
"Character INLINEFORM0 -gram bag-of-words",
"Word INLINEFORM0 -gram bag-of-words"
],
"free_form_answer": "",
"highlighted_evidence": [
"After pre-processing of the corpus, the following feature types were extracted:\n\nWord INLINEFORM0 -gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.\n\nCharacter INLINEFORM0 -gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). Character INLINEFORM1 -grams provide some abstraction from the word level and provide robustness to the spelling variation that characterises social media data.\n\nTerm lists: one binary feature derived for each one out of six lists, indicating the presence of an item from the list in a post: proper names, `allness' indicators (e.g. always, everybody), diminishers (e.g. slightly, relatively), intensifiers (e.g. absolutely, amazingly), negation words and aggressive language and profanity words. Person alternation is a binary feature indicating whether the combination of a first and second person pronoun occurs in order to capture interpersonal intent.\n\nSubjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons. For Dutch, we made use of the Duoman BIBREF61 and Pattern BIBREF62 lexicons. For English, we included the Hu and Liu opinion lexicon BIBREF63 , the MPQA lexicon BIBREF64 , General Inquirer Sentiment Lexicon BIBREF65 , AFINN BIBREF66 , and MSOL BIBREF67 . For both languages, we included the relative frequency of all 68 psychometric categories in the Linguistic Inquiry and Word Count (LIWC) dictionary for English BIBREF68 and Dutch BIBREF69 .\n\nTopic model features: by making use of the Gensim topic modelling library BIBREF70 , several LDA BIBREF71 and LSI BIBREF72 topic models with varying granularity ( INLINEFORM0 = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses). The topic models were based on a background corpus (EN: INLINEFORM1 tokens, NL: INLINEFORM2 tokens) scraped with the BootCAT BIBREF73 web-corpus toolkit. BootCaT collects ASKfm user profiles using lists of manually determined seed words that are characteristic of the cyberbullying categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"007813a6aa32cf30cff1d9dffd011b6921e01892",
"4245c5b77dbca18269233038edd79a2ed9640c16"
],
"answer": [
{
"evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user's profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software in April and October, 2013. After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"extractive_spans": [
"social networking site ASKfm"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user's profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software in April and October, 2013. After language filtering (i.e., non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively."
],
"extractive_spans": [
" social networking site ASKfm"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What are their baselines?",
"Do they report the annotation agreement?",
"How long is the test dataset for Dutch?",
"How long is the training dataset for English?",
"What features are used?",
"What is the source of the data?"
],
"question_id": [
"458e5ed506883bfec6623102ec9f43c071f0616f",
"85ab5f773b297bcf48a274634d402a35e1d57446",
"5154f63c50729b8ac04939588c2f5ffeb916e3df",
"2aeabec8a734a6e8ca9e7a308dd8c9a1011b3d6e",
"f2b8a2ed5916d75cf568a931829a5a3cde2fc345",
"c0af44ebd7cd81270d9b5b54d4a40feed162fa54"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"cyberbullying",
"cyberbullying",
"cyberbullying",
"cyberbullying",
"cyberbullying",
"cyberbullying"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1. Definitions and brat annotation examples of more fine-grained text categories related to cyberbullying.",
"Table 2. Inter-annotator agreement on the fine-grained categories related to cyberbullying.",
"Table 3. Statistics of the English and Dutch cyberbullying corpus.",
"Table 4. Hyperparameters in grid-search model selection.",
"Table 5. Cross-validated and hold-out scores (%) according to different metrics (F1, precision, recall, accuracy and area under the curve) for the English and Dutch three best and worst combined feature type systems.",
"Table 6. Feature group mapping (Table 5).",
"Table 7. Cross-validated and hold-out scores (%) according to different metrics (F1, precision, recall, accuracy and area under the ROC curve) for English and Dutch single feature type systems.",
"Table 8. Error rates (%) per cyberbullying subcategory on hold-out for English and Dutch systems.",
"Table 9. Error rates (%) per cyberbullying participant role on hold-out for English and Dutch systems.",
"Table 10. Overview of the most related cyberbullying detection approaches."
],
"file": [
"9-Table1-1.png",
"10-Table2-1.png",
"11-Table3-1.png",
"11-Table4-1.png",
"13-Table5-1.png",
"14-Table6-1.png",
"14-Table7-1.png",
"15-Table8-1.png",
"16-Table9-1.png",
"17-Table10-1.png"
]
} | [
"What are their baselines?",
"How long is the test dataset for Dutch?",
"How long is the training dataset for English?"
] | [
[
"1801.05617-Experimental Setup-5"
],
[
"1801.05617-Experimental Setup-1",
"1801.05617-Data Collection-0",
"1801.05617-Experimental Setup-3"
],
[
"1801.05617-Experimental Setup-1",
"1801.05617-Data Collection-0",
"1801.05617-Experimental Setup-3"
]
] | [
"Linear-kernel SVM based on word n-grams, vocabulary-based classifier.",
"78387",
"113698"
] | 159 |
1905.08067 | Understanding the Radical Mind: Identifying Signals to Detect Extremist Content on Twitter | The Internet and, in particular, Online Social Networks have changed the way that terrorist and extremist groups can influence and radicalise individuals. Recent reports show that the mode of operation of these groups starts by exposing a wide audience to extremist material online, before migrating them to less open online platforms for further radicalization. Thus, identifying radical content online is crucial to limit the reach and spread of the extremist narrative. In this paper, our aim is to identify measures to automatically detect radical content in social media. We identify several signals, including textual, psychological and behavioural, that together allow for the classification of radical messages. Our contribution is three-fold: (1) we analyze propaganda material published by extremist groups and create a contextual text-based model of radical content, (2) we build a model of psychological properties inferred from these material, and (3) we evaluate these models on Twitter to determine the extent to which it is possible to automatically identify online radical tweets. Our results show that radical users do exhibit distinguishable textual, psychological, and behavioural properties. We find that the psychological properties are among the most distinguishing features. Additionally, our results show that textual models using vector embedding features significantly improves the detection over TF-IDF features. We validate our approach on two experiments achieving high accuracy. Our findings can be utilized as signals for detecting online radicalization activities. | {
"paragraphs": [
[
"The rise of Online Social Networks (OSN) has facilitated a wide application of its data as sensors for information to solve different problems. For example, Twitter data has been used for predicting election results, detecting the spread of flu epidemics, and a source for finding eye-witnesses during criminal incidents and crises BIBREF0 , BIBREF1 . This phenomenon is possible due to the great overlap between our online and offline worlds. Such seamless shift between both worlds has also affected the modus operandi of cyber-criminals and extremist groups BIBREF2 . They have benefited tremendously from the Internet and OSN platforms as it provides them with opportunities to spread their propaganda, widen their reach for victims, and facilitate potential recruitment opportunities. For instance, recent studies show that the Internet and social media played an important role in the increased amount of violent, right-wing extremism BIBREF3 . Similarly, radical groups such as Al-Qaeda and ISIS have used social media to spread their propaganda and promoted their digital magazine, which inspired the Boston Marathon bombers in 2010 BIBREF4 .",
"To limit the reach of cyber-terrorists, several private and governmental organizations are policing online content and utilising big data technologies to minimize the damage and counter the spread of such information. For example, the UK launched a Counter Terrorism Internet Referral Unit in 2010 aiming to remove unlawful Internet content and it supports the police in investigating terrorist and radicalizing activities online. The Unit reports that among the most frequently referred links were those coming from several OSNs, such as Facebook and Twitter BIBREF2 . Similarly, several OSNs are constantly working on detecting and removing users promoting extremist content. In 2018, Twitter announced that over INLINEFORM0 million accounts were suspended for terrorist content BIBREF5 .",
"Realizing the danger of violent extremism and radicalization and how it is becoming a major challenge to societies worldwide, many researchers have attempted to study the behaviour of pro-extremist users online. Looking at existing literature, we find that a number of existing studies incorporate methods to identify distinguishing properties that can aid in automatic detection of these users BIBREF6 , BIBREF7 . However, many of them depend on performing a keyword-based textual analysis which, if used alone, may have several shortcomings, such as producing a large number of false positives and having a high dependency on the data being studied. In addition, it can be evaded using automated tools to adjust the writing style.",
"Another angle for analyzing written text is by looking at the psychological properties that can be inferred regarding their authors. This is typically called psycholinguistics, where one examines how the use of the language can be indicative of different psychological states. Examples of such psychological properties include introversion, extroversion, sensitivity, and emotions. One of the tools that automates the process of extracting psychological meaning from text is the Linguistic Inquiry and Word Count (LIWC) BIBREF8 tool. This approach has been used in the literature to study the behaviour of different groups and to predict their psychological states, such as predicting depression BIBREF9 . More recently, it has also been applied to uncover different psychological properties of extremist groups and understand their intentions behind the recruitment campaigns BIBREF10 .",
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization."
],
[
"In recent years, there has been an increase in online accounts advocating and supporting terrorist groups such as ISIS BIBREF5 . This phenomenon has attracted researchers to study their online existence, and research ways to automatically detect these accounts and limit their spread. Ashcroft et al. BIBREF6 make an attempt to automatically detect Jihadist messages on Twitter. They adopt a machine-learning method to classify tweets as ISIS supporters or not. In the article, the authors focus on English tweets that contain a reference to a set of predefined English hashtags related to ISIS. Three different classes of features are used, including stylometric features, temporal features and sentiment features. However, one of the main limitations of their approach is that it is highly dependent on the data. Rowe and Saif BIBREF7 focused on studying Europe-based Twitter accounts in order to understand what happens before, during, and after they exhibit pro-ISIS behaviour. They define such behaviour as sharing of pro-ISIS content and/or using pro-ISIS terms. To achieve this, they use a term-based approach such that a user is considered to exhibit a radicalization behaviour if he/she uses more pro-ISIS terms than anti-ISIS terms. While such an approach seems effective in distinguishing radicalised users, it is unable to properly deal with lexical ambiguity (i.e., polysemy). Furthermore, in BIBREF11 the authors focused on detecting Twitter users who are involved with “Media Mujahideen”, a Jihadist group who distribute propaganda content online. They used a machine learning approach using a combination of data-dependent and data-independent features. Similar to BIBREF7 they used textual features as well as temporal features to classify tweets and accounts. The experiment was based on a limited set of Twitter accounts, which makes it difficult to generalize the results for a more complex and realistic scenario.",
"Radicalization literature also looked at psychological factors involved with adopting such behaviour. Torok BIBREF12 used a grounded theory approach to develop an explanatory model for the radicalization process utilizing concepts of psychiatric power. Their findings show that the process typically starts with the social isolation of individuals. This isolation seems to be self-imposed as individuals tend to spend a long time engaging with radical content. This leads to the concept of homophily, the tendency to interact and associate with similar others. Through constant interaction with like-minded people, an individual gradually strengthens their mindset and progresses to more extreme levels. Similarly, they start to feel as being part of a group with a strong group identity which leads to group polarization. In psychology, group polarization occurs when discussion leads the group to adopt actions that are more extreme than the initial actions of the individual group members BIBREF13 . Moreover, the National Police Service Agency of the Netherlands developed a model to describe the phases a Jihadist may pass through before committing an act of terrorism BIBREF14 . These sequential phases of radicalism include strong links between the person's psychological and emotional state (e.g., social alienation, depression, lack of confidence in authority) and their susceptibility to radicalization."
],
[
"As illustrated in Fig. FIGREF1 , our approach consists of two main phases: Phase 1:Radical Properties Extraction, where articles from Dabiq extremist magazines are input into this step to perform two parallel tasks. In the first task, we build a language model using (i) Term-Frequency Inverse-Document-Frequency (TF-IDF) scores of uni-, bi-, and tri-grams, and (ii) Word embeddings generated from a word2vec model BIBREF15 . The output of this task is a radical corpus of top k-grams, and a word embedding model giving a vector representation for each word in the corpus. The second task seeks to create a psychological profile based on the language used in the extremist propaganda articles, consisting of a set of emotional and topical categories using LIWC dictionary-based tool. Phase 2: Tweet classification involves the use of the models generated from Phase 1 to engineer features related to radical activities. We identify three groups of features and then train a binary classifier to detect radical tweets."
],
[
"Feature engineering is the process of exploring large spaces of heterogeneous features with the aim of discovering meaningful features that may aid in modeling the problem at hand. We explore three categories of information to identify relevant features to detect radical content. Some features are user-based while others are message-based. The three categories are: 1) Radical language (Textual features INLINEFORM0 ); 2) Psychological signals (Psychological features INLINEFORM1 ); and 3) Behavioural features ( INLINEFORM2 ). In the following, we detail each of these categories.",
"In order to understand how radical messages are constructed and used, as mentioned earlier, we analyze content of ISIS propaganda material published in Dabiq magazine. Dabiq is an online magazine published by ISIS terrorist groups with the purpose of recruiting people and promoting their propaganda and ideology. Using this data source, we investigate what topics, textual properties, and linguistic cues exist in these magazines. Our intuition is that utilising these linguistic cues from the extremist propaganda would allow us to detect supporters of ISIS group who are influenced by their propaganda.",
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model. N-grams and words frequency have been used in the literature to classify similar problems, such as hate-speech and extremist text and have proven successful BIBREF16 . The second method we use is word embeddings to capture semantic meanings. Research in NLP has compared the effectiveness of word embedding methods for encoding semantic meaning and found that semantic relationships between words are best captured by word vectors within word embedding models BIBREF17 . Therefore, we train word2vec model on our propaganda corpus to build the lexical semantic aspects of the text using vector space models. We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. This word embedding model is used to obtain the vector representation for each word. We aggregate the vectors for each word in the tweet, and concatenate the maximum and average for each word vector dimension, such that any given tweet is represented in 200 dimension sized vector. This approach of aggregating vectors was used successfully in previous research BIBREF18 . Moreover, since ISIS supporters typically advocate for violent behaviour and tend to use offensive curse words, we use dictionaries of violent words and curse words to record the ratio of such words in the tweet. We also count the frequency of words with all capital letters as they are traditionally used to convey yelling behaviour.",
"Research in fields such as linguistics, social science, and psychology suggest that the use of language and the word choices we make in our daily communication, can act as a powerful signal to detect our emotional and psychological states BIBREF8 . Several psychological properties are unintentionally transmitted when we communicate. Additionally, literature from the fields of terrorism and psychology suggests that terrorists may differ from non-terrorists in their psychological profiles BIBREF19 . A number of studies looked at the motivating factors surrounding terrorism, radicalization, and recruitment tactics, and found that terrorist groups tend to target vulnerable individuals who have feelings of desperation and displaced aggression. In particular research into the recruiting tactics of ISIS groups, it was found that they focus on harnessing the individual's need for significance. They seek out vulnerable people and provide them with constant attention BIBREF20 . Similarly, these groups create a dichotomy and promote the mentality of dividing the world into “us” versus “them” BIBREF21 . Inspired by previous research, we extract psychological properties from the radical corpus in order to understand the personality, emotions, and the different psychological properties conveyed in these articles.",
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines.",
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. To capture this, we construct the mention interaction graph ( INLINEFORM0 ) from our dataset, such that INLINEFORM1 = INLINEFORM2 , where INLINEFORM3 represents the user nodes and INLINEFORM4 represents the set of edges. The graph INLINEFORM5 is a directed graph, where an edge INLINEFORM6 exists between two user nodes INLINEFORM7 and INLINEFORM8 , if user INLINEFORM9 mentions user INLINEFORM10 . After constructing the graph, we measure the degree of influence each user has over their network using different centrality measures, such as degree centrality, betweenness centrality, and HITS-Hub. Such properties have been adopted in the research literature to study properties of cyber-criminal networks and their behaviour BIBREF22 , BIBREF23 ."
],
[
" We acquired a publicly available dataset of tweets posted by known pro-ISIS Twitter accounts that was published during the 2015 Paris attacks by Kaggle data science community. The dataset consists of around INLINEFORM0 tweets posted by more than 100 users. These tweets were labelled as being pro-ISIS by looking at specific indicators, such as a set of keywords used (in the user's name, description, tweet text), their network of follower/following of other known radical accounts, and sharing of images of the ISIS flag or some radical leaders. To validate that these accounts are indeed malicious, we checked the current status of the users' accounts in the dataset and found that most of them had been suspended by Twitter. This suggests that they did, in fact, possess a malicious behaviour that opposes the Twitter platform terms of use which caused them to be suspended. We filter out any tweets posted by existing active users and label this dataset as known-bad.",
"To model the normal behaviour, we collected a random sample of tweets from ten-trending topics in Twitter using the Twitter streaming API. These topics were related to news events and on-going social events (e.g., sports, music). We filter out any topics and keywords that may be connected to extremist views. This second dataset consists of around INLINEFORM0 tweets published by around INLINEFORM1 users. A random sample of 200 tweets was manually verified to ascertain it did not contain radical views. We label this dataset as our random-good data.",
"A third dataset is used which was acquired from Kaggle community. This dataset is created to be a counterpoise to the pro-ISIS dataset (our known-bad) as it consists of tweets talking about topics concerning ISIS without being radical. It contains INLINEFORM0 tweets from around INLINEFORM1 users collected on two separate days. We verify that this dataset is indeed non radical by checking the status of users in Twitter and found that a subset ( INLINEFORM2 users) was suspended. We remove those from the dataset and only keep users that are still active on Twitter. This dataset is labelled as counterpoise data.",
"We performed a series of preprocessing steps to clean the complete dataset and prepare it for feature extraction. These steps are: (1) We remove any duplicates and re-tweets from the dataset in order to reduce noise. (2) We remove tweets that have been authored by verified users accounts, as they are typically accounts associated with known public figures. (3) All stop words (e.g., and, or, the) and punctuation marks are removed from the text of the tweet. (4) If the tweet text contains a URL, we record the existence of the URL in a new attribute, hasURL, and then remove it from the tweet text. (5) If the tweet text contains emojis (e.g., :-), :), :P), we record the existence of the emoji in a new attribute, hasEmj, and then remove it from the tweet text. (6) If the tweet text contains any words with all capital characters, we record its existence in a new attribute, allCaps, and then normalize the text to lower-case and filter out any non-alphabetic characters. (7) We tokenize the cleansed tweet text into words, then we perform lemmatization, the process of reducing inflected words to their roots (lemma), and store the result in a vector."
],
[
"We conducted two experiments using the datasets described in Section SECREF11 . Our hypothesis is that supporters of groups such as ISIS may exhibit similar textual and psychological properties when communicating in social media to the properties seen in the propaganda magazines. A tweet is considered radical if it promotes violence, racism, or supports violent behaviour. In Exp 1 we use the first two datasets, i.e., the known-bad and the random-good datasets to classify tweets to radical and normal classes. For Exp 2 we examine if our classifier can also distinguish between tweets that are discussing similar topics (ISIS related) by using the known-bad and the counterpoise datasets.",
"The classification task is binomial (binary) classification where the output of the model predicts whether the input tweet is considered radical or normal. In order to handle the imbalanced class problem in the dataset, there are multiple techniques suggested in the literature Oversampling or undersampling of the minority/majority classes are common techniques. Another technique that is more related to the classification algorithm is cost sensitive learning, which penalizes the classification model for making a mistake on the minority class. This is achieved by applying a weighted cost on misclassifying of the minority class BIBREF24 . We will use the last approach to avoid downsampling of our dataset.",
"Previous research investigating similar problems reported better performances for Random Forest (RF) classifiers BIBREF25 . RF usually performs very well as it is scalable and is robust to outliers. RF typically outperforms decision trees as it has a hierarchical structure and is based on multiple trees. This allows RF to be able to model non-linear decision boundaries. Moreover, Neural Networks (NN) also produced good results when applied to problems related to image recognition, text and natural language processing BIBREF26 . However, they usually tend to require very large amounts of data to train. For the purpose of this study, we experimented with multiple classification algorithms, including RF, NN, SVM, and KNN and found that RF and NN produced the best performance. Due to space limitation, we only report results obtained using RF model. We configured the model to use 100 estimators trees with a maximum depth of 50, and we selected gini impurity for the split criteria. We used the out-of-bag samples (oob) score to estimate the generalization accuracy of the model. Additionally, since RF tends to be biased towards the majority class, we apply the cost sensitive learning method described earlier to make RF more suitable for imbalanced data BIBREF24 .",
"We divided the dataset to training set (80%) and testing set (20%), where the testing set is held out for validation. We reported validation results using different combinations of the features categories (i.e., INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) and different evaluation metrics: accuracy, recall, precision, f-measure, and area under the ROC curve. Recall measures how many radical tweets we are able to detect, while precision measures how many radical tweets we can detect without falsely accusing anyone. For instance, if we identify every single tweet as radical, we will expose all radical tweets and thus obtain high recall, but at the same time, we will call everyone in the population a radical and thus obtain low precision. F-measure is the average of both precision and recall."
],
[
"Exp 1: The classification results using the known-bad and random-good datasets are reported in Table TABREF16 . The table shows the average accuracy, precision, recall and f-measure scores obtained from each feature category ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) and their combination ( INLINEFORM3 ). We also compared the two textual models, and find that results obtained from using word embedding outperforms the use of n-grams tf-idf scores. This confirms that contextual information is important in detecting radicalization activities. Furthermore, our model performed best using the INLINEFORM4 features across all metrics. This means that the model is able to distinguish between both radical and non-radical with high confidence using only INLINEFORM5 .",
"Exp2: In this experiment, we tested the performance of our classifier in distinguishing between radical and normal tweets that discusses ISIS-related topics. Although this task is more challenging given the similarity of the topic discussed in the two classes, we find that the model still achieves high performance. Table TABREF17 shows the different metrics obtained from each feature category. The INLINEFORM0 feature group obtains 80% accuracy, and 91%, 100% for INLINEFORM1 and INLINEFORM2 feature groups, respectively. The results are consistent with the ones obtained from the first experiment with the features from INLINEFORM3 group contributing to the high accuracy of the model. The area under the Receiver Operator Characteristic (ROC) curve, which measures accuracy based on TP, and FP rates, is shown in Fig. FIGREF18 for each classification model."
],
[
"We investigated which features contribute most to the classification task to distinguish between radical and non-radical tweets. We used the mean decrease impurity method of random forests BIBREF27 to identify the most important features in each feature category. The ten most important features are shown in Table TABREF22 . We found that the most important feature for distinguishing radical tweets is the psychological feature distance measure. This measures how similar the Twitter user is to the average psychological profile calculated from the propaganda magazine articles. Following this is the Us-them dichotomy which looks at the total number of pronouns used (I,they, we, you). This finding is in line with the tactics reported in the radicalization literature with regards to emphasizing the separation between the radical group and the world.",
"Moreover, among the top contributing features are behavioural features related to the number of mentions a single user makes, and their HITS hub and authority rank among their interaction network. This relates to how active the user is in interacting with other users and how much attention they receive from their community. This links to the objectives of those radical users in spreading their ideologies and reaching out to potential like-minded people. As for the INLINEFORM0 category, we find that the use of word2vec embedding improves the performance in comparison with using the tf-idf features. Additionally, all bi-grams and tri-grams features did not contribute much to the classification; only uni-grams did. This can be related to the differences in the writing styles when constructing sentences and phrases in articles and in the social media context (especially given the limitation of the number of words allowed by the Twitter platform). Additionally, the violent word ratio, longWords, and allCaps features are among the top contributing features from this category. This finding agrees to a large extent with observations from the literature regarding dealing with similar problems, where the use of dictionaries of violent words aids with the prediction of violent extremist narrative."
],
[
"In this paper, we identified different signals that can be utilized to detect evidence of online radicalization. We derived linguistic and psychological properties from propaganda published by ISIS for recruitment purposes. We utilize these properties to detect pro-ISIS tweets that are influenced by their ideology. Unlike previous efforts, these properties do not only focus on lexical keyword analysis of the messages, but also add a contextual and psychological dimension. We validated our approach in different experiments and the results show that this method is robust across multiple datasets. This system can aid law enforcement and OSN companies to better address such threats and help solve a challenging real-world problem. In future work, we aim to investigate if the model is resilient to different evasion techniques that users may adopt. We will also expand the analysis to other languages."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Feature Engineering",
"Dataset",
"Experimental Set-up",
"Results",
"Features Significance",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"18183e8aaa99bb347dd2400a68f1c7b01165bdfe",
"c4b5b7116ba84abd67a08eaf7d4720a356e3a804"
],
"answer": [
{
"evidence": [
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization.",
"Another angle for analyzing written text is by looking at the psychological properties that can be inferred regarding their authors. This is typically called psycholinguistics, where one examines how the use of the language can be indicative of different psychological states. Examples of such psychological properties include introversion, extroversion, sensitivity, and emotions. One of the tools that automates the process of extracting psychological meaning from text is the Linguistic Inquiry and Word Count (LIWC) BIBREF8 tool. This approach has been used in the literature to study the behaviour of different groups and to predict their psychological states, such as predicting depression BIBREF9 . More recently, it has also been applied to uncover different psychological properties of extremist groups and understand their intentions behind the recruitment campaigns BIBREF10 .",
"We acquired a publicly available dataset of tweets posted by known pro-ISIS Twitter accounts that was published during the 2015 Paris attacks by Kaggle data science community. The dataset consists of around INLINEFORM0 tweets posted by more than 100 users. These tweets were labelled as being pro-ISIS by looking at specific indicators, such as a set of keywords used (in the user's name, description, tweet text), their network of follower/following of other known radical accounts, and sharing of images of the ISIS flag or some radical leaders. To validate that these accounts are indeed malicious, we checked the current status of the users' accounts in the dataset and found that most of them had been suspended by Twitter. This suggests that they did, in fact, possess a malicious behaviour that opposes the Twitter platform terms of use which caused them to be suspended. We filter out any tweets posted by existing active users and label this dataset as known-bad."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups.",
"One of the tools that automates the process of extracting psychological meaning from text is the Linguistic Inquiry and Word Count (LIWC) BIBREF8 tool.",
"We acquired a publicly available dataset of tweets posted by known pro-ISIS Twitter accounts that was published during the 2015 Paris attacks by Kaggle data science community. The dataset consists of around INLINEFORM0 tweets posted by more than 100 users. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"af1f2739be96cda6cb62cd291392b7e293e46a66"
],
"answer": [
{
"evidence": [
"We investigated which features contribute most to the classification task to distinguish between radical and non-radical tweets. We used the mean decrease impurity method of random forests BIBREF27 to identify the most important features in each feature category. The ten most important features are shown in Table TABREF22 . We found that the most important feature for distinguishing radical tweets is the psychological feature distance measure. This measures how similar the Twitter user is to the average psychological profile calculated from the propaganda magazine articles. Following this is the Us-them dichotomy which looks at the total number of pronouns used (I,they, we, you). This finding is in line with the tactics reported in the radicalization literature with regards to emphasizing the separation between the radical group and the world.",
"Moreover, among the top contributing features are behavioural features related to the number of mentions a single user makes, and their HITS hub and authority rank among their interaction network. This relates to how active the user is in interacting with other users and how much attention they receive from their community. This links to the objectives of those radical users in spreading their ideologies and reaching out to potential like-minded people. As for the INLINEFORM0 category, we find that the use of word2vec embedding improves the performance in comparison with using the tf-idf features. Additionally, all bi-grams and tri-grams features did not contribute much to the classification; only uni-grams did. This can be related to the differences in the writing styles when constructing sentences and phrases in articles and in the social media context (especially given the limitation of the number of words allowed by the Twitter platform). Additionally, the violent word ratio, longWords, and allCaps features are among the top contributing features from this category. This finding agrees to a large extent with observations from the literature regarding dealing with similar problems, where the use of dictionaries of violent words aids with the prediction of violent extremist narrative."
],
"extractive_spans": [],
"free_form_answer": "They use a lot of \"us\" and \"them\" in their vocabulary. They use a lot of mentions, and they tend to be \"central\" in their network. They use a lot of violent words. ",
"highlighted_evidence": [
"We investigated which features contribute most to the classification task to distinguish between radical and non-radical tweets. We used the mean decrease impurity method of random forests BIBREF27 to identify the most important features in each feature category. The ten most important features are shown in Table TABREF22 . We found that the most important feature for distinguishing radical tweets is the psychological feature distance measure. This measures how similar the Twitter user is to the average psychological profile calculated from the propaganda magazine articles. Following this is the Us-them dichotomy which looks at the total number of pronouns used (I,they, we, you). This finding is in line with the tactics reported in the radicalization literature with regards to emphasizing the separation between the radical group and the world.",
"Moreover, among the top contributing features are behavioural features related to the number of mentions a single user makes, and their HITS hub and authority rank among their interaction network. This relates to how active the user is in interacting with other users and how much attention they receive from their community.",
"Additionally, the violent word ratio, longWords, and allCaps features are among the top contributing features from this category. This finding agrees to a large extent with observations from the literature regarding dealing with similar problems, where the use of dictionaries of violent words aids with the prediction of violent extremist narrative."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"038fbffa1975fca97b083a5e71d1ee9c6d5161ed",
"23077486a62c06088f4e3e02da5db37b1a4cafc0",
"9668cb87460a75d700bf7b32ec6060e96ac26962"
],
"answer": [
{
"evidence": [
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization."
],
"extractive_spans": [
" online English magazine called Dabiq"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization."
],
"extractive_spans": [
"Dabiq"
],
"free_form_answer": "",
"highlighted_evidence": [
"We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Building on the findings of previous research efforts, this paper aims to study the effects of using new textual and psycholinguistic signals to detect extremist content online. These signals are developed based on insights gathered from analyzing propaganda material published by known extremist groups. In this study, we focus mainly on the ISIS group as they are one of the leading terrorist groups that utilise social media to share their propaganda and recruit individuals. We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups. From our analysis of these texts, we are able to extract a set of signals that provide some insight into the mindset of the radical group. This allows us to create a general radical profile that we apply as a signal to detect pro-ISIS supporters on Twitter. Our results show that these identified signals are indeed critical to help improve existing efforts to detect online radicalization."
],
"extractive_spans": [
"English magazine called Dabiq"
],
"free_form_answer": "",
"highlighted_evidence": [
"We analyze the propaganda material they publish in their online English magazine called Dabiq, and use data-mining techniques to computationally uncover contextual text and psychological properties associated with these groups."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2c2fbbea5e4873fd45b750c7ad276939308af9a4",
"8dccd47351e2e407b5412934e422d7c5a2434815",
"fa3c30dc6062d778c5cf1fb378fdb8c1f71bd2f2"
],
"answer": [
{
"evidence": [
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. To capture this, we construct the mention interaction graph ( INLINEFORM0 ) from our dataset, such that INLINEFORM1 = INLINEFORM2 , where INLINEFORM3 represents the user nodes and INLINEFORM4 represents the set of edges. The graph INLINEFORM5 is a directed graph, where an edge INLINEFORM6 exists between two user nodes INLINEFORM7 and INLINEFORM8 , if user INLINEFORM9 mentions user INLINEFORM10 . After constructing the graph, we measure the degree of influence each user has over their network using different centrality measures, such as degree centrality, betweenness centrality, and HITS-Hub. Such properties have been adopted in the research literature to study properties of cyber-criminal networks and their behaviour BIBREF22 , BIBREF23 ."
],
"extractive_spans": [
"frequency of tweets posted",
"followers/following ratio",
"degree of influence each user has over their network"
],
"free_form_answer": "",
"highlighted_evidence": [
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. To capture this, we construct the mention interaction graph ( INLINEFORM0 ) from our dataset, such that INLINEFORM1 = INLINEFORM2 , where INLINEFORM3 represents the user nodes and INLINEFORM4 represents the set of edges. The graph INLINEFORM5 is a directed graph, where an edge INLINEFORM6 exists between two user nodes INLINEFORM7 and INLINEFORM8 , if user INLINEFORM9 mentions user INLINEFORM10 . After constructing the graph, we measure the degree of influence each user has over their network using different centrality measures, such as degree centrality, betweenness centrality, and HITS-Hub. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. To capture this, we construct the mention interaction graph ( INLINEFORM0 ) from our dataset, such that INLINEFORM1 = INLINEFORM2 , where INLINEFORM3 represents the user nodes and INLINEFORM4 represents the set of edges. The graph INLINEFORM5 is a directed graph, where an edge INLINEFORM6 exists between two user nodes INLINEFORM7 and INLINEFORM8 , if user INLINEFORM9 mentions user INLINEFORM10 . After constructing the graph, we measure the degree of influence each user has over their network using different centrality measures, such as degree centrality, betweenness centrality, and HITS-Hub. Such properties have been adopted in the research literature to study properties of cyber-criminal networks and their behaviour BIBREF22 , BIBREF23 ."
],
"extractive_spans": [
"frequency of tweets posted",
" followers/following ratio",
"using hashtags",
"using mention action"
],
"free_form_answer": "",
"highlighted_evidence": [
"his category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action.",
"his category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. To capture this, we construct the mention interaction graph ( INLINEFORM0 ) from our dataset, such that INLINEFORM1 = INLINEFORM2 , where INLINEFORM3 represents the user nodes and INLINEFORM4 represents the set of edges. The graph INLINEFORM5 is a directed graph, where an edge INLINEFORM6 exists between two user nodes INLINEFORM7 and INLINEFORM8 , if user INLINEFORM9 mentions user INLINEFORM10 . After constructing the graph, we measure the degree of influence each user has over their network using different centrality measures, such as degree centrality, betweenness centrality, and HITS-Hub. Such properties have been adopted in the research literature to study properties of cyber-criminal networks and their behaviour BIBREF22 , BIBREF23 ."
],
"extractive_spans": [
"frequency of tweets posted",
"followers/following ratio",
"users' interactions with others through using hashtags",
"engagement in discussions using mention action"
],
"free_form_answer": "",
"highlighted_evidence": [
"This category consists of measuring behavioural features to capture different properties related to the user and their behaviour. This includes how active the user is (frequency of tweets posted) and the followers/following ratio. Additionally, we use features to capture users' interactions with others through using hashtags, and engagement in discussions using mention action. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8cc8fa7a4aedf098b9db373f8b641c938890de69",
"a23e2a9b75739fabbb6879e46ff45c2cb8dc478e",
"bb258d4983358d5d140536d58c4d49b615ef8b8c"
],
"answer": [
{
"evidence": [
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines."
],
"extractive_spans": [
"Analytically thinking",
"Clout",
"Tone",
"Authentic",
"Openness",
"Conscientiousness",
"Extraversion",
"Agreeableness",
"Neuroticism",
"positive emotions",
"negative emotions",
"personal drives, namely power, reward, risk, achievement, and affiliation",
"number of 1st, 2nd, and 3rd personal pronouns used."
],
"free_form_answer": "",
"highlighted_evidence": [
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines."
],
"extractive_spans": [
"Openness",
"Conscientiousness",
"Extraversion",
"Agreeableness",
"Neuroticism"
],
"free_form_answer": "",
"highlighted_evidence": [
"Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines."
],
"extractive_spans": [],
"free_form_answer": "summary variable - analytically thinking, clout, tone, authentic, Big five variable - openness, conscientiousness, extraversion, agreeableness, neuroticism, Emotional variables - positive emotions in the text, negative emotions in the text, personal drives - power, reward, risk, achievement, affiliation, personal pronouns - counts the number of 1st, 2nd, and 3rd personal pronouns used, Minkowski distance between each profile and average values of these features created from the ISIS magazines",
"highlighted_evidence": [
"We utilise LIWC dictionaries to assign a score to a set of psychological, personality, and emotional categories. Mainly, we look at the following properties: (1) Summary variables: Analytically thinking which reflects formal, logical, and hierarchical thinking (high value), versus informal, personal, and narrative thinking (low value). Clout which reflects high expertise and confidence levels (high value), versus tentative, humble, and anxious levels (low value). Tone which reflects positive emotions (high value) versus more negative emotions such as anxiety, sadness, or anger (low value). Authentic which reflects whether the text is conveying honesty and disclosing (high value) versus more guarded, and distanced (low value). (2) Big five: Measures the five psychological properties (OCEAN), namely Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. (3) Emotional Analysis: Measures the positive emotions conveyed in the text, and the negative emotions (including anger, sadness, anxiety). (4) Personal Drives: Focuses on five personal drives, namely power, reward, risk, achievement, and affiliation. (5) Personal Pronouns: Counts the number of 1st, 2nd, and 3rd personal pronouns used. For each Twitter user, we calculate their psychological profiles across these categories. Additionally, using Minkowski distance measure, we calculate the distance between each of these profiles and the average values of the psychological properties created from the ISIS magazines."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"35b660f2968ff05c98ec97539b7460adf8fca67e",
"759a4496feef45bd6611bc1561ece27e0372e7c1",
"d08bc302b967b811bd0b30bdfffe0cc155071178"
],
"answer": [
{
"evidence": [
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model. N-grams and words frequency have been used in the literature to classify similar problems, such as hate-speech and extremist text and have proven successful BIBREF16 . The second method we use is word embeddings to capture semantic meanings. Research in NLP has compared the effectiveness of word embedding methods for encoding semantic meaning and found that semantic relationships between words are best captured by word vectors within word embedding models BIBREF17 . Therefore, we train word2vec model on our propaganda corpus to build the lexical semantic aspects of the text using vector space models. We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. This word embedding model is used to obtain the vector representation for each word. We aggregate the vectors for each word in the tweet, and concatenate the maximum and average for each word vector dimension, such that any given tweet is represented in 200 dimension sized vector. This approach of aggregating vectors was used successfully in previous research BIBREF18 . Moreover, since ISIS supporters typically advocate for violent behaviour and tend to use offensive curse words, we use dictionaries of violent words and curse words to record the ratio of such words in the tweet. We also count the frequency of words with all capital letters as they are traditionally used to convey yelling behaviour."
],
"extractive_spans": [
"N-grams",
"word2vec"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model. N-grams and words frequency have been used in the literature to classify similar problems, such as hate-speech and extremist text and have proven successful BIBREF16 ",
"Therefore, we train word2vec model on our propaganda corpus to build the lexical semantic aspects of the text using vector space models. We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model. N-grams and words frequency have been used in the literature to classify similar problems, such as hate-speech and extremist text and have proven successful BIBREF16 . The second method we use is word embeddings to capture semantic meanings. Research in NLP has compared the effectiveness of word embedding methods for encoding semantic meaning and found that semantic relationships between words are best captured by word vectors within word embedding models BIBREF17 . Therefore, we train word2vec model on our propaganda corpus to build the lexical semantic aspects of the text using vector space models. We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. This word embedding model is used to obtain the vector representation for each word. We aggregate the vectors for each word in the tweet, and concatenate the maximum and average for each word vector dimension, such that any given tweet is represented in 200 dimension sized vector. This approach of aggregating vectors was used successfully in previous research BIBREF18 . Moreover, since ISIS supporters typically advocate for violent behaviour and tend to use offensive curse words, we use dictionaries of violent words and curse words to record the ratio of such words in the tweet. We also count the frequency of words with all capital letters as they are traditionally used to convey yelling behaviour."
],
"extractive_spans": [
"uni-grams",
"bi-grams",
"tri-grams"
],
"free_form_answer": "",
"highlighted_evidence": [
" We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model. N-grams and words frequency have been used in the literature to classify similar problems, such as hate-speech and extremist text and have proven successful BIBREF16 . The second method we use is word embeddings to capture semantic meanings. Research in NLP has compared the effectiveness of word embedding methods for encoding semantic meaning and found that semantic relationships between words are best captured by word vectors within word embedding models BIBREF17 . Therefore, we train word2vec model on our propaganda corpus to build the lexical semantic aspects of the text using vector space models. We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. This word embedding model is used to obtain the vector representation for each word. We aggregate the vectors for each word in the tweet, and concatenate the maximum and average for each word vector dimension, such that any given tweet is represented in 200 dimension sized vector. This approach of aggregating vectors was used successfully in previous research BIBREF18 . Moreover, since ISIS supporters typically advocate for violent behaviour and tend to use offensive curse words, we use dictionaries of violent words and curse words to record the ratio of such words in the tweet. We also count the frequency of words with all capital letters as they are traditionally used to convey yelling behaviour."
],
"extractive_spans": [],
"free_form_answer": "ratio of violent words in the tweet, ratio of curse words in the tweet, frequency of words with all capital letters, 200 dimension sized vector for the tweet calculated using word embedding, tf-idf scores for top scoring uni-grams, bi-grams and tri-grams",
"highlighted_evidence": [
"We use two methods to extract the radical language from the propaganda corpus. First we calculate tf-idf scores for each gram in the propaganda corpus. We use uni-grams, bi-grams, and tri-grams to capture phrases and context in which words are being used. We then select the top scoring grams to be used as features for the language model.",
"We learn word embeddings using skip-gram word2vec model implemented in the gensim package with vector size of 100 and window size of 5. This word embedding model is used to obtain the vector representation for each word. We aggregate the vectors for each word in the tweet, and concatenate the maximum and average for each word vector dimension, such that any given tweet is represented in 200 dimension sized vector. ",
"Moreover, since ISIS supporters typically advocate for violent behaviour and tend to use offensive curse words, we use dictionaries of violent words and curse words to record the ratio of such words in the tweet. We also count the frequency of words with all capital letters as they are traditionally used to convey yelling behaviour."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What languages feature in the dataset?",
"What textual, psychological and behavioural patterns are observed in radical users?",
"Where is the propaganda material sourced from?",
"Which behavioural features are used?",
"Which psychological features are used?",
"Which textual features are used?"
],
"question_id": [
"a4a9971799c8860b50f219c93f050ebf6a627b3d",
"778c6a27182349dc5275282c3e9577bda2555c3d",
"42dcf1bb19b8470993c05e55413eed487b0f2559",
"2ecd12069388fd58ad5f8f4ae7ac1bb4f56497b9",
"824629b36a75753b1500d9dcaee0fc3c758297b1",
"31894361833b3e329a1fb9ebf85a78841cff229f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: Approach overview",
"TABLE I: Exp 1: Evaluation metrics across all feature groups",
"TABLE II: Exp 2: Evaluation metrics across all feature groups",
"Fig. 2: ROC curve for Exp1 (top), Exp2 (bottom).",
"TABLE III: Features Importance"
],
"file": [
"3-Figure1-1.png",
"5-TableI-1.png",
"5-TableII-1.png",
"5-Figure2-1.png",
"6-TableIII-1.png"
]
} | [
"What textual, psychological and behavioural patterns are observed in radical users?",
"Which psychological features are used?",
"Which textual features are used?"
] | [
[
"1905.08067-Features Significance-0",
"1905.08067-Features Significance-1"
],
[
"1905.08067-Feature Engineering-4"
],
[
"1905.08067-Feature Engineering-2"
]
] | [
"They use a lot of \"us\" and \"them\" in their vocabulary. They use a lot of mentions, and they tend to be \"central\" in their network. They use a lot of violent words. ",
"summary variable - analytically thinking, clout, tone, authentic, Big five variable - openness, conscientiousness, extraversion, agreeableness, neuroticism, Emotional variables - positive emotions in the text, negative emotions in the text, personal drives - power, reward, risk, achievement, affiliation, personal pronouns - counts the number of 1st, 2nd, and 3rd personal pronouns used, Minkowski distance between each profile and average values of these features created from the ISIS magazines",
"ratio of violent words in the tweet, ratio of curse words in the tweet, frequency of words with all capital letters, 200 dimension sized vector for the tweet calculated using word embedding, tf-idf scores for top scoring uni-grams, bi-grams and tri-grams"
] | 160 |
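A minimal sketch of the tweet featurization quoted in the record above (1905.08067): top-scoring uni-/bi-/tri-gram tf-idf features, a skip-gram word2vec model with vector size 100 and window size 5, per-tweet concatenation of dimension-wise max and mean into a 200-dimensional vector, plus violent-word and curse-word ratios and an all-caps count. This is an illustrative reconstruction rather than the authors' code; the corpus, lexicon sets, whitespace tokenizer, and the `max_features` cutoff are placeholder assumptions, and gensim 4.x / scikit-learn APIs are assumed.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

propaganda_corpus = ["example propaganda text one", "example propaganda text two"]  # placeholder corpus
violent_words, curse_words = set(), set()                                           # placeholder lexicons

# Tf-idf over uni-, bi-, and tri-grams; max_features stands in for "top scoring grams".
tfidf = TfidfVectorizer(ngram_range=(1, 3), max_features=5000)
tfidf.fit(propaganda_corpus)

# Skip-gram word2vec with vector size 100 and window size 5 (gensim 4.x API).
tokenized = [doc.lower().split() for doc in propaganda_corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=100, window=5, sg=1, min_count=1)

def tweet_features(tweet: str) -> np.ndarray:
    tokens = tweet.split()
    lowered = [t.lower() for t in tokens]
    vecs = [w2v.wv[t] for t in lowered if t in w2v.wv]
    if vecs:
        stacked = np.vstack(vecs)
        # Dimension-wise max and mean, concatenated -> 200-dimensional tweet embedding.
        embedding = np.concatenate([stacked.max(axis=0), stacked.mean(axis=0)])
    else:
        embedding = np.zeros(2 * w2v.vector_size)
    n_tokens = max(len(lowered), 1)
    lexical = np.array([
        sum(t in violent_words for t in lowered) / n_tokens,  # ratio of violent words
        sum(t in curse_words for t in lowered) / n_tokens,    # ratio of curse words
        sum(t.isupper() and len(t) > 1 for t in tokens),      # count of all-caps words
    ])
    ngram_scores = tfidf.transform([tweet]).toarray().ravel()
    return np.concatenate([ngram_scores, embedding, lexical])
```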
1910.03943 | Hotel2vec: Learning Attribute-Aware Hotel Embeddings with Self-Supervision | We propose a neural network architecture for learning vector representations of hotels. Unlike previous works, which typically only use user click information for learning item embeddings, we propose a framework that combines several sources of data, including user clicks, hotel attributes (e.g., property type, star rating, average user rating), amenity information (e.g., the hotel has free Wi-Fi or free breakfast), and geographic information. During model training, a joint embedding is learned from all of the above information. We show that including structured attributes about hotels enables us to make better predictions in a downstream task than when we rely exclusively on click data. We train our embedding model on more than 40 million user click sessions from a leading online travel platform and learn embeddings for more than one million hotels. Our final learned embeddings integrate distinct sub-embeddings for user clicks, hotel attributes, and geographic information, providing an interpretable representation that can be used flexibly depending on the application. We show empirically that our model generates high-quality representations that boost the performance of a hotel recommendation system in addition to other applications. An important advantage of the proposed neural model is that it addresses the cold-start problem for hotels with insufficient historical click information by incorporating additional hotel attributes which are available for all hotels. | {
"paragraphs": [
[
"Learning semantic representations (embeddings) of different entities, such as textual, commercial, and physical, has been a recent and active area of research. Such representations can facilitate applications that rely on a notion of similarity, for example recommendation systems and ranking algorithms in e-commerce.",
"In natural language processing, word2vec BIBREF0 learns vector representations of words from large quantities of text, where each word is mapped to a $d$-dimensional vector such that semantically similar words have geometrically closer vectors. This is achieved by predicting either the context words appearing in a window around a given target word (skip-gram model), or the target word given the context (CBOW model). The main assumption is that words appearing frequently in similar contexts share statistical properties (the distributional hypothesis). Crucially, word2vec models, like many other word embedding models, preserve sequential information encoded in text so as to leverage word co-occurrence statistics. The skip-gram model has been adapted to other domains in order to learn dense representations of items other than words. For example, product embeddings in e-commerce BIBREF1 or vacation rental embeddings in the hospitality domain BIBREF2 can be learned by treating purchase histories or user click sequences as sentences and applying a word2vec approach.",
"Most of the prior work on item embedding exploit the co-occurrence of items in a sequence as the main signal for learning the representation. One disadvantage of this approach is that it fails to incorporate rich structured information associated with the embedded items. For example, in the travel domain, where we seek to embed hotels and other travel-related entities, it could be helpful to encode explicit information such as user ratings, star ratings, hotel amenities, and location in addition to implicit information encoded in the click-stream.",
"In this work, we propose an algorithm for learning hotel embeddings that combines sequential user click information in a word2vec approach with additional structured information about hotels. We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes. In experimental results, we show that enhancing the neural network to jointly encode click and supplemental structured information outperforms a skip-gram model that encodes the click information alone. The proposed architecture also naturally handles the cold-start problem for hotels with little or no historical clicks. Specifically, we can infer an embedding for these properties by leveraging their supplemental structured metadata.",
"Compared to previous work on item embeddings, the novel contributions of this paper are as follows:",
"We propose a novel framework for fusing multiple sources of information about an item (such as user click sequences and item-specific information) to learn item embeddings via self-supervised learning.",
"We generate an interpretable embedding which can be decomposed into sub-embeddings for clicks, location, ratings, and attributes, and employed either as separate component embeddings or a single, unified embedding.",
"It is also dynamic, meaning it is easy to reflect future changes in attributes such as star-rating or addition of amenities in the embedding vectors without retraining.",
"We address the cold-start problem by including hotel metadata which are independent of user click-stream interactions and available for all hotels. This helps us to better impute embeddings for sparse items/hotels.",
"We show significant gains over previous work based on click-embedding in several experimental studies.",
"The structure of the remainder of this paper is as follows. Section 2 gives an overview of some of the recent works on neural embedding. Section 3 provides details of the proposed framework, including the neural network architecture, training methodology, and how the cold-start problem is addressed. In Section 4, we present experimental results on several different tasks and a comparison with previous state-of-the-art work. Section 5 concludes the paper."
],
[
"Recommendation is an inherently challenging task that requires learning user interests and behaviour. There has been a significant body of research on advancing it using various frameworks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Learning a semantic representation/embedding of the items being recommended is a critical piece of most of these frameworks.",
"Deep learning models have been widely used for learning embeddings BIBREF8, BIBREF9, BIBREF10. One prominent use case is learning product embeddings for e-commerce. In BIBREF1, BIBREF11, the authors develop an approach based on the skip-gram model BIBREF0, frequently used in natural language processing. They leverage users' purchase histories obtained from their e-mail receipts to learn a dense representation of products. Each user's complete purchase history is represented as a sequence, which is treated as a sentence in which the items are considered words.",
"In more recent work BIBREF2, the authors use the skip-gram framework to learn embeddings for vacation rental properties. They extend the ideas in BIBREF1 to take into account a user's click stream data during a session. A key contribution of their method is the modification of the skip-gram model to always include the booked hotels in the context of each target token, so that special attention is paid to bookings. They also improve negative sampling by sampling from the same market, which leads to better within-market listing similarities. Nevertheless, their model relies exclusively on large amounts of historical user engagement data, which is a major drawback when such data are sparse.",
"In another relevant work, BIBREF12, authors propose a framework for YouTube video recommendation which fuses multiple features (e.g., video watches, search tokens, geo embeddings) into a unified representation via a neural architecture. They then use these embeddings for candidate generation and ranking. The main limitation of this work is that the individual embeddings are learned separately, and then combined via a neural network to perform classification.",
"Similar to our work on hotel2vec, there are also some works which attempt to include explicit item attributes (e.g., size, artist, model, color) within the sequence prediction framework using various strategies. In BIBREF13, the item metadata is injected into the model as side information to regularize the item embeddings. In their approach, they only use one feature (singer ID) in the experiments. In addition, their approach does not accommodate learning independent embedding vectors for each attribute group. Most recently, BIBREF14 propose a method where they train separate encoders for text data, click-stream session data, and product image data, and then use a simple weighted average to unify these embeddings. The weights are learned using grid search on the downstream task. While their approach allows for exploring independent embedding vectors, the sub-embeddings of different attribute groups are learned independently rather than jointly. In addition to efforts extending the skip-gram framework, emerging research attempts to extend GloVe BIBREF15 by incorporating various attributes. BIBREF16 incorporate attribute information into GloVe by modifying the loss function such that the representation of a location can be learned by combining both text and structural data."
],
[
"Similar to BIBREF0, by treating the clicks made by users within an interactive web session as words, and sequences of clicks as sentences, we seek to predict the context hotels (words), given a target hotel (word) in the session (sentence). On a high level, this is the approach proposed in BIBREF1, BIBREF2. We refer to this approach as a session-only model.",
"As mentioned earlier, one drawback of this approach is that it does not use any information apart from the click data, making it very challenging to make predictions for unseen hotels or hotels with sparse click data. In addition, the model may be forced to learn certain semantic features which capture aspects of user interest, hotel geographic information, hotel attributes, and so on, as latent variables as opposed to leveraging them as explicitly-provided input features. To address these shortcomings, we propose adding more explicit information about the hotel as model input. Intuitively, this should make the model more efficient during training as well as provide information that it can use when making predictions on unseen or sparse hotels.",
"Another major advantage of our model is its use of different projection layers for various hotel/item attributes. This enables us to learn independent embedding vectors representing different facets of the property, in addition to an enriched, unified embedding for each hotel. This model also provides a dynamic framework for updating the embedding of a hotel, once its user-rating or other attribute information changes over time. This is not trivial in session-only models, unless we re-train a new model based on recent click data post attribute changes. In the remainder of the paper, we refer to our proposed model as an enriched model, in contrast to the session-only model introduced above."
],
[
"Figure FIGREF7 illustrates the proposed architecture for an enriched, hotel2vec model. As we can see, each aspect of the hotel is embedded separately, and these representations are later concatenated and further compressed before being used for context prediction.",
"Formally, a click session is defined as a sequence of hotels (items) $\\lbrace h_1, h_2, \\cdots , h_n\\rbrace $ clicked on by a user during a defined window of time or visit. We denote the click, amenity, geographic, and enriched embedding vectors with $\\mathbf {V}_c$, $\\mathbf {V}_a$, $\\mathbf {V}_g$, and $\\mathbf {V}_e$ respectively. These are defined as follows:",
"where $I_c$ is the one-hot encoding of hotels in the click session, and $I_g$ is a continuous vector with geographical coordinates of the hotel. Amenity features can be categorical or numerical with possible missing values. Thus, $I_a$ is partitioned per feature, where for numerical features we simply use an element of $I_a$ assigned with the value of that feature, and for categorical features with $m$ categories, we assign $m$ elements of $I_a$ and set the corresponding category to 1 and the others to 0. If the feature is missing, we set everything to 0. $\\operatornamewithlimits{ReLU}$ is the rectified linear unit activation function BIBREF17 and $f(x; \\mathbf {W})$ is a normalized projection layer parameterized with trainable weights $\\mathbf {W}$, i.e., $f(x; \\mathbf {W}) = \\operatornamewithlimits{ReLU}(\\frac{x \\mathbf {W}}{\\hphantom{{}_2}\\Vert x \\mathbf {W} \\Vert _{\\scriptstyle {2}}})$.",
"We train our model using negative sampling based on optimizing the noise contrastive estimation (NCE) loss BIBREF18. More formally, given $h_t$ as the target, we estimate the probability of $h_c$ being a context hotel to be",
"where $\\mathbf {W}_{c,:}$ is the $c^{\\text{th}}$ row of $W_{\\scriptstyle _{NCE}}$. We find parameters of the model by maximizing the probabilities of correct predictions. We train the model using backpropagation and minimizing the following loss function:",
"",
"where $\\mathbf {V}_{e_{\\scriptstyle {t}}}$ is the enriched embedding of $h_t$, $\\mathbf {W}_{i,:}$ is $i^{\\text{th}}$ row of $W_{\\scriptstyle _{NCE}}$ matrix, $\\mathcal {N}_c = \\lbrace h_i| 1 \\le i \\le N, h_i \\sim P_n(h_c)\\rbrace $ is the set of negative examples, and $P_n(h_c)$ is the distribution which we use to pick the negative samples. We train our model by maximizing equation DISPLAY_FORM10 using batch stochastic gradient descent."
],
[
"It is well known BIBREF18, BIBREF0, BIBREF19 that using negative sampling, a version of noise contrastive estimation, significantly decreases the amount of time required to train a classifier with a large number of possible classes. In the case of recommendation, there is typically a large inventory of items available to recommend to the user, and thus we train our skip-gram model using negative sampling. However, it is not uncommon that users frequently search exclusively within a particular subdomain. For example, in hotel search, a customer looking to stay in Miami will focus on that market and rarely across different markets. This motivates a more targeted strategy when selecting negative samples: we select half of our negative samples following the schema in BIBREF20, i.e., from the complete set of all hotels, and the other half uniformly at random from the same market as the clicked hotel. Throughout this paper, a market is defined as a set of similar hotels in the same geographic region. It's worth noting that there may be multiple markets in the same city or other geo region. In the experimental section, we show that this improves the model's within-market similarities and its predictions."
],
[
"In practice, many hotels/items appear infrequently or never in historical data. Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data. Apart from the obvious negative impacts on searchability and sales, neglecting these items can introduce a feedback loop. That is, the less these items are recommended, or the more they are recommended in inappropriate circumstances, the more the data reinforces their apparent lack of popularity.",
"Dealing with such hotels/items and choosing appropriate weights for them is referred to as the \"cold start problem.\" One of the main advantages of the enriched hotel2vec model over session-only approaches is its ability to better handle cold start cases. Although an item might lack sufficient prior user engagement, there are often other attributes available. For example, in our use case, thousands of new properties are added to the lodging platform's inventory each quarter. While we don't have prior user engagement data from which to learn a click embedding, we do have other attributes such as geographical location, star rating, amenities, etc. Hotel2vec can take advantage of this supplemental information to provide a better cold-start embedding."
],
[
"In this section, we present several experiments to evaluate the performance of the trained hotel2vec embeddings. Before diving into the details of the experiments, we first describe the dataset and model parameters."
],
[
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels. A click session is defined as a span of clicks performed by a user with no gap of more than 7 days. We randomly split the sessions into training, validation, and test with a ratio of 8:1:1.",
"We use a system with 64GB RAM, 8 CPU cores, and a Tesla V100 GPU. We use Python 3 as the programming language and the Tensorflow BIBREF21 library for the neural network architecture and gradient calculations. is sufficient for prevention of overfitting.",
"We tune the hyperparameters for all models, including the baseline session-only model, on the validation set. We search for a learning rate from $\\lbrace 0.01, 0.1, 0.5, 1.0, 2.5\\rbrace $ and embedding dimensions from $\\lbrace 32, 128\\rbrace $. To train the model weights, we use stochastic gradient descent (SGD) with exponential decay since it performs better than other optimizers in our case, and a batch size of 4096.",
"For our implementation of the session-only model, a learning rate of 0.5 and embedding dimension of 32 worked best. Throughout the remainder of the paper, we refer to this model as the session-32 model. For our enriched model (hotel2vec), a learning rate of 0.05 worked best; for the dimensionality of the embedding vectors, we found that letting $V_c, V_e \\in {R}^{32}$, $V_a \\in {R}^{15}$ and $V_g \\in {R}^{5}$ worked best. We refer to this model as the enriched-32 model."
],
[
""
],
[
"A robust metric for evaluating a set of hotel embeddings (or, more generally, any set of items displayed to a user in response to an information need) is its ability to predict a user's next click/selection. In this section, we compare our model based on the hits@k metric in various scenarios. Hits@k measures the average number of times the correct selection appears in the top k predictions.",
"We consider two main scenarios: in the first, we are given the current hotel clicked by the user, and we try to predict the next clicked hotel among all approximately 1.1M hotels (raw evaluation). The second scenario is identical except we limit the candidates to hotels within the same market (filtered evaluation).",
"Table TABREF19 shows hits@k for $k \\in \\lbrace 10, 100, 1000\\rbrace $ for both the Session-32 and Enriched-32 models. The enriched model outperforms the session-only model by a huge margin, demonstrating the utility of including item attributes when learning embeddings. We also compare both models in the filtered scenario. This is a more realistic case because limiting hotels to the same market reduces the effect of other information the recommender system can use to provide more relevant suggestions to the user. Table TABREF19 shows predictions results in the filtered scenario.",
"As demonstrated by Table TABREF19, the enriched model outperforms the baseline session model significantly in both scenarios. This shows the effectiveness of hotel2vec in incorporating both click sessions and item/hotel attributes for better recommendations."
],
[
"In this section, rather than using the model's output probabilities to induce a ranking over hotels, we measure hits@k over the ranking induced using cosine similarity of the embedding vectors. This is useful in scenarios where it isn't feasible to directly use the model's probabilities. Table TABREF21 shows the results for various embeddings. We show that using the enriched vectors one achieves the highest performance.",
"We also see from Table TABREF21 that using cosine similarity instead of the whole network does not result in a huge decrease in performance. Finally, Table TABREF21 also shows that even the standalone click vectors obtained from the enriched model outperform the embeddings obtained from the session-only model."
],
[
"We expect hotels in the same market to be more similar to each other than to hotels in other markets. To evaluate how well this market-level information is encoded by the learned embeddings, we calculate the average similarity between pairs of markets, with the expectation that we should see a strong diagonal component in the similarity matrix. We note that our model is not explicitly trained to learn this kind of market information. However, it is able to learn this by combining the click sessions and hotel attribute information. Figure FIGREF13 shows the average similarity scores between hotels in multiple famous cities using two of the embedding vectors. As Figure FIGREF13 clearly depicts, there is a strong similarity between hotels of the same city. Also, markets that are closer to each other (all US cities vs European vs Asian), or for reasons other than geographic proximity are expected to be more similar (e.g., Las Vegas and Macao, or Tokyo and Paris) do indeed have a higher similarity. For comparison, Figure FIGREF13 shows the average cosine similarity between and within markets for the session-only model embeddings. This model captures within-market similarity well but is not as effective as the enriched model for capturing cross-market similarity. For instance, the session-only model fails to recover the similarity between Las Vegas and Macao."
],
[
"",
"The learned hotel embeddings can be used for recommending similar hotels in various situations. In this section, we show examples of how these embeddings are helpful with real examples of hotels from our dataset."
],
[
"To further illuminate the nature of the embeddings learned by the hotel2vec model, we examine a low-dimensional projection of hotel embeddings in the Miami market (Figures FIGREF25 and FIGREF25). The colors signify the grouping of hotels into various competing subcategories (i.e., similar hotels), manually annotated by a human domain expert. The enriched model is significantly better at clustering similar hotels than the session-only model."
],
[
"A common scenario is finding similar hotels to a target hotel in other destinations. For example, when the user searches for a specific hotel name (e.g., Hotel Beacon, NY) we would like to be able to recommend a few similar hotels. The learned embeddings can be used to find top-k most similar hotels to a given one. Given a target hotel $h$, we compute the cosine similarity of every other hotel with $h$ and pick the most similar hotels. Rigid evaluation of this system requires A/B testing; here we show a few examples comparing our enriched embeddings and the session-only embeddings in Figure FIGREF29 to provide some intuition for the behavior of the two models."
],
[
"We also investigate whether we can perform meaningful algebraic operations on trained hotel embeddings (similar to the semantic analogy task in BIBREF0). We pose the question \"$h_1$ is to $h_2$ as $h_3$ is to $h_x$\" and find $h_x$ as the hotel with the closest vector to $\\mathbf {V_{e_1}}-\\mathbf {V_{e_2}}+\\mathbf {V_{e_3}}$. Figure FIGREF31 shows an example of such analogy. $h_1$ is a Marriott hotel in NY, $h_2$ is a Hilton in NY, and $h_3$ is a Marriott in LA (near airport). The obtained $h_x$, is a Hilton hotel in LA near the airport, showing the amount of information captured by the enriched embeddings."
],
[
"Here we analyze how well the model learns embeddings for hotels with little to no presence in the training data. To demonstrate the effectiveness of our model, we compare the enriched model's hits@k with the session-only model's hits@k, for 14K target hotels that were absent during training. Table TABREF33 shows results in the filtered scenario. As we can see, the proposed enriched embedding significantly outperforms the session based embeddings for cold-start hotels.",
"In addition, we use a simple heuristic for cold-start imputation and compare the results with the enriched model for cold-start hotels. To impute vectors for cold-start hotels, we borrow the idea in BIBREF2 and use price, star rating, geodesic distance, type of the property (e.g., hotel, vacation rental, etc.) size in terms of number of rooms, and the geographic market information. For each imputed property, we collect the most similar properties in the same market based on the above features, considering only those properties that fall within a radius of 5km of the target hotel. Results are in Table TABREF33. The heuristic imputation technique improves the Session-32 model's performance on cold-start hotels, but it remains well below that of the enriched model."
],
[
"In this section, we first look at the learning curves for both the session-32 and enriched-32 models. Then, we analyse the effect of $N$ (number of negative samples), $lr$ (learning rate), and the optimization algorithm on the performance of our model.",
"Figure FIGREF35 shows the overall training progress of both the session-32 and enriched-32 models with their respective best hyperparameters. As shown in Figure FIGREF35, our model achieves similar performance with fewer data.",
"An interesting phenomenon is the effect of increasing the number of negative samples on training time and accuracy. Although it takes more time to create a large number of negative samples, as Figure FIGREF36 shows, using more negative samples results in faster training times.",
"We show empirical experiments with various optimization algorithms and learning rates, summarized in Figure FIGREF37. Surprisingly, we see that SGD with exponential learning rate decay outperforms most optimizers with sophisticated learning rate adaptations. We believe this is due to large variance and overfitting in the early stages of training. These issues have been observed in other tasks such as BIBREF22, BIBREF23, BIBREF24, suggesting the need to use tricks such as warm-up heuristics when using momentum-based optimization algorithms to learn embeddings on large, diverse datasets such as ours."
],
[
"In this work, we propose a framework to learn a semantic representation of hotels by jointly embedding hotel click data, geographic information, user rating, and attributes (such as stars, whether it has free breakfast, whether pets are allowed, etc.). Our neural network architecture extends the skip-gram model to accommodate multiple features and encode each one separately. We then fuse the sub-embeddings to predict hotels in the same session. Through experimental results, we show that enriching the neural network with supplemental, structured hotel information results in superior embeddings when compared to a model that relies solely on click information. Our final embedding can be decomposed into multiple sub-embeddings, each encoding the representation for a different hotel aspect, resulting in an interpretable representation. It is also dynamic, in a sense that if one of the attributes or user ratings changes for a hotel, we can feed the updated data to the model and easily obtain a new embedding. Although we mainly focus on learning embeddings for hotels, the same framework can be applied to general item embedding, such as product embedding on Amazon, Ebay, or Spotify."
],
[
"The authors would like to thank Ion Lesan, Peter Barszczewski, Daniele Donghi, Ankur Aggrawal for helping us collecting hotel's attribute, click and geographical data. We would also like to thank Dan Friedman and Thomas Mulc for providing useful comments and feedback."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Proposed Framework",
"The Proposed Framework ::: Neural Network Architecture",
"The Proposed Framework ::: Negative Sampling",
"The Proposed Framework ::: Cold Start Problem",
"Experimental Results",
"Experimental Results ::: Experimental Framework",
"Experimental Results ::: Quantitative Analysis",
"Experimental Results ::: Quantitative Analysis ::: Hits@k for hotel context prediction",
"Experimental Results ::: Quantitative Analysis ::: Comparison using cosine similarity",
"Experimental Results ::: Quantitative Analysis ::: Average intra/inter market embedding similarities",
"Experimental Results ::: Qualitative Analysis",
"Experimental Results ::: Qualitative Analysis ::: Visualization of embeddings",
"Experimental Results ::: Qualitative Analysis ::: Most similar hotels",
"Experimental Results ::: Qualitative Analysis ::: Algebraic operations on hotel embeddings",
"Experimental Results ::: Addressing the Cold Start Problem",
"Experimental Results ::: Training Convergence Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"b97b7ce10ae728411e25c7f695989f0801d142ec",
"bdc1fb63bfd8fc7484a63d61f0c3397be1e56d52",
"cf440e8bdd3e45dea5040779f9f489e81719ba7a"
],
"answer": [
{
"evidence": [
"In practice, many hotels/items appear infrequently or never in historical data. Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data. Apart from the obvious negative impacts on searchability and sales, neglecting these items can introduce a feedback loop. That is, the less these items are recommended, or the more they are recommended in inappropriate circumstances, the more the data reinforces their apparent lack of popularity.",
"Dealing with such hotels/items and choosing appropriate weights for them is referred to as the \"cold start problem.\" One of the main advantages of the enriched hotel2vec model over session-only approaches is its ability to better handle cold start cases. Although an item might lack sufficient prior user engagement, there are often other attributes available. For example, in our use case, thousands of new properties are added to the lodging platform's inventory each quarter. While we don't have prior user engagement data from which to learn a click embedding, we do have other attributes such as geographical location, star rating, amenities, etc. Hotel2vec can take advantage of this supplemental information to provide a better cold-start embedding."
],
"extractive_spans": [],
"free_form_answer": "Dealing with hotels/items that appear infrequently or never in historical data and choosing appropriate weights for them is referred to as the \"cold start problem.\"",
"highlighted_evidence": [
"In practice, many hotels/items appear infrequently or never in historical data. Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data. Apart from the obvious negative impacts on searchability and sales, neglecting these items can introduce a feedback loop. That is, the less these items are recommended, or the more they are recommended in inappropriate circumstances, the more the data reinforces their apparent lack of popularity.\n\nDealing with such hotels/items and choosing appropriate weights for them is referred to as the \"cold start problem.\" "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In practice, many hotels/items appear infrequently or never in historical data. Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data. Apart from the obvious negative impacts on searchability and sales, neglecting these items can introduce a feedback loop. That is, the less these items are recommended, or the more they are recommended in inappropriate circumstances, the more the data reinforces their apparent lack of popularity."
],
"extractive_spans": [
"hotels/items appear infrequently or never in historical data",
"Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data"
],
"free_form_answer": "",
"highlighted_evidence": [
"In practice, many hotels/items appear infrequently or never in historical data. Recommender systems typically have difficulty handling these items effectively due to the lack of relevant training data. Apart from the obvious negative impacts on searchability and sales, neglecting these items can introduce a feedback loop. That is, the less these items are recommended, or the more they are recommended in inappropriate circumstances, the more the data reinforces their apparent lack of popularity."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"05aafda49cde7e168da174c52ce0ba6f26a8a083",
"9d0e13107c33b95679d27fd3943e1f0509c18edb"
],
"answer": [
{
"evidence": [
"A robust metric for evaluating a set of hotel embeddings (or, more generally, any set of items displayed to a user in response to an information need) is its ability to predict a user's next click/selection. In this section, we compare our model based on the hits@k metric in various scenarios. Hits@k measures the average number of times the correct selection appears in the top k predictions."
],
"extractive_spans": [
"the average number of times the correct selection appears in the top k predictions"
],
"free_form_answer": "",
"highlighted_evidence": [
" In this section, we compare our model based on the hits@k metric in various scenarios. Hits@k measures the average number of times the correct selection appears in the top k predictions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Experimental Results ::: Quantitative Analysis ::: Hits@k for hotel context prediction",
"A robust metric for evaluating a set of hotel embeddings (or, more generally, any set of items displayed to a user in response to an information need) is its ability to predict a user's next click/selection. In this section, we compare our model based on the hits@k metric in various scenarios. Hits@k measures the average number of times the correct selection appears in the top k predictions.",
"Experimental Results ::: Quantitative Analysis ::: Comparison using cosine similarity",
"In this section, rather than using the model's output probabilities to induce a ranking over hotels, we measure hits@k over the ranking induced using cosine similarity of the embedding vectors. This is useful in scenarios where it isn't feasible to directly use the model's probabilities. Table TABREF21 shows the results for various embeddings. We show that using the enriched vectors one achieves the highest performance.",
"Experimental Results ::: Quantitative Analysis ::: Average intra/inter market embedding similarities",
"We expect hotels in the same market to be more similar to each other than to hotels in other markets. To evaluate how well this market-level information is encoded by the learned embeddings, we calculate the average similarity between pairs of markets, with the expectation that we should see a strong diagonal component in the similarity matrix. We note that our model is not explicitly trained to learn this kind of market information. However, it is able to learn this by combining the click sessions and hotel attribute information. Figure FIGREF13 shows the average similarity scores between hotels in multiple famous cities using two of the embedding vectors. As Figure FIGREF13 clearly depicts, there is a strong similarity between hotels of the same city. Also, markets that are closer to each other (all US cities vs European vs Asian), or for reasons other than geographic proximity are expected to be more similar (e.g., Las Vegas and Macao, or Tokyo and Paris) do indeed have a higher similarity. For comparison, Figure FIGREF13 shows the average cosine similarity between and within markets for the session-only model embeddings. This model captures within-market similarity well but is not as effective as the enriched model for capturing cross-market similarity. For instance, the session-only model fails to recover the similarity between Las Vegas and Macao.",
"Experimental Results ::: Qualitative Analysis ::: Visualization of embeddings",
"To further illuminate the nature of the embeddings learned by the hotel2vec model, we examine a low-dimensional projection of hotel embeddings in the Miami market (Figures FIGREF25 and FIGREF25). The colors signify the grouping of hotels into various competing subcategories (i.e., similar hotels), manually annotated by a human domain expert. The enriched model is significantly better at clustering similar hotels than the session-only model.",
"Experimental Results ::: Qualitative Analysis ::: Most similar hotels",
"A common scenario is finding similar hotels to a target hotel in other destinations. For example, when the user searches for a specific hotel name (e.g., Hotel Beacon, NY) we would like to be able to recommend a few similar hotels. The learned embeddings can be used to find top-k most similar hotels to a given one. Given a target hotel $h$, we compute the cosine similarity of every other hotel with $h$ and pick the most similar hotels. Rigid evaluation of this system requires A/B testing; here we show a few examples comparing our enriched embeddings and the session-only embeddings in Figure FIGREF29 to provide some intuition for the behavior of the two models.",
"Experimental Results ::: Qualitative Analysis ::: Algebraic operations on hotel embeddings",
"We also investigate whether we can perform meaningful algebraic operations on trained hotel embeddings (similar to the semantic analogy task in BIBREF0). We pose the question \"$h_1$ is to $h_2$ as $h_3$ is to $h_x$\" and find $h_x$ as the hotel with the closest vector to $\\mathbf {V_{e_1}}-\\mathbf {V_{e_2}}+\\mathbf {V_{e_3}}$. Figure FIGREF31 shows an example of such analogy. $h_1$ is a Marriott hotel in NY, $h_2$ is a Hilton in NY, and $h_3$ is a Marriott in LA (near airport). The obtained $h_x$, is a Hilton hotel in LA near the airport, showing the amount of information captured by the enriched embeddings."
],
"extractive_spans": [
"Hits@k for hotel context prediction",
"Comparison using cosine similarity",
"Average intra/inter market embedding similarities",
"Visualization of embeddings",
"Most similar hotels",
"Algebraic operations on hotel embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experimental Results ::: Quantitative Analysis ::: Hits@k for hotel context prediction\nA robust metric for evaluating a set of hotel embeddings (or, more generally, any set of items displayed to a user in response to an information need) is its ability to predict a user's next click/selection. In this section, we compare our model based on the hits@k metric in various scenarios. Hits@k measures the average number of times the correct selection appears in the top k predictions.",
"Experimental Results ::: Quantitative Analysis ::: Comparison using cosine similarity\nIn this section, rather than using the model's output probabilities to induce a ranking over hotels, we measure hits@k over the ranking induced using cosine similarity of the embedding vectors.",
"Experimental Results ::: Quantitative Analysis ::: Average intra/inter market embedding similarities\nWe expect hotels in the same market to be more similar to each other than to hotels in other markets. To evaluate how well this market-level information is encoded by the learned embeddings, we calculate the average similarity between pairs of markets, with the expectation that we should see a strong diagonal component in the similarity matrix.",
"Experimental Results ::: Qualitative Analysis ::: Visualization of embeddings\nTo further illuminate the nature of the embeddings learned by the hotel2vec model, we examine a low-dimensional projection of hotel embeddings in the Miami market (Figures FIGREF25 and FIGREF25).",
"Experimental Results ::: Qualitative Analysis ::: Most similar hotels\nA common scenario is finding similar hotels to a target hotel in other destinations.",
"Experimental Results ::: Qualitative Analysis ::: Algebraic operations on hotel embeddings\nWe also investigate whether we can perform meaningful algebraic operations on trained hotel embeddings (similar to the semantic analogy task in BIBREF0)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"428b992c78253ad598a8edeca94681149921bc65",
"bd62cf0b73705ddbdd296d16f0ae432873164556"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In this work, we propose a framework to learn a semantic representation of hotels by jointly embedding hotel click data, geographic information, user rating, and attributes (such as stars, whether it has free breakfast, whether pets are allowed, etc.). Our neural network architecture extends the skip-gram model to accommodate multiple features and encode each one separately. We then fuse the sub-embeddings to predict hotels in the same session. Through experimental results, we show that enriching the neural network with supplemental, structured hotel information results in superior embeddings when compared to a model that relies solely on click information. Our final embedding can be decomposed into multiple sub-embeddings, each encoding the representation for a different hotel aspect, resulting in an interpretable representation. It is also dynamic, in a sense that if one of the attributes or user ratings changes for a hotel, we can feed the updated data to the model and easily obtain a new embedding. Although we mainly focus on learning embeddings for hotels, the same framework can be applied to general item embedding, such as product embedding on Amazon, Ebay, or Spotify."
],
"extractive_spans": [],
"free_form_answer": "None",
"highlighted_evidence": [
"Although we mainly focus on learning embeddings for hotels, the same framework can be applied to general item embedding, such as product embedding on Amazon, Ebay, or Spotify."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4c56cfd64590481b9566c379ad33160dd0bbf314",
"95cae5b3c6b5fc683a9031ea6fe1d7b406997dd5",
"b87ead34be48c56e64cb9276dbdbdeb4406fe9e2"
],
"answer": [
{
"evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels. A click session is defined as a span of clicks performed by a user with no gap of more than 7 days. We randomly split the sessions into training, validation, and test with a ratio of 8:1:1."
],
"extractive_spans": [
"Our dataset contains more than 40M user click sessions"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels. A click session is defined as a span of clicks performed by a user with no gap of more than 7 days. We randomly split the sessions into training, validation, and test with a ratio of 8:1:1."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels. A click session is defined as a span of clicks performed by a user with no gap of more than 7 days. We randomly split the sessions into training, validation, and test with a ratio of 8:1:1."
],
"extractive_spans": [
" dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels. A click session is defined as a span of clicks performed by a user with no gap of more than 7 days. We randomly split the sessions into training, validation, and test with a ratio of 8:1:1."
],
"extractive_spans": [],
"free_form_answer": "A dataset containing 40M user click sessions with more than 1.1M unique hotels.",
"highlighted_evidence": [
"Our dataset contains more than 40M user click sessions, which includes more than 1.1 million unique hotels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what is the cold-start problem?",
"how was the experiment evaluated?",
"what other applications did they experiment in?",
"what dataset was used for training?"
],
"question_id": [
"cef3a26d8b46cd057bcc2abd3d648dc15336a2bf",
"636ac549cf4917c5922cd09a655abf278924c930",
"c61c0b25f9de4a7ca2013d2e4aba8a5047e14ce4",
"1d047286ac63e5dca1ab811172b89d7d125679e5"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The block-diagram of the enriched, hotel2vec model with a single encoding layer.",
"Table 1: Context hotels prediction.",
"Table 2: Predicting the next click among all possible hotels using cosine similarity of the vectors.",
"Figure 2: Average cosine similarity of hotels for various pairs of markets using enriched and amenity embedding vectors.",
"Figure 3: Low dimensional visualization of hotel embeddings from the Miami area. Different colors represent expert annotations of competing hotels. Our model has successfully captured most of the similarities.",
"Figure 4: Example of recommendations based on cosine similarity of enriched embedding vectors. Ranking by the Session-32 model placed 3rd before 1st (3,1,2), though it is a hostel, cheaper, and has a lower user rating than the target hotel.",
"Figure 5: Example of algebraic operations on the embeddings for the hotel analogy task.",
"Table 3: Cold start experiments.",
"Figure 8: Various optimization algorithms and learning rates. Sophisticated momentum methods seem to overfit to the early batches too quickly.",
"Figure 6: Training progress of both models.",
"Figure 7: Effect of negative sampling on prediction. Higher number of negative samples results in faster training times."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"6-Table3-1.png",
"7-Figure8-1.png",
"7-Figure6-1.png",
"7-Figure7-1.png"
]
} | [
"what is the cold-start problem?",
"what other applications did they experiment in?",
"what dataset was used for training?"
] | [
[
"1910.03943-The Proposed Framework ::: Cold Start Problem-1",
"1910.03943-The Proposed Framework ::: Cold Start Problem-0"
],
[
"1910.03943-Conclusion-0"
],
[
"1910.03943-Experimental Results ::: Experimental Framework-0"
]
] | [
"Dealing with hotels/items that appear infrequently or never in historical data and choosing appropriate weights for them is referred to as the \"cold start problem.\"",
"None",
"A dataset containing 40M user click sessions with more than 1.1M unique hotels."
] | 161 |
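The enriched hotel2vec encoder described in the record above can be sketched compactly: each aspect is passed through the normalized projection f(x; W) = ReLU(xW / ||xW||_2), the click, amenity, and geo sub-embeddings are concatenated and projected into the enriched vector V_e, and training uses a negative-sampling (NCE-style) objective against rows of W_NCE. The snippet below is a hypothetical PyTorch re-expression, not the authors' TensorFlow implementation; class names, input widths, and the exact loss form are assumptions, while the 32/15/5/32 embedding sizes follow the record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedProjection(nn.Module):
    """f(x; W) = ReLU(xW / ||xW||_2), the projection applied to each sub-embedding."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.W(x)
        return F.relu(z / (z.norm(dim=-1, keepdim=True) + 1e-8))

class Hotel2Vec(nn.Module):
    def __init__(self, n_hotels: int, d_amenity_in: int,
                 d_click: int = 32, d_amenity: int = 15, d_geo: int = 5, d_enriched: int = 32):
        super().__init__()
        self.click = nn.Embedding(n_hotels, d_click)                      # V_c from hotel click ids
        self.amenity = NormalizedProjection(d_amenity_in, d_amenity)      # V_a from amenity features
        self.geo = NormalizedProjection(2, d_geo)                         # V_g from lat/lon
        self.enrich = NormalizedProjection(d_click + d_amenity + d_geo, d_enriched)  # V_e
        self.w_nce = nn.Embedding(n_hotels, d_enriched)                   # rows of W_NCE

    def enriched(self, hotel_ids, amenity_feats, geo_feats):
        v = torch.cat([self.click(hotel_ids),
                       self.amenity(amenity_feats),
                       self.geo(geo_feats)], dim=-1)
        return self.enrich(v)

    def nce_loss(self, v_e, context_ids, negative_ids):
        # Standard skip-gram negative-sampling objective: reward the true context hotel,
        # penalize sampled negatives; returned negated so it can be minimized with SGD.
        pos = F.logsigmoid((self.w_nce(context_ids) * v_e).sum(dim=-1))
        neg = F.logsigmoid(-(self.w_nce(negative_ids) * v_e.unsqueeze(1)).sum(dim=-1)).sum(dim=-1)
        return -(pos + neg).mean()
```

In training, the `negative_ids` would be drawn half uniformly from the whole inventory and half from the clicked hotel's market, mirroring the sampling scheme the record describes.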
1909.00786 | Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions | We focus on the cross-domain context-dependent text-to-SQL generation task. Based on the observation that adjacent natural language questions are often linguistically dependent and their corresponding SQL queries tend to overlap, we utilize the interaction history by editing the previous predicted query to improve the generation quality. Our editing mechanism views SQL as sequences and reuses generation results at the token level in a simple manner. It is flexible to change individual tokens and robust to error propagation. Furthermore, to deal with complex table structures in different domains, we employ an utterance-table encoder and a table-aware decoder to incorporate the context of the user utterance and the table schema. We evaluate our approach on the SParC dataset and demonstrate the benefit of editing compared with the state-of-the-art baselines which generate SQL from scratch. Our code is available at this https URL. | {
"paragraphs": [
[
"Generating SQL queries from user utterances is an important task to help end users acquire information from databases. In a real-world application, users often access information in a multi-turn interaction with the system by asking a sequence of related questions. As the interaction proceeds, the user often makes reference to the relevant mentions in the history or omits previously conveyed information assuming it is known to the system.",
"Therefore, in the context-dependent scenario, the contextual history is crucial to understand the follow-up questions from users, and the system often needs to reproduce partial sequences generated in previous turns. Recently, suhr2018learning proposes a context-dependent text-to-SQL model including an interaction-level encoder and an attention mechanism over previous utterances. To reuse what has been generated, they propose to copy complete segments from the previous query. While their model is successful to reason about explicit and implicit references, it does not need to explicitly address different database schemas because the ATIS contains only the flight-booking domain. Furthermore, the model is confined to copy whole segments which are extracted by a rule-based procedure, limiting its capacity to utilize the previous query when only one or a few tokens are changed in the segment.",
"To exploit the correlation between sequentially generated queries and generalize the system to different domains, in this paper, we study an editing-based approach for cross-domain context-dependent text-to-SQL generation task. We propose query generation by editing the query in the previous turn. To this end, we first encode the previous query as a sequence of tokens, and the decoder computes a switch to change it at the token level. This sequence editing mechanism models token-level changes and is thus robust to error propagation. Furthermore, to capture the user utterance and the complex database schemas in different domains, we use an utterance-table encoder based on BERT to jointly encode the user utterance and column headers with co-attention, and adopt a table-aware decoder to perform SQL generation with attentions over both the user utterance and column headers.",
"We evaluate our model on SParC BIBREF0, a new large-scale dataset for cross-domain semantic parsing in context consisting of coherent question sequences annotated with SQL queries over 200 databases in 138 domains. Experiment results show that by generating from the previous query, our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art. Further analysis shows that our editing approach is more robust to error propagation than copying segments, and the improvement becomes more significant if the basic text-to-SQL generation accuracy (without editing) improves."
],
[
"We use SParC BIBREF0, a large-scale cross-domain context-dependent semantic parsing dataset with SQL labels, as our main evaluation benchmark. A SParC example is shown in Table TABREF1. We also report performance on ATIS BIBREF1, BIBREF2 for direct comparison to suhr2018learning. In addition, we evaluate the cross-domain context-independent text-to-SQL ability of our model on Spider BIBREF3, which SParC is built on.",
"We summarize and compare the data statistics in Table and Table . While the ATIS dataset has been extensively studied, it is limited to a particular domain. By contrast, SParC is both context-dependent and cross-domain. Each interaction in SParC is constructed using a question in Spider as the interaction goal, where the annotator asks inter-related questions to obtain information that completes the goal. SParC contains interactions over 200 databases and it follows the same database split as Spider where each database appears only in one of train, dev and test sets. In summary, SParC introduces new challenges to context-dependent text-to-SQL because it (1) contains more complex context dependencies, (2) has greater semantic coverage, and (3) adopts a cross-domain task setting."
],
[
"Let $X$ denote a natural language utterance and $Y$ denote the corresponding SQL query. Context-independent semantic parsing considers individual $(X,Y)$ pairs and maps $X$ to $Y$. In context-dependent semantic parsing, we consider an interaction $I$ consisting of $n$ utterance-query pairs in a sequence:",
"At each turn $t$, the goal is to generate $Y_t$ given the current utterance $X_t$ and the interaction history",
"Furthermore, in the cross-domain setting, each interaction is grounded to a different database. Therefore, the model is also given the schema of the current database as an input. We consider relational databases with multiple tables, and each table contains multiple column headers:",
"where $m$ is the number of column headers, and each $c_l$ consists of multiple words including its table name and column name (§ SECREF11)."
],
[
"We employ an encoder-decoder architecture with attention mechanisms BIBREF4, BIBREF5 as illustrated in Figure FIGREF2. The framework consists of (1) an utterance-table encoder to explicitly encode the user utterance and table schema at each turn, (2) A turn attention incorporating the recent history for decoding, (3) a table-aware decoder taking into account the context of the utterance, the table schema, and the previously generated query to make editing decisions."
],
[
"An effective encoder captures the meaning of user utterances, the structure of table schema, and the relationship between the two. To this end, we build an utterance-table encoder with co-attention between the two as illustrated in Figure FIGREF7.",
"Figure FIGREF7 shows the utterance encoder. For the user utterance at each turn, we first use a bi-LSTM to encode utterance tokens. The bi-LSTM hidden state is fed into a dot-product attention layer BIBREF5 over the column header embeddings. For each utterance token embedding, we get an attention weighted average of the column header embeddings to obtain the most relevant columns BIBREF6. We then concatenate the bi-LSTM hidden state and the column attention vector, and use a second layer bi-LSTM to generate the utterance token embedding $\\mathbf {h}^{E}$.",
"Figure FIGREF7 shows the table encoder. For each column header, we concatenate its table name and its column name separated by a special dot token (i.e., table name . column name). Each column header is processed by a bi-LSTM layer. To better capture the internal structure of the table schemas (e.g., foreign key), we then employ a self-attention BIBREF7 among all column headers. We then use an attention layer to capture the relationship between the utterance and the table schema. We concatenate the self-attention vector and the utterance attention vector, and use a second layer bi-LSTM to generate the column header embedding $\\mathbf {h}^{C}$.",
"Note that the two embeddings depend on each other due to the co-attention, and thus the column header representation changes across different utterances in a single interaction.",
"Utterance-Table BERT Embedding. We consider two options as the input to the first layer bi-LSTM. The first choice is the pretrained word embedding. Second, we also consider the contextualized word embedding based on BERT BIBREF8. To be specific, we follow hwang2019comprehensive to concatenate the user utterance and all the column headers in a single sequence separated by the [SEP] token:",
"This sequence is fed into the pretrained BERT model whose hidden states at the last layer is used as the input embedding."
],
[
"To capture the information across different utterances, we use an interaction-level encoder BIBREF9 on top of the utterance-level encoder. At each turn, we use the hidden state at the last time step from the utterance-level encoder as the utterance encoding. This is the input to a uni-directional LSTM interaction encoder:",
"The hidden state of this interaction encoder $\\mathbf {h}^{I}$ encodes the history as the interaction proceeds.",
"Turn Attention When issuing the current utterance, the user may omit or explicitly refer to the previously mentioned information. To this end, we adopt the turn attention mechanism to capture correlation between the current utterance and the utterance(s) at specific turn(s). At the current turn $t$, we compute the turn attention by the dot-product attention between the current utterance and previous utterances in the history, and then add the weighted average of previous utterance embeddings to the current utterance embedding:",
"The $\\mathbf {c}_{t}^{\\text{turn}}$ summarizes the context information and the current user query and will be used as the initial decoder state as described in the following."
],
[
"We use an LSTM decoder with attention to generate SQL queries by incorporating the interaction history, the current user utterance, and the table schema.",
"Denote the decoding step as $k$, we provide the decoder input as a concatenation of the embedding of SQL query token $\\mathbf {q}_k$ and a context vector $\\mathbf {c}_k$:",
"where $\\mathbf {h}^{D}$ is the hidden state of the decoder $\\text{LSTM}^{D}$, and the hidden state $\\mathbf {h}^{D}_{0}$ is initialized by $\\mathbf {c}_{t}^{\\text{turn}}$. When the query token is a SQL keyword, $\\mathbf {q}_k$ is a learned embedding; when it is a column header, we use the column header embedding given by the table-utterance encoder as $\\mathbf {q}_k$. The context vector $\\mathbf {c}_k$ is described below.",
"Context Vector with the Table and User Utterance. The context vector consists of attentions to both the table and the user utterance. First, at each step $k$, the decoder computes the attention between the decoder hidden state and the column header embedding.",
"where $l$ is the index of column headers and $\\mathbf {h}^{C}_{l}$ is its embedding. Second, it also computes the attention between the decoder hidden state and the utterance token embeddings:",
"where $i$ is the turn index, $j$ is the token index, and $\\mathbf {h}^{E}_{i,j}$ is the token embedding for the $j$-th token of $i$-th utterance. The context vector $\\mathbf {c}_k$ is a concatenation of the two:",
"Output Distribution. In the output layer, our decoder chooses to generate a SQL keyword (e.g., SELECT, WHERE, GROUP BY, ORDER BY) or a column header. This is critical for the cross-domain setting where the table schema changes across different examples. To achieve this, we use separate layers to score SQL keywords and column headers, and finally use the softmax operation to generate the output probability distribution:",
""
],
[
"In an interaction with the system, the user often asks a sequence of closely related questions to complete the final query goal. Therefore, the query generated for the current turn often overlaps significantly with the previous ones.",
"To empirically verify the usefulness of leveraging the previous query, we consider the process of generating the current query by applying copy and insert operations to the previous query. Figure FIGREF18 shows the SQL query length and the number of copy and insert operations at different turns. As the interaction proceeds, the user question becomes more complicated as it requires longer SQL query to answer. However, more query tokens overlap with the previous query, and thus the number of new tokens remains small at the third turn and beyond.",
"Based on this observation, we extend our table-ware decoder with a query editing mechanism. We first encode the previous query using another bi-LSTM, and its hidden states are the query token embeddings $\\mathbf {h}^{Q}_{i,j^{\\prime }}$ (i.e., the $j^{\\prime }$-th token of the $i$-th query). We then extend the context vector with the attention to the previous query:",
"where $\\mathbf {c}_k^{\\text{query}}$ is produced by an attention to query tokens $\\mathbf {h}^{Q}_{i,j^{\\prime }}$ in the same form as Equation DISPLAY_FORM16.",
"At each decoding step, we predict a switch $p_{\\text{copy}}$ to decide if we need copy from the previous query or insert a new token.",
"Then, we use a separate layer to score the query tokens at turn $t-1$, and the output distribution is modified as the following to take into account the editing probability:",
"While the copy mechanism has been introduced by gu2016incorporating and see2017get, they focus on summarization or response generation applications by copying from the source sentences. By contrast, our focus is on editing the previously generated query while incorporating the context of user utterances and table schemas.",
""
],
[
"Semantic parsing is the task of mapping natural language sentences into formal representations. It has been studied for decades including using linguistically-motivated compositional representations, such as logical forms BIBREF10, BIBREF11 and lambda calculus BIBREF12, BIBREF13, and using executable programs, such as SQL queries BIBREF14, BIBREF15 and other general-purpose programming languages BIBREF16, BIBREF17. Most of the early studies worked on a few domains and small datasets such as GeoQuery BIBREF10 and Overnight BIBREF18.",
"Recently, large and cross-domain text-to-SQL datasets such as WikiSQL BIBREF15 and Spider BIBREF3 have received an increasing amount of attention as many data-driven neural approaches achieve promising results BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF6, BIBREF26, BIBREF27, BIBREF28, BIBREF29, BIBREF30. Most of them still focus on context-independent semantic parsing by converting single-turn questions into executable queries.",
"Relatively less effort has been devoted to context-dependent semantic parsing on datasets including ATIS BIBREF1, BIBREF31, SpaceBook BIBREF32, SCONE BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, SequentialQA BIBREF38, SParC BIBREF0 and CoSQL BIBREF39. On ATIS, miller1996fully maps utterances to semantic frames which are then mapped to SQL queries; zettlemoyer2009learning starts with context-independent Combinatory Categorial Grammar (CCG) parsing and then resolves references to generate lambda-calculus logical forms for sequences of sentences. The most relevant to our work is suhr2018learning, who generate ATIS SQL queries from interactions by incorporating history with an interaction-level encoder and copying segments of previously generated queries. Furthermore, SCONE contains three domains using stack- or list-like elements and most queries include a single binary predicate. SequentialQA is created by decomposing some complicated questions in WikiTableQuestions BIBREF40. Since both SCONE and SequentialQA are annotated with only denotations but not query labels, they don't include many questions with rich semantic and contextual types. For example, SequentialQA BIBREF38 requires that the answer to follow-up questions must be a subset of previous answers, and most of the questions can be answered by simple SQL queries with SELECT and WHERE clauses.",
"Concurrent with our work, yu2019cosql introduced CoSQL, a large-scale cross-domain conversational text-to-SQL corpus collected under the Wizard-of-Oz setting. Each dialogue in CoSQL simulates a DB querying scenario with a crowd worker as a user and a college computer science student who is familiar with SQL as an expert. Question-SQL pairs in CoSQL reflect greater diversity in user backgrounds compared to other corpora and involve frequent changes in user intent between pairs or ambiguous questions that require user clarification. These features pose new challenges for text-to-SQL systems.",
"Our work is also related to recently proposed approaches to code generation by editing BIBREF41, BIBREF42, BIBREF43. While they follow the framework of generating code by editing the relevant examples retrieved from training data, we focus on a context-dependent setting where we generate queries from the previous query predicted by the system itself."
],
[
""
],
[
"On both Spider and SParC, we use the exact set match accuracy between the gold and the predicted queries . To avoid ordering issues, instead of using simple string matching, yu2018spider decompose predicted queries into different SQL clauses such as SELECT, WHERE, GROUP BY, and ORDER BY and compute scores for each clause using set matching separately. On SparC, we report two metrics: question match accuracy which is the score average over all questions and interaction match accuracy which is average over all interactions.",
""
],
[
"",
"SParC. We compare with the two baseline models released by yu2019sparc. (1) Context-dependent Seq2Seq (CD-Seq2Seq): This model is adapted from suhr2018learning. The original model was developed for ATIS and does not take the database schema as input hence cannot generalize well across domains. yu2019sparc adapt it to perform context-dependent SQL generation in multiple domains by adding a bi-LSTM database schema encoder which takes bag-of-words representations of column headers as input. They also modify the decoder to select between a SQL keyword or a column header.",
"(2) SyntaxSQL-con: This is adapted from the original context-agnostic SyntaxSQLNet BIBREF44 by using bi-LSTMs to encode the interaction history including the utterance and the associated SQL query response. It also employs a column attention mechanism to compute representations of the previous question and SQL query.",
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
[
"Our model is implemented in PyTorch BIBREF45. We use pretrained 300-dimensional GloVe BIBREF46 word embedding. All LSTM layers have 300 hidden size, and we use 1 layer for encoder LSTMs, and 2 layers for decoder LSTMs. We use the ADAM optimizer BIBREF47 to minimize the token-level cross-entropy loss with a batch size of 16. Model parameters are randomly initialized from a uniform distribution $U[-0.1,0.1]$. The main model has an initial learning rate of 0.001 and it will be multiplied by 0.8 if the validation loss increases compared with the previous epoch. When using BERT instead of GloVe, we use the pretrained small uncased BERT model with 768 hidden size, and we fine tune it with a separate constant learning rate of 0.00001. The training typically converges in 10 epochs."
],
[
"Spider. Table TABREF28 shows the results on Spider dataset. Since each question is standalone, we don't use interaction-level decoder or query editing. Our method can achieve the performance of 36.4% on dev set and 32.9% on test set, serving as a strong model for the context-independent cross-domain text-to-SQL generation. This demonstrates the effectiveness of our utterance-table encoder and table-aware decoder to handle the semantics of user utterances and the complexity of table schemas to generate complex SQL queries in unseen domains.",
"Furthermore, adding the utterance-table BERT embedding gives significant improvement, achieving 57.6% on dev set and 53.4% on test set, which is comparable to the state-of-the-art results from IRNet with BERT. We attribute our BERT model's high performance to (1) the empirically powerful text understanding ability of pretrained BERT model and (2) the early interaction between utterances and column headers when they are concatenated in a single sequence as the BERT input.",
"SParC. Table shows the results on SParC dataset. Similar to Spider, our model without previous query as input already outperforms SyntaxSQL-con, achieving 31.4% question matching accuracy and 14.7% interaction matching accuracy. In addition, compared with CD-Seq2Seq, our model enjoys the benefits of the table-utterance encoder, turn attention, and the joint consideration of utterances and table schemas during the decoding stage. This boosts the performance by 10% question accuracy and 6% interaction accuracy.",
"Furthermore, we also investigate the effect of copying segment. We use the same segment copy procedure as suhr2018learning: first deterministically extract segments from the previous query and encode each segment using an LSTM, then generate a segment by computing its output probability based on its segment encoding. However, since the segment extraction from suhr2018learning is exclusively designed for the ATIS dataset, we implement our own segment extraction procedure by extracting SELECT, FROM, GROUP BY, ORDER BY clauses as well as different conditions in WHERE clauses. In this way, 3.9 segments can be extracted per SQL on average. We found that adding segment copying to CD-Seq2Seq gives a slightly lower performance on question matching and a small gain on interaction matching, while using segments extracted from the gold query can have much higher results. This demonstrates that segment copy is vulnerable to error propagation. In addition, it can only copy whole segments hence has difficulty capturing the changes of only one or a few tokens in the query.",
"To better understand how models perform as the interaction proceeds, Figure FIGREF30 (Left) shows the performance split by turns on the dev set. The questions asked in later turns are more difficult to answer given longer context history. While the baselines have lower performance as the turn number increases, our model still maintains 38%-48% accuracy for turn 2 and 3, and 20% at turn 4 or beyond. Similarly, Figure FIGREF30 (Right) shows the performance split by hardness levels with the frequency of examples. This also demonstrates our model is more competitive in answering hard and extra hard questions.",
"ATIS. We also report our model performance on ATIS in Table . Our model achieves 36.2% dev and 43.9% test string accuracy, comparable to suhr2018learning. On ATIS, we only apply our editing mechanism and reuse their utterance encoder instead of the BERT utterance-table encoder, because ATIS is single domain."
],
[
"We further investigate the effect of our query editing mechanism. To this end, we apply editing from both the gold query and the predicted query on our model with or without the utterance-table BERT embedding. We also perform an ablation study to validate the contribution of query attention and sequence editing separately.",
"As shown in Table , editing the gold query consistently improves both question match and interaction match accuracy. This shows the editing approach is indeed helpful to improve the generation quality when the previous query is the oracle.",
"Using the predicted query is a more realistic setting, and in this case, the model is affected by error propagation due to the incorrect queries produced by itself. For the model without the utterance-table BERT embedding, using the predicted query only gives around 1.5% improvement. As shown in Figure FIGREF33, this is because the editing mechanism is more helpful for turn 4 which is a small fraction of all question examples. For the model with the utterance-table BERT embedding, the query generation accuracy at each turn is significantly improved, thus reducing the error propagation effect. In this case, the editing approach delivers consistent improvements of 7% increase on question matching accuracy and 11% increase on interaction matching accuracy. Figure FIGREF33 also shows that query editing with BERT benefits all turns.",
"Finally, as an ablation study, Table also reports the result with only query attention (use predicted query) on the dev set. This improves over our vanilla BERT model without query attention and achieves 42.7% question and 21.6% interaction matching accuracy. With query editing, our best model further improves to 47.2% question and 29.5% interaction matching accuracy. This demonstrates the effectiveness of our query attention and query editing separately, both of which are essential to make use of the previous query.",
""
],
[
"In this paper, we propose an editing-based encoder-decoder model to address the problem of context-dependent cross-domain text-to-SQL generation. While being simple, empirical results demonstrate the benefits of our editing mechanism. The approach is more robust to error propagation than copying segments, and its performance increases when the basic text-to-SQL generation quality (without editing) is better."
],
[
"We thank the anonymous reviewers for their thoughtful detailed comments."
]
],
"section_name": [
"Introduction",
"Cross-Domain Context-Depencent Semantic Parsing ::: Datasets",
"Cross-Domain Context-Depencent Semantic Parsing ::: Task Formulation",
"Methodology",
"Methodology ::: Utterance-Table Encoder",
"Methodology ::: Interaction Encoder with Turn Attention",
"Methodology ::: Table-aware Decoder",
"Methodology ::: Query Editing Mechanism",
"Related Work",
"Experimental Results",
"Experimental Results ::: Metrics",
"Experimental Results ::: Baselines",
"Experimental Results ::: Implementation Details",
"Experimental Results ::: Overall Results",
"Experimental Results ::: Effect of Query Editing",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"4f88494209a2074459fa976dc4b38af58435d786",
"c05d93a06c59edf4658d496e28c9461eb207b35d"
],
"answer": [
{
"evidence": [
"We evaluate our model on SParC BIBREF0, a new large-scale dataset for cross-domain semantic parsing in context consisting of coherent question sequences annotated with SQL queries over 200 databases in 138 domains. Experiment results show that by generating from the previous query, our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art. Further analysis shows that our editing approach is more robust to error propagation than copying segments, and the improvement becomes more significant if the basic text-to-SQL generation accuracy (without editing) improves."
],
"extractive_spans": [
"improvement of 7% question match accuracy and 11% interaction match accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiment results show that by generating from the previous query, our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on SParC BIBREF0, a new large-scale dataset for cross-domain semantic parsing in context consisting of coherent question sequences annotated with SQL queries over 200 databases in 138 domains. Experiment results show that by generating from the previous query, our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art. Further analysis shows that our editing approach is more robust to error propagation than copying segments, and the improvement becomes more significant if the basic text-to-SQL generation accuracy (without editing) improves."
],
"extractive_spans": [
"our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiment results show that by generating from the previous query, our model delivers an improvement of 7% question match accuracy and 11% interaction match accuracy over the previous state-of-the-art."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0668bafab09d9cce962b49854f43b239fc60d639",
"22add05592275c916dbd932d59e7e8fbef5fdfe3",
"e05e650142acac17ab3ba0e11bbc9f2e150278e1"
],
"answer": [
{
"evidence": [
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
"extractive_spans": [
"guo2019towards who achieve state-of-the-art performance"
],
"free_form_answer": "",
"highlighted_evidence": [
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"SParC. We compare with the two baseline models released by yu2019sparc. (1) Context-dependent Seq2Seq (CD-Seq2Seq): This model is adapted from suhr2018learning. The original model was developed for ATIS and does not take the database schema as input hence cannot generalize well across domains. yu2019sparc adapt it to perform context-dependent SQL generation in multiple domains by adding a bi-LSTM database schema encoder which takes bag-of-words representations of column headers as input. They also modify the decoder to select between a SQL keyword or a column header.",
"(2) SyntaxSQL-con: This is adapted from the original context-agnostic SyntaxSQLNet BIBREF44 by using bi-LSTMs to encode the interaction history including the utterance and the associated SQL query response. It also employs a column attention mechanism to compute representations of the previous question and SQL query.",
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
"extractive_spans": [],
"free_form_answer": "For SParC, context-dependent seq2seq and syntaxSQL-con. For Spider, a recursive decoding procedure, graph neural networks, and intermediate representation models.",
"highlighted_evidence": [
"SParC. We compare with the two baseline models released by yu2019sparc. (1) Context-dependent Seq2Seq (CD-Seq2Seq): This model is adapted from suhr2018learning. The original model was developed for ATIS and does not take the database schema as input hence cannot generalize well across domains. yu2019sparc adapt it to perform context-dependent SQL generation in multiple domains by adding a bi-LSTM database schema encoder which takes bag-of-words representations of column headers as input. They also modify the decoder to select between a SQL keyword or a column header.\n\n(2) SyntaxSQL-con: This is adapted from the original context-agnostic SyntaxSQLNet BIBREF44 by using bi-LSTMs to encode the interaction history including the utterance and the associated SQL query response. It also employs a column attention mechanism to compute representations of the previous question and SQL query.",
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"SParC. We compare with the two baseline models released by yu2019sparc. (1) Context-dependent Seq2Seq (CD-Seq2Seq): This model is adapted from suhr2018learning. The original model was developed for ATIS and does not take the database schema as input hence cannot generalize well across domains. yu2019sparc adapt it to perform context-dependent SQL generation in multiple domains by adding a bi-LSTM database schema encoder which takes bag-of-words representations of column headers as input. They also modify the decoder to select between a SQL keyword or a column header.",
"(2) SyntaxSQL-con: This is adapted from the original context-agnostic SyntaxSQLNet BIBREF44 by using bi-LSTMs to encode the interaction history including the utterance and the associated SQL query response. It also employs a column attention mechanism to compute representations of the previous question and SQL query.",
"Spider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries."
],
"extractive_spans": [],
"free_form_answer": "SQLNet, SyntaxSQLNet,\nSyntxSQLNet + data augmentation,\nRecursive Decodoing Procedure Lee(2019),\nGNN,\nIRNet and IRNet(BERT)",
"highlighted_evidence": [
"SParC. We compare with the two baseline models released by yu2019sparc. (1) Context-dependent Seq2Seq (CD-Seq2Seq): This model is adapted from suhr2018learning. The original model was developed for ATIS and does not take the database schema as input hence cannot generalize well across domains. yu2019sparc adapt it to perform context-dependent SQL generation in multiple domains by adding a bi-LSTM database schema encoder which takes bag-of-words representations of column headers as input. They also modify the decoder to select between a SQL keyword or a column header.\n\n(2) SyntaxSQL-con: This is adapted from the original context-agnostic SyntaxSQLNet BIBREF44 by using bi-LSTMs to encode the interaction history including the utterance and the associated SQL query response. It also employs a column attention mechanism to compute representations of the previous question and SQL query.\n\nSpider. We compare with the results as reported in yu2018syntaxsqlnet. Furthermore, we also include recent results from lee2019recursive who propose to use recursive decoding procedure, bogin2019representing introducing graph neural networks for encoding schemas, and guo2019towards who achieve state-of-the-art performance by using an intermediate representation to bridge natural language questions and SQL queries.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big is benefit in experiments of this editing approach compared to generating entire SQL from scratch?",
"What are state-of-the-art baselines?"
],
"question_id": [
"075d6ab5dd132666e85d0b6ad238118271dfc147",
"f2b1e87f61c65aaa99bcf9825de11ae237260270"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Dataset Statistics.",
"Figure 1: Model architecture of editing the previous query with attentions to the user utterances, the table schema, and the previously generated query.",
"Table 1: dorm Table 2: has Table 3: amenity",
"Figure 3: Number of operations at different turns.",
"Table 4: Spider results on dev set and test set.",
"Table 5: SParC results. For our models, we only report test set results of our best model on the dev set. ∗We improve the CD-Seq2Seq performance over Yu et al. (2019b) by separating and parsing the column names (e.g., stu fname→ student first name) and using the schema-specific output vocabulary during decoding.",
"Table 6: ATIS results on dev set and test set.",
"Figure 4: Performance split by different turns (Left) and hardness levels (Right) on SParC dev set.",
"Figure 5: Effect of query editing at different turns on SParC dev set."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png"
]
} | [
"What are state-of-the-art baselines?"
] | [
[
"1909.00786-Experimental Results ::: Baselines-1",
"1909.00786-Experimental Results ::: Baselines-2",
"1909.00786-Experimental Results ::: Baselines-3"
]
] | [
"SQLNet, SyntaxSQLNet,\nSyntxSQLNet + data augmentation,\nRecursive Decodoing Procedure Lee(2019),\nGNN,\nIRNet and IRNet(BERT)"
] | 163 |
1909.03087 | ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons | While dialogue remains an important end-goal of natural language research, the difficulty of evaluation is an oft-quoted reason why it remains troublesome to make real progress towards its solution. Evaluation difficulties are actually two-fold: not only do automatic metrics not correlate well with human judgments, but also human judgments themselves are in fact difficult to measure. The two most used human judgment tests, single-turn pairwise evaluation and multi-turn Likert scores, both have serious flaws as we discuss in this work. ::: We instead provide a novel procedure involving comparing two full dialogues, where a human judge is asked to pay attention to only one speaker within each, and make a pairwise judgment. The questions themselves are optimized to maximize the robustness of judgments across different annotators, resulting in better tests. We also show how these tests work in self-play model chat setups, resulting in faster, cheaper tests. We hope these tests become the de facto standard, and will release open-source code to that end. | {
"paragraphs": [
[
"Dialogue between human and machine is an important end-goal of natural language research. The open-ended nature of generating sequences in a multi-turn setup naturally makes the task difficult to evaluate – with full evaluation possessing many of the difficulties of the task itself as it requires deep understanding of the content of the conversation. As in many other natural language generation (NLG) tasks, automatic metrics have not been shown to have a clear correlation with human evaluations BIBREF0, BIBREF1. This means the current standard for all dialogue research involves human trials, which slows down research and greatly increases the cost of model development.",
"Unfortunately, human judgments are themselves difficult to measure. The two most used approaches, single-turn pairwise evaluation BIBREF2, BIBREF3, and multi-turn Likert scores BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 have serious limitations. Single-turn pairwise evaluation provides the benefits and simplicity of an A/B test, allowing for cheap and fast annotations, with comparisons that are robust to annotator score bias, but fail to take into account the multi-turn aspect of conversations. To give a trivial example, such comparisons fail to capture whether the model would repeat itself in a multi-turn conversation because they only look at one turn; repetition is a known issue that humans dislike BIBREF6.",
"Multi-turn Likert scores require the annotator to have a multi-turn conversation and then provide an integer score, which is more costly and time-consuming to run but evaluates full conversations more accurately. The integer scores however suffer from differing bias and variance per annotator, which researchers have tried to mitigate BIBREF9, but nevertheless due to its lack of sensitivity often yields comparisons that are not statistically significant. Furthermore, due to strong anchoring effects during model evaluation, i.e. that annotators are affected by the first systems they evaluate, Likert comparisons are generally not comparable across multiple papers. This mandates that evaluations of new models be simultaneously collected with baselines, further increasing the cost of developing additional models BIBREF6.",
"In this work we introduce Acute-eval, a method that combines the benefits, and attempts to mitigate the deficiencies, of the above two approaches by introducing a pairwise relative comparison setup for multi-turn dialogues. In each trial, we show the annotator two whole conversations, with the second speaker in each conversation highlighted, as the judgment should be independent of the quality of the first speaker, see Figure FIGREF1. We then show a carefully worded question with two choices: speaker A or B, where the question measures a desired quality such as which speaker is more engaging, interesting or knowledgeable. Our experiments show that annotators perform well in this setup, and that our method can reveal subtle but significant differences between conversational models that other approaches, such as multi-turn Likert, cannot.",
"Overall, our work provides the following contributions:",
"A new evaluation method with a clear mechanism that provides fast, cheap iteration. This evaluation method allows efficient reuse of data from prior papers, allowing new models to be evaluated independently of baselines, and dramatically lowers the cost of annotation.",
"We optimize question choices to find those with the highest agreement, increasing confidence in the desired test. We provide the wording of the questions that we found to work best for several questions of interest (most engaging, human, interesting or knowledgeable conversationalist) for further research use.",
"We provide an explicit benchmark comparison between current best performing retrieval and generative models on two recent tasks, PersonaChat BIBREF5 and Wizard of Wikipedia BIBREF7 for several question choices, revealing the current state-of-the-art, and to be used for benchmarking on these tasks in the future.",
"We show that our test can be applied to self-chats rather than human-model conversation logs, which can reveal problems with existing models at a cheaper price, and provides high agreement with the human-model evaluations.",
"We will release the code for running these tests."
],
[
"Dialogue tasks have traditionally been separated into two areas: goal-oriented and chitchat. Goal-oriented tasks typically have a clearer evaluation, e.g. task completion can be measured if the correct actions are taken BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14. Chitchat tasks are more open ended, and instead feature conversations without a precise goal that can be automatically evaluated. For example, conversations where two speaking partners are discussing interests BIBREF5 or topics BIBREF7. We study the latter in this work.",
"Evaluation of chitchat tasks with automatic metrics is difficult precisely because of their open-ended nature. For example, the answer to the question “What are you doing tonight?” has many possible answers, each with little word overlap. This means standard metrics for tasks like question-answering or machine translation do not work well, and have poor correlation with human judgments BIBREF0, BIBREF15. Nevertheless, a number of studies do report automatic metrics, without human studies BIBREF16, BIBREF17. Researchers have made attempts to improve automatic evaluation, trying methods such as adversarial evaluation BIBREF18, learning a scoring model BIBREF1, or a learnt ensemble of automatic metrics BIBREF19, but their value is as yet not fully understood.",
"Currently the standard approach in chitchat dialogue is to perform human evaluations BIBREF2, BIBREF20, BIBREF21, BIBREF4, BIBREF5, BIBREF7, typically reporting a judgment such as conversation quality or appropriateness via a Likert scale or pairwise comparison. While conversations are naturally multi-turn, pairwise setups typically consider single turn evaluations, taking the “gold” dialogue history from human-human logs, and only consider altering a single utterance. A more complete multi-turn evaluation is typically measured with a Likert scale (usually 1-4 or 1-5) after the conversation takes place. Some works such as BIBREF6 ask a series of questions relating to different aspects of conversational ability. There are some notable variants from these standard setups. BIBREF22 provide a method that combines continuous scales and relative assessments, but in single-turn, rather than multi-turn evaluation. BIBREF19 compare human evaluations to automatic metrics computed on self-chats. Note that we also use self-chats in this work, but we evaluate these with humans, rather than automatic metrics.",
"Finally, this work expands upon some of the ideas present in BIBREF6. In that work, a test for interestingness of a specificity-controlled model conducted with pairwise chat logs was mentioned, similar to the ones used here, but was not the focus of their work. In our work, we conduct a full study of novel variants of this approach, consider optimizing the questions for robust measurements over four types of questions, utilize self-chat logs in addition to human-bot logs, and benchmark state-of-the-art models across two recent tasks."
],
[
"To compare two dialogue models, model A and model B, our evaluation asks humans to directly compare side-by-side multi-turn dialogues conducted by these models. See Figure FIGREF1 for an example.",
"Our method is thus the following: (1) collect conversation logs for model A; similarly for model B. (2) In a number of trials, ask annotators to make binary judgments between sampled pairs from the logs, and collate the results to determine the winner, either A or B, and the statistical significance.",
"We consider different approaches to step (1) and (2) below."
],
[
"Our standard setup is to compare conversation logs between models and humans. In each evaluation trial we then show a human annotator two of the previously obtained conversations, one of model $A$ conversing with a human, and one of model $B$ conversing with a (possibly different) human. The annotator sees the conversations side by side on the same screen, with the two models' utterances highlighted in different colors, and the human utterances in gray to minimally distract from the models.",
"The annotator is posed a question phrasing (e.g. “which speaker is more knowledgeable” or “which speaker sounds more human?”), and asked to make a binary choice between model $A$ and model $B$. They are strongly encouraged to provide a short text justification for their choice. We collect $N$ trials of such pairwise judgments, and use them to decide which model wins. Statistical significance can be computed using a binomial test."
],
[
"Human-model conversation logs are themselves time-consuming and expensive to collect, which limits rapid iterative model development. We investigate if it is possible to remove the human from the conversation, and only use human annotators in the final pairwise conversation evaluation step. The concept of self-chats BIBREF21, BIBREF19, whereby a model talks to itself, playing the roles of both speaking partners, has been previously explored in other contexts. Such logs are easy to collect for models A and B, involving simply running inference for both speaker roles. We then use these logs in the Acute-eval pairwise comparison setup as described above."
],
[
"So far, we have not detailed the actual question(s) asked of the annotators. The framing and phrasing of questions in surveys is known to greatly affect the direction of responses, and therefore, in the case of evaluation, inter-annotator agreement. Though this has been noted in prior work BIBREF1, we have found no systematic experimentation on question formulation or task presentation. We therefore aim to propose and evaluate multiple potential question wordings to achieve higher agreement.",
"To do this, we build an initial test that compares human-human logs with human-model logs where the model is a relatively low quality baseline model. The aim is that there should be a clear and agreeable difference between human and model which is visible to human annotators. We ask annotators to make judgments between these two, where we choose pairs where the human should be judged as superior.",
"We then run independent trials with different question phrasing, and find the questions with highest inter-annotator agreement. The winning questions can then be used in future experiments by ourselves, and other researchers. Although having high inter-annotator agreement does not guarantee that crowdworkers interpret the question as intended, it increases the chance the question is understood uniformly. That is, the researcher still has to exercise care in the formulation of the question so that they believe it measures the quantity they are interested in. In our experiments we find questions with high-agreement rate over four axes: engagingness, interestingness, knowledge and humanness."
],
[
"We use crowdworkers for our annotations. We recommend limiting the number of annotations a single worker may complete to be only a few pairs (in our experiments, if we are making $N$ model comparisons then we allow $N$ annotations). In preliminary trials, we found that limiting the influence of any one worker was important for replicability, but that results were highly consistent across multiple runs with this limitation.",
"Additionally, the first comparison any worker is asked to annotate consists of a conversation between a weak baseline model and human, and a human-human conversation. If a worker fails to rate the human-human conversation as better, we remove their annotations from the results, in order to remove poor quality annotators. We additionally remove workers who never give a reason for their choice. Note that adding such worker quality tests to pairwise annotation tasks is straightforward where the gold annotation is known, while it is harder for Likert tests which have integer scores. One may also increase the number of quality-control annotations to decrease the likelihood of fraudulent workers, but we found using a single control question had a reasonable cost-noise ratio.",
"Each specific pair of conversations is shown at most once, given that there are at least as many possible pairs of conversations as desired annotations. If there are more conversations available for each model than desired annotations, each conversation is shown at most once - that is, in only one annotation. We found that maximizing the diversity of pairs improved robustness of our evaluation across multiple replication experiments."
],
[
"We perform experiments on two tasks, PersonaChat and Wizard of Wikipedia, which evaluate different aspects of conversational ability. We first optimize the questions to maximize worker agreement, and then benchmark existing state-of-the-art models on each task."
],
[
"PersonaChat BIBREF5 is a chitchat dialogue task involving two participants (two humans or a human and a bot). Each participant is given a persona – a short collection of personal traits such as I'm left handed or My favorite season is spring – and are instructed to get to know each other by chatting naturally using their designated personas, for 6–8 turns. The original dataset contains nearly 9000 human-human training conversations; most models are pretrained with a larger corpus, and then fine-tuned on this set.",
"PersonaChat was the subject of the NeurIPS 2018 ConvAI2 Challenge BIBREF8, in which competitor's models were first evaluated with respect to automatic metrics, and then with respect to human judgment via human-bot chats followed by the question “How much did you enjoy talking to this user?\" on a scale of 1–4. A total of 9 systems were evaluated using human annotators, 100 conversations for each. In this work, we leverage the human-model chat logs from the ConvAI2 competition for three models: Lost in Conversation (LIC), which won the competition, and Hugging Face (HF; BIBREF23, BIBREF23) which won the automatic evaluation track, and the KVMemNN BIBREF24 baseline released by the competition organizers (KV; BIBREF8, BIBREF8). LIC and HF are large pretrained and fine-tuned generative Transformer models, while KV is a retrieval model with no pretraining.",
"Secondly, we also compare to recently published models from BIBREF6. The authors studied the effects of controllable generation. and showed that Repetition-controlled (RC), Inquisitive (INQ), and Interesting (INT) models obtained the highest human Likert scores in their study, however their comparison to models from other studies is not direct. We thus compare to these models as well; we use the human-model conversation logs from their work, 100 for each model.",
"Finally, we also compare to the Polyencoder model (PE, BIBREF25, BIBREF25), a recent state-of-the-art retrieval model. It is a type of large Transformer architecture pretrained on Reddit, which learns a small number of global features to represent the input so that retrieval can be computed efficiently. As no conversation logs were provided in that work, we additionally collect human-model conversations for that model.",
"Overall, we benchmark 7 models, and compare them to human (H) performance in a number of different settings: with human-model and self-chat over three questions: engagingness, humamnness and interestingness."
],
[
"Wizard of Wikipedia BIBREF7 is a chitchat dialogue task where two speakers discuss a topic in depth, chosen from 1247 topics. One speaker (termed the Wizard) is meant to be both engaging and knowledgeable on the topics, and has access to an information retrieval system over Wikipedia to supplement their own knowledge. The other speaker (the Apprentice) is meant to be curious and eager to learn about the topic. The original dataset contains over 18,000 human-human dialogues, and has been used to train various kinds of models to imitate the human wizards. These include the Memory Network Transformer, in both generative and retrieval versions that employs the retrieved knowledge by attending over it before producing an utterance (GK and RK respectively), and baselines that do not have access to the knowledge (GU and RU). See Figure FIGREF25 for an example chat. We use the human-model logs from that paper (100 conversations for each model) on unseen test topics and evaluate them against humans (H), using both engagingness and knowledgeability questions. We note the original paper tested engagingness only."
],
[
"We are interested in evaluating models in terms of four axes: engagingness, interestingness, knowledge and humanness. In order to find the questions with highest inter-annotator agreement, we run multiple trials of experiments according to the setup described below. Each trial tests the effectiveness of a single question and consists of the same set of multi-turn conversation logs, presented to the human annotators. We test 13 questions: three regarding engagingness, four regarding interestingness, three regarding humanness, and three regarding knowledgeability (see Table TABREF11).",
"We compare human-human logs with human-model logs where the model is a relatively low quality baseline model, with the aim that there should be a clear and agreeable difference between human and model which is visible to human annotators. For PersonaChat we use a greedy generative baseline, and for Wizard we use the GU (generative unknowledgeable) model. Both of these baselines exhibit strong repetitive behavior which is known to be highly disfavored by crowdworkers BIBREF6. We select a single handpicked conversation pair for each of the tasks, and collect $\\sim $20 annotations per question.",
"We calculate the inter-annotator agreement for each question. The question achieving the highest inter-annotator agreement is selected for use in the rest of our experiments. The specific question phrasing and the texts accompanying the option for Speaker 1 (i.e. the left-hand conversation) are listed in Table TABREF11 along with inter-annotator agreements. As can be seen, the phrasing of the question is important, with poor phrasing choices leading to much lower agreement levels, e.g. 86.7% agreement in the best case for interestingness, and 69.6% in the worst case.",
"As a preliminary sanity check, we ran A/A tests over each of the engagingness, interestingness, and humanness best questions, with the same model appearing as both Speaker 1 and 2. All three tests came back close to 50-50.",
"Overall, we see this question optimization step as an important pre-requisite for our main experiments, and use the best discovered phrasing in each case. We encourage further research to use them as well."
],
[
"We first compare all 7 models and humans on the PersonaChat task using Acute-eval over the human-model chats using the optimized engagingness question. In total, we evaluate 28 paired comparisons. Results are given in Table TABREF18. Bold win percentages indicate significance.",
"We first observe that the models form a clean well-ordered set, and there are no rock-paper-scissors effects, giving an order Human $>$ PE $>$ LIC $>$ INT $>$ HF $>$ INQ $>$ KV $>$ RC. In general, these results agree closely with the known Likert comparisons made in prior papers, shown in Table TABREF19. Similar conclusions are derived for the interestingness and humanness questions as well, see Tables TABREF26 and TABREF24, note the model ordering is slightly different for those questions. BIBREF6 previously showed that different models often exhibit different rankings for different metrics, and Acute-eval results remain largely consistent with Likert.",
"A surprising result for the community is that the retrieval model PE outperforms all generative models, as the community has focused heavily on building generative models, e.g. almost all 23 entrants to the ConvAI2 competition BIBREF8. Now that the current best performing models have been benchmarked against each other we hope future research will use the same approach so the state-of-the-art can be clearly tracked."
],
[
"We perform Acute-eval over self-chats instead of human-model chats. We compare all models and humans (via human-human chats) in an otherwise identical setup to the human-bot evaluation for PersonaChat. Results are given in Table TABREF20.",
"We observe very similar conclusions to human-model chats in terms of winning models, making this a viable cheaper alternative to collecting human-model conversations, thus being considerably cheaper to collect. This approach also appears to require relatively fewer annotations/person-hours in this case to achieve statistical significance. One important caveat is the performance of the HF model. HF self-chats surface degeneracies in the model itself, and do not look natural (see Figure FIGREF22 for examples), explaining its poor performance compared to all other models. All other models do not exhibit this behavior and apart from HF, are ordered by humans exactly the same as for human-bot chats. For example, see Figure FIGREF23 for PE engaging in self-chat more successfully. However, due to the inadequacies of a specific model, in this case HF, conclusions from self-chat performance results must therefore be handled with care, but we believe are a reasonable choice for early experiments in the model development cycle, enabling faster research iteration.",
"One concern with self-chat is that powerful models could easily cheat, and simply recall training examples with perfect accuracy. In practice, we found that none of the models exhibit this behavior: $<$1% of the Polyencoder's call-response utterance pairs produced during self-chats come directly from the training set. The worst offender, INQ, has roughly 10% of pairs coming from training, but this stems from it using the same generic greeting and response in nearly all conversations (“Hello, how are you doing today?”, “I am doing well, how about yourself?”)."
],
[
"We similarly compare all 4 models and humans on the optimized engaging and knowledge questions. The results are given in Tables TABREF27 and TABREF28. We again find retrieval models outperform generative models, with knowledge attention (GK) clearly helping the generative models, but with RU and RK very close.",
"Results largely agree between the two questions, except retrieval with knowledge (RK) more clearly beats the generative version (GK) than retrieval without (RU) when the question is about knowledge. For the engagingness question, where it makes sense that this is less important, there is little difference between knowledge or not."
],
[
"We compare Acute-eval to multi-turn Likert for both tasks by computing pairwise Likert differences, where known, from the original papers. We do not compare across papers as evaluation setups differ. Values are provided in Tables TABREF19, TABREF26, TABREF24 and TABREF27. While the tests generally agree, Acute-eval can be a more sensitive test, which more often yields significance. On Wizard of Wikipedia where all Likert matchups are known, 8 of the pairwise matchups are significant for our test with human-model chats, while 6 are significant for Likert. On PersonaChat for the interestingness question, 6 of 10 matchups are significant for Acute-eval, including all known Likert matchups, which only has 2 of 3 that are significant. For the humanness question, 5 of 10 matchups are significant for Acute-eval, including all known Likert matchups, which only has 2 of 3 that are significant. For the engagingness question, 5 of the 9 Likert matchups are significant. All 9 are significant for Acute-eval when using self-chats; 3 are significant for human-model chats.",
"We compare the cost effectiveness of Likert to Acute-eval human-model and self-chat comparisons in Figure FIGREF30. Shown is the PersonaChat Engagingness question comparing RC and INT models, a fairly tight matchup. We show the % chance of achieving significance when drawing pairs of dialogues at random, plotting with respect to person-hours spent annotating. In this case Likert fails to achieve significance, likely due to bias and variance issues with integer scores. Acute-eval human-model and self-chat pairwise tests perform well, achieving significance; self-chat requires fewer person-hours."
],
[
"Studying the ability of machines to communicate with humans is an important long-term goal of AI research. Unfortunately, measuring progress towards that goal has been hampered by the trustworthiness of evaluation itself. Current human evaluation methods such as multi-turn Likert are expensive to run, have annotator bias and variance problems, and can fail to yield statistical significance.",
"In this work we have contributed a novel evaluation method that alleviates some of these problems. By optimizing questions and performing comparisons on pairs of human-bot dialogues we arrive at more sensitive statistical tests when benchmarking current state-of-the models. Utilizing self-chat bot evaluations we can often improve sensitivity, while yielding even cheaper evaluations. We will publicly release the code for our tests, and recommend them to be used in future research studies in order to push forward the state of the art."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method: Acute-eval",
"Method: Acute-eval ::: Human-Model chats",
"Method: Acute-eval ::: Self-Chats",
"Method: Acute-eval ::: Question Optimization",
"Method: Acute-eval ::: Annotation Quality",
"Experiments",
"Experiments ::: PersonaChat task",
"Experiments ::: Wizard of Wikipedia task",
"Experiments ::: Question Optimization",
"Experiments ::: Benchmarking: Evaluation of State-of-the-art ::: PersonaChat",
"Experiments ::: Benchmarking: Evaluation of State-of-the-art ::: Self-Chat",
"Experiments ::: Benchmarking: Evaluation of State-of-the-art ::: Wizard of Wikipedia",
"Experiments ::: Benchmarking: Evaluation of State-of-the-art ::: Comparison to Likert",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0702327cd74c6a57148f3c25a595e8cd878b09b3",
"1bf4e3d3ac055cefabadd7a82af36626e04e8e91",
"4e5d5688a9c7e85a440444e1d6a2869f2ac8e2ad"
],
"answer": [
{
"evidence": [
"We perform experiments on two tasks, PersonaChat and Wizard of Wikipedia, which evaluate different aspects of conversational ability. We first optimize the questions to maximize worker agreement, and then benchmark existing state-of-the-art models on each task."
],
"extractive_spans": [],
"free_form_answer": "Datasets from PersonaChat and Wizard of Wikipedia tasks.",
"highlighted_evidence": [
"We perform experiments on two tasks, PersonaChat and Wizard of Wikipedia, which evaluate different aspects of conversational ability. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform experiments on two tasks, PersonaChat and Wizard of Wikipedia, which evaluate different aspects of conversational ability. We first optimize the questions to maximize worker agreement, and then benchmark existing state-of-the-art models on each task."
],
"extractive_spans": [
"PersonaChat",
"Wizard of Wikipedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform experiments on two tasks, PersonaChat and Wizard of Wikipedia, which evaluate different aspects of conversational ability. We first optimize the questions to maximize worker agreement, and then benchmark existing state-of-the-art models on each task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"PersonaChat BIBREF5 is a chitchat dialogue task involving two participants (two humans or a human and a bot). Each participant is given a persona – a short collection of personal traits such as I'm left handed or My favorite season is spring – and are instructed to get to know each other by chatting naturally using their designated personas, for 6–8 turns. The original dataset contains nearly 9000 human-human training conversations; most models are pretrained with a larger corpus, and then fine-tuned on this set.",
"Wizard of Wikipedia BIBREF7 is a chitchat dialogue task where two speakers discuss a topic in depth, chosen from 1247 topics. One speaker (termed the Wizard) is meant to be both engaging and knowledgeable on the topics, and has access to an information retrieval system over Wikipedia to supplement their own knowledge. The other speaker (the Apprentice) is meant to be curious and eager to learn about the topic. The original dataset contains over 18,000 human-human dialogues, and has been used to train various kinds of models to imitate the human wizards. These include the Memory Network Transformer, in both generative and retrieval versions that employs the retrieved knowledge by attending over it before producing an utterance (GK and RK respectively), and baselines that do not have access to the knowledge (GU and RU). See Figure FIGREF25 for an example chat. We use the human-model logs from that paper (100 conversations for each model) on unseen test topics and evaluate them against humans (H), using both engagingness and knowledgeability questions. We note the original paper tested engagingness only."
],
"extractive_spans": [
"PersonaChat BIBREF5",
"Wizard of Wikipedia BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"PersonaChat BIBREF5 is a chitchat dialogue task involving two participants (two humans or a human and a bot). Each participant is given a persona – a short collection of personal traits such as I'm left handed or My favorite season is spring – and are instructed to get to know each other by chatting naturally using their designated personas, for 6–8 turns. The original dataset contains nearly 9000 human-human training conversations; most models are pretrained with a larger corpus, and then fine-tuned on this set.",
"Wizard of Wikipedia BIBREF7 is a chitchat dialogue task where two speakers discuss a topic in depth, chosen from 1247 topics. One speaker (termed the Wizard) is meant to be both engaging and knowledgeable on the topics, and has access to an information retrieval system over Wikipedia to supplement their own knowledge. The other speaker (the Apprentice) is meant to be curious and eager to learn about the topic. The original dataset contains over 18,000 human-human dialogues, and has been used to train various kinds of models to imitate the human wizards. These include the Memory Network Transformer, in both generative and retrieval versions that employs the retrieved knowledge by attending over it before producing an utterance (GK and RK respectively), and baselines that do not have access to the knowledge (GU and RU). See Figure FIGREF25 for an example chat. We use the human-model logs from that paper (100 conversations for each model) on unseen test topics and evaluate them against humans (H), using both engagingness and knowledgeability questions. We note the original paper tested engagingness only."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"5d5897d6be1ebd0989beb490a3e764475b90bc1e",
"bd661b8725f94b145324dd64fa1f5f13b97c7ef0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 5: Relative cost effectiveness of potential collection methods: Likert and ACUTE-EVAL human-model chat and self-chat pairwise tests. Our methods obtain statistical significance with fewer person hours; Likert fails in this case."
],
"extractive_spans": [],
"free_form_answer": "by 5 times",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 5: Relative cost effectiveness of potential collection methods: Likert and ACUTE-EVAL human-model chat and self-chat pairwise tests. Our methods obtain statistical significance with fewer person hours; Likert fails in this case."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which dialogue data do they use to evaluate on?",
"How much faster are pairwise annotations than other annotations?"
],
"question_id": [
"78c7318b2218b906a67d8854f3e511034075f79a",
"697c5d2ba7e019ddb91a1de5031a90fe741f2468"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: ACUTE-EVAL asks humans to compare two multiturn dialogues, and independent of the gray speakers, choose between Speaker 1 (light blue) and Speaker 2 (dark blue).",
"Table 1: Optimizing questions: we measure the agreement rates for the most chosen response for different phrasings of questions, and choose the most agreed upon versions. Starred agreements indicate statistical significance (binomial test, p < .05), and bold agreements indicate the question was used in future trials.",
"Table 2: ACUTE-EVAL results on the Engagingness question for the PersonaChat models talking to humans. Bold win percentages indicate significance (p < .05).",
"Table 3: Likert pairwise differences for Engagingness on PersonaChat, where known. Differences are collected from multiple papers and may not be directly comparable.",
"Table 4: ACUTE-EVAL results for self-chats for the Engagingness question on PersonaChat. Results largely agree with the human-model evaluations (Table 2) and the Likert evaluations (Table 3).",
"Figure 2: Randomly chosen example of Hugging Face (HF) model talking with itself. HF self-chat degenerates rapidly, explaining its poor performance. Other models handle self-chat more successfully, see Fig. 3 and Supplementary Material.",
"Figure 3: Randomly chosen example of Polyencoder (PE) model talking with itself (self-chat).",
"Table 5: Results on the Humanness question for the PersonaChat models talking to humans. ACUTE-EVAL (left) is able to identify significant differences between INT and RC when Likert (known published differences, right) does not.",
"Table 6: Results on the Interestingness question for the PersonaChat models talking to humans. ACUTE-EVAL (left) is able to identify significant differences between INT and RC when Likert (known published differences, right) does not.",
"Table 8: ACUTE-EVAL results on the Knowledgeability question for Wizard of Wikipedia models (G/R for Generative/Retrieval and U/K with and without access to knowledge.",
"Figure 4: Example of the Wizard Retrieval (RK) talking with a human. The Wizard model is able to use facts from Wikipedia during its conversation.",
"Figure 5: Relative cost effectiveness of potential collection methods: Likert and ACUTE-EVAL human-model chat and self-chat pairwise tests. Our methods obtain statistical significance with fewer person hours; Likert fails in this case.",
"Table 7: Results on the Engagingness question for the Wizard of Wikipedia models (G/R for Generative/Retrieval and U/K for with and without access to knowledge. Left shows the ACUTE-EVAL results, and right shows known Likert differences. Our method shows statistical significance between several methods that Likert does not.",
"Figure 6: Randomly chosen examples of Hugging Face (HF) model talking with with a human (left) and itself (self-chat, right). HF self-chat degenerates rapidly, explaining its poor performance. Other models do not have this degeneration feature.",
"Figure 7: Examples of Lost in Conversation (LIC) model talking with a human subject (left), and itself (right). Both examples were selected randomly.",
"Figure 8: Examples of Polyencoder (PE) model talking with a human subject (left), and itself (right). Both examples were selected randomly.",
"Figure 9: Examples of Wizard of Wikipedia chats. Left shows Generative model (GK) talking with a human subject. Right shows the Retrieval model (RK)."
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Figure2-1.png",
"6-Figure3-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"7-Table8-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"7-Table7-1.png",
"10-Figure6-1.png",
"10-Figure7-1.png",
"11-Figure8-1.png",
"11-Figure9-1.png"
]
} | [
"Which dialogue data do they use to evaluate on?",
"How much faster are pairwise annotations than other annotations?"
] | [
[
"1909.03087-Experiments ::: PersonaChat task-0",
"1909.03087-Experiments ::: Wizard of Wikipedia task-0",
"1909.03087-Experiments-0"
],
[
"1909.03087-7-Figure5-1.png"
]
] | [
"Datasets from PersonaChat and Wizard of Wikipedia tasks.",
"by 5 times"
] | 164 |
2004.02105 | Unsupervised Domain Clusters in Pretrained Language Models | The notion of"in-domain data"in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are many times unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision -- suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle. | {
"paragraphs": [
[
"It is common knowledge in modern NLP that using large amounts of high-quality training data is a key aspect in building successful machine-learning based systems. For this reason, a major challenge when building such systems is obtaining data in the domain of interest. But what defines a domain? Natural language varies greatly across topics, styles, levels of formality, genres and many other linguistic nuances BIBREF0, BIBREF1, BIBREF2. This overwhelming diversity of language makes it hard to find the right data for the task, as it is nearly impossible to well-define the exact requirements from such data with respect to all the aforementioned aspects. On top of that, domain labels are usually unavailable – e.g. in large-scale web-crawled data like Common Crawl which was recently used to train state-of-the-art pretrained language models for various tasks BIBREF3.",
"Domain data selection is the task of selecting the most appropriate data for a domain from a large corpus given a smaller set of in-domain data BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this work, we propose to use the recent, highly successful self-supervised pre-trained language models, e.g. devlin-etal-2019-bert,liu2019roberta for domain data selection. As pretrained LMs demonstrate state-of-the-art performance across many NLP tasks after being trained on massive amounts of data, we hypothesize that the robust representations they learn can be useful for mapping sentences to domains in an unsupervised, data-driven approach. We show that these models indeed learn to cluster sentence representations to domains without further supervision (e.g. Figure FIGREF2), and quantify this phenomenon by fitting Gaussian Mixture Models (GMMs) to the learned representations and measuring the purity of the resulting unsupervised clustering. We then propose methods to leverage these emergent domain clusters for domain data selection in two ways:",
"Via distance-based retrieval in the sentence embedding space induced by the pretrained language model.",
"By fine-tuning the pretrained language model for binary classification, where positive examples are from the domain of interest.",
"Our methods enable to select relevant data for the task while requiring only a small set of monolingual in-domain data. As they are based solely on the representations learned by self-supervised LMs, they do not require additional domain labels which are usually vague and over-simplify the notion of domain in textual data. We evaluate our method on data selection for neural machine translation (NMT) using the multi-domain German-English parallel corpus composed by BIBREF8. Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of BIBREF4 across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the “true” in-domain data.",
"Our contributions in this work are as follows. First, we show that pre-trained language models are highly capable of clustering textual data to domains with high accuracy in a purely unsupervised manner. Second, we propose methods to select in-domain data based on this property using vector-space retrieval and positive-unlabeled fine-tuning of pretrained language models for binary classification. Third, we show the applicability of our proposed data selection methods on a popular benchmark for domain adaptation in machine translation. An additional contribution is a new, improved data split we create for this benchmark, as we point on issues with previous splits used in the literature. The code and data for this work is publicly available. We hope this work will encourage more research on understanding the data landscape in NLP, enabling to “find the right data for the task” in the age of massive models and diverse data sources."
],
[
"The proliferation of massive pretrained neural language models such as ELMo BIBREF9, BERT BIBREF10 or RoBERTa BIBREF11 has enabled great progress on many NLP benchmarks BIBREF12, BIBREF13. Larger and larger models trained on billions of tokens of raw text are released in an ever-increasing pace BIBREF3, enabling the NLP community to fine-tune them for the task of interest. While many works tried to “probe” those models for the morphological, syntactic and semantic information they capture BIBREF14, BIBREF15, BIBREF16, an important aspect of language remained overlooked in this context – the domain the data comes from, often referred to as the “data distribution”.",
"The definition of domain is many times vague and over-simplistic (e.g. “medical text” may be used for biomedical research papers and for clinical conversations between doctors and patients, although the two vary greatly in topic, formality etc.). A common definition treats a domain as a data source: “a domain is defined by a corpus from a specific source, and may differ from other domains in topic, genre, style, level of formality, etc.” BIBREF8. We claim that a more data-driven definition should take place, as different data sources may have sentences with similar traits and vice versa - a single massive web-crawled corpus contains texts in numerous styles, topics and registers. Our analysis in Section SECREF2 shows examples for such cases, e.g. a sentence discussing “Viruses and virus-like organisms” in a legal corpus.",
"We hypothesize that massive pretrained LMs can learn representations that cluster to domains, as texts from similar domains will appear in similar contexts. We test this hypothesis across several large, publicly-available pretrained LMs; we explore both masked-language-models (MLMs) and auto-regressive LMs."
],
[
"We encode multi-domain data at the sentence level into vector representations. We then cluster these vector representations for each model using a Gaussian Mixture Model (GMM) with $k$ pre-defined clusters. We chose GMM as our clustering approach as it allows soft assignments (vs. hard assignments as in e.g. K-means) which we think fits the task better (as a sentence can be seen as drawn from a mixture of several domain). In all cases, to create a sentence representation we perform average pooling of the last hidden state (before the softmax layer) for each token in the sentence. To accelerate the clustering process and enable visualization we also experiment with performing dimensionality reduction with PCA over the sentence vectors before clustering them. We experiment with k in 5, 10 and 15 to test how adding flexibility would improve the domain clustering accuracy."
],
[
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21. We also evaluated three additional, simpler baselines. The first is using representations from word2vec BIBREF22, where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, BIBREF23), which is a classic approach to unsupervised clustering of text. We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters."
],
[
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. See more details on the dataset in Section SECREF22. We used 2000 distinct sentences from each domain. To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering BIBREF24. To measure the clustering purity, we assign each unsupervised cluster with the most common “true” domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain). In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model."
],
[
"As can be seen in Table TABREF7, pre-trained language models are indeed highly capable of generating sentence representations that cluster by domains, resulting in up to 87.66%, 89.04% and 89.94% accuracy when using k=5, k=10 and k=15 clusters, respectively, across 10,000 sentences in 5 domains. We find these scores remarkably high given our straight-forward average-pooling strategy and that no domain-supervision was involved in the process of learning the pre-trained representations. Figure FIGREF15 also demonstrates the quality of the obtained clusters in 2D using the BERT-base model, where the ellipses describe the mean and variance parameters learned for each cluster by the GMM with $k=5$.",
"We note that some classes of models did better than others: while all vector-based models did far better than the random and LDA baselines, the MLM-based models dominated in all cases over word2vec and the auto-regressive models. This may be explained by the fact that the MLM-based models use the entire sentence context when generating the representations for each token, while the auto-regressive models only use the past context, and word2vec uses a limited window context. Using PCA improved performance in most cases and especially for the auto-regressive models, although the results for the MLMs remain high in both cases – suggesting that these models encode the information very differently."
],
[
"As can be seen in Figure FIGREF15, in some areas the domains are somewhat overlapping in the embedding space, which may lead to outlier cases where examples from one domain are assigned to a cluster of a another domain. We plot a confusion matrix (Figure FIGREF20) to analyze this further based on the clustering with BERT-base and k=5. We first note that the outlier sentences are much shorter than the average sentence length in the corpus (11.62 tokens on average for outliers vs. 20.5 tokens on average in general). This makes sense as shorter sentences contain less information, making it harder to assign them to an appropriate cluster. Table TABREF19 shows examples of outlier sentences, assigned to clusters of domains different from their originating domain. We can see that in many cases the assignments are sensible – for example for sentences originating from the subtitles corpus, a sentence that mentions “great priest” is assigned to the Koran cluster, a sentence that mentions “The International Criminal Court in The Hague” is assigned to the Law cluster, a sentence that mentions “the virus” is assigned to the Medical cluster and so on. This strengthens our claim that defining domains based on the corpus they originated from may be over-simplistic, and using a more data-driven approach may enable to find better domain assignments across different corpora.",
"The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains. Looking more closely we find that more than half of these sentences (340 out of 597) included numbers (e.g. “34% 25% 34%” (from medical), “(b) reference number 20 is deleted;” (from law), “(Command of Prostration # 1)” (from Koran) or “The message, R2.” (from subtitles)). As numbers appear in many different contexts, they may be harder to assign to a specific domain by the context-aware language models in such short sentences. The second largest attractor of outliers is the Subtitles cluster, with 372 sentences assigned to it from other domains. We find that most of these sentences contain personal pronouns or question marks (228 out of 372, 61.2%) while the ratio of such sentences in the entire corpus is only 40%. Examples include “Why did you choose the name & amarok;?” (from IT), or “What is Avonex?” (from Medical). This may be expected as the subtitles corpus mainly includes transcriptions of spoken, conversational language, and “conversation tends to have more verbs, more personal pronouns, and more questions” BIBREF25. Another possible reason for the subtitles domain to attract outliers is the fact that this is the least-topical cluster: movies and TV series may discuss diverse topics, unlike medical, religious, legal and technical texts that may have a more cohesive topic."
],
[
"As we showed that pre-trained language models are indeed very useful in clustering sentence representations by domains in an unsupervised manner, we now seek to harness this property for a down-stream task – domain data selection for machine translation. Domain data selection is the task of selecting examples from a large corpus which are as close as possible to the domain of interest, given a smaller set of in-domain examples. The selected examples can be used to either (1) train a domain-specific model from scratch BIBREF5, (2) fine-tune a pre-trained general-domain model BIBREF26, BIBREF7, or (3) prioritize data for annotation as in an Active-Learning framework, if only monolingual data is available BIBREF27. To demonstrate the need for domain data selection and set the stage for our data selection experiments, we perform preliminary experiments with NMT in a multi-domain scenario."
],
[
"To simulate a diverse multi-domain setting we use the dataset proposed in BIBREF8, as it was recently adopted for domain adaptation research in NMT BIBREF28, BIBREF29, BIBREF30, BIBREF31. The dataset includes parallel text in German and English from five diverse domains (Medical, Law, Koran, IT, Subtitles; as discussed in Section SECREF2), available via OPUS BIBREF32, BIBREF33.",
"In a preliminary analysis of the data we found that in both the original train/dev/test split by BIBREF8 and in the more recent split by BIBREF29 there was overlap between the training data and the dev/test data. Fixing these issues is important, as it may affect the conclusions one draws from experiments with this dataset. For example, as overlapping development sets favor memorization of the training set, one may choose checkpoints and report results on over-fitting models. This is especially relevant with neural sequence-to-sequence models, as they are highly susceptible to memorization BIBREF34 and hallucination BIBREF35, as confirmed by BIBREF29.",
"To create a better experimental setting to test generalization within and across domains, we create a new data split where we ensure that no such overlap between the training, development and test sets occur. We started from the split of BIBREF29 as it included newer versions of some of the datasets. Furthermore, we did not allow more than one translation of a given source or target sentence, as such cases were very frequent in the dataset and usually stand for duplicate sentence pairs (See Table TABREF24). For example, applying this filtering reduced the size of the Koran corpus from 533,128 sentence pairs to only 17,982. Finally, following BIBREF29 we cap the subtitles corpus to 500,000 sentence pairs as it is much larger than the rest. We make the new split publicly available and hope it will enable better future experimentation on this important subject."
],
[
"Experimental Setup We follow BIBREF28 and train domain-specific models for all domains. We then evaluate each model across the different domain test sets, enabling us to understand the effect of different domains on the downstream MT performance and to set up strong baselines for data selection experiments. We also train a general-domain model using the available data from all domains, as it is also a common approach in multi-domain scenarios BIBREF29. In all experiments we use a similar Transformer BIBREF36 model, and only control for the training data. More details on the exact training and hyperparameter settings for the NMT models are available in the supplementary material.",
"Results The results for the cross-domain evaluation are available in Table TABREF28. In most cases, the best results for each domain are obtained by training on the in-domain data. Training on all the available data helped mostly for the Koran test set. This is expected as the training data for this domain is considerably smaller than the training data for rest of the domains (Table TABREF24). We can also see that more data is not necessarily better BIBREF37: while the subtitles corpus is the largest of all 5 and includes 500,000 sentence pairs, it is second to last in performance as measured by the average BLEU across all test sets.",
"Cross-Domain BLEU vs. Cluster Proximity An interesting observation can be made with respect to the visual analysis of the domain clusters as depicted in Figure FIGREF15: as the Medical cluster (in Yellow), Law cluster (in Purple) and IT cluster (in Red) are close to each other in the embedding space, their cross-domain BLEU scores are also higher. For example, note how in the results for the Medical domain-specific model (first row in Table TABREF28), the BLEU scores on the Law and IT test sets are much higher in comparison to those on the Koran and Subtitles test sets, which clusters are farther away in the visualized embedding space. Similarly, as the Subtitles cluster (Blue) is closer to the Koran cluster (Green), the highest cross-domain BLEU score on the Koran test set is from the Subtitles model. To further quantify this phenomenon, we plot and measure Pearson's correlation between the cosine similarity of the centroids for the English BERT-based dev sentence representations for each domain pair, and the cross-domain BLEU score for this domain pair. This is shown in Figure FIGREF29. We can see the general trend where the closer the domain centroids are (with a similarity of 1 for training and evaluating on the same domain), the higher the cross-domain BLEU is between those domains, resulting in a Pearson's correlation of 0.81 (strong correlation). This suggests that such preliminary visual analysis can be a useful tool for understanding the relationship between diverse datasets, and motivates the use of pre-trained language model representations for domain data selection in MT."
],
[
"As shown in the previous section, using the right data is critical for achieving good performance on an in-domain test set, and more data is not necessarily better. However, in real-world scenarios, the availability of data labeled by domain is limited, e.g. when working with large scale, web-crawled data. In this section we focus on a data-selection scenario where only a very small number of in-domain sentences are used to select data from a larger unlabeled parallel corpus. An established method for data selection was proposed by BIBREF4, which was also used in training the winning systems in WMT 2019 BIBREF39, BIBREF40. This method compares the cross-entropy, according to domain-specific and non-domain-specific language models, for each candidate sentence for selection. The sentences are then ranked by the cross-entropy difference, and only the top sentences are selected for training.",
"While the method by BIBREF4 is tried-and-true, it is based on simple n-gram language models which cannot generalize beyond the n-grams that are seen in the in-domain set. In addition, it is restricted to the in-domain and general-domain datasets it is trained on, which are usually small. On the contrary, pre-trained language models are trained on massive amounts of text, and, as we showed through unsupervised clustering, learn representations with domain-relevant information. In the following sections, we investigate whether this property of pretrained language models makes them useful for domain data selection."
],
[
"We propose two methods for domain data selection with pretrained language models.",
"Domain-Cosine In this method we first compute a query vector, which is the element-wise average over the vector representations of the sentences in the small in-domain set. We use the same sentence-level average-pooling approach as described in Section SECREF2 to obtain sentence representations. We then retrieve the most relevant sentences in the training set by computing the cosine similarity of each sentence with this query vector and ranking the sentences accordingly.",
"Domain-Finetune It is now common knowledge that pretrained language models are especially useful when fine-tuned for the task of interest in an end-to-end manner BIBREF41. In this method we fine-tune the pretrained LM for binary classification, where we use the in-domain sentences as positive examples, and randomly sampled general-domain sentences as negative examples. We then apply this classifier on the general-domain data and pick the sentences that are classified as positive as in-domain, or choose the top-k sentences as ranked by the classifier output distribution. This can be seen as an instance of positive-unlabeled learning for document-set expansion; see BIBREF42 for a recent discussion and methodology for this task.",
"Negative Sampling with Pre-ranking One problem that may rise when randomly sampling negative examples is that unlabeled in-domain sentences from the general-domain data may be sampled as negative examples – deteriorating the classifier performance. To alleviate this issue, we perform a biased sampling of negative examples. We first rank the general-domain data using the Domain-Cosine method, and then sample negative examples under a certain threshold in the ranking (in our experiments we sampled from the bottom two-thirds). Table TABREF31 shows an ablation for such pre-ranking, measuring precision, recall and F1 for binary classification on a held-out set for each domain. When not using pre-ranking, as the training data for the domain is larger, the precision is lower – since more in-domain examples are drawn as negative samples. Using pre-ranking indeed alleviates this issue, achieving higher F1 scores in all cases. Given the results in Table TABREF31 we always use pre-ranking in the following experiments."
],
[
"We perform data selection experiments for each domain in the multi-domain dataset. As the small set of monolingual in-domain data we take the 2000 development sentences from each domain. For the general-domain corpus we concatenate the training data from all domains, resulting in 1,456,317 sentences. To enable faster experimentation we used DistilBERT BIBREF18 for the Domain-Cosine and Domain-Finetune methods. More technical details are available in the supplementary material. We compare our methods to four approches: (1) The established method by BIBREF4, (2) a random selection baseline, (3) an oracle which is trained on all the available in-domain data, and (4) the model we train on all the domains concatenated. We select the top 500k examples to cover the size of every specific in-domain dataset. We train Transformer NMT models on the selected data with a similar configuration to the ones trained in the cross-domain evaluation."
],
[
"The results are available in Table TABREF32. We can see that all selection methods performed much better in terms of BLEU than random selection. It is also nice to see that all selection methods performed better than using all the available data or the oracle-selected data when averaged across all domains, showing again that more data is not necessarily better in multi-domain scenarios and that data selection is a useful approach. Regarding a comparison of the data selection methods, Moore-Lewis performed better than Domain-Cosine, while Domain-Finetune performed best, showing the benefit of fine-tuning large pretrained models for the data selection task. Using the positively-labeled examples alone (Domain-Finetune-Positive) performed worse than using the top 500k examples but better than Domain-Cosine, while not requiring to determine the number of selected sentences."
],
[
"We perform an analysis on the selected datasets, where we measure the precision and recall of sentence selection with respect to the oracle selection. The results are available in Table TABREF34. As also reflected in the BLEU scores, the Domain-Finetune method resulted in the highest domain recall with a minimum of 97.5, while Moore-Lewis and Domain-Cosine scored 89.4 and 78.8 respectively. We find these results very appealing given that only 2000 in-domain sentences were used for selection for each domain out of 1.45 million sentences. Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational requirements).",
"px"
],
[
"px Previous works used n-gram LMs for data selection BIBREF4, BIBREF5 or other count-based methods BIBREF43, BIBREF44, BIBREF45, BIBREF46. While such methods work well in practice, they cannot generalize beyond the N-grams observed in the in-domain datasets, which are usually small.",
"BIBREF6 proposed to replace n-gram models with RNN-based LMs with notable improvements. However, such methods do not capture the rich sentence-level global context as in the recent self-attention-based MLMs; as we showed in the clustering experiments, autoregressive neural LMs were inferior to masked LMs in clustering the data by domain. In addition, training very large neural LMs may be prohibitive without relying on pre-training.",
"Regarding domain clustering for MT, BIBREF47 discovered topics using LDA instead of using domain labels. BIBREF48 induced latent subdomains from the training data using a dedicated probabilistic model.",
"Many works used vector-based retrieval for data selection; BIBREF49 learn to select data using Bayesian optimization, and explored word2vec for that purpose. BIBREF50 create paragraph vectors for data selection in the context of SMT. BIBREF51 use internal representations from the NMT model to perform data selection. BIBREF52 propose a mechanism for incorporating retrieved sentences for each instance for domain adaptation in NMT, using representations extracted from a pre-trained NMT model. BIBREF53 explored instance-based data selection in a multi-domain scenario using information retrieval methods.",
"Other related works on domain adaptation include BIBREF30 that adapts multi-domain NMT models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. BIBREF54 proposed neural-network based classifiers for data selection in SMT. For more related work on data selection and domain adaptation in the context of MT, see the surveys by BIBREF55 for SMT and more recently BIBREF56 for NMT.",
"Unrelated to MT, BIBREF57 used BERT to select data for tasks from the GLUE benchmark BIBREF12. However, they assumed supervision for all the different tasks/domains, while we propose an unsupervised method requiring only a small set of in-domain data. Also in the context of pretrained language models, BIBREF58 show the importance of additional pretraining with in-domain data to improve the down-stream task-specific performance.",
"While previous work made important contributions to domain data selection, our work is the first to explore massive pretrained language models for both unsupervised domain clustering and for data selection in NMT."
],
[
"We showed that massive pre-trained language models are highly effective in mapping data to domains in a fully-unsupervised manner using average-pooled sentence representations and GMM-based clustering. We suggest that such clusters are a more appropriate, data driven approach to domains in natural language than simplistic labels (e.g. “medical text”), and that it will improve over time as better and larger pretrained LMs will become available. We proposed new methods to harness this property for domain data selection using distance-based ranking in vector space and pretrained LM fine-tuning, requiring only a small set of in-domain data. We demonstrated the effectiveness of our methods on a new, improved data split we created for a previously studied multi-domain machine translation benchmark. Our methods perform similarly or better than an established data selection method and oracle in-domain training across all five domains in the benchmark.",
"This work just scratches the surface with what can be done on the subject; possible avenues for future work include extending this with multilingual data selection and multilingual LMs BIBREF59, BIBREF60, BIBREF61, BIBREF62, using such selection methods with domain-curriculum training BIBREF63, BIBREF64, applying them on noisy, web-crawled data BIBREF65 or for additional tasks BIBREF58. Another interesting avenue is applying this to unsupervised NMT, which is highly sensitive to domain mismatch BIBREF66, BIBREF67. We hope this work will encourage more research on finding the right data for the task, towards more efficient and robust NLP."
],
[
"We thank Wei Wang for early discussions on domain adaptation and data selection that inspired this work during Roee's internship in Google Translate."
],
[
"Figure FIGREF45 details the hyperparameter configuration we used to train the NMT models. We use Transformer models BIBREF36 in the Base configuration using the implementation provided in Fairseq BIBREF71. For all models we use a joint BPE vocabulary BIBREF74 learned with 32k merge operations over the concatenated corpus in both languages, enabling to tie all the embedding layers BIBREF73. We perform early stopping if the BLEU score on the domain-specific development set did not improve in 10 consequent checkpoints. We use the ADAM BIBREF69 optimizer with an initial learning rate of $5\\cdot {}10^-4$ and a maximum of 4096 tokens per batch. We trained all models on a single NVIDIA GPU. We decode using beam search with a beam size of 5. For pre-processing we used the Moses BIBREF70 pipeline including tokenization, normalize-punctuation, non-printing character removal, truecasing and cleaning. We removed examples with sequences longer than 100 tokens from the training data (before subword segmentation)."
],
[
"Table TABREF44 shows details about the overlap between the training, development and test sets for the different data splits of the multi-domain dataset. The overlap was computed using the English part of the corpus."
],
[
"We learn GMMs with full covariance matrices, i.e. without constraints on covariance matrices that determine the shape of each component in the mixture, as implemented in scikit-learn BIBREF72. We train the models until convergence or for a maximum of 150 EM iterations."
],
[
"We fine-tune the binary classification head for 5 epochs. We use the ADAM BIBREF69 optimizer with an initial learning rate of $2\\cdot {}10^-5$. We train the model using 4 NVIDIA GPUs with 256 sentences per batch (64 per GPU)."
],
[
"We used the implementation of BIBREF4 by Pamela Shapiro, as available in: https://github.com/pamelashapiro/moore-lewis. This implementation uses the KenLM N-Gram language model toolkit BIBREF68."
],
[
"Figure FIGREF46 shows visualizations of the multi-domain dataset from additional pre-trained masked language models (BERT large and RoBERTa), and Figure FIGREF47 shows the same visualization for autoregressive models (XLNet and GPT2)."
]
],
"section_name": [
"Introduction",
"Emerging Domain Clusters in Pretrained Language Models ::: Motivation",
"Emerging Domain Clusters in Pretrained Language Models ::: Method",
"Emerging Domain Clusters in Pretrained Language Models ::: Models and Baselines",
"Emerging Domain Clusters in Pretrained Language Models ::: Evaluation",
"Emerging Domain Clusters in Pretrained Language Models ::: Results and Discussion",
"Emerging Domain Clusters in Pretrained Language Models ::: Analysis",
"Neural Machine Translation in a Multi-Domain Scenario",
"Neural Machine Translation in a Multi-Domain Scenario ::: Multi-Domain Dataset",
"Neural Machine Translation in a Multi-Domain Scenario ::: Cross-Domain Experiments",
"Domain Data Selection with Pretrained Language Models",
"Domain Data Selection with Pretrained Language Models ::: Methods",
"Domain Data Selection with Pretrained Language Models ::: Experimental Setup",
"Domain Data Selection with Pretrained Language Models ::: Results",
"Domain Data Selection with Pretrained Language Models ::: Analysis",
"Related Work",
"Conclusions and Future Work",
"Acknowledgements",
"Appendix ::: NMT Training",
"Appendix ::: Data Split",
"Appendix ::: GMM Clustering",
"Appendix ::: Language Model Finetuning",
"Appendix ::: Moore-Lewis Implementation",
"Appendix ::: Additional Visualizations"
]
} | {
"answers": [
{
"annotation_id": [
"63eb92587d3fc830282e9d9cc716a8266f159d4e",
"6990873965517dd5895fcf8eba0bee69f3e69c12"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold."
],
"extractive_spans": [],
"free_form_answer": "Average SacreBLEU score accross all domains is improved from 40.88 to 41.26.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The results are available in Table TABREF32. We can see that all selection methods performed much better in terms of BLEU than random selection. It is also nice to see that all selection methods performed better than using all the available data or the oracle-selected data when averaged across all domains, showing again that more data is not necessarily better in multi-domain scenarios and that data selection is a useful approach. Regarding a comparison of the data selection methods, Moore-Lewis performed better than Domain-Cosine, while Domain-Finetune performed best, showing the benefit of fine-tuning large pretrained models for the data selection task. Using the positively-labeled examples alone (Domain-Finetune-Positive) performed worse than using the top 500k examples but better than Domain-Cosine, while not requiring to determine the number of selected sentences.",
"FLOAT SELECTED: Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold."
],
"extractive_spans": [],
"free_form_answer": "On average the three selection methods had better BLEU scores than Random and Oracle methods. \nThe proposed method Domain-Finetune-Top-500k had better BLEU score than random by 4.34, better than Moore-Lewis by 0.38, better than Oracle by 0.92, and better than All method by 1.4",
"highlighted_evidence": [
"The results are available in Table TABREF32. We can see that all selection methods performed much better in terms of BLEU than random selection. It is also nice to see that all selection methods performed better than using all the available data or the oracle-selected data when averaged across all domains, showing again that more data is not necessarily better in multi-domain scenarios and that data selection is a useful approach.",
"FLOAT SELECTED: Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"6eefc352df1bf0c0490cc0e9681db39525d814c8",
"8e064ea28d1f7033dfc9a6d3f9241594c1e3d1c5"
],
"answer": [
{
"evidence": [
"Our methods enable to select relevant data for the task while requiring only a small set of monolingual in-domain data. As they are based solely on the representations learned by self-supervised LMs, they do not require additional domain labels which are usually vague and over-simplify the notion of domain in textual data. We evaluate our method on data selection for neural machine translation (NMT) using the multi-domain German-English parallel corpus composed by BIBREF8. Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of BIBREF4 across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the “true” in-domain data."
],
"extractive_spans": [
"method of BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of BIBREF4 across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the “true” in-domain data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As shown in the previous section, using the right data is critical for achieving good performance on an in-domain test set, and more data is not necessarily better. However, in real-world scenarios, the availability of data labeled by domain is limited, e.g. when working with large scale, web-crawled data. In this section we focus on a data-selection scenario where only a very small number of in-domain sentences are used to select data from a larger unlabeled parallel corpus. An established method for data selection was proposed by BIBREF4, which was also used in training the winning systems in WMT 2019 BIBREF39, BIBREF40. This method compares the cross-entropy, according to domain-specific and non-domain-specific language models, for each candidate sentence for selection. The sentences are then ranked by the cross-entropy difference, and only the top sentences are selected for training."
],
"extractive_spans": [
"established method for data selection was proposed by BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"An established method for data selection was proposed by BIBREF4, which was also used in training the winning systems in WMT 2019 BIBREF39, BIBREF40. This method compares the cross-entropy, according to domain-specific and non-domain-specific language models, for each candidate sentence for selection. The sentences are then ranked by the cross-entropy difference, and only the top sentences are selected for training."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"091988510ea404c7f86b86d5e5f8452df3a1a449",
"18252b2d661eebf65565c083003581db0a20b227",
"9d9e5b2240115968f3580cac0036a87f3201e6e2"
],
"answer": [
{
"evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. See more details on the dataset in Section SECREF22. We used 2000 distinct sentences from each domain. To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering BIBREF24. To measure the clustering purity, we assign each unsupervised cluster with the most common “true” domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain). In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model."
],
"extractive_spans": [
"subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. See more details on the dataset in Section SECREF22. We used 2000 distinct sentences from each domain. To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering BIBREF24. To measure the clustering purity, we assign each unsupervised cluster with the most common “true” domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain). In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model."
],
"extractive_spans": [
"subtitles",
"medical",
"legal",
"Koran",
"IT"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. See more details on the dataset in Section SECREF22. We used 2000 distinct sentences from each domain. To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering BIBREF24. To measure the clustering purity, we assign each unsupervised cluster with the most common “true” domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain). In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model."
],
"extractive_spans": [
"subtitles",
"medical text",
"legal text",
"translations of the Koran",
"IT-related text"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by BIBREF8 which includes textual data in five diverse domains: subtitles, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2f76ba307b1be78ab53ef85ab1d283383450e351",
"8ebeffae6e044068e012d891a1324a5b0ff91121",
"9ed0cf6e2f741b8bddde9ebcc9b92c66160ff341"
],
"answer": [
{
"evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21. We also evaluated three additional, simpler baselines. The first is using representations from word2vec BIBREF22, where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, BIBREF23), which is a classic approach to unsupervised clustering of text. We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters."
],
"extractive_spans": [
"BERT",
"DistilBERT",
"RoBERTa"
],
"free_form_answer": "",
"highlighted_evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21. We also evaluated three additional, simpler baselines. The first is using representations from word2vec BIBREF22, where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, BIBREF23), which is a classic approach to unsupervised clustering of text. We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters."
],
"extractive_spans": [
"BERT",
"DistilBERT",
"RoBERTa",
"GPT-2",
"XLNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21. We also evaluated three additional, simpler baselines. The first is using representations from word2vec BIBREF22, where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, BIBREF23), which is a classic approach to unsupervised clustering of text. We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21. We also evaluated three additional, simpler baselines. The first is using representations from word2vec BIBREF22, where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, BIBREF23), which is a classic approach to unsupervised clustering of text. We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters."
],
"extractive_spans": [
"BERT",
"DistilBERT",
"RoBERTa",
"GPT-2",
"XLNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"For MLM-based models we use BERT BIBREF10, DistilBERT BIBREF18 and RoBERTa BIBREF11 (in both the base and large versions). For autoregressive models we use GPT-2 BIBREF19 and XLNet BIBREF20. In all cases we use the implementations from the HuggingFace Transformers toolkit BIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How much improvement is there in the BLEU score?",
"What is the established approach used for comparison?",
"What are the five domains?",
"Which pre-trained language models are used?"
],
"question_id": [
"e25b73f700e8c958b64951f14a71bc60d225125c",
"908ba58d26d15c14600623498d4e86c9b73b14b2",
"3e0fd1a3944e207edbbe7c7108239dbaf3bccd4f",
"c0847af3958d791beaa14c4040ada2d364251c4d"
],
"question_writer": [
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"74eea9f3f4f790836045fcc75d0b3f5156901499"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A 2D visualization of average-pooled BERT hidden-state sentence representations using PCA. The colors represent the domain for each sentence.",
"Table 1: Unsupervised domain clustering as measured by purity for the different models. Best results are marked in bold for each setting.",
"Figure 3: A confusion matrix for clustering with k=5 using BERT-base.",
"Figure 2: A 2D visualization of the unsupervised GMM clustering for the same sentences as in Figure 1.",
"Table 2: Sentences from one domain which were assigned to a cluster of another domain by the BERT-based clustering, k=5.",
"Table 3: Number of training examples for each domain in the original split (Müller et al., 2019) and in our split.",
"Table 4: SacreBLEU (Post, 2018) scores of our baseline systems on the test sets of the new data split. Each row represents the results from one model on each test set. The best result in each column is marked in bold.",
"Figure 4: The cosine similarity between the centroids of the BERT representations for each domain pair vs. the corresponding cross-domain BLEU.",
"Table 5: Ablation analysis showing precision (p) recall (r) and F1 for the binary classification accuracy on a held-out set, with and without pre-ranking.",
"Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold.",
"Table 7: Precision (p) and recall (r) for data selection of 500k sentences with respect to the oracle selection.",
"Figure 5: The hyperparameter configuration we used for NMT model training using Fairseq (Ott et al., 2019).",
"Table 8: Details about the different data splits for the multi-domain corpus.",
"Figure 6: 2D visualizations of the unsupervised GMM-based clustering for different pretrained MLMs.",
"Figure 7: 2D visualizations of the unsupervised GMM-based clustering for different pretrained auto-regressive LMs."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure3-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Figure4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"14-Figure5-1.png",
"15-Table8-1.png",
"16-Figure6-1.png",
"17-Figure7-1.png"
]
} | [
"How much improvement is there in the BLEU score?"
] | [
[
"2004.02105-Domain Data Selection with Pretrained Language Models ::: Results-0",
"2004.02105-8-Table6-1.png"
]
] | [
"On average the three selection methods had better BLEU scores than Random and Oracle methods. \nThe proposed method Domain-Finetune-Top-500k had better BLEU score than random by 4.34, better than Moore-Lewis by 0.38, better than Oracle by 0.92, and better than All method by 1.4"
] | 165 |
1909.01720 | Different Absorption from the Same Sharing: Sifted Multi-task Learning for Fake News Detection | Recently, neural networks based on multi-task learning have achieved promising performance on fake news detection, which focus on learning shared features among tasks as complementary features to serve different tasks. However, in most of the existing approaches, the shared features are completely assigned to different tasks without selection, which may lead to some useless and even adverse features integrated into specific tasks. In this paper, we design a sifted multi-task learning method with a selected sharing layer for fake news detection. The selected sharing layer adopts gate mechanism and attention mechanism to filter and select shared feature flows between tasks. Experiments on two public and widely used competition datasets, i.e. RumourEval and PHEME, demonstrate that our proposed method achieves the state-of-the-art performance and boosts the F1-score by more than 0.87%, 1.31%, respectively. | {
"paragraphs": [
[
"In recent years, the proliferation of fake news with various content, high-speed spreading, and extensive influence has become an increasingly alarming issue. A concrete instance was cited by Time Magazine in 2013 when a false announcement of Barack Obama's injury in a White House explosion “wiped off 130 Billion US Dollars in stock value in a matter of seconds\". Other examples, an analysis of the US Presidential Election in 2016 BIBREF0 revealed that fake news was widely shared during the three months prior to the election with 30 million total Facebook shares of 115 known pro-Trump fake stories and 7.6 million of 41 known pro-Clinton fake stories. Therefore, automatically detecting fake news has attracted significant research attention in both industries and academia.",
"Most existing methods devise deep neural networks to capture credibility features for fake news detection. Some methods provide in-depth analysis of text features, e.g., linguistic BIBREF1, semantic BIBREF2, emotional BIBREF3, stylistic BIBREF4, etc. On this basis, some work additionally extracts social context features (a.k.a. meta-data features) as credibility features, including source-based BIBREF5, user-centered BIBREF6, post-based BIBREF7 and network-based BIBREF8, etc. These methods have attained a certain level of success. Additionally, recent researches BIBREF9, BIBREF10 find that doubtful and opposing voices against fake news are always triggered along with its propagation. Fake news tends to provoke controversies compared to real news BIBREF11, BIBREF12. Therefore, stance analysis of these controversies can serve as valuable credibility features for fake news detection.",
"There is an effective and novel way to improve the performance of fake news detection combined with stance analysis, which is to build multi-task learning models to jointly train both tasks BIBREF13, BIBREF14, BIBREF15. These approaches model information sharing and representation reinforcement between the two tasks, which expands valuable features for their respective tasks. However, prominent drawback to these methods and even typical multi-task learning methods, like the shared-private model, is that the shared features in the shared layer are equally sent to their respective tasks without filtering, which causes that some useless and even adverse features are mixed in different tasks, as shown in Figure FIGREF2(a). By that the network would be confused by these features, interfering effective sharing, and even mislead the predictions.",
"To address the above problems, we design a sifted multi-task learning model with filtering mechanism (Figure FIGREF2(b)) to detect fake news by joining stance detection task. Specifically, we introduce a selected sharing layer into each task after the shared layer of the model for filtering shared features. The selected sharing layer composes of two cells: gated sharing cell for discarding useless features and attention sharing cell for focusing on features that are conducive to their respective tasks. Besides, to better capture long-range dependencies and improve the parallelism of the model, we apply transformer encoder module BIBREF16 to our model for encoding input representations of both tasks. Experimental results reveal that the proposed model outperforms the compared methods and gains new benchmarks.",
"In summary, the contributions of this paper are as follows:",
"We explore a selected sharing layer relying on gate mechanism and attention mechanism, which can selectively capture valuable shared features between tasks of fake news detection and stance detection for respective tasks.",
"The transformer encoder is introduced into our model for encoding inputs of both tasks, which enhances the performance of our method by taking advantages of its long-range dependencies and parallelism.",
"Experiments on two public, widely used fake news datasets demonstrate that our method significantly outperforms previous state-of-the-art methods."
],
[
"Fake News Detection Exist studies for fake news detection can be roughly summarized into two categories. The first category is to extract or construct comprehensive and complex features with manual ways BIBREF5, BIBREF8, BIBREF17. The second category is to automatically capture deep features based on neural networks. There are two ways in this category. One is to capture linguistic features from text content, such as semantic BIBREF7, BIBREF18, writing styles BIBREF4, and textual entailments BIBREF19. The other is to focus on gaining effective features from the organic integration of text and user interactions BIBREF20, BIBREF21. User interactions include users' behaviours, profiles, and networks between users. In this work, following the second way, we automatically learn representations of text and stance information from response and forwarding (users' behaviour) based on multi-task learning for fake news detection.",
"Stance Detection The researches BIBREF22, BIBREF23 demonstrate that the stance detected from fake news can serve as an effective credibility indicator to improve the performance of fake news detection. The common way of stance detection in rumors is to catch deep semantics from text content based on neural networksBIBREF24. For instance, Kochkina et al.BIBREF25 project branch-nested LSTM model to encode text of each tweet considering the features and labels of the predicted tweets for stance detection, which reflects the best performance in RumourEval dataset. In this work, we utilize transformer encoder to acquire semantics from responses and forwarding of fake news for stance detection.",
"Multi-task Learning A collection of improved models BIBREF26, BIBREF27, BIBREF28 are developed based on multi-task learning. Especially, shared-private model, as a popular multi-task learning model, divides the features of different tasks into private and shared spaces, where shared features, i.e., task-irrelevant features in shared space, as supplementary features are used for different tasks. Nevertheless, the shared space usually mixes some task-relevant features, which makes the learning of different tasks introduce noise. To address this issue, Liu et al. BIBREF29 explore an adversarial shared-private model to alleviate the shared and private latent feature spaces from interfering with each other. However, these models transmit all shared features in the shared layer to related tasks without distillation, which disturb specific tasks due to some useless and even harmful shared features. How to solve this drawback is the main challenge of this work."
],
[
"We propose a novel sifted multi-task learning method on the ground of shared-private model to jointly train the tasks of stance detection and fake news detection, filter original outputs of shared layer by a selected sharing layer. Our model consists of a 4-level hierarchical structure, as shown in Figure FIGREF6. Next, we will describe each level of our proposed model in detail."
],
[
"In our notation, a sentence of length $l$ tokens is indicated as ${\\rm \\textbf {X}}=\\lbrace x_1, x_2, ... ,x_l\\rbrace $. Each token is concatenated by word embeddings and position embeddings. Word embeddings $w_i$ of token $x_i$ are a $d_w$-dimensional vector obtained by pre-trained Word2Vec model BIBREF30, i.e., $w_i \\in \\mathbb {R}^{d_w}$. Position embeddings refer to vectorization representations of position information of words in a sentence. We employ one-hot encoding to represent position embeddings $p_i$ of token $x_i$, where $p_i \\in \\mathbb {R}^{d_p}$, $d_p$ is the positional embedding dimension. Therefore, the embeddings of a sentence are represented as $ {\\rm \\textbf {E}}=\\lbrace [w_1;p_1 ], [w_2;p_2], ..., [w_l;p_l]\\rbrace , {\\rm \\textbf {E}}\\in \\mathbb {R}^{l \\times (d_p+d_w)}$. In particular, we adopt one-hot encoding to embed positions of tokens, rather than sinusoidal position encoding recommended in BERT model BIBREF31. The reason is that our experiments show that compared with one-hot encoding, sinusoidal position encoding not only increases the complexity of models but also performs poorly on relatively small datasets."
],
[
"Shared-private feature extractor is mainly used for extracting shared features and private features among different tasks. In this paper, we apply the encoder module of transformer BIBREF16 (henceforth, transformer encoder) to the shared-private extractor of our model. Specially, we employ two transformer encoders to encode the input embeddings of the two tasks as their respective private features. A transformer encoder is used to encode simultaneously the input embeddings of the two tasks as shared features of both tasks. This process is illustrated by the shared-private layer of Figure FIGREF6. The red box in the middle denotes the extraction of shared features and the left and right boxes represent the extraction of private features of two tasks. Next, we take the extraction of the private feature of fake news detection as an example to elaborate on the process of transformer encoder.",
"The kernel of transformer encoder is the scaled dot-product attention, which is a special case of attention mechanism. It can be precisely described as follows:",
"where ${\\rm \\textbf {Q}} \\in \\mathbb {R}^{l \\times (d_p+d_w)}$, ${\\rm \\textbf {K}} \\in \\mathbb {R}^{l \\times (d_p+d_w)}$, and ${\\rm \\textbf {V}} \\in \\mathbb {R}^{l \\times (d_p+d_w)}$ are query matrix, key matrix, and value matrix, respectively. In our setting, the query ${\\rm \\textbf {Q}}$ stems from the inputs itself, i.e., ${\\rm \\textbf {Q}}={\\rm \\textbf {K}}={\\rm \\textbf {V}}={\\rm \\textbf {E}}$.",
"To explore the high parallelizability of attention, transformer encoder designs a multi-head attention mechanism based on the scaled dot-product attention. More concretely, multi-head attention first linearly projects the queries, keys and values $h$ times by using different linear projections. Then $h$ projections perform the scaled dot-product attention in parallel. Finally, these results of attention are concatenated and once again projected to get the new representation. Formally, the multi-head attention can be formulated as follows:",
"where ${\\rm \\textbf {W}}_i^Q \\in \\mathbb {R}^{(d_p+d_w) \\times d_k}$, ${\\rm \\textbf {W}}_i^K \\in \\mathbb {R}^{(d_p+d_w) \\times d_k}$, ${\\rm \\textbf {W}}_i^V \\in \\mathbb {R}^{(d_p+d_w) \\times d_k}$ are trainable projection parameters. $d_k$ is $(d_p+d_w)/h$, $h$ is the number of heads. In Eq.(DISPLAY_FORM11), ${\\rm \\textbf {W}}^o \\in \\mathbb {R}^{(d_p+d_w) \\times (d_p+d_w)}$ is also trainable parameter."
],
[
"In order to select valuable and appropriate shared features for different tasks, we design a selected sharing layer following the shared layer. The selected sharing layer consists of two cells: gated sharing cell for filtering useless features and attention sharing cell for focusing on valuable shared features for specific tasks. The description of this layer is depicted in Figure FIGREF6 and Figure FIGREF15. In the following, we introduce two cells in details.",
"Gated Sharing Cell Inspired by forgotten gate mechanism of LSTM BIBREF32 and GRU BIBREF33, we design a single gated cell to filter useless shared features from shared layer. There are two reasons why we adopt single-gate mechanism. One is that transformer encoder in shared layer can efficiently capture the features of long-range dependencies. The features do not need to capture repeatedly by multiple complex gate mechanisms of LSTM and GRU. The other is that single-gate mechanism is more convenient for training BIBREF34. Formally, the gated sharing cell can be expressed as follows:",
"where ${\\rm \\textbf {H}}_{shared}\\! \\in \\! \\mathbb {R}^{1 \\times l(d_p+d_w)}$ denotes the outputs of shared layer upstream, ${\\rm \\textbf {W}}_{fake} \\in \\mathbb {R}^{l(d_p+d_w) \\times l(d_p+d_w)}$ and ${\\rm \\textbf {b}}_{fake} \\in \\mathbb {R}^{1 \\times l(d_p+d_w)}$ are trainable parameters. $\\sigma $ is a non-linear activation - sigmoid, which makes final choices for retaining and discarding features in shared layer.",
"Then the shared features after filtering via gated sharing cell ${\\rm \\textbf {g}}_{fake}$ for the task of fake news detection are represented as:",
"where $\\odot $ denotes element-wise multiplication.",
"Similarly, for the auxiliary task - the task of stance detection, filtering process in the gated sharing cell is the same as the task of fake news detection, so we do not reiterate them here.",
"Attention Sharing Cell To focus on helpful shared features that are beneficial to specific tasks from upstream shared layer, we devise an attention sharing cell based on attention mechanism. Specifically, this cell utilizes input embeddings of the specific task to weight shared features for paying more attention to helpful features. The inputs of this cell include two matrixes: the input embeddings of the specific task and the shared features of both tasks. The basic attention architecture of this cell, the same as shared-private feature extractor, also adopts transformer encoder (the details in subsection SECREF8). However, in this architecture, query matrix and key matrix are not projections of the same matrix, i.e., query matrix ${\\rm \\textbf {E}}_{fake}$ is the input embeddings of fake news detection task, and key matrix ${\\rm \\textbf {K}}_{shared}$ and value matrix ${\\rm \\textbf {V}}_{shared}$ are the projections of shared features ${\\rm \\textbf {H}}_{shared}$. Formally, the attention sharing cell can be formalized as follows:",
"where the dimensions of ${\\rm \\textbf {E}}_{fake}$, ${\\rm \\textbf {K}}_{shared}$, and ${\\rm \\textbf {V}}_{shared}$ are all $\\mathbb {R}^{l\\times (d_p+d_w)}$. The dimensions of remaining parameters in Eqs.(DISPLAY_FORM16, DISPLAY_FORM17) are the same as in Eqs.(DISPLAY_FORM10, DISPLAY_FORM11). Moreover, in order to guarantee the diversity of focused shared features, the number of heads $h$ should not be set too large. Experiments show that our method performs the best performance when $h$ is equal to 2.",
"Integration of the Two Cells We first convert the output of the two cells to vectors ${\\rm \\textbf {G}}$ and ${\\rm \\textbf {A}}$, respectively, and then integrate the vectors in full by the absolute difference and element-wise product BIBREF35.",
"where $\\odot $ denotes element-wise multiplication and $;$ denotes concatenation."
],
[
"As the last layer, softmax functions are applied to achieve the classification of different tasks, which emits the prediction of probability distribution for the specific task $i$.",
"where $\\hat{{\\rm \\textbf {y}}}_i$ is the predictive result, ${\\rm \\textbf {F}}_i$ is the concatenation of private features ${\\rm \\textbf {H}}_i$ of task $i$ and the outputs ${\\rm \\textbf {SSL}}_i$ of selected sharing layer for task $i$. ${\\rm \\textbf {W}}_i$ and ${\\rm \\textbf {b}}_i$ are trainable parameters.",
"Given the prediction of all tasks, a global loss function forces the model to minimize the cross-entropy of prediction and true distribution for all the tasks:",
"where $\\lambda _i$ is the weight for the task $i$, and $N$ is the number of tasks. In this paper, $N=2$, and we give more weight $\\lambda $ to the task of fake news detection."
],
[
"We use two public datasets for fake news detection and stance detection, i.e., RumourEval BIBREF36 and PHEME BIBREF12. We introduce both the datasets in details from three aspects: content, labels, and distribution.",
"Content. Both datasets contain Twitter conversation threads associated with different newsworthy events including the Ferguson unrest, the shooting at Charlie Hebdo, etc. A conversation thread consists of a tweet making a true and false claim, and a series of replies. Labels. Both datasets have the same labels on fake news detection and stance detection. Fake news is labeled as true, false, and unverified. Because we focus on classifying true and false tweets, we filter the unverified tweets. Stance of tweets is annotated as support, deny, query, and comment. Distribution. RumourEval contains 325 Twitter threads discussing rumours and PHEME includes 6,425 Twitter threads. Threads, tweets, and class distribution of the two datasets are shown in Table TABREF24.",
"In consideration of the imbalance label distributions, in addition to accuracy (A) metric, we add Precision (P), Recall (R) and F1-score (F1) as complementary evaluation metrics for tasks. We hold out 10% of the instances in each dataset for model tuning, and the rest of the instances are performed 5-fold cross-validation throughout all experiments."
],
[
"Pre-processing - Processing useless and inappropriate information in text: (1) removing nonalphabetic characters; (2) removing website links of text content; (3) converting all words to lower case and tokenize texts.",
"Parameters - hyper-parameters configurations of our model: for each task, we strictly turn all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
[
"SVM A Support Vector Machines model in BIBREF36 detects misinformation relying on manually extracted features.",
"CNN A Convolutional Neural Network model BIBREF37 employs pre-trained word embeddings based on Word2Vec as input embeddings to capture features similar to n-grams.",
"TE Tensor Embeddings BIBREF38 leverages tensor decomposition to derive concise claim embeddings, which are used to create a claim-by-claim graph for label propagation.",
"DeClarE Evidence-Aware Deep Learning BIBREF39 encodes claims and articles by Bi-LSTM and focuses on each other based on attention mechanism, and then concatenates claim source and article source information.",
"MTL-LSTM A multi-task learning model based on LSTM networks BIBREF14 trains jointly the tasks of veracity classification, rumor detection, and stance detection.",
"TRNN Tree-structured RNN BIBREF40 is a bottom-up and a top-down tree-structured model based on recursive neural networks.",
"Bayesian-DL Bayesian Deep Learning model BIBREF41 first adopts Bayesian to represent both the prediction and uncertainty of claim and then encodes replies based on LSTM to update and generate a posterior representations."
],
[
"We perform experiments on RumourEval and PHEME datasets to evaluate the performance of our method and the baselines. The experimental results are shown in Table TABREF27. We gain the following observations:",
"On the whole, most well-designed deep learning methods, such as ours, Bayesian-DL, and TRNN, outperform feature engineering-based methods, like SVM. This illustrates that deep learning methods can represent better intrinsic semantics of claims and replies.",
"In terms of recall (R), our method and MTL-LSTM, both based on multi-task learning, achieve more competitive performances than other baselines, which presents that sufficient features are shared for each other among multiple tasks. Furthermore, our method reflects a more noticeable performance boost than MTL-LSTM on both datasets, which extrapolates that our method earns more valuable shared features.",
"Although our method shows relatively low performance in terms of precision (P) and recall (R) compared with some specific models, our method achieves the state-of-the-art performance in terms of accuracy (A) and F1-score (F1) on both datasets. Taking into account the tradeoff among different performance measures, this reveals the effectiveness of our method in the task of fake news detection."
],
[
"To evaluate the effectiveness of different components in our method, we ablate our method into several simplified models and compare their performance against related methods. The details of these methods are described as follows:",
"Single-task Single-task is a model with transformer encoder as the encoder layer of the model for fake news detection.",
"MT-lstm The tasks of fake news detection and stance detection are integrated into a shared-private model and the encoder of the model is achieved by LSTM.",
"MT-trans The only difference between MT-trans and MT-lstm is that encoder of MT-trans is composed of transformer encoder.",
"MT-trans-G On the basis of MT-trans, MT-trans-G adds gated sharing cell behind the shared layer of MT-trans to filter shared features.",
"MT-trans-A Unlike MT-trans-G, MT-trans-A replaces gated sharing cell with attention sharing cell for selecting shared features.",
"MT-trans-G-A Gated sharing cell and attention sharing cell are organically combined as selected sharing layer behind the shared layer of MT-trans, called MT-trans-G-A.",
"Table TABREF30 provides the experimental results of these methods on RumourEval and PHEME datasets. We have the following observations:",
"Effectiveness of multi-task learning. MT-trans boosts about 9% and 15% performance improvements in accuracy on both datasets compared with Single-task, which indicates that the multi-task learning method is effective to detect fake news.",
"Effectiveness of transformer encoder. Compared with MT-lstm, MT-trans obtains more excellent performance, which explains that transformer encoder has better encoding ability than LSTM for news text on social media.",
"Effectiveness of the selected sharing layer. Analysis of the results of the comparison with MT-trans, MT-trans-G, MT-Trans-A, and MT-trans-G-A shows that MT-trans-G-A ensures optimal performance with the help of the selected sharing layer of the model, which confirms the reasonability of selectively sharing different features for different tasks."
],
[
"Although the sifted multi-task learning method outperforms previous state-of-the-art methods on two datasets (From Table TABREF27), we observe that the proposed method achieves more remarkable performance boosts on PHEME than on RumourEval. There are two reasons for our analysis according to Table TABREF24 and Table TABREF27. One is that the number of training examples in RumourEval (including 5,568 tweets) is relatively limited as compared with PHEME (including 105,354 tweets), which is not enough to train deep neural networks. Another is that PHEME includes more threads (6,425 threads) than RumourEval (325 threads) so that PHEME can offer more rich credibility features to our proposed method."
],
[
"In order to obtain deeper insights and detailed interpretability about the effectiveness of the selected shared layer of the sifted multi-task learning method, we devise experiments to explore some ideas in depth: 1) Aiming at different tasks, what effective features can the selected sharing layer in our method obtain? 2) In the selected sharing layer, what features are learned from different cells?"
],
[
"We visualize shared features learned from the tasks of fake news detection and stance detection. Specifically, we first look up these elements with the largest values from the outputs of the shared layer and the selected shared layer respectively. Then, these elements are mapped into the corresponding values in input embeddings so that we can find out specific tokens. The experimental results are shown in Figure FIGREF35. We draw the following observations:",
"Comparing PL-FND and PL-SD, private features in private layer from different tasks are different. From PL-FND, PL-SD, and SLT, the combination of the private features and shared features from shared layer increase the diversity of features and help to promote the performance of both fake news detection and stance detection.",
"By compared SL, SSL-FND, and SSL-SD, selected sharing layers from different tasks can not only filter tokens from shared layer (for instance, `what', `scary', and `fact' present in SL but not in SSL-SD), but also capture helpful tokens for its own task (like `false' and `real' in SSL-FND, and `confirm' and `misleading' in SSL-SD)."
],
[
"To answer the second question, we examine the neuron behaviours of gated sharing cell and attention sharing cell in the selected sharing layer, respectively. More concretely, taking the task of fake news detection as an example, we visualize feature weights of ${\\rm \\textbf {H}}_{shared}$ in the shared layer and show the weight values ${\\rm \\textbf {g}}_{fake}$ in gated sharing cell. By that we can find what kinds of features are discarded as interference, as shown in Figure FIGREF42(a). In addition, for attention sharing cell, we visualize which tokens are concerned in attention sharing cell, as shown in Figure FIGREF42(b). From Figure FIGREF42(a) and FIGREF42(b), we obtain the following observations:",
"In Figure FIGREF42(a), only the tokens “gunmen, hostages, Sydney, ISIS\" give more attention compared with vanilla shared-private model (SP-M). In more details, `gunmen' and `ISIS' obtain the highest weights. These illustrate that gated sharing cell can effectively capture key tokens.",
"In Figure FIGREF42(b), “live coverage\", as a prominent credibility indicator, wins more concerns in the task of fake news detection than other tokens. By contrast, when the sentence of Figure FIGREF42(b) is applied to the task of stance detection, the tokens “shut down\" obtain the maximum weight, instead of “live coverage\". These may reveal that attention sharing cell focuses on different helpful features from the shared layer for different tasks."
],
[
"In this paper, we explored a sifted multi-task learning method with a novel selected sharing structure for fake news detection. The selected sharing structure fused single gate mechanism for filtering useless shared features and attention mechanism for paying close attention to features that were helpful to target tasks. We demonstrated the effectiveness of the proposed method on two public, challenging datasets and further illustrated by visualization experiments. There are several important directions remain for future research: (1) the fusion mechanism of private and shared features; (2) How to represent meta-data of fake news better to integrate into inputs."
],
[
"The research work is supported by “the World-Class Universities(Disciplines) and the Characteristic Development Guidance Funds for the Central Universities\"(PY3A022), Shenzhen Science and Technology Project(JCYJ20180306170836595), the National Natural Science Fund of China (No.F020807), Ministry of Education Fund Project “Cloud Number Integration Science and Education Innovation\" (No.2017B00030), Basic Scientific Research Operating Expenses of Central Universities (No.ZDYF2017006)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"Method ::: Input Embeddings",
"Method ::: Shared-private Feature Extractor",
"Method ::: Selected Sharing Layer",
"Method ::: The Output Layer",
"Experiments ::: Datasets and Evaluation Metrics",
"Experiments ::: Settings",
"Experiments ::: Performance Evaluation ::: Baselines",
"Experiments ::: Performance Evaluation ::: Compared with State-of-the-art Methods",
"Experiments ::: Discussions ::: Model Ablation",
"Experiments ::: Discussions ::: Error Analysis",
"Experiments ::: Case Study",
"Experiments ::: Case Study ::: The Visualization of Shared Features Learned from Two Tasks",
"Experiments ::: Case Study ::: The Visualization of Different Features Learned from Different Cells",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"33a31de95a57e8394cba1cf7905f746c2e9af207",
"7b97652ae8ca84bb16214d0a13fe4fcf11f35922",
"d1a14d5c4e894b4167da3072f7cba9a71e051b4c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 4: Typical tokens obtained by different layers of the sifted multi-task learning method. In our proposed method, typical tokens are captured by shared layer (SL), selected sharing layer for fake news detection (SSLFND), selected sharing layer for stance detection (SSL-SD), private layer for fake news detection (PL-FND), and private layer for stance detection (PL-SD) respectively. A column of the same color represents the distribution of one token in different layers, while the last two columns denote unique tokens captured by different layers."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Typical tokens obtained by different layers of the sifted multi-task learning method. In our proposed method, typical tokens are captured by shared layer (SL), selected sharing layer for fake news detection (SSLFND), selected sharing layer for stance detection (SSL-SD), private layer for fake news detection (PL-FND), and private layer for stance detection (PL-SD) respectively. A column of the same color represents the distribution of one token in different layers, while the last two columns denote unique tokens captured by different layers."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0bd10fa6cab5a049289823bae7462330bd490a23",
"6607b89163cc6f4553ad07646a48919d6e61889f",
"858c9d75cb7687e45813c4c9f52e2fd4c99bb87f"
],
"answer": [
{
"evidence": [
"Parameters - hyper-parameters configurations of our model: for each task, we strictly turn all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"extractive_spans": [],
"free_form_answer": "size of word embeddings is 200, size of position embedding is 100, the number of attention heads in transformer block is 6, the number of attention block Is 2, dropout of multi-head attention is 0.7, minibatch size is 64, the initiall learning rate is .001. In fake news detection, the dropout rate is 0.3 and lambda is 0.6.",
"highlighted_evidence": [
"Parameters - hyper-parameters configurations of our model: for each task, we strictly turn all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Parameters - hyper-parameters configurations of our model: for each task, we strictly turn all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"extractive_spans": [],
"free_form_answer": "The sizes of word embeddings and position embeddings are set to 200 and 100, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7, the minibatch size is 64, the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection.",
"highlighted_evidence": [
" The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Parameters - hyper-parameters configurations of our model: for each task, we strictly turn all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"extractive_spans": [],
"free_form_answer": "Size of word embeddings is 200, size of position embeddings is 100, 6 attention heads and 2 blocks in encoder, dropout in multi-head attention is 0.7, minibatch size is 64, initial learning rate is 0.001, dropout rate is 0.3, lambda is 0.6.",
"highlighted_evidence": [
"The sizes of word embeddings and position embeddings are set to 200 and 100. In transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\\lambda $ to 0.6 for fake news detection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"d9d333df2f48f3a64d2e40e8ccee5801c3bf9680",
"dc7ead2112e6e573ae1fbfd3b01d7adb1cd078a8"
],
"answer": [
{
"evidence": [
"There is an effective and novel way to improve the performance of fake news detection combined with stance analysis, which is to build multi-task learning models to jointly train both tasks BIBREF13, BIBREF14, BIBREF15. These approaches model information sharing and representation reinforcement between the two tasks, which expands valuable features for their respective tasks. However, prominent drawback to these methods and even typical multi-task learning methods, like the shared-private model, is that the shared features in the shared layer are equally sent to their respective tasks without filtering, which causes that some useless and even adverse features are mixed in different tasks, as shown in Figure FIGREF2(a). By that the network would be confused by these features, interfering effective sharing, and even mislead the predictions."
],
"extractive_spans": [
"shared features in the shared layer are equally sent to their respective tasks without filtering"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, prominent drawback to these methods and even typical multi-task learning methods, like the shared-private model, is that the shared features in the shared layer are equally sent to their respective tasks without filtering, which causes that some useless and even adverse features are mixed in different tasks, as shown in Figure FIGREF2(a)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To address the above problems, we design a sifted multi-task learning model with filtering mechanism (Figure FIGREF2(b)) to detect fake news by joining stance detection task. Specifically, we introduce a selected sharing layer into each task after the shared layer of the model for filtering shared features. The selected sharing layer composes of two cells: gated sharing cell for discarding useless features and attention sharing cell for focusing on features that are conducive to their respective tasks. Besides, to better capture long-range dependencies and improve the parallelism of the model, we apply transformer encoder module BIBREF16 to our model for encoding input representations of both tasks. Experimental results reveal that the proposed model outperforms the compared methods and gains new benchmarks."
],
"extractive_spans": [
"transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Besides, to better capture long-range dependencies and improve the parallelism of the model, we apply transformer encoder module BIBREF16 to our model for encoding input representations of both tasks. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"348cd472744ac4df7a9f796a8989a730383cafd6",
"569bc90853e3fb60a6ab6be26788be2eff3c6126"
],
"answer": [
{
"evidence": [
"We propose a novel sifted multi-task learning method on the ground of shared-private model to jointly train the tasks of stance detection and fake news detection, filter original outputs of shared layer by a selected sharing layer. Our model consists of a 4-level hierarchical structure, as shown in Figure FIGREF6. Next, we will describe each level of our proposed model in detail."
],
"extractive_spans": [],
"free_form_answer": "The selected sharing layer is trained jointly on the tasks of stance detection and fake news detection",
"highlighted_evidence": [
"We propose a novel sifted multi-task learning method on the ground of shared-private model to jointly train the tasks of stance detection and fake news detection, filter original outputs of shared layer by a selected sharing layer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We propose a novel sifted multi-task learning method on the ground of shared-private model to jointly train the tasks of stance detection and fake news detection, filter original outputs of shared layer by a selected sharing layer. Our model consists of a 4-level hierarchical structure, as shown in Figure FIGREF6. Next, we will describe each level of our proposed model in detail."
],
"extractive_spans": [],
"free_form_answer": "By jointly training the tasks of stance and fake news detection.",
"highlighted_evidence": [
"We propose a novel sifted multi-task learning method on the ground of shared-private model to jointly train the tasks of stance detection and fake news detection, filter original outputs of shared layer by a selected sharing layer. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What are the hyperparameter setting of the MTL model?",
"What architecture does the rest of the multi-task learning setup use?",
"How is the selected sharing layer trained?"
],
"question_id": [
"2f142cd11731d29d0c3fa426e26ef80d997862e0",
"ce23849e9e9a22626965f1ca8ca948a5c87280e9",
"d9a45fea8539aac01dec01f29b7d04b44b9c2ca6",
"246e924017c48fa1f069361c44133fdf4f0386e1"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Two schemes for sharing features among tasks. Red circles and blue boxes represent the taskspecific features, while the red and blue triangles mean shared features that benefit Task A and Task B, respectively.",
"Figure 2: The architecture of the sifted multi-task learning method based on shared-private model. In particular, the two blue boxes represent selected sharing layers of stance detection and fake news detection and the red box denotes shared layer between tasks.",
"Figure 3: The details of selected sharing layer.",
"Table 1: Statistics of the two datasets.",
"Table 3: Ablation analysis of the sifted multi-task learning method.",
"Figure 4: Typical tokens obtained by different layers of the sifted multi-task learning method. In our proposed method, typical tokens are captured by shared layer (SL), selected sharing layer for fake news detection (SSLFND), selected sharing layer for stance detection (SSL-SD), private layer for fake news detection (PL-FND), and private layer for stance detection (PL-SD) respectively. A column of the same color represents the distribution of one token in different layers, while the last two columns denote unique tokens captured by different layers.",
"Figure 5: (a) In fake news detection task, the GSC line denotes the weight values gfake of gated sharing cell, while the SL line represents feature weights of Hshared in the shared layer. Two horizontal lines give two different borders to determine the importance of tokens. (b) The red and green heatmaps describe the neuron behaviours of attention sharing cell Afake in fake news detection task and Astance in stance detection task, respectively."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Table1-1.png",
"7-Table3-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"What are the hyperparameter setting of the MTL model?",
"How is the selected sharing layer trained?"
] | [
[
"1909.01720-Experiments ::: Settings-1"
],
[
"1909.01720-Method-0"
]
] | [
"Size of word embeddings is 200, size of position embeddings is 100, 6 attention heads and 2 blocks in encoder, dropout in multi-head attention is 0.7, minibatch size is 64, initial learning rate is 0.001, dropout rate is 0.3, lambda is 0.6.",
"By jointly training the tasks of stance and fake news detection."
] | 166 |
1908.10090 | On NMT Search Errors and Model Errors: Cat Got Your Tongue? | We report on search errors and model errors in neural machine translation (NMT). We present an exact inference procedure for neural sequence models based on a combination of beam search and depth-first search. We use our exact search to find the global best model scores under a Transformer base model for the entire WMT15 English-German test set. Surprisingly, beam search fails to find these global best model scores in most cases, even with a very large beam size of 100. For more than 50% of the sentences, the model in fact assigns its global best score to the empty translation, revealing a massive failure of neural models in properly accounting for adequacy. We show by constraining search with a minimum translation length that at the root of the problem of empty translations lies an inherent bias towards shorter translations. We conclude that vanilla NMT in its current form requires just the right amount of beam search errors, which, from a modelling perspective, is a highly unsatisfactory conclusion indeed, as the model often prefers an empty translation. | {
"paragraphs": [
[
"[0]Now at Google.",
"Neural machine translation BIBREF0 , BIBREF1 , BIBREF2 assigns the probability INLINEFORM0 of a translation INLINEFORM1 of length INLINEFORM2 over the target language vocabulary INLINEFORM3 for a source sentence INLINEFORM4 of length INLINEFORM5 over the source language vocabulary INLINEFORM6 via a left-to-right factorization using the chain rule: DISPLAYFORM0 ",
"The task of finding the most likely translation INLINEFORM0 for a given source sentence INLINEFORM1 is known as the decoding or inference problem: DISPLAYFORM0 ",
"The NMT search space is vast as it grows exponentially with the sequence length. For example, for a common vocabulary size of INLINEFORM0 , there are already more possible translations with 20 words or less than atoms in the observable universe ( INLINEFORM1 ). Thus, complete enumeration of the search space is impossible. The size of the NMT search space is perhaps the main reason why – besides some preliminary studies BIBREF3 , BIBREF4 , BIBREF5 – analyzing search errors in NMT has received only limited attention. To the best of our knowledge, none of the previous studies were able to quantify the number of search errors in unconstrained NMT due to the lack of an exact inference scheme that – although too slow for practical MT – guarantees to find the global best model score for analysis purposes.",
"[t!] BeamSearch INLINEFORM0 [1] INLINEFORM1 : Source sentence, INLINEFORM2 : Beam size INLINEFORM3 Initialize with empty translation prefix and zero score INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 Hypotheses ending with INLINEFORM8 are not expanded INLINEFORM9 Add all possible continuations INLINEFORM10 Select INLINEFORM11 -best INLINEFORM12 INLINEFORM13 INLINEFORM14 ",
"[t!] DFS INLINEFORM0 [1] INLINEFORM1 : Source sentence",
" INLINEFORM0 : Translation prefix (default: INLINEFORM1 )",
" INLINEFORM0 : INLINEFORM1 (default: INLINEFORM2 )",
" INLINEFORM0 : Lower bound INLINEFORM1 INLINEFORM2 Trigger INLINEFORM3 update INLINEFORM4 Initialize INLINEFORM5 with dummy value INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 INLINEFORM12 ",
"In this work we propose such an exact decoding algorithm for NMT that exploits the monotonicity of NMT scores: Since the conditional log-probabilities in Eq. EQREF1 are always negative, partial hypotheses can be safely discarded once their score drops below the log-probability of any complete hypothesis. Using our exact inference scheme we show that beam search does not find the global best model score for more than half of the sentences. However, these search errors, paradoxically, often prevent the decoder from suffering from a frequent but very serious model error in NMT, namely that the empty hypothesis often gets the global best model score. Our findings suggest a reassessment of the amount of model and search errors in NMT, and we hope that they will spark new efforts in improving NMT modeling capabilities, especially in terms of adequacy."
],
[
"Decoding in NMT (Eq. EQREF2 ) is usually tackled with beam search, which is a time-synchronous approximate search algorithm that builds up hypotheses from left to right. A formal algorithm description is given in Alg. SECREF1 . Beam search maintains a set of active hypotheses INLINEFORM0 . In each iteration, all hypotheses in INLINEFORM1 that do not end with the end-of-sentence symbol INLINEFORM2 are expanded and collected in INLINEFORM3 . The best INLINEFORM4 items in INLINEFORM5 constitute the set of active hypotheses INLINEFORM6 in the next iteration (line 11 in Alg. SECREF1 ), where INLINEFORM7 is the beam size. The algorithm terminates when the best hypothesis in INLINEFORM8 ends with the end-of-sentence symbol INLINEFORM9 . Hypotheses are called complete if they end with INLINEFORM10 and partial if they do not.",
"Beam search is the ubiquitous decoding algorithm for NMT, but it is prone to search errors as the number of active hypotheses is limited by INLINEFORM0 . In particular, beam search never compares partial hypotheses of different lengths with each other. As we will see in later sections, this is one of the main sources of search errors. However, in many cases, the model score found by beam search is a reasonable approximation to the global best model score. Let INLINEFORM1 be the model score found by beam search ( INLINEFORM2 in line 12, Alg. SECREF1 ), which is a lower bound on the global best model score: INLINEFORM3 . Furthermore, since the conditionals INLINEFORM4 in Eq. EQREF1 are log-probabilities and thus non-positive, expanding a partial hypothesis is guaranteed to result in a lower model score, i.e.: DISPLAYFORM0 ",
"Consequently, when we are interested in the global best hypothesis INLINEFORM0 , we only need to consider partial hypotheses with scores greater than INLINEFORM1 . In our exact decoding scheme we traverse the NMT search space in a depth-first order, but cut off branches along which the accumulated model score falls below INLINEFORM2 . During depth-first search (DFS), we update INLINEFORM3 when we find a better complete hypothesis. Alg. SECREF1 specifies the DFS algorithm formally. An important detail is that elements in INLINEFORM4 are ordered such that the loop in line 5 considers the INLINEFORM5 token first. This often updates INLINEFORM6 early on and leads to better pruning in subsequent recursive calls."
],
[
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. Our pre-processing is as described by BIBREF15 and includes joint subword segmentation using byte pair encoding BIBREF16 with 32K merges. We report cased BLEU scores. An open-source implementation of our exact inference scheme is available in the SGNMT decoder BIBREF17 , BIBREF4 .",
"Our main result is shown in Tab. TABREF9 . Greedy and beam search both achieve reasonable BLEU scores but rely on a high number of search errors to not be affected by a serious NMT model error: For 51.8% of the sentences, NMT assigns the global best model score to the empty translation, i.e. a single INLINEFORM0 token. Fig. FIGREF10 visualizes the relationship between BLEU and the number of search errors. Large beam sizes reduce the number of search errors, but the BLEU score drops because translations are too short. Even a large beam size of 100 produces 53.62% search errors. Fig. FIGREF11 shows that beam search effectively reduces search errors with respect to greedy decoding to some degree, but is ineffective in reducing search errors even further. For example, Beam-10 yields 15.9% fewer search errors (absolute) than greedy decoding (57.68% vs. 73.58%), but Beam-100 improves search only slightly (53.62% search errors) despite being 10 times slower than beam-10.",
"The problem of empty translations is also visible in the histogram over length ratios (Fig. FIGREF13 ). Beam search – although still slightly too short – roughly follows the reference distribution, but exact search has an isolated peak in INLINEFORM0 from the empty translations.",
"Tab. TABREF14 demonstrates that the problems of search errors and empty translations are not specific to the Transformer base model and also occur with other architectures. Even a highly optimized Transformer Big model from our WMT18 shared task submission BIBREF15 has 25.8% empty translations.",
"Fig. FIGREF15 shows that long source sentences are more affected by both beam search errors and the problem of empty translations. The global best translation is empty for almost all sentences longer than 40 tokens (green curve). Even without sentences where the model prefers the empty translation, a large amount of search errors remain (blue curve)."
],
[
"To find out more about the length deficiency we constrained exact search to certain translation lengths. Constraining search that way increases the run time as the INLINEFORM0 -bounds are lower. Therefore, all results in this section are conducted on only a subset of the test set to keep the runtime under control. We first constrained search to translations longer than 0.25 times the source sentence length and thus excluded the empty translation from the search space. Although this mitigates the problem slightly (Fig. FIGREF16 ), it still results in a peak in the INLINEFORM1 cluster. This suggests that the problem of empty translations is the consequence of an inherent model bias towards shorter hypotheses and cannot be fixed with a length constraint.",
"We then constrained exact search to either the length of the best Beam-10 hypothesis or the reference length. Tab. TABREF18 shows that exact search constrained to the Beam-10 hypothesis length does not improve over beam search, suggesting that any search errors between beam search score and global best score for that length are insignificant enough so as not to affect the BLEU score. The oracle experiment in which we constrained exact search to the correct reference length (last row in Tab. TABREF18 ) improved the BLEU score by 0.9 points.",
"A popular method to counter the length bias in NMT is length normalization BIBREF6 , BIBREF7 which simply divides the sentence score by the sentence length. We can find the global best translations under length normalization by generalizing our exact inference scheme to length dependent lower bounds INLINEFORM0 . The generalized scheme finds the best model scores for each translation length INLINEFORM1 in a certain range (e.g. zero to 1.2 times the source sentence length). The initial lower bounds are derived from the Beam-10 hypothesis INLINEFORM2 as follows: DISPLAYFORM0 ",
"Exact search under length normalization does not suffer from the length deficiency anymore (last row in Tab. TABREF19 ), but it is not able to match our best BLEU score under Beam-10 search. This suggests that while length normalization biases search towards translations of roughly the correct length, it does not fix the fundamental modelling problem."
],
[
"Other researchers have also noted that large beam sizes yield shorter translations BIBREF19 . BIBREF20 argue that this model error is due to the locally normalized maximum likelihood training objective in NMT that underestimates the margin between the correct translation and shorter ones if trained with regularization and finite data. A similar argument was made by BIBREF10 who pointed out the difficulty for a locally normalized model to estimate the “budget” for all remaining (longer) translations. BIBREF21 demonstrated that NMT models are often poorly calibrated, and that that can cause the length deficiency. BIBREF5 argued that uncertainty caused by noisy training data may play a role. BIBREF22 showed that the consistent best string problem for RNNs is decidable. We provide an alternative DFS algorithm that relies on the monotonic nature of model scores rather than consistency, and that often converges in practice.",
"To the best of our knowledge, this is the first work that reports the exact number of search errors in NMT as prior work often relied on approximations, e.g. via INLINEFORM0 -best lists BIBREF3 or constraints BIBREF4 ."
],
[
"We have presented an exact inference scheme for NMT. Exact search may not be practical, but it allowed us to discover deficiencies in widely used NMT models. We linked deteriorating BLEU scores of large beams with the reduction of search errors and showed that the model often prefers the empty translation – an evidence of NMT's failure to properly model adequacy. Our investigations into length constrained exact search suggested that simple heuristics like length normalization are unlikely to remedy the problem satisfactorily."
],
[
"This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) grant EP/L027623/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service funded by EPSRC Tier-2 capital grant EP/P020259/1."
]
],
"section_name": [
"Introduction",
"Exact Inference for Neural Models",
"Results without Length Constraints",
"Results with Length Constraints",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8924f75702457022bb04cde00573e44159fd513e",
"ef49561e9028330b5db31277bcc9c6438cb38698"
],
"answer": [
{
"evidence": [
"To find out more about the length deficiency we constrained exact search to certain translation lengths. Constraining search that way increases the run time as the INLINEFORM0 -bounds are lower. Therefore, all results in this section are conducted on only a subset of the test set to keep the runtime under control. We first constrained search to translations longer than 0.25 times the source sentence length and thus excluded the empty translation from the search space. Although this mitigates the problem slightly (Fig. FIGREF16 ), it still results in a peak in the INLINEFORM1 cluster. This suggests that the problem of empty translations is the consequence of an inherent model bias towards shorter hypotheses and cannot be fixed with a length constraint.",
"We then constrained exact search to either the length of the best Beam-10 hypothesis or the reference length. Tab. TABREF18 shows that exact search constrained to the Beam-10 hypothesis length does not improve over beam search, suggesting that any search errors between beam search score and global best score for that length are insignificant enough so as not to affect the BLEU score. The oracle experiment in which we constrained exact search to the correct reference length (last row in Tab. TABREF18 ) improved the BLEU score by 0.9 points."
],
"extractive_spans": [
"search to translations longer than 0.25 times the source sentence length",
"search to either the length of the best Beam-10 hypothesis or the reference length"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first constrained search to translations longer than 0.25 times the source sentence length and thus excluded the empty translation from the search space.",
"We then constrained exact search to either the length of the best Beam-10 hypothesis or the reference length."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To find out more about the length deficiency we constrained exact search to certain translation lengths. Constraining search that way increases the run time as the INLINEFORM0 -bounds are lower. Therefore, all results in this section are conducted on only a subset of the test set to keep the runtime under control. We first constrained search to translations longer than 0.25 times the source sentence length and thus excluded the empty translation from the search space. Although this mitigates the problem slightly (Fig. FIGREF16 ), it still results in a peak in the INLINEFORM1 cluster. This suggests that the problem of empty translations is the consequence of an inherent model bias towards shorter hypotheses and cannot be fixed with a length constraint."
],
"extractive_spans": [],
"free_form_answer": "They set translation length longer than minimum 0.25 times the source sentence length",
"highlighted_evidence": [
"We first constrained search to translations longer than 0.25 times the source sentence length and thus excluded the empty translation from the search space. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"109e45ebea249dac66ce77dca41cb67cb4a2e906",
"d27df10debeb09e2157e8fbd73dd0fa47128c0ed",
"fc0b52d77c8da1f60f8c6d3091a7aafb16504e8c"
],
"answer": [
{
"evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. Our pre-processing is as described by BIBREF15 and includes joint subword segmentation using byte pair encoding BIBREF16 with 32K merges. We report cased BLEU scores. An open-source implementation of our exact inference scheme is available in the SGNMT decoder BIBREF17 , BIBREF4 ."
],
"extractive_spans": [
"2,169 sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. Our pre-processing is as described by BIBREF15 and includes joint subword segmentation using byte pair encoding BIBREF16 with 32K merges. We report cased BLEU scores. An open-source implementation of our exact inference scheme is available in the SGNMT decoder BIBREF17 , BIBREF4 ."
],
"extractive_spans": [
"2,169 sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl. Our pre-processing is as described by BIBREF15 and includes joint subword segmentation using byte pair encoding BIBREF16 with 32K merges. We report cased BLEU scores. An open-source implementation of our exact inference scheme is available in the SGNMT decoder BIBREF17 , BIBREF4 ."
],
"extractive_spans": [
"2,169 sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct all our experiments in this section on the entire English-German WMT news-test2015 test set (2,169 sentences) with a Transformer base BIBREF13 model trained with Tensor2Tensor BIBREF14 on parallel WMT18 data excluding ParaCrawl."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what were the length constraints they set?",
"what is the test set size?"
],
"question_id": [
"96459b02efa82993a0b413530ed0b517c6633eea",
"6c1614991647705265fb348d28ba60dd3b63b799"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: NMT with exact inference. In the absence of search errors, NMT often prefers the empty translation, causing a dramatic drop in length ratio and BLEU.",
"Figure 1: BLEU over the percentage of search errors. Large beam sizes yield fewer search errors but the BLEU score suffers from a length ratio below 1.",
"Figure 2: Even large beam sizes produce a large number of search errors.",
"Figure 3: Histogram over target/source length ratios.",
"Figure 5: Histogram over length ratios with minimum translation length constraint of 0.25 times the source sentence length. Experiment conducted on 73.0% of the test set.",
"Table 2: ∗: The recurrent LSTM, the convolutional SliceNet (Kaiser et al., 2017), and the Transformer-Big systems are strong baselines from a WMT’18 shared task submission (Stahlberg et al., 2018a).",
"Figure 4: Number of search errors under Beam-10 and empty global bests over the source sentence length.",
"Table 3: Exact search under length constraints. Experiment conducted on 48.3% of the test set.",
"Table 4: Length normalization fixes translation lengths, but prevents exact search from matching the BLEU score of Beam-10. Experiment conducted on 48.3% of the test set."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure5-1.png",
"4-Table2-1.png",
"4-Figure4-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"what were the length constraints they set?"
] | [
[
"1908.10090-Results with Length Constraints-1",
"1908.10090-Results with Length Constraints-0"
]
] | [
"They set translation length longer than minimum 0.25 times the source sentence length"
] | 167 |
1910.07154 | Unsupervised Question Answering for Fact-Checking | Recent Deep Learning (DL) models have succeeded in achieving human-level accuracy on various natural language tasks such as question-answering, natural language inference (NLI), and textual entailment. These tasks not only require the contextual knowledge but also the reasoning abilities to be solved efficiently. In this paper, we propose an unsupervised question-answering based approach for a similar task, fact-checking. We transform the FEVER dataset into a Cloze-task by masking named entities provided in the claims. To predict the answer token, we utilize pre-trained Bidirectional Encoder Representations from Transformers (BERT). The classifier computes label based on the correctly answered questions and a threshold. Currently, the classifier is able to classify the claims as "SUPPORTS" and "MANUAL_REVIEW". This approach achieves a label accuracy of 80.2% on the development set and 80.25% on the test set of the transformed dataset. | {
"paragraphs": [
[
"Every day textual information is being added/updated on Wikipedia, as well as other social media platforms like Facebook, Twitter, etc. These platforms receive a huge amount of unverified textual data from all its users such as News Channels, Bloggers, Journalists, Field-Experts which ought to be verified before other users start consuming it. This information boom has increased the demand of information verification also known as Fact Checking. Apart from the encyclopedia and other platforms, domains like scientific publications and e-commerce also require information verification for reliability purposes. Generally, Wikipedia authors, bloggers, journalists and scientists provide references to support their claims. Providing referenced text against the claims makes the fact checking task a little easier as the verification system no longer needs to search for the relevant documents.",
"Wikipedia manages to verify all this new information with a number of human reviewers. Manual review processes introduce delays in publishing and is not a well scalable approach. To address this issue, researchers have launched relevant challenges, such as the Fake News Challenge (BIBREF0), Fact Extraction and VERification (FEVER) (BIBREF1) challenge along with the datasets. Moreover, Thorne and Vlachos (BIBREF2) released a survey on the current models for automated fact-checking. FEVER is the largest dataset and contains around 185k claims from the corpus of 5.4M Wikipedia articles. The claims are labeled as “SUPPORTS”, “REFUTES”, or “NOT ENOUGH INFO”, based on the evidence set.",
"In this paper, we propose an unsupervised question-answering based approach for solving the fact-checking problem. This approach is inspired from the memory-based reading comprehension task that humans perform at an early age. As we know that kids in schools, first read and learn the syllabus content so that they can answer the questions in the exam. Similarly, our model learns a language model and linguistics features in unsupervised fashion from the provided Wikipedia pages.",
"To transform the FEVER dataset into the above-mentioned task, we first generate the questions from the claims. In literature, there are majorly two types of Question Generation systems: Rule-based and Neural Question Generation (NQG) model based. Ali et al. (BIBREF3) proposed a rule-based pipeline to automate the question generation using POS (Part-of-speech) tagging and Named Entity Recognition (NER) tagging from the sentences. Recently, many NQG models have been introduced to generate questions in natural language. Serban et al. (BIBREF4) achieved better performance for question generation utilizing (passage, question, answer) triplets as training data and an encoder-decoder based architecture as their learning model.",
"Du et al. (BIBREF5) introduced a sequence-to-sequence model with an attention mechanism, outperforming rule-base question generation systems. Although the models proposed in (BIBREF6; BIBREF7) are effective, they require a passage to generate the plausible questions which is not readily available in the FEVER dataset. To resolve the issues and to keep the system simple but effective, we chose to generate questions similar to a Cloze-task or masked language modeling task. Such a task makes the problem more tractable as the masked entities are already known (i.e. named entities) and tight as there is only one correct answer for a given question. Later when the answers are generated, due to the question generation process, it becomes very easy to identify the correct answers.",
"We use the BERT's (Bidirectional Encoder Representations from Transformers) (BIBREF8) masked language model, that is pre-trained on Wikipedia articles for predicting the masked entities. Currently, neither the claim verification process nor the question generation process mandates explicit reasoning. For the same reason, it is difficult to put “REFUTES” or “NOT ENOUGH INFO” labels. To resolve this issue, we classify the unsupported claims as “MANUAL_REVIEW” instead of labeling them as “NOT ENOUGH INFO” or “REFUTES”.",
"In the literature, the shared task has been tackled using pipeline-based supervised models (BIBREF9; BIBREF10; BIBREF11). To our knowledge, only BIBREF10 has provided the confusion matrix for each of the labels for their supervised system. For the same reason, we are only providing the comparison of the label accuracy on the “SUPPORTS” label in the results section."
],
[
"In this section, we explain the design and all the underlying methods that our system has adopted. Our system is a pipeline consisting of three stages: (1) Question Generation, (2) Question Answering, (3) Label Classification. The question generation stage attempts to convert the claims into appropriate questions and answers. It generates questions similar to a Cloze-task or masked language modeling task where the named entities are masked with a blank. Question Answering stage predicts the masked blanks in an unsupervised manner. The respective predictions are then compared with the original answers and exported into a file for label classification. The label classifier calculates the predicted label based on a threshold."
],
[
"The claims generally feature information about one or more entities. These entities can be of many types such as PERSON, CITY, DATE. Since the entities can be considered as the content words for the claim, we utilize these entities to generate the questions. Although function words such as conjunctions and prepositions form relationship between entities in the claims, we currently do not make use of such function words to avoid generating complex questions. The types of entities in a sentence can be recognized by using Stanford CoreNLP (BIBREF12) NER tagger.",
"In our case, FEVER claims are derived from Wikipedia. We first collect all the claims from the FEVER dataset along with “id”, “label” and “verifiable” fields. We don't perform any normalization on the claims such as lowercasing, transforming the spaces to underscore or parenthesis to special characters as it may decrease the accuracy of the NER tagger. These claims are then processed by the NER tagger to identify the named entities and their type. The named entities are then used to generate the questions by masking the entities for the subsequent stage.",
"This process not only transforms the dataset but also transforms the task into a Cloze-task or masked language modeling task. Although the original masked language modeling task masks some of the tokens randomly, here we mask the named entities for generating the questions."
],
[
"Originally inspired by the Cloze-task and developed to learn to predict the masked entities as well as the next sentence, BERT creates a deep bidirectional transformer model for the predictions. Since the FEVER claims are masked to generate the questions, we use BERT to tokenize the claims. We observed that the BERT tokenizer sometimes fails to tokenize the named entities correctly (e.g. Named entity “Taran” was tokenized as “Tara”, “##n”). This is due to the insufficient vocabulary used while training the WordPiece tokenizer.",
"To resolve this, we use Spacy Tokenizer whenever the WordPiece Tokenizer fails. Once the claim is tokenized, we use the PyTorch Implementation of the BERT model (BertForMaskedLM model) to predict the vocabulary index of the masked token. The predicted vocabulary index is then converted to the actual token. We compare the predicted token against the actual answer to calculate the label accuracy based on the classification threshold."
],
[
"In this stage, we compute the final label based on the correctness score of the predictions that we received from the previous stage. The correctness score ($s$) is computed as:",
"where $n_c$ indicates the number of correct questions, and $N$ is the total number of questions generated for the given claim. The label is assigned based on the correctness score ($s$) and the derived threshold ($\\phi $) as:",
"Here, the classification threshold ($\\phi $) is derived empirically based on the precision-recall curve."
],
[
"We utilize standard pre-trained BERT-Base-uncased model configurations as given below:",
"Layers: 12",
"Hidden Units: 768",
"Attention heads: 12",
"Trainable parameters: 110M",
"We fine-tune our model (BERT) on the masked language modeling task on the wiki-text provided along with the FEVER dataset for 2 epochs.",
"Note that Stanford CoreNLP NER tagger and the BERT model are the same for all the experiments and all the sets (development set, test set, training set). We use the same PyTorch library mentioned in Section 2.2 for the fine-tuning as well."
],
[
"For the subtask of question generation, the results in Table TABREF3 show that the system is able to generate questions given a claim with considerably good accuracy. The conversion accuracy is defined as the ratio of the number of claims in which the named entities are extracted to the number of claims. The results also support our assumption that the claims generally feature information about one or more entities.",
"Table TABREF16 shows the performance of our Fact Checking system on the “SUPPORTS” label, the output of our system. We compare the results against two different classification thresholds. Table TABREF3 shows that on an average there are 3 questions generated per claim. Here, $\\phi $ = 0.76 suggests that at least 3 out of the 4 questions have to be answered correctly while $\\phi $ = 0.67 suggests that at least 2 out of the 3 questions has to be answered correctly for the claim to be classified as “SUPPORTS”.",
"If only 1 question is generated, then it has to be answered correctly for the claim to be classified as “SUPPORTS” in case of both the thresholds.",
"In contrast to the results reported in Table TABREF16, here we consider $\\phi $ = 0.76 to be a better classification threshold as it improvises over False Positives considerably over the entire dataset.",
"Although our unsupervised model doesn't support all the labels, to show the effectiveness of the approach, we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF. Results from Table TABREF17 suggests that our approach is comparable to HexaF for $\\phi $ = 0.76."
],
[
"The typical errors that we observed for the question generation system are due to the known limitations of the NER tagger. Most of the claims that the system failed to generate the questions from contain entity types for which the tagger is not trained.",
"For instance, the claim “A View to a Kill is an action movie.” has a movie title (i.e. A View to a Kill) and a movie genre (i.e. action) but Stanford CoreNLP NER tagger is not trained to identify such type of entities."
],
[
"We describe the most recurrent failure cases of our answering model in the description below.",
"Limitations of Vocabulary. Names like “Burnaby” or “Nikolaj” were not part of the original vocabulary while pre-training the BERT model, which makes it difficult to predict them using the same model. This was one of the most recurring error types.",
"Limitations of Tokenizer. The WordPiece tokenizer splits the token into multiple tokens. E.g. “Taran” into “Tara”, “##n”. In such cases, the answering system predicts the first token only which would be a substring of the correct answer. As we don't explicitly put a rule to avoid such cases, they are considered as incorrect answers."
],
[
"In this paper, we presented a transformer-based unsupervised question-answering pipeline to solve the fact checking task. The pipeline consisted of three stages: (1) Question Generation (similar to a Cloze-task), (2) Question Answering, (3) Label Classification. We use Stanford CoreNLP NER tagger to convert the claim into a Cloze-task by masking the named entities. The Question Generation task achieves almost 90% accuracy in transforming the FEVER dataset into a Cloze-task. To answer the questions generated, we utilize masked language modeling approach from the BERT model. We could achieve 80.2% label accuracy on “SUPPORTS” label. From the results, we conclude that it is possible to verify the facts with the right kind of factoid questions."
],
[
"To date, our approach only generates two labels “SUPPORTS” and “MANUAL_REVIEW”. We are working on extending this work to also generate “REFUTED” by improving our question generation framework. We will also work on generating questions using recent Neural Question Generation approaches. Later, to achieve better accuracy for tokenizing as well as answering, we plan to train the WordPiece Tokenizer from scratch."
],
[
"The authors thank Dr. Amit Nanavati and Dr. Ratnik Gandhi for their insightful comments, suggestions, and feedback. This research was supported by the TensorFlow Research Cloud (TFRC) program."
]
],
"section_name": [
"Introduction",
"System Description",
"System Description ::: Question Generation",
"System Description ::: Question Answering",
"System Description ::: Label Classification",
"System Description ::: Model and Training details",
"Results",
"Error Analysis ::: Question Generation",
"Error Analysis ::: Question Answering",
"Conclusion",
"Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"15a7ac8243d23d11dd4de3b95b70b720ab024b7e",
"330dc2ac943d617fd7689f2ac534a7f163b76c8e",
"70eb1fc90dd19523c03a0f06f40bcc52234e659b"
],
"answer": [
{
"evidence": [
"Although our unsupervised model doesn't support all the labels, to show the effectiveness of the approach, we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF. Results from Table TABREF17 suggests that our approach is comparable to HexaF for $\\phi $ = 0.76."
],
"extractive_spans": [
"we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although our unsupervised model doesn't support all the labels, to show the effectiveness of the approach, we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although our unsupervised model doesn't support all the labels, to show the effectiveness of the approach, we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF. Results from Table TABREF17 suggests that our approach is comparable to HexaF for $\\phi $ = 0.76."
],
"extractive_spans": [
"HexaF"
],
"free_form_answer": "",
"highlighted_evidence": [
"Although our unsupervised model doesn't support all the labels, to show the effectiveness of the approach, we compare the label accuracy of “SUPPORTS” label against a supervised approach – HexaF. Results from Table TABREF17 suggests that our approach is comparable to HexaF for $\\phi $ = 0.76."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Comparison of the Label accuracy on Development set."
],
"extractive_spans": [],
"free_form_answer": "HexaF - UCL ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Comparison of the Label accuracy on Development set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"77c6a9a4aa54e6c4649504e16d47030bacb28bfe",
"992edd65018c5602cdb16685b1c2e7b4d2474b65"
],
"answer": [
{
"evidence": [
"Table TABREF16 shows the performance of our Fact Checking system on the “SUPPORTS” label, the output of our system. We compare the results against two different classification thresholds. Table TABREF3 shows that on an average there are 3 questions generated per claim. Here, $\\phi $ = 0.76 suggests that at least 3 out of the 4 questions have to be answered correctly while $\\phi $ = 0.67 suggests that at least 2 out of the 3 questions has to be answered correctly for the claim to be classified as “SUPPORTS”."
],
"extractive_spans": [
"0.76",
"0.67"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the results against two different classification thresholds.",
"Here, $\\phi $ = 0.76 suggests that at least 3 out of the 4 questions have to be answered correctly while $\\phi $ = 0.67 suggests that at least 2 out of the 3 questions has to be answered correctly for the claim to be classified as “SUPPORTS”."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF16 shows the performance of our Fact Checking system on the “SUPPORTS” label, the output of our system. We compare the results against two different classification thresholds. Table TABREF3 shows that on an average there are 3 questions generated per claim. Here, $\\phi $ = 0.76 suggests that at least 3 out of the 4 questions have to be answered correctly while $\\phi $ = 0.67 suggests that at least 2 out of the 3 questions has to be answered correctly for the claim to be classified as “SUPPORTS”."
],
"extractive_spans": [
"0.76 suggests that at least 3 out of the 4 questions have to be answered correctly",
"0.67 suggests that at least 2 out of the 3 questions has to be answered correctly"
],
"free_form_answer": "",
"highlighted_evidence": [
" We compare the results against two different classification thresholds. Table TABREF3 shows that on an average there are 3 questions generated per claim. Here, $\\phi $ = 0.76 suggests that at least 3 out of the 4 questions have to be answered correctly while $\\phi $ = 0.67 suggests that at least 2 out of the 3 questions has to be answered correctly for the claim to be classified as “SUPPORTS”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"15355b05088e9573d81f1dca5261a525cb24dd25",
"7649706b06c1f45f195cb6d8cc0f0b05c0c0e5c5",
"a7ddfe63efecfd7f716a42cec42520fe9b718f1e"
],
"answer": [
{
"evidence": [
"In our case, FEVER claims are derived from Wikipedia. We first collect all the claims from the FEVER dataset along with “id”, “label” and “verifiable” fields. We don't perform any normalization on the claims such as lowercasing, transforming the spaces to underscore or parenthesis to special characters as it may decrease the accuracy of the NER tagger. These claims are then processed by the NER tagger to identify the named entities and their type. The named entities are then used to generate the questions by masking the entities for the subsequent stage."
],
"extractive_spans": [
"The named entities are then used to generate the questions by masking the entities for the subsequent stage."
],
"free_form_answer": "",
"highlighted_evidence": [
"The named entities are then used to generate the questions by masking the entities for the subsequent stage.",
"The named entities are then used to generate the questions by masking the entities for the subsequent stage."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In this section, we explain the design and all the underlying methods that our system has adopted. Our system is a pipeline consisting of three stages: (1) Question Generation, (2) Question Answering, (3) Label Classification. The question generation stage attempts to convert the claims into appropriate questions and answers. It generates questions similar to a Cloze-task or masked language modeling task where the named entities are masked with a blank. Question Answering stage predicts the masked blanks in an unsupervised manner. The respective predictions are then compared with the original answers and exported into a file for label classification. The label classifier calculates the predicted label based on a threshold."
],
"extractive_spans": [
"similar to a Cloze-task or masked language modeling task where the named entities are masked with a blank"
],
"free_form_answer": "",
"highlighted_evidence": [
"It generates questions similar to a Cloze-task or masked language modeling task where the named entities are masked with a blank. Question Answering stage predicts the masked blanks in an unsupervised manner."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"12a9488206edcaec548b2e35d0e469b6c385d28b",
"f16c33f82c1130a349a6307637d109e3cf51476b"
],
"answer": [
{
"evidence": [
"Wikipedia manages to verify all this new information with a number of human reviewers. Manual review processes introduce delays in publishing and is not a well scalable approach. To address this issue, researchers have launched relevant challenges, such as the Fake News Challenge (BIBREF0), Fact Extraction and VERification (FEVER) (BIBREF1) challenge along with the datasets. Moreover, Thorne and Vlachos (BIBREF2) released a survey on the current models for automated fact-checking. FEVER is the largest dataset and contains around 185k claims from the corpus of 5.4M Wikipedia articles. The claims are labeled as “SUPPORTS”, “REFUTES”, or “NOT ENOUGH INFO”, based on the evidence set."
],
"extractive_spans": [
"around 185k claims from the corpus of 5.4M Wikipedia articles"
],
"free_form_answer": "",
"highlighted_evidence": [
"FEVER is the largest dataset and contains around 185k claims from the corpus of 5.4M Wikipedia articles."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Wikipedia manages to verify all this new information with a number of human reviewers. Manual review processes introduce delays in publishing and is not a well scalable approach. To address this issue, researchers have launched relevant challenges, such as the Fake News Challenge (BIBREF0), Fact Extraction and VERification (FEVER) (BIBREF1) challenge along with the datasets. Moreover, Thorne and Vlachos (BIBREF2) released a survey on the current models for automated fact-checking. FEVER is the largest dataset and contains around 185k claims from the corpus of 5.4M Wikipedia articles. The claims are labeled as “SUPPORTS”, “REFUTES”, or “NOT ENOUGH INFO”, based on the evidence set."
],
"extractive_spans": [
"185k claims"
],
"free_form_answer": "",
"highlighted_evidence": [
"FEVER is the largest dataset and contains around 185k claims from the corpus of 5.4M Wikipedia articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What baseline did they use?",
"What is the threshold?",
"How was the masking done?",
"How large is the FEVER dataset?"
],
"question_id": [
"e4ea0569b637d5f56f63e933b8f269695fe1a926",
"e3c44964eb6ddc554901244eb6595f26a9bae47e",
"905a8d775973882227549e960c7028e4a3561752",
"76f90c88926256e7f90d2104a88acfdd7fc5475e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An overview of the model pipeline",
"Table 1: Performance of the question generation system on FEVER Dataset.",
"Table 2: Performance of the question generation system on FEVER Dataset.",
"Table 3: Comparison of the Label accuracy on Development set."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What baseline did they use?"
] | [
[
"1910.07154-4-Table3-1.png",
"1910.07154-Results-4"
]
] | [
"HexaF - UCL "
] | 169 |
1901.09501 | Toward Unsupervised Text Content Manipulation | Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as `(PLAYER: Lebron, POINTS: 20, ASSISTS: 10)', and a reference sentence, such as `Kobe easily dropped 30 points', we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer. | {
"paragraphs": [
[
"Generating natural language text to describe structured content, such as a database record or a table, is of ubiquitous use in real-life applications including data report generation BIBREF0 , article writing BIBREF1 , BIBREF2 , dialog systems BIBREF3 , BIBREF4 , and many others. Recent efforts have developed many techniques to improve fidelity to the source content, such as new powerful neural architectures BIBREF5 , BIBREF6 , hybrid generation and retrieval BIBREF7 , BIBREF8 , and so forth, most of which are applied in supervised context.",
"Language is rich with variation–given a data record, there are diverse possible ways of saying the same content, with different word choices, expressions, transitions, tone, etc. Previous data-to-text work has largely focused only on content fidelity, while ignoring and lacking control over the rich stylistic properties of language. It can be practically useful to generate text that is not only describing the conditioning content, but also following a designated writing style, e.g., as provided in a piece of reference text.",
"In this work, we study the new yet practical problem in which we aim to express given content with a sentence and mimic the writing style of a reference sentence (Table TABREF1 ). More specifically, we are given a structured data record containing the content to describe, along with a sentence about a similar but different matter. Our goal is to generate a new sentence that precisely depicts all content in the record, while at the same time using as much of the writing style of reference sentence as possible. As above, the problem differs critically from the supervised data-to-text BIBREF0 or retrieval-and-rewriting work BIBREF7 , BIBREF8 as we have imposed an additional goal of preserving the reference text style. The resulting problem is typically unsupervised due to lack of parallel data.",
"The problem also differs in important ways from the emerging task of text style transfer BIBREF9 , BIBREF10 which assumes an existing sentence of certain content, and modifies single or multiple textual attributes of the sentence (e.g., transferring negative sentiment to positive) without changing the content. Our task, on the contrary, assumes abstract style is encoded in a reference sentence and attempts to modify its concrete content to express new information from the structured record. The different setting can lead to different application scenarios in practice, and pose unique technical challenges. In particular, though the most recent style transfer research BIBREF11 , BIBREF12 has controlled multiple categorical attributes which are largely independent or loosely correlated to each other, a content record in our task, in comparison, can contain varying number of entries which are of different types (e.g., player, points, defensive/offensive rebounds, etc), having many possible values (e.g., hundreds of players), and are structurally coupled (e.g., 32 points by Lebron). A model must understand the content structure, and minimally yet sufficiently manipulate the reference sentence by rewriting, adding, or deleting text portions, with necessary polishing for grammatical correctness and fluency. We name the problem text content manipulation. Our empirical studies show the most recent models designed for style transfer fail to perform well in the task.",
"In this paper, we first develop a large unsupervised dataset as a testbed of the new task. The dataset is derived from an NBA game report corpus BIBREF0 . In each data instance, besides a content record and a reference sentence as the problem inputs, we also collect side information useful for unsupervised learning. Specifically, each instance has an auxiliary sentence that was originally written by human reporters to describe the content record without seeing (and thus stylistically irrelevant to) the reference sentence. We also provide the structured record of the reference sentence. The side information can provide valuable clues for models to understand the content structure and text semantics at training time. We do not rely on the side information at test time.",
"We then propose a neural method to tackle the problem. With a hybrid attention and copy mechanism, the model effectively encodes the reference and faithfully copies content from the record. The model is learned with two competing objectives of reconstructing the auxiliary sentence (for content fidelity) and the reference sentence (for style preservation). We further improve the model with an explicit content coverage constraint which encourages to precisely and fully convey the structured content.",
"For empirical study, we devise automatic metrics to measure content fidelity and style preservation, respectively. We also perform human evaluations to compare different approaches. Results demonstrate the proposed method significantly improves over others, including a strong rule-based baseline and the recent style transfer models."
],
[
"Generating text conditioning on structured input has been widely studied in recent work, such as BIBREF3 , BIBREF1 , BIBREF4 , BIBREF0 . Those methods are based on neural sequence to sequence models and trained with supervised data. This line of work has focused primarily on generating more accurate description of the given data, while does not study the problem of controlling the writing style of outputs. Our task takes a step forward to simultaneously describing desired content and controlling stylistic properties. Furthermore, our task is challenging due to its unsupervised setting in practice.",
"Beyond generating text from scratch, there is another line of work that first retrieves a similar sentence and then rewrites it to express desired information BIBREF8 , BIBREF7 , BIBREF13 , BIBREF14 . For example, BIBREF8 used the framework to generate response in dialogues, while BIBREF7 studied programming code generation. The goal of the work is to manifest useful information from neighbors, usually in a supervised context, without aiming at controlling writing characteristics, and thus has fundamentally different assumptions to ours.",
"Recently, there has been growing interest in text style transfer, in which many techniques for controlled text generation are developed BIBREF9 , BIBREF10 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF11 , BIBREF12 . The main idea underlying those models is to learn disentangled representations of text so as modify textual attributes or style of interest. Those papers used different objectives to encourage learning disentangled representations. BIBREF9 used pre-trained classifiers as the supervision. BIBREF10 used a GAN-based approach in which binary classifiers were used as discriminators. BIBREF15 proposed to use more structured discriminators such as language models to provide better supervision to the generator. BIBREF16 , BIBREF11 further augmented prior work using back-translation technique to incorporate cycle-consistency loss. Both BIBREF11 and BIBREF12 generalized the task to controlling multiple categorical attributes at the same time. Our work differs from those in that we assume an existing sentence to provide the source of style and a structured record as the source of content. The input content record in our task is also more structured than the style attributes which are typically loosely connected and of a pre-fixed number. The resulting content manipulation setting poses unique challenges in controlling, as discussed more in the empirical study."
],
[
"We first formally define the problem of unsupervised text content manipulation, and establish the notations. We then present a large dataset for the task."
],
[
"Without loss of generality, consider a content record INLINEFORM0 , where each element INLINEFORM1 is a data tuple which typically includes a data type (e.g., points), a value (e.g., 32), and other information (such as the associated player, e.g., Lebron_James). INLINEFORM2 is the number of tuples in record INLINEFORM3 , which can vary across different records. We are also given a reference sentence INLINEFORM4 which is assumed to describe content that has a similar but not exact the same structure with that of the record INLINEFORM5 . For example, in Table TABREF1 , both the content record and the reference sentence involve two players, respectively, but the number of associated data tuples as well as the types are different (e.g., Lebron_James in the record has 3 box-score entries, while Jrue_Holiday in the reference has only 2).",
"We may also have access to other side information at training time. For example, in the dataset developed below, each content record INLINEFORM0 is associated with an auxiliary sentence INLINEFORM1 that was originally written to describe INLINEFORM2 without following the reference INLINEFORM3 . Each reference sentence INLINEFORM4 also has its corresponding record INLINEFORM5 containing the content information. The side information can provide valuable clues for models to understand the content structure and text semantics at training time. For example, the auxiliary sentence provides a hint on how the desired content can be presented in natural language, though it is stylistically irrelevant to the reference sentence. Note that, at test time, a solution to the task should only rely on the inputs INLINEFORM6 without using the side information.",
"The goal of the task is to generate a new realistic sentence INLINEFORM0 that achieves (1) content fidelity by accurately describing the full content in INLINEFORM1 , and at the same time (2) style preservation by retaining as much of the writing style and characteristics of reference INLINEFORM2 as possible. The task is unsupervised as there is no ground-truth sentence for training."
],
[
"We now present a dataset developed for the task. Our dataset is derived from a recent large table-to-document corpus BIBREF0 which consists of box-score tables of NBA basketball games and associated documents as game reports. The corpus is originally used for studying supervised game report generation which has attracted increasing research interest BIBREF18 , BIBREF0 .",
"To obtain our data, we first split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record. A record can contain a varying number of tuples, with each tuple containing three fields, namely a data type, a value, and an associated player or team, e.g., (team_points, 106, Lakers). As the original corpus is already largely clean, we found some simple rules are sufficient to obtain high-quality results in this step. Please see the supplementary materials for more details. Each of the resulting record-sentence pairs is treated as a pair of INLINEFORM0 , namely (content record, auxiliary sentence). The next step is to find a suitable reference sentence INLINEFORM1 for each content record INLINEFORM2 . As defined above, the reference sentence should cover similar but not the same content as in record INLINEFORM3 . We achieve this by retrieving from the data another record-sentence pair using INLINEFORM4 , where the retrieved record is designated to have a slightly different structure than that of INLINEFORM5 by having less or more tuples and different data types. More details of the retrieval method are deferred to supplements. The retrieved record-sentence pair thus plays the role of INLINEFORM6 and is paired with INLINEFORM7 to form an instance.",
"Table TABREF6 summarizes the statistics of the final dataset. The vocabulary size is 8.4K. We can see that the training set contains over 31K instances. Each content record contains around 5 tuples, each of which takes one of the 34 data types."
],
[
"We next develop methods to tackle the problem. As shown in the empirical study (section SECREF5 ), a simple rule-based method that matches INLINEFORM0 with INLINEFORM1 and performs text replacement would fail in terms of content fidelity due to the different structures between INLINEFORM2 and INLINEFORM3 . Previous approaches for (multi-attribute) style transfer do not apply well either, because of the different underlying task assumptions and the rich content structures of records with varying lengths.",
"In the following, we present a new neural approach that addresses the challenges of text content manipulation. We first describe the model architecture, then develop unsupervised learning objectives, and finally add a content coverage constraint to improve learning. Figure FIGREF7 provides an illustration of the proposed approach.",
"Let INLINEFORM0 denote the model that takes in a record INLINEFORM1 and a reference sentence INLINEFORM2 , and generates an output sentence INLINEFORM3 . Here INLINEFORM4 is the model parameter."
],
[
"We conduct both automatic and human evaluations to assess the model performance. For automatic evaluation, we use two metrics to measure content fidelity and style preservation, respectively. Results show our model balances well between the two goals, and outperforms a variety of comparison methods. All code will be released soon."
],
[
"We compare with a diverse set of approaches:",
"[leftmargin=*]",
"AttnCopy-S2S. We first evaluate a base sequence-to-sequence BIBREF22 model with the above attention-copy mechanism, which takes in record INLINEFORM0 and generates its descriptive sentence INLINEFORM1 . The evaluation provides a sense of the difficulty in describing desired content.",
"Rule-based Method. A straightforward way for text content manipulation is to match between INLINEFORM0 , INLINEFORM1 and INLINEFORM2 with certain rules, and replace corresponding portions in INLINEFORM3 with those in INLINEFORM4 . Specifically, we first build a mapping between the tuples of INLINEFORM5 and INLINEFORM6 through their data types, and a mapping between INLINEFORM7 and INLINEFORM8 through data values, types and indicative tokens (e.g., “12 points” in INLINEFORM9 indicates 12 is of type player points or team_points). The two mappings connect INLINEFORM10 and INLINEFORM11 , enabling us to swap appropriate text in INLINEFORM12 to express content INLINEFORM13 .",
"In theory, rule-based method sets the best possible style preservation performance, as it only replaces content related tokens (particularly numbers) without modifying other parts of the reference sentence. The output, however, tends to miss or contain extra content compared to the content record of interest.",
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. To apply to our setting, we treat content record INLINEFORM0 as the attributes. The method is based on back-translation BIBREF23 that first generates a target sentence INLINEFORM1 conditioning on INLINEFORM2 , and then treat it as the reference to reconstruct INLINEFORM3 conditioning on INLINEFORM4 . Auxiliary sentence INLINEFORM5 is used in an extra auto-encoding loss.",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations.",
"Ours w/o Coverage. For ablation study, we compare with a model variant that omits the content coverage constraint. That is, the model is trained by maximizing only Eq.( EQREF13 ).",
"We use single-layer LSTM RNNs in all encoders and decoders, and use the Luong attention BIBREF19 . Both the embedding dimensions and hidden dimensions are set to 384. During training, we first set INLINEFORM0 and pre-train the model to convergence so that the model captures the full characteristics of the reference sentence. We then set INLINEFORM1 for full training. We apply Adam optimization BIBREF24 with an initial learning rate of 0.001 and gradient norm clipping of 15. For inference we use beam search with beam-width 5. The maximum decoding length is set to 50."
],
[
"As no ground truth annotations are available, we first set up automatic metrics for quantitatively measuring the key aspects of model performance.",
"We use separate metrics to evaluate in terms of the two primary goals of the task, namely content fidelity and style preservation, respectively. A desired solution should balance and excel on both metrics.",
"[leftmargin=*]",
"Content fidelity. Following the table-to-document task BIBREF0 where our dataset is derived from, we use an information extraction (IE) approach to measure content fidelity. That is, given a generated sentence INLINEFORM0 and the conditioning content record INLINEFORM1 , we extract data tuples from INLINEFORM2 with an IE tool, and compute the precision and recall against INLINEFORM3 . We use the IE model provided in BIBREF0 and re-train with INLINEFORM4 pairs in our dataset. The IE model achieves around 87% precision and 76% recall on the test set, which is comparable to the one used in BIBREF0 .",
"Style preservation. A generated sentence is desired to retain stylistic properties, such as word choice and expressions, of the input reference sentence. Inspired by the text style transfer literature BIBREF15 , BIBREF11 , we measure the BLEU score between generated and reference sentences. To reduce the influence of new content, we first mask in both sentences all obvious content tokens, including player/team names and numbers, by replacing them with a special token <M>, and then compute the BLEU score. In this way, the above rule-based method has a maximum BLEU score of 100, which is consistent with our intuition above.",
"We now compare the performance of different methods in terms of the above metrics. Table TABREF29 shows the results.",
"The first block shows the two baseline models providing reference performance. The AttnCopy-S2S model only concerns about content fidelity, and achieves a high content precision score (but a low recall). However, its style BLEU is particularly low, which verifies the rich variation in language and that direct supervised learning is incapable of controlling the variation. We can see that the rule-based method achieves reasonably good precision and recall, setting a strong baseline for content fidelity. As discussed above, the rule-based method can reach the maximum BLEU (100) after masking out content tokens. To improve over the strong rule-based baseline, we would expect a method that provides significantly higher precision/recall, while keeping a high BLEU score. The two style transfer methods (MAST and AdvST) fail the expectation, as their content fidelity performance is greatly inferior or merely comparable to the rule-based method. This is partially because these models are built on a different task assumption (i.e., modifying independent textual attributes) and cannot manipulate content well. In comparison, our proposed model achieves better content precision/recall, substantially improving over other methods (e.g., with a 15-point precision boost in comparison with the rule-based baseline) except for AttnCopy-S2S which has failed in style control. Our method also manages to preserve a high BLEU score of over 80. The superior performance of the full model compared to the variant Ours-w/o-Coverage demonstrates the usefulness of the content coverage constraint (Eq. EQREF15 ). By explicitly encouraging the model to mention each of the data tuples exactly once—a common pattern of human-written descriptions—the model achieves higher content fidelity with less style-preservation ability “sacrificed”."
],
[
"We also carried out human evaluation for a more thorough and accurate comparison. Following the experimental settings in prior work BIBREF11 , BIBREF12 , BIBREF10 , we undertook two types of human studies: (1) We asked human turkers to score generated sentences in three aspects, namely content fidelity, style preservation, and sentence fluency. Each score is from 1 (strongly bad) to 5 (strongly good); (2) We present to annotators a pair of generated sentences, one from our model and the other from a comparison method. We then ask the annotators to rank the two sentences by considering all the criteria. Annotators can also choose “no preference” if the sentences are equally good or bad. For each study, we evaluate on 80 test instances, and compare our model with the rule-based method, AdvST style transfer model (which has shown better performance on the task than the other style transfer model MAST), and the model variant without coverage constraint.",
"Table TABREF31 shows the human evaluation results. From the top block of the table, as expected and discussed above, the rule-based method sets the records of style preservation and fluency scores, as it only conducts lightweight token replacement on reference sentences. However, its content fidelity score is very low. In contrast, our model achieves a reasonably high content score of 3.88, which is much higher than those of other methods. The model is also more balanced across the three criteria, achieving reasonably high scores in both style preservation and language fluency. The fluency of the full model is slightly inferior to the variant without coverage constraint, which is not unexpected since the full model has modified more portions of reference sentence in order to better describe the desired content, which would tend to introduce more language mistakes as well.",
"The bottom block of Table TABREF31 shows the results of ranking sentence pairs. We can see that our model consistently outperforms the comparison methods with over 50% wins."
],
[
"We take a closer look at the model performance by studying generated sentences from different models.",
"Table TABREF33 shows example outputs on three test cases given content record INLINEFORM0 and reference sentence INLINEFORM1 . We can see that, in general, the proposed full model can manipulate the reference sentence more accurately to express the new content. For example, in the first case, the rule-based method was confused between the winning and losing teams, due to its incapacity of understanding the semantics of text such as “held off”. The style transfer model AdvST failed to comprehend the content record well and generated irrelevant data “100 - 100”. The simplified variant without explicit coverage constraint copied the content of Bulls twice. In contrast, the full model successfully generates the desired sentence. Similarly, in the second and third cases, other methods tend to keep irrelevant content originally in the reference sentence (e.g., “and 5 rebounds” in the second case), or miss necessary information in the record (e.g., one of the player names was missed in the third case). The proposed model performs better in properly adding or deleting text portions for accurate content descriptions, though sometimes it can yield sentences of lower language quality (e.g., in the third case).",
"Table TABREF34 shows some failure cases by the proposed model along with the respective desired outputs. Despite the enhanced performance over other methods, the model can still get confused in presence of complicated content records or non-straightforward correspondence between the semantic structures of content record and reference sentence. It is desirable to further improve the modeling of both content and reference to better understand the underlying semantics and achieve better manipulation results."
],
[
"We have proposed a new and practical task of text content manipulation which aims to generate a sentence that describes desired content from a structured record (content fidelity) and meanwhile follows the writing style of a reference sentence (style preservation). To study the unsupervised problem, we derived a new dataset, and developed a method with competing learning objectives and an explicit coverage constraint. For empirical study, we devised two automatic metrics to measure different aspects of model performance. Both automatic and human evaluations showed superiority of the proposed approach."
]
],
"section_name": [
"Introduction",
"Related Work",
"Task and Dataset",
"Task Definition",
"Dataset",
"Model",
"Experiments",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Qualitative Study",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"c4dbb14fff81e4cb4135b848a6821701974d784c",
"efdd355db0d7255ca3aa2662f8dbeeea129e6334"
],
"answer": [
{
"evidence": [
"To obtain our data, we first split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record. A record can contain a varying number of tuples, with each tuple containing three fields, namely a data type, a value, and an associated player or team, e.g., (team_points, 106, Lakers). As the original corpus is already largely clean, we found some simple rules are sufficient to obtain high-quality results in this step. Please see the supplementary materials for more details. Each of the resulting record-sentence pairs is treated as a pair of INLINEFORM0 , namely (content record, auxiliary sentence). The next step is to find a suitable reference sentence INLINEFORM1 for each content record INLINEFORM2 . As defined above, the reference sentence should cover similar but not the same content as in record INLINEFORM3 . We achieve this by retrieving from the data another record-sentence pair using INLINEFORM4 , where the retrieved record is designated to have a slightly different structure than that of INLINEFORM5 by having less or more tuples and different data types. More details of the retrieval method are deferred to supplements. The retrieved record-sentence pair thus plays the role of INLINEFORM6 and is paired with INLINEFORM7 to form an instance."
],
"extractive_spans": [],
"free_form_answer": "The structured data is obtained from the box-score tables.",
"highlighted_evidence": [
"To obtain our data, we first split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We now present a dataset developed for the task. Our dataset is derived from a recent large table-to-document corpus BIBREF0 which consists of box-score tables of NBA basketball games and associated documents as game reports. The corpus is originally used for studying supervised game report generation which has attracted increasing research interest BIBREF18 , BIBREF0 .",
"To obtain our data, we first split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record. A record can contain a varying number of tuples, with each tuple containing three fields, namely a data type, a value, and an associated player or team, e.g., (team_points, 106, Lakers). As the original corpus is already largely clean, we found some simple rules are sufficient to obtain high-quality results in this step. Please see the supplementary materials for more details. Each of the resulting record-sentence pairs is treated as a pair of INLINEFORM0 , namely (content record, auxiliary sentence). The next step is to find a suitable reference sentence INLINEFORM1 for each content record INLINEFORM2 . As defined above, the reference sentence should cover similar but not the same content as in record INLINEFORM3 . We achieve this by retrieving from the data another record-sentence pair using INLINEFORM4 , where the retrieved record is designated to have a slightly different structure than that of INLINEFORM5 by having less or more tuples and different data types. More details of the retrieval method are deferred to supplements. The retrieved record-sentence pair thus plays the role of INLINEFORM6 and is paired with INLINEFORM7 to form an instance."
],
"extractive_spans": [
"split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record",
"we found some simple rules are sufficient to obtain high-quality results"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is derived from a recent large table-to-document corpus BIBREF0 which consists of box-score tables of NBA basketball games and associated documents as game reports. The corpus is originally used for studying supervised game report generation which has attracted increasing research interest BIBREF18 , BIBREF0 .\n\nTo obtain our data, we first split each game report into individual sentences, and, for each sentence, find its corresponding data in the box-score table as the content record. A record can contain a varying number of tuples, with each tuple containing three fields, namely a data type, a value, and an associated player or team, e.g., (team_points, 106, Lakers). As the original corpus is already largely clean, we found some simple rules are sufficient to obtain high-quality results in this step. Please see the supplementary materials for more details."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"64dfc9e166d2bfe85090a2616cd37f45cb64562d",
"b338b5936aafd9d684677459b19c9f1c82df2d2b",
"ba163e72a1a0e6f58ec4117b74d6d7c65d2ea14f"
],
"answer": [
{
"evidence": [
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. To apply to our setting, we treat content record INLINEFORM0 as the attributes. The method is based on back-translation BIBREF23 that first generates a target sentence INLINEFORM1 conditioning on INLINEFORM2 , and then treat it as the reference to reconstruct INLINEFORM3 conditioning on INLINEFORM4 . Auxiliary sentence INLINEFORM5 is used in an extra auto-encoding loss.",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"extractive_spans": [
"Multi-Attribute Style Transfer",
"Adversarial Style Transfer "
],
"free_form_answer": "",
"highlighted_evidence": [
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. ",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare with a diverse set of approaches:",
"[leftmargin=*]",
"AttnCopy-S2S. We first evaluate a base sequence-to-sequence BIBREF22 model with the above attention-copy mechanism, which takes in record INLINEFORM0 and generates its descriptive sentence INLINEFORM1 . The evaluation provides a sense of the difficulty in describing desired content.",
"Rule-based Method. A straightforward way for text content manipulation is to match between INLINEFORM0 , INLINEFORM1 and INLINEFORM2 with certain rules, and replace corresponding portions in INLINEFORM3 with those in INLINEFORM4 . Specifically, we first build a mapping between the tuples of INLINEFORM5 and INLINEFORM6 through their data types, and a mapping between INLINEFORM7 and INLINEFORM8 through data values, types and indicative tokens (e.g., “12 points” in INLINEFORM9 indicates 12 is of type player points or team_points). The two mappings connect INLINEFORM10 and INLINEFORM11 , enabling us to swap appropriate text in INLINEFORM12 to express content INLINEFORM13 .",
"In theory, rule-based method sets the best possible style preservation performance, as it only replaces content related tokens (particularly numbers) without modifying other parts of the reference sentence. The output, however, tends to miss or contain extra content compared to the content record of interest.",
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. To apply to our setting, we treat content record INLINEFORM0 as the attributes. The method is based on back-translation BIBREF23 that first generates a target sentence INLINEFORM1 conditioning on INLINEFORM2 , and then treat it as the reference to reconstruct INLINEFORM3 conditioning on INLINEFORM4 . Auxiliary sentence INLINEFORM5 is used in an extra auto-encoding loss.",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"extractive_spans": [
"AttnCopy-S2S",
"Rule-based Method",
"Multi-Attribute Style Transfer (MAST) BIBREF11",
"Adversarial Style Transfer (AdvST) BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with a diverse set of approaches:\n\n[leftmargin=*]\n\nAttnCopy-S2S. We first evaluate a base sequence-to-sequence BIBREF22 model with the above attention-copy mechanism, which takes in record INLINEFORM0 and generates its descriptive sentence INLINEFORM1 . The evaluation provides a sense of the difficulty in describing desired content.\n\nRule-based Method. A straightforward way for text content manipulation is to match between INLINEFORM0 , INLINEFORM1 and INLINEFORM2 with certain rules, and replace corresponding portions in INLINEFORM3 with those in INLINEFORM4 . Specifically, we first build a mapping between the tuples of INLINEFORM5 and INLINEFORM6 through their data types, and a mapping between INLINEFORM7 and INLINEFORM8 through data values, types and indicative tokens (e.g., “12 points” in INLINEFORM9 indicates 12 is of type player points or team_points). The two mappings connect INLINEFORM10 and INLINEFORM11 , enabling us to swap appropriate text in INLINEFORM12 to express content INLINEFORM13 .\n\nIn theory, rule-based method sets the best possible style preservation performance, as it only replaces content related tokens (particularly numbers) without modifying other parts of the reference sentence. The output, however, tends to miss or contain extra content compared to the content record of interest.\n\nMulti-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. To apply to our setting, we treat content record INLINEFORM0 as the attributes. The method is based on back-translation BIBREF23 that first generates a target sentence INLINEFORM1 conditioning on INLINEFORM2 , and then treat it as the reference to reconstruct INLINEFORM3 conditioning on INLINEFORM4 . Auxiliary sentence INLINEFORM5 is used in an extra auto-encoding loss.\n\nAdversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. To apply to our setting, we treat content record INLINEFORM0 as the attributes. The method is based on back-translation BIBREF23 that first generates a target sentence INLINEFORM1 conditioning on INLINEFORM2 , and then treat it as the reference to reconstruct INLINEFORM3 conditioning on INLINEFORM4 . Auxiliary sentence INLINEFORM5 is used in an extra auto-encoding loss.",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"extractive_spans": [
"Multi-Attribute Style Transfer",
"Adversarial Style Transfer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multi-Attribute Style Transfer (MAST) BIBREF11 . We compare with the most recent style transfer approach that models multiple attributes. ",
"Adversarial Style Transfer (AdvST) BIBREF12 . As another latest style transfer approach capable of handling more than one attributes, the model also mixes back-translation with auto-encoding as the above method, and additionally uses adversarial training to disentangle content and style representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"150700f52da53c6823872c2fa8cd58a978d8d638",
"d249921e9caecc889f89334826c101a341df0289"
],
"answer": [
{
"evidence": [
"We then propose a neural method to tackle the problem. With a hybrid attention and copy mechanism, the model effectively encodes the reference and faithfully copies content from the record. The model is learned with two competing objectives of reconstructing the auxiliary sentence (for content fidelity) and the reference sentence (for style preservation). We further improve the model with an explicit content coverage constraint which encourages to precisely and fully convey the structured content."
],
"extractive_spans": [],
"free_form_answer": "A combination of Content Objective and Style Objective",
"highlighted_evidence": [
"The model is learned with two competing objectives of reconstructing the auxiliary sentence (for content fidelity) and the reference sentence (for style preservation). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We then propose a neural method to tackle the problem. With a hybrid attention and copy mechanism, the model effectively encodes the reference and faithfully copies content from the record. The model is learned with two competing objectives of reconstructing the auxiliary sentence (for content fidelity) and the reference sentence (for style preservation). We further improve the model with an explicit content coverage constraint which encourages to precisely and fully convey the structured content."
],
"extractive_spans": [],
"free_form_answer": "Reconstructing the auxiliary sentence and reconstructing the reference sentence.",
"highlighted_evidence": [
"The model is learned with two competing objectives of reconstructing the auxiliary sentence (for content fidelity) and the reference sentence (for style preservation). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1e71095d4f5a8b97b636340ca8ac3388da233cdd",
"78837edf076e456961d817864d016c3802ad22d1",
"b4e8631faba2027b9ed7af63105baa9c57cc49de"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Content Coverage Constraint section) We thus devise an additional learning constraint based on the nature of content description—each data tuple in the content record should usually be mentioned exactly once in the generated sentence.\nThe copy mechanism over content record x enables a simple yet effective way to encourage the behavior. Intuitively, we want each tuple to be copied once and only once on average.",
"highlighted_evidence": [
"Model"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"denote the model"
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How do they obtain structured data?",
"Which prior approaches for style transfer do they test with?",
"Which competing objectives for their unsupevised method do they use?",
"Which content coverage constraints do they design?"
],
"question_id": [
"182eb91090017a7c8ea38a88b219b641842664e4",
"0ef114d24a7a32821967e912dff23c016c4eab41",
"67672648e7ebcbef18921006e2c8787966f8cdf2",
"c32fc488f0527f330273263fa8956788bd071efc"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: An example input (content record and reference sentence) of text content manipulation and its desired output. Text portions that fulfill the writing style are highlight in blue.",
"Table 2: Data Statistics.",
"Figure 1: A (simplified) data example (left) and the model overview (right).",
"Table 3: Model Performance under Automatic Evaluation. Results are averaged over 3 runs ± one standard deviation. Models in the first block (AttnCopy Seq2seq and Rule-based) represent two baselines for reference performance. We have highlighted the best results in blocks 2 and 3 under different metrics. Our model achieves significant higher content precision and recall compared to both rule-based and style transfer methods, and reaches a high BLEU score in style preservation.",
"Table 4: Human Evaluation Results. Top: Humans are asked to score the model outputs in terms of content fidelity, style preservation, and fluecny, respectively, from 1 (strongly bad) to 5 (strongly good). As expected, the rule-based method reaches the maximum possible scores in terms of style preservation and fluency, but a much lower score in terms of content fidelity. Our model is more balanced across all aspects, and performs significantly better in accurately describing desired content. Bottom: Humans are asked to rank a pair of generated sentences in which one is from our model and the other from the comparison method. Our model wins on more than 50% instances compared to each of other models.",
"Table 5: Example Outputs by Different Models. Text of erroneous content is highlighted in red, where [...] indicates desired content is missing. Text portions in the reference sentences and the generated sentences by our model that fulfill the stylistic characteristics are highlighted in blue. Please see the text for more details.",
"Table 6: Example Erroneous Outputs. Text of erroneous content is highlighted in red. Missing content is denoted with [...]. We also show the desired correct outputs. In the first example, the model was confused by the data types; while in the second example, the model fails to understand there is only one team in the content record x and the number 88 is the free-throw percentage."
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"5-Figure1-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"10-Table5-1.png",
"11-Table6-1.png"
]
} | [
"How do they obtain structured data?",
"Which competing objectives for their unsupevised method do they use?"
] | [
[
"1901.09501-Dataset-0",
"1901.09501-Dataset-1"
],
[
"1901.09501-Introduction-5"
]
] | [
"The structured data is obtained from the box-score tables.",
"Reconstructing the auxiliary sentence and reconstructing the reference sentence."
] | 170 |
1705.02023 | Senti17 at SemEval-2017 Task 4: Ten Convolutional Neural Network Voters for Tweet Polarity Classification | This paper presents Senti17 system which uses ten convolutional neural networks (ConvNet) to assign a sentiment label to a tweet. The network consists of a convolutional layer followed by a fully-connected layer and a Softmax on top. Ten instances of this network are initialized with the same word embeddings as inputs but with different initializations for the network weights. We combine the results of all instances by selecting the sentiment label given by the majority of the ten voters. This system is ranked fourth in SemEval-2017 Task4 over 38 systems with 67.4% | {
"paragraphs": [
[
"Polarity classification is the basic task of sentiment analysis in which the polarity of a given text should be classified into three categories: positive, negative or neutral. In Twitter where the tweet is short and written in informal language, this task needs more attention. SemEval has proposed the task of Message Polarity Classification in Twitter since 2013, the objective is to classify a tweet into one of the three polarity labels BIBREF0 .",
"We can remark that in 2013, 2014 and 2015 most best systems were based on a rich feature extraction process with a traditional classifier such as SVM BIBREF1 or Logistic regression BIBREF2 . In 2014, kimconvolutional2014 proposed to use one convolutional neural network for sentence classification, he fixed the size of the input sentence and concatenated its word embeddings for representing the sentence, this architecture has been exploited in many later works. severynunitn:2015 adapted the convolutional network proposed by kimconvolutional2014 for sentiment analysis in Twitter, their system was ranked second in SemEval-2015 while the first system BIBREF3 combined four systems based on feature extraction and the third ranked system used logistic regression with different groups of features BIBREF2 .",
"In 2016, we remark that the number of participations which use feature extraction systems were degraded, and the first four systems used Deep Learning, the majority used a convolutional network except the fourth one BIBREF4 . Despite of that, using Deep Learning for sentiment analysis in Twitter has not yet shown a big improvement in comparison to feature extraction, the fifth and sixth systems BIBREF5 in 2016 which were built upon feature extraction process were only (3 and 3.5% respectively) less than the first system. But We think that Deep Learning is a promising direction in sentiment analysis. Therefore, we proposed to use convolutional networks for Twitter polarity classification.",
"Our proposed system consists of a convolutional layer followed by fully connected layer and a softmax on top. This is inspired by kimconvolutional2014, we just added a fully connected layer. This architecture gives a good performance but it could be improved. Regarding the best system in 2016 BIBREF6 , it uses different word embeddings for initialisation then it combines the predictions of different nets using a meta-classifier, Word2vec and Glove have been used to vary the tweet representation.",
"In our work, we propose to vary the neural network weights instead of tweet representation which can get the same effect of varying the word embeddings, therefore we vary the initial weights of the network to produce ten different nets, a voting system over the these ten voters will decide the sentiment label for a tweet.",
"The remaining of this paper is organized as follows: Section 2 describes the system architecture, Section 3 presents our experiments and results and Section 4 is devoted for the conclusion."
],
[
"The architecture of our convolutional neural net- work for sentiment classification is shown on Fig. 1. Our network is composed of a single convolutional layer followed by a non-linearity, max pooling, Dropout, fully connected layer and a soft-max classification layer. Here we describe this architecture:"
],
[
"We first tokenize each tweet to get all terms using HappyTokenizer which captures the words, emoticons and punctuations. We also replace each web link by the term url and each user name by uuser. Then, we used Structured Skip-Gram embeddings (SSG) BIBREF7 which was compiled by BIBREF4 using 52 million tweets.",
"Each term in the tweet is replaced by its SSG embedding which is a vector of d dimensions, all term vectors are concatenated to form the input matrix where the number of rows is d and the number of columns is set to be maxl: the max tweet length in the training dataset. This 2-dim matrix is the input layer for the neural network."
],
[
"We connect the input matrix with different convolutional layers, each one applies a convolution operation between the input matrix and a filter of size m x d. This is an element-wise operation which creates f vectors of maxl-m+1 dimension where f is the number of filters or feature maps.",
"This layer is supposed to capture the common patterns among the training tweets which have the same filter size but occur at any position of the tweet. To capture the common patterns which have different sizes we have to use more than one layer therefore we defined 8 different layers connected to the input matrix with different filter sizes but the same number of feature maps."
],
[
"Each convolutional layer is typically followed by a non-linear activation function, RELU (Rectified Linear Unit ) layer will apply an element-wise operation to swap the negative numbers to 0. The output of a ReLU layer is the same size as the input, just with all the negative values removed. It speeds up the training and is supposed to produce more accurate results."
],
[
"This layer reduces the size of the output of activation layer, for each vector it selects the max value. Different variation of pooling layer can be used: average or k-max pooling."
],
[
"Dropout is used after the max pooling to regularize the ConvNet and prevent overfitting. It assumes that we can still obtain a reasonable classification even when some of the neurones are dropped. Dropout consists in randomly setting a fraction p of input units to 0 at each update during training time."
],
[
"We concatenate the results of all pooling layers after applying Dropout, these units are connected to a fully connected layer. This layer performs a matrix multiplication between its weights and the input units. A RELU non-linarity is applied on the results of this layer."
],
[
"The output of the fully connected layer is passed to a Softmax layer. It computes the probability distribution over the labels in order to decide the most probable label for a tweet."
],
[
"For training the network, we used about 30000 English tweets provided by SemEval organisers and the test set of 2016 which contains 12000 tweets as development set. The test set of 2017 is used to evaluate the system in SemEval-2017 competition. For implementing our system we used python and Keras.",
"We set the network parameters as follows: SSG embbeding size d is chosen to be 200, the tweet max legnth maxl is 99. For convolutional layers, we set the number of feature maps f to 50 and used 8 filter sizes (1,2,3,4,5,2,3,4). The p value of Dropout layer is set to 0.3. We used Nadam optimizer BIBREF8 to update the weights of the network and back-propogation algorithm to compute the gradients. The batch size is set to be 50 and the training data is shuffled after each iteration.",
"We create ten instances of this network, we randomly initialize them using the uniform distribution, we repeat the random initialization for each instance 100 times, then we pick the networks which gives the highest average recall score as it is considered the official measure for system ranking. If the top network of each instance gives more than 95% of its results identical to another chosen network, we choose the next top networks to make sure that the ten networks are enough different.",
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. For each new tweet from the test set, we convert it to 2-dim matrix, if the tweet is longer than maxl, it will be truncated. We then feed it into the ten networks and pass the results to the voting system.",
"Official ranking: Our system is ranked fourth over 38 systems in terms of macro-average recall. Table 4 shows the results of our system on the test set of 2016 and 2017."
],
[
"We presented our deep learning approach to Twitter sentiment analysis. We used ten convolutional neural network voters to get the polarity of a tweet, each voter has been trained on the same training data using the same word embeddings but different initial weights. The results demonstrate that our system is competitive as it is ranked forth in SemEval-2017 task 4-A. "
]
],
"section_name": [
"Introduction",
"System Architecture",
"Tweet Representation",
"Convolutional Layers",
"Activation Layer",
"Max-Pooling Layer",
"Dropout Layer",
"Fully Conected Layer",
"Softmax Layer",
"Experiments and Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"94eb4b0d140fd1bd4812441777661224c18b8259",
"d7d51ef55a02d73913c0540cfa1898fb09d18c25"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Official ranking: Our system is ranked fourth over 38 systems in terms of macro-average recall. Table 4 shows the results of our system on the test set of 2016 and 2017."
],
"extractive_spans": [
"macro-average recall"
],
"free_form_answer": "",
"highlighted_evidence": [
"Official ranking: Our system is ranked fourth over 38 systems in terms of macro-average recall. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1534d81b0373ae105a4d1ad5bf7fc24bdb39606c",
"2ea761b9c499406ed1402f014266d9e8b87ff727",
"8f780c62b5a594030a37f821338fb53ea26bdee2"
],
"answer": [
{
"evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. For each new tweet from the test set, we convert it to 2-dim matrix, if the tweet is longer than maxl, it will be truncated. We then feed it into the ten networks and pass the results to the voting system."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. For each new tweet from the test set, we convert it to 2-dim matrix, if the tweet is longer than maxl, it will be truncated. We then feed it into the ten networks and pass the results to the voting system."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. For each new tweet from the test set, we convert it to 2-dim matrix, if the tweet is longer than maxl, it will be truncated. We then feed it into the ten networks and pass the results to the voting system."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"Thus, we have ten classifiers, we count the number of classifiers which give the positive, negative and neutral sentiment label to each tweet and select the sentiment label which have the highest number of votes. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what were the evaluation metrics?",
"how many sentiment labels do they explore?"
],
"question_id": [
"8908d1b865137bc309dde10a93735ec76037e5f9",
"d207f78beb6cd754268881bf575c8f98000667ea"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Network architecture."
],
"file": [
"3-Figure1-1.png"
]
} | [
"how many sentiment labels do they explore?"
] | [
[
"1705.02023-Experiments and Results-3"
]
] | [
"3"
] | 171 |
1904.02954 | Alternative Weighting Schemes for ELMo Embeddings | ELMo embeddings (Peters et. al, 2018) had a huge impact on the NLP community and may recent publications use these embeddings to boost the performance for downstream NLP tasks. However, integration of ELMo embeddings in existent NLP architectures is not straightforward. In contrast to traditional word embeddings, like GloVe or word2vec embeddings, the bi-directional language model of ELMo produces three 1024 dimensional vectors per token in a sentence. Peters et al. proposed to learn a task-specific weighting of these three vectors for downstream tasks. However, this proposed weighting scheme is not feasible for certain tasks, and, as we will show, it does not necessarily yield optimal performance. We evaluate different methods that combine the three vectors from the language model in order to achieve the best possible performance in downstream NLP tasks. We notice that the third layer of the published language model often decreases the performance. By learning a weighted average of only the first two layers, we are able to improve the performance for many datasets. Due to the reduced complexity of the language model, we have a training speed-up of 19-44% for the downstream task. | {
"paragraphs": [
[
"Peters2018 presented in their work Deep Contextualized Word Representations (often referred to as ELMo embeddings) a method that uses a bidirectional language model (biLM) to derive word representations which are based on the complete context of a sentence. They demonstrated that these ELMo embeddings can substantially increase the performance for various NLP tasks. This new type of word representations had a big impact on the NLP community and many new architectures, for example, many from EMNLP 2018, report a better performance when using ELMo embeddings.",
"Traditional word embeddings, like word2vec or GloVe, mapped each token in a sentence to a single dense vector. In contrast to that, the published ELMo implementation computes three layers of a language model: The first layer is a CNN that computes a non-contextualized word representation based on the characters of a word, followed by two bidirectional LSTM layers that take the context of the sentence into account.",
"The output of the three layers is integrated into task-specific neural architectures. However, the integration of ELMo into neural architectures is not straightforward. For example, Peters et al. describe two methods for the integration: Either the output of the last layer is used for downstream tasks, or a task-specific weighting of the three layer outputs is learned: $\\text{ELMo}_{\\text{weighted\\_average}} = \\gamma \\sum _{j=0}^{2}s_j h_j$ ",
"with $s \\in \\mathbb {R}^3$ softmax-normalized weights, $h_j$ the output of the $j$ -th layer of the biLM and $\\gamma $ a scalar that is used to scale the entire ELMo vector.",
"Learning this weighted average is not always easy, as it can require substantial changes in existent network architectures and some deep learning frameworks (for example Keras) lack the possibility to easily implement such a learned weighted average. Further, for unsupervised tasks, such a weighted average cannot be learned.",
"Hence, several authors used simplified ways to integrate ELMo embeddings in their architectures. Some use the output of the last ELMo layer, some concatenate all three vectors BIBREF1 , BIBREF2 , and others compute a (fixed) average of the three layers.",
"It is unclear what the impact of these different weighting schemes is for downstream tasks. Is the (rather complicated) learned weighted average proposed by Peters et al. needed to achieve optimal performance? Will a simpler method, like computing a fixed average, decrease the performance?",
"In this paper, we evaluate different schemes to combine the three ELMo vectors. We analyze the impact of these schemes for downstream NLP tasks. First, we study this for a BiLSTM-CRF architecture which only uses ELMo embeddings as input representation. Next, we study the different weighting schemes for the more complex models included in AllenNLP, which concatenate ELMo embeddings with other input representations like GloVe word embeddings.",
"In this paper we show that 1) the weighting scheme can have a significant impact on downstream NLP tasks, 2) that the learned weighted average proposed by Peters et al. does not yield the optimal performance for all datasets, and 3) that the second layer of the biLM yields in many cases a better performance than the third (last) layer.",
"Surprisingly, using the output of the second layer of the biLM model yields a better performance than using the third (last) layer in many downstream NLP tasks. Using this insight, we present a weighting scheme that learns a weighted average of the first two layers of the biLM. This scheme outperforms the originally proposed weighting scheme by Peters et al. for several datasets. Further, it is computationally faster than the original method. For downstream tasks, we saw a training speed-up of 19-44%."
],
[
"To our knowledge, only Peters2018 evaluated different weighting schemes. They evaluated to use either the output of the last layer or to learn a task-specific weighted average of all three layer outputs. They compare these two options in their paper and show a slight advantage for learning a weighted average. However, the evaluation is in our opinion insufficient. First, they evaluate both options on the development set, so it remains unclear if there are changes for unseen data (test set). Further, they evaluate it only with a single random seed. As shown in BIBREF3 , the performance of a neural network can change significantly with a different random seed. For example, we observe test score differences of up to 1.5 percentage points when the same model is trained with a different random seed with the AllenNLP model for the Stanford Sentiment Treebank (SST-5). The differences Peters et al. report between using the last layer and learning a task-specific weighting are rather small (0.4 - 0.7 percentage points). It is not clear if these differences are due to the effect of different random seeds or due to the weighting scheme."
],
[
"The published bidirectional language model (biLM) produces three 1024 dimensional vectors for each token in a sentence. In this paper we systematically study the following methods to combine the three vectors returned by the biLM:",
"Individual Layers: Only a single layer is used for the downstream task.",
"Concatenation: All three vectors are concatenated.",
"Fixed Average: We compute an average of all three vectors.",
"Learned Weighted Average: We learn a task-specific weighted average (ELMo $_\\text{weighted\\_average}$ ).",
"Learned Weighted Average of the 1st and 2nd Layer: We learn a task-specific weighted average of the first two layers."
],
[
"We test the different weighting schemes with two experiments. For the first experiment, we evaluate a neural network that solely uses ELMo embeddings as a representation of the input. This experiment shows how suitable the schemes are when no other features are used. In the second experiment, we evaluate the schemes with the more advanced, state-of-the-art architectures from AllenNLP. These models often concatenate the ELMo embeddings with other input representations. For example, the NER model from AllenNLP concatenates the ELMo embedding with GloVe embeddings and with a task-specific character-based word representation (similar to Ma2016). We expect that the results in the second experiment vary from the first experiment. If a particular weighting scheme lacks specific information, the network might still retrieve it from the other input representations.",
"For the first experiment, we use a BiLSTM-CRF architecture for sequence tagging BIBREF4 . We use ELMo embeddings instead of word embeddings. Two bidirectional LSTM layers (with 100 recurrent units each) are followed by a conditional random field (CRF) to produce the most likely tag sequence. The network was trained using Adam optimizer BIBREF5 and a variational dropout BIBREF6 of 0.5 was added to recurrent and output units.",
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"For the second experiment, we use the existent AllenNLP models that reproduce the experiments of Peters et al. We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. The $F_1$ -score is computed for the NER tasks and parsing; accuracy is computed for the SST-task and the SNLI-task.",
"Not all experiments from the paper of Peters et al. are reproducible with AllenNLP. AllenNLP currently has no model for the SQuAD task. For the Coref-task, the AllenNLP configuration is missing some features and does not use ELMo embeddings. For the SRL-task, AllenNLP uses a different metric that is not comparable to the official metric.",
"For both experiments, we use the pre-trained ELMo 5.5B model, which was trained on a dataset of 5.5 billion tokens. We trained each setup with ten different random seed and report average test scores."
],
[
"The results of the BiLSTM-CRF model, that uses only the ELMo embeddings as input representations, are shown in the upper part of Table 1 .",
"We observe that the output of the first layer yields in most cases the worst performance. This was expected, as the first layer is a CNN that computes a character-based word representation. It does not take the context of a word into account.",
"In our experiment, we don't observe a difference between the computation of an unweighted average and of learning a task-specific weighted average. For four datasets, the unweighted average yielded better performance, while for the other four other datasets, the learned weighted average yielded better performance. However, the differences are insignificant.",
"To our surprise, using the second layer of the biLM yields in most cases a better performance than using the third (last) layer of the biLM. For 7 out of 8 datasets it outperforms even the learned weighted average method proposed by Peters et al. Only for the GENIA dataset achieved the learned weighted average a significantly better performance than using the output of the second layer. However, for this dataset, it appears that context information is less critical, as the system achieves a high performance by using only the characters of a word (1. Layer).",
"The results for the second experiment, that uses AllenNLP and ELMo embeddings in combination with other input representations, are presented in the lower part of Table 1 .",
"In contrast to our first experiment, we notice much smaller differences between different weighting schemes. In most cases, the differences are not statistically significant. When a network solely depends on ELMo embeddings as input, all relevant information to solve the task must be included. However, if ELMo is combined with other input representations, like GloVe embeddings, the dependency on the ELMo embeddings decreases. If ELMo does not capture critical word properties, the network can still fall back on one of the other input representations.",
"We notice that computing a weighted average of the first two layers yields the highest performance in 4 out of 5 datasets. It appears that computing the third layer of the biLM does not necessarily add value for downstream tasks. Removing it increases training as well as inference speed, as computing the bidirectional LSTM of the third layer is rather slow. By not computing the third layer of the biLM, we observe a training speed-up of 38% for the NER model, 44% for the SST model, 19% for the parsing model, and 27% for the SNLI model. The training of the SNLI model requires on a Tesla P100 GPU about one day."
],
[
"As noted by Peter2018b, the lower layers of the biLM specialize in local syntactic relationships, allowing the higher layers to model longer range relationships. Knowing beforehand which of these properties are relevant for an NLP task is difficult. In our experiments, we observed significant performance differences in downstream tasks for the three biLM layers. The last, most abstract layer, often yielded mediocre results when it was used as the only input representation. For most datasets, the output of the second layer appears to be the most relevant representation. It offers the best trade-off between local syntactic features and more abstract, long-range relationships.",
"As it is not known in advance which layer produces the best input representation, learning a task-specific weighted average of the three layers appears advisable. However, for several datasets, it appears that the output of the third layer does not add value as it is a too abstract representation for the NLP tasks. The learned weighted average method presented by Peters et al. regularizes the three (softmax-normalized) weights $s_j$ . As a consequence, a zero or small $s_j$ value is not possible, and all three vectors are used even if one vector (e.g. the third layer output) decreases the performance. This explains why removing the (sometimes harmful) third layer can improve the model performance. Further, by removing the last layer, we observe a significant training speed-up. For the models included in AllenNLP, we observed a training speed-up of 19-44%, while improving the test performance in 3 out of 5 datasets. This speed-up can be crucial for cases that require fast training of inference speeds.",
"The weighting scheme appears especially important when these vectors are used as the only input representation for the task. In that case, we advise testing different weighting schemes. If ELMo is used in conjunction with other input representations, the weighting scheme was less critical."
],
[
"This work was supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X(p) Pascal GPU used for this research."
]
],
"section_name": [
"Introduction",
"Related Work",
"Alternative Weighting Schemes",
"Evaluation of Weighting Schemes",
"Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"1c26b95fe0a8d6c4f9955f0bf5354af72a0413d2",
"de5fcefebb6f5828f3e8fe143f5d699d60dadf08"
],
"answer": [
{
"evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets."
],
"extractive_spans": [
"Argument component detection",
"ACE Entities/Events",
"POS",
"Chunking",
"WNUT16",
"CoNLL 2003 shared task on named entity recognition",
"GENIA NER"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the first experiment, we use a BiLSTM-CRF architecture for sequence tagging BIBREF4 . We use ELMo embeddings instead of word embeddings. Two bidirectional LSTM layers (with 100 recurrent units each) are followed by a conditional random field (CRF) to produce the most likely tag sequence. The network was trained using Adam optimizer BIBREF5 and a variational dropout BIBREF6 of 0.5 was added to recurrent and output units.",
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"For the second experiment, we use the existent AllenNLP models that reproduce the experiments of Peters et al. We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. The $F_1$ -score is computed for the NER tasks and parsing; accuracy is computed for the SST-task and the SNLI-task."
],
"extractive_spans": [],
"free_form_answer": "Various sequence tagging tasks: Argument detection, ACE entity and event detection, part-of-speech tagging, CoNLL chunking, CoNLL named entity recognition, GENIA bio-entity recognition, WNUT named entity recognition. They also evaluate on Stanford Sentiment Treebank, Penn TreeBank constituency parsing, and Stanford Natural Language Inference.",
"highlighted_evidence": [
"For the first experiment, we use a BiLSTM-CRF architecture for sequence tagging BIBREF4 .",
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 .",
"ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. ",
"POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits.",
"Chunking: CoNLL 2000 shared task dataset on chunking.",
"NER: CoNLL 2003 shared task on named entity recognition.",
"GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names).",
"WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 .",
"We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"3294d661d62766327ffcabeed72ff6df6e4cb416",
"40bc7688a0d122845974bdbc342bcb1cf8a15f0c",
"cb0e31f1a201b8cd85830494aa86dae900b8414a"
],
"answer": [
{
"evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"For the second experiment, we use the existent AllenNLP models that reproduce the experiments of Peters et al. We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. The $F_1$ -score is computed for the NER tasks and parsing; accuracy is computed for the SST-task and the SNLI-task."
],
"extractive_spans": [],
"free_form_answer": "Argument detection, ACE 2005, Universal Dependencies part-of-speech tags, CoNLL 2000 chunking shared task, CoNLL 2003 named entity recognition shared task, GENIA NER Bio-Entity Recognition, WNUT16 Twitter named entity recognition shared task, Stanford Sentiment Treebank, Penn TreeBank constituency parsing, Stanford Natural Language Inference corpus",
"highlighted_evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. The $F_1$ -score is computed for the NER tasks and parsing; accuracy is computed for the SST-task and the SNLI-task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets."
],
"extractive_spans": [
"Arguments",
"ACE 2005 dataset",
"part-of-speech tags from Universal Dependencies v. 1.3 for English",
"CoNLL 2000 shared task dataset on chunking",
"CoNLL 2003 shared task on named entity recognition",
"GENIA NER",
"WNUT16"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"For the second experiment, we use the existent AllenNLP models that reproduce the experiments of Peters et al. We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. The $F_1$ -score is computed for the NER tasks and parsing; accuracy is computed for the SST-task and the SNLI-task."
],
"extractive_spans": [],
"free_form_answer": "For the first experiment, the datasets used were: argument component detection persuasive essays, ACE 2005 dataset of entities/essays, POS tags from Universal Dependencies, CoNLL 2000 shared task on chunking, CoNLL 2003\nshared task on named entity recognition, the Bio-Entity Recognition Task dataset, WNUT 16 dataset on NER over tweets. For the second experiment, they used the CoNLL 2003 NER\ndataset, the Stanford Sentiment Treebank (SST5) dataset, the constituency parsing model for the\nPenn TreeBank as dataset, and the Stanford Natural Language Inference Corpus (SNLI) dataset.",
"highlighted_evidence": [
"We trained this architecture for the following datasets: Arguments: Argument component detection (major claim, claim, premise) in 402 persuasive essays BIBREF7 . Development and test set were 80 randomly selected essays each. ACE Entities/Events: ACE 2005 dataset BIBREF8 consists of 599 annotated documents from six different domains (newswire, broadcast news, broadcast conversations, blogs, forums, and speeches). We train the architecture to either detect events or to detect entities in these documents. We used 90 randomly selected documents each for the development and test set. POS: We use the part-of-speech tags from Universal Dependencies v. 1.3 for English with the provided data splits. We reduced the training set to the first 500 sentences to increase the difficulty for the network. The development and test set were kept unchanged. Chunking: CoNLL 2000 shared task dataset on chunking. NER: CoNLL 2003 shared task on named entity recognition. GENIA NER: The Bio-Entity Recognition Task at JNLPBA BIBREF9 annotated Medline abstracts with information on bio-entities (like protein or DNA-names). The dataset consists of 2000 abstracts for training (we used 400 of those as development set) and the test set contains 404 abstracts. WNUT16: WNUT16 was a shared task on Named Entity Recognition over Twitter BIBREF10 . Training data are 2,394 annotated tweets, development data are 1,000 tweets, and test data are 3,856 tweets.",
"For the second experiment, we use the existent AllenNLP models that reproduce the experiments of Peters et al. We use the CoNLL 2003 NER model, the Stanford Sentiment Treebank (SST-5) model, the constituency parsing model for the Penn TreeBank, and the Stanford Natural Language Inference Corpus (SNLI) model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"197290cb509b9a046b311719c6ce1ce408f3be8a",
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which downstream tasks are used for evaluation in this paper?",
"Which datasets are used for evaluation?"
],
"question_id": [
"c79f168503a60d1b08bb2c9aac124199d210b06d",
"9dd8ce48a2a59a63ae6366ab8b2b8828e5ae7f35"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"elmo",
"elmo"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Upper half: Average scores of 10 training runs with different random seeds using the BiLSTM-CRF architecture of Reimers and Gurevych (2017). ELMo is used as the only input representation. Lower half: Average scores of 10 training runs using AllenNLP. ELMo is used in combination with other input representations. Bold entries mark the best performance per row. Red entries indicate statistically significantly worse entries with p < 0.01. F.-Avg.: unweighted average of all three layers, W.-Avg.: Learned weighted average of the three layers, as proposed by Peters et al., W.-Avg. 1 & 2: Learned weighted average of the first and second layer of the biLM."
],
"file": [
"3-Table1-1.png"
]
} | [
"Which downstream tasks are used for evaluation in this paper?",
"Which datasets are used for evaluation?"
] | [
[
"1904.02954-Evaluation of Weighting Schemes-3",
"1904.02954-Evaluation of Weighting Schemes-1",
"1904.02954-Evaluation of Weighting Schemes-2"
],
[
"1904.02954-Evaluation of Weighting Schemes-3",
"1904.02954-Evaluation of Weighting Schemes-2"
]
] | [
"Various sequence tagging tasks: Argument detection, ACE entity and event detection, part-of-speech tagging, CoNLL chunking, CoNLL named entity recognition, GENIA bio-entity recognition, WNUT named entity recognition. They also evaluate on Stanford Sentiment Treebank, Penn TreeBank constituency parsing, and Stanford Natural Language Inference.",
"For the first experiment, the datasets used were: argument component detection persuasive essays, ACE 2005 dataset of entities/essays, POS tags from Universal Dependencies, CoNLL 2000 shared task on chunking, CoNLL 2003\nshared task on named entity recognition, the Bio-Entity Recognition Task dataset, WNUT 16 dataset on NER over tweets. For the second experiment, they used the CoNLL 2003 NER\ndataset, the Stanford Sentiment Treebank (SST5) dataset, the constituency parsing model for the\nPenn TreeBank as dataset, and the Stanford Natural Language Inference Corpus (SNLI) dataset."
] | 174 |
1804.03839 | Generating Clues for Gender based Occupation De-biasing in Text | Vast availability of text data has enabled widespread training and use of AI systems that not only learn and predict attributes from the text but also generate text automatically. However, these AI models also learn gender, racial and ethnic biases present in the training data. In this paper, we present the first system that discovers the possibility that a given text portrays a gender stereotype associated with an occupation. If the possibility exists, the system offers counter-evidences of opposite gender also being associated with the same occupation in the context of user-provided geography and timespan. The system thus enables text de-biasing by assisting a human-in-the-loop. The system can not only act as a text pre-processor before training any AI model but also help human story writers write stories free of occupation-level gender bias in the geographical and temporal context of their choice. | {
"paragraphs": [
[
"AI systems are increasing and Natural Language Generation is getting ever more automated with emerging creative AI systems. These creative systems rely heavily on past available textual data. But often, as evident from studies done on Hollywood and Bollywood story plots and scripts, these texts are biased in terms of gender, race or ethnicity. Hence there is a need for a de-biasing system for textual stories that are used for training these creative systems.",
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version. Since multiple types of biases may exist in the given text, the former de-biasing system requires identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it. These notions can often be subjective and it might be desirable to have a human-in-the-loop. This is the focus of the latter de-biasing system as well as the approach taken by us in the paper.",
"Gender stereotyping with respect to occupations is one of the most pervasive biases that cuts across countries and age groups BIBREF0 . In this paper, we focus on de-biasing with respect to gender stereotyping in occupations. This bias has also been recently noted in machine translation systems BIBREF1 . In this translation tool, the sentences “He is a nurse. She is a doctor\" were translated from English to Turkish and back to English which inappropriately returned “She is a nurse. He is a doctor\"!",
"In this paper, our system takes a piece of text and finds mentions of named entities and their corresponding occupations. From the gender of the named entities, the system suggests examples of real people with alternate gender who also had the corresponding occupation.",
"The rest of the paper is organized as follows - Section 2 describes the related work, Section 3 discusses about the design and Section 4 lays out the implementation of our de-biasing system. In Section 5 we describe a walk-through of our system and in Section 6 we conclude our paper."
],
[
"Analysis of gender bias in machine learning in recent years has not only revealed the prevalence of such biases but also motivated much of the recent interest and work in de-biasing of ML models. BIBREF2 have pointed to the presence of gender bias in structured prediction from images. BIBREF3 , BIBREF0 notice these biases in movies while BIBREF4 , BIBREF5 notice the same in children books and music lyrics.",
"De-biasing the training algorithm as a way to remove the biases focusses on training paradigms that would result in fair predictions by an ML model. In the Bayesian network setting, Kushner et al. have proposed a latent-variable based approach to ensure counter-factual fairness in ML predictions. Another interesting technique ( BIBREF6 and BIBREF7 ) is to train a primary classifier while simultaneously trying to \"deceive\" an adversarial classifier that tries to predict gender from the predictions of the primary classifier.",
"De-biasing the model after training as a way to remove bias focuses on \"fixing\" the model after training is complete. BIBREF8 in their famous work on gender bias in word embeddings take this approach to \"fix\" the embeddings after training.",
"De-biasing the data at the source fixes the data set before it is consumed for training. This is the approach we take in this paper by trying to de-bias the data or suggesting the possibility of de-biasing the data to a human-in-the-loop. A related task is to modify or paraphrase text data to obfuscate gender as in BIBREF9 Another closely related work is to change the style of the text to different levels of formality as in BIBREF10 ."
],
[
"Our system allows the user to input a text snippet and choose the timespan and the demographic information. It highlights the named entities and their occupations which have a possibility of being biased. Further, the system outputs pieces of evidence in the form of examples of real people with that occupation from the selected time frame and region but having the opposite gender as shown in figure FIGREF3 ",
"Our de-biasing algorithm is capable of tagging 996 occupations gathered from different sources*. A user who uses our de-biasing system can utilize the time-frame and region information to check for bias in a particular text snippet. The detected bias can be shown to the user with pieces of evidence that can be then used to revisit the text and fix it."
],
[
"Our dataset comprises of the following - 1) Occupation Data 2) Names Data. We will iterate over each of this one by one.",
"Occupation Data: We gathered occupation lists from different sources on the internet including crowdsourced lists and government lists. Then, we classified the occupations into 2 categories - gender-specific occupation and gender-neutral occupations. These are used in the algorithm for bias checking which will be explained in the next sub-section.",
"Names Data: We created a corpus of 5453 male and 6990 female names sourced from [ref: CMU repository of names]. For the dataset to map names to a gender, we referred to the NLTK data set and the records of baby names and their genders."
],
[
"Our system is represented in figure FIGREF7 . We have the following components in our system -",
"The task of mapping occupations to named entity or a person is crucial to perform debiasing on the text. Often, the occupation of a person is mentioned with linking to the pronouns than the named entity itself. Hence, there is a need to resolve these co-references. We employ pronoun chaining using spaCy and replace the name of the pronoun with the named entity in the text entered by the user.",
"After we have done co-referencing, we parse the text to identify Subject, Verb, Object tuples. These tuples are further used to associate subjects i.e. named entity with its occupation i.e. object.",
"We employ 3 specific types of tagging in our system -",
"Occupation Tagging - We use a dictionary based tagging mechanism to annotate occupation mentions in the text using the occupation dataset described in the previous section.",
"Person Tagging - We use a dictionary based tagging for annotating person names in the text using the Names Dataset described in the previous section.",
"Gender Tagging - We further use the names dataset to resolve the genders of the persons identified in the previous person tagging step.",
"At the end of this step, we obtain a set of 3-tuples INLINEFORM0 person, gender, occupation INLINEFORM1 .",
"In this step, the goal is to check if INLINEFORM0 named entity, gender, occupation INLINEFORM1 is potentially biased. This is done by first checking if the mentioned occupation is gender specific or gender neutral. If the occupation is gender specific, then we can clearly say it is free of bias. Otherwise, if the occupation is gender neutral, we try to fetch evidence examples of both genders performing that occupation in the given timeframe and demography. If we find no examples matching the query of the opposite gender, then we say that the text is free of bias. Else, the system flags the sentence by highlighting the named entity and occupation and notifies the user about the possibility of bias.",
"In this section, we describe how we used SPARQL queries to fetch instances of people in DBpedia which belong to a certain gender, who lived in a certain time-frame and region and worked on a certain occupation.",
"In code-block below, we write a sample query that returns evidences of all female Chemists who were born in a city in US. The query returns 3-tuples containing the person's name, birth city, birth date and death date.",
"SELECT * WHERE {",
" ?person rdf:type \"Chemist\"@en .",
" ?person foaf:gender \"female\"@en .",
" ",
" ?person dbo:birthPlace ?bCity .",
" ?bCity dbo:country \"USA\"@en .",
" ",
" ?person dbo:birthDate ?bDate .",
" ?person dbo:deathDate ?dDate .",
"}",
"",
"As the next step, we filter these 3-tuple responses by checking if the life of the person (demarcated by the period between the birth and death dates) overlaps with the time-frame given by the user as input."
],
[
"Consider a story-writer as a user of our system. The task is to be able to write bias free stories which are liked by viewers and earns high revenue in the BOX office. Here are few scenarios where this system can be used to identify bias."
],
[
"The story-writer plans to write a story based in United States of America between timeframe 1980-2000. The story-writer uses our system and types in the natural language story -",
"John is a doctor. He treats his",
"patients well. One day, he fell",
"sick and started thinking about",
"what he had been doing his whole",
"life.",
"",
"This story interacts with our backend system and identifies if the story contains any occupational bias. Here, John is the named entity and doctor is the associated occupation. Furthermore, the system identifies John as a male character. It tries to search in backend if 'doctor' is a gender specific occupation or a gender neutral occupation. After detecting that it is a gender neutral occupation, the system checks the DBpedia corpus from 1980-2000 and fetches the instances of female doctors in the same timeframe in the United States. It displays the evidences for the user to go back and revisit and rewrite the story as below.",
"Mary is a doctor. She treats her",
"patients well. One day, she fell",
"sick and started thinking about",
"what she had been doing her whole",
"life.",
"",
"The screen-shots of the interface are represented in FIGREF18 "
],
[
"The story-writer plans to write a story based in United States between the timeframe 1700-1800. He/She uses the story and feeds it to the tool.",
"The tool displays no evidences and shows that the story free from bias with occupation point of view. The screen-shot of the interface is shown in FIGREF20 "
],
[
"The story-writer plans to write a story based in Russia between the timeframe 1980-2000. He/She uses the story and feeds it to the tool.",
"The tool displays no evidences and shows the story free from bias with occupation point of view. The screen-shot of the interface is shown in FIGREF21 ",
"Hence, the observation is that when we change the year and location parameters in the tool, the tool can automatically respond to the change. Therefore the system is sensitive to the subjectivity of bias in various cultural contexts and timeframes."
],
[
"The goal of our system is to be able to remove occupational hierarchy articulated in textual stories. It is common in movies, novels & pictorial depictions to show man as boss, doctor, pilot and women as secretary, nurse and stewardess. In this work, we presented a tool which detects occupations and understand hierarchy and then generate pieces of evidences to show that counter-factual evidences exist. For example, while interchanging ({male, doctor}, {female, nurse}) to ({male, nurse}, {female, doctor}) makes sense as there might be evidences in the past supporting the claim but interchanging {male, gangster} to {female, gangster} might not have evidences in the past for most of the locations.",
"To further explain it more, given a sentence -",
"As a future work, we are working on building reasoning systems which automatically regenerate an unbiased version of text."
],
[
"Occupation De-biasing is a first-of-a-kind tool to identify possibility of gender bias from occupation point of view, and to generate pieces of evidences by responding to different cultural contexts. Our future work would involve exploring other dimensions of biases and have a more sophisticated definition of bias in text."
]
],
"section_name": [
"Introduction",
"Past Work and Motivation",
"System Overview",
"Dataset Collection",
"Methodology",
"Tool Walk-through using an example",
"Scenario 1 : Year 1980-2000 in US",
"Scenario 2 : Year 1700-1800 in US",
"Scenario 3 : Year 1980-2000 in Russia",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"716a9998b828d4a0f465744e59ed6055380b4e59",
"c18649a9ac4fbfb17a3b607a0abf1f1bc44ec408",
"ef4dbeaf7fbaea90b10315578554cd880e033e01"
],
"answer": [
{
"evidence": [
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version. Since multiple types of biases may exist in the given text, the former de-biasing system requires identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it. These notions can often be subjective and it might be desirable to have a human-in-the-loop. This is the focus of the latter de-biasing system as well as the approach taken by us in the paper."
],
"extractive_spans": [
"identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it"
],
"free_form_answer": "",
"highlighted_evidence": [
" Since multiple types of biases may exist in the given text, the former de-biasing system requires identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it. These notions can often be subjective and it might be desirable to have a human-in-the-loop. This is the focus of the latter de-biasing system as well as the approach taken by us in the paper."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version. Since multiple types of biases may exist in the given text, the former de-biasing system requires identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it. These notions can often be subjective and it might be desirable to have a human-in-the-loop. This is the focus of the latter de-biasing system as well as the approach taken by us in the paper."
],
"extractive_spans": [
"appropriately modify the text to create an unbiased version"
],
"free_form_answer": "",
"highlighted_evidence": [
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version. Since multiple types of biases may exist in the given text, the former de-biasing system requires identifying which biases to focus on and how to paraphrase or modify the sentence to de-bias it. These notions can often be subjective and it might be desirable to have a human-in-the-loop. This is the focus of the latter de-biasing system as well as the approach taken by us in the paper."
],
"extractive_spans": [
"modify the text to create an unbiased version"
],
"free_form_answer": "",
"highlighted_evidence": [
"Such de-biasing systems may be of two types 1) an end-to-end system that takes in a biased text and returns an unbiased version of it or 2) a system with a human-in-the-loop that takes a text, analyzes it and returns meaningful clues or pieces of evidence to the human who can appropriately modify the text to create an unbiased version."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4f2f146cf4d83a981ad52e138adb65e71abde5ba",
"ff693391104139a7741b6b9332da8943a17de97e"
],
"answer": [
{
"evidence": [
"Our dataset comprises of the following - 1) Occupation Data 2) Names Data. We will iterate over each of this one by one.",
"Occupation Data: We gathered occupation lists from different sources on the internet including crowdsourced lists and government lists. Then, we classified the occupations into 2 categories - gender-specific occupation and gender-neutral occupations. These are used in the algorithm for bias checking which will be explained in the next sub-section.",
"Names Data: We created a corpus of 5453 male and 6990 female names sourced from [ref: CMU repository of names]. For the dataset to map names to a gender, we referred to the NLTK data set and the records of baby names and their genders."
],
"extractive_spans": [],
"free_form_answer": "A dataset they created that contains occupation and names data.",
"highlighted_evidence": [
"Our dataset comprises of the following - 1) Occupation Data 2) Names Data. ",
"Occupation Data: We gathered occupation lists from different sources on the internet including crowdsourced lists and government lists.",
"Names Data: We created a corpus of 5453 male and 6990 female names sourced from [ref: CMU repository of names]. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our dataset comprises of the following - 1) Occupation Data 2) Names Data. We will iterate over each of this one by one.",
"Occupation Data: We gathered occupation lists from different sources on the internet including crowdsourced lists and government lists. Then, we classified the occupations into 2 categories - gender-specific occupation and gender-neutral occupations. These are used in the algorithm for bias checking which will be explained in the next sub-section.",
"Names Data: We created a corpus of 5453 male and 6990 female names sourced from [ref: CMU repository of names]. For the dataset to map names to a gender, we referred to the NLTK data set and the records of baby names and their genders."
],
"extractive_spans": [
"1) Occupation Data",
"2) Names Data"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset comprises of the following - 1) Occupation Data 2) Names Data. We will iterate over each of this one by one.\n\nOccupation Data: We gathered occupation lists from different sources on the internet including crowdsourced lists and government lists. Then, we classified the occupations into 2 categories - gender-specific occupation and gender-neutral occupations. These are used in the algorithm for bias checking which will be explained in the next sub-section.\n\nNames Data: We created a corpus of 5453 male and 6990 female names sourced from [ref: CMU repository of names]. For the dataset to map names to a gender, we referred to the NLTK data set and the records of baby names and their genders."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1f0d3d1883cc79d743362b7a3631802f9a9cb391",
"4391b9706bdde35168d5c8eccc43cf7c2f24065f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9ee5a8c89b71c2ef9141a5da424dc7815f52c343",
"b2d4c777ae66b27c15aab2a9afbf508d14ac8952",
"ba33c75c420f8f802414b920e8f4f49b2388cd9d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What does the human-in-the-loop do to help their system?",
"Which dataset do they use to train their model?",
"Can their approach be extended to eliminate racial or ethnic biases?",
"How do they evaluate their de-biasing approach?"
],
"question_id": [
"5cc5e2db82f5d40a5244224dad94da50b4f673db",
"ab975efc916c34f55e1144b1d28e7dfdc257e371",
"e7ce612f53e9be705cdb8daa775eae51778825ef",
"6c5a64b5150305c584326882d37af5b0e58de2fd"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: System Diagram for Text De-biasing",
"Figure 2: System Diagram - Occupation De-biasing System",
"Figure 3: Scenario 1",
"Figure 4: Scenario 2",
"Figure 5: Scenario 3"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Figure5-1.png"
]
} | [
"Which dataset do they use to train their model?"
] | [
[
"1804.03839-Dataset Collection-2",
"1804.03839-Dataset Collection-1",
"1804.03839-Dataset Collection-0"
]
] | [
"A dataset they created that contains occupation and names data."
] | 175 |
2004.02214 | Prototype-to-Style: Dialogue Generation with Style-Aware Editing on Retrieval Memory | The ability of a dialog system to express prespecified language style during conversations has a direct, positive impact on its usability and on user satisfaction. We introduce a new prototype-to-style (PS) framework to tackle the challenge of stylistic dialogue generation. The framework uses an Information Retrieval (IR) system and extracts a response prototype from the retrieved response. A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response. To effectively train the proposed model, we propose a new style-aware learning objective as well as a de-noising learning strategy. Results on three benchmark datasets from two languages demonstrate that the proposed approach significantly outperforms existing baselines in both in-domain and cross-domain evaluations | {
"paragraphs": [
[
"Most early research on dialogue response generation focused on generating grammatical and contextually relevant responses BIBREF0, BIBREF1, BIBREF2. While promising results have been demonstrated BIBREF3, BIBREF4, syntactically coherent responses alone do not guarantee an engaging and attractive dialogue system. Expressing a unique and consistent speaking style has been shown to be crucial for increasing the user's engagement with dialogue systems BIBREF5. There are various definitions of language style BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. In this work, from a purely computational standpoint, we refer to language style as any characteristic style of expression. Hence, our work is in line with previous work on dialogue generation with emotion BIBREF11, BIBREF12, BIBREF13, BIBREF14; response attitude BIBREF15, and speaker personality BIBREF16.",
"The aforementioned approaches explicitly incorporate the language style information into the model configuration either via embeddings or memory modules to control the process of response generation. In our replication experiments, we found that these approaches tend to overemphasise the importance of the language style. As a result, the generated responses tend to be generic and non-informative BIBREF17, but they do express a distinct style; e.g., they generate a generic response: “I am happy to hear that.\" that conveys a `happy' emotion to different queries.",
"In this work, we propose a novel prototype-to-style (PS) framework to tackle the challenge of stylistic dialogue generation. Our motivation is two-fold: (1) Human-written responses are informative and diverse, which could be leveraged as guidance for the generation model; (2) However, the retrieved response is not guaranteed to express the desired language style. Moreover, the quality of the retrieved response varies among different queries due to the instability of the IR system. Therefore, to transform the retrieved result into a relevant and stylistic response, an adequate editing process is necessary.",
"An illustration of the proposed framework is shown in Figure FIGREF2, where a prototype is first extracted from the retrieved response. The stylistic response generator then takes the desired language style and the extracted prototype as additional input to obtain an adequate and stylistic response. The proposed stylistic response generator mainly inherits from the GPT-2 model BIBREF18 which is pre-trained with a large unlabeled text corpus. However, the GPT-2 model does not naturally fit the task of dialogue generation. To this end, we design various adaptations to the model architecture to extend the GPT-2 model to address the task of dialogue generation. Furthermore, in order to control the style of the generated responses, we train the model with a novel style-aware maximum likelihood estimation (MLE) objective that encodes additional style knowledge into the model's parameters. Finally, to mitigate the possible effect that the retrieved response containing irrelevant and inappropriate information with respect to the input query, we adopt a de-noising learning strategy BIBREF19, BIBREF20 to prevent the model from uncritically copying the prototype.",
"To fully evaluate the proposed approach, we conduct extensive experiments on three benchmark datasets. Results of both human and automatic evaluation show that the proposed approach significantly outperforms several strong baselines. In addition, we also conduct an extensive cross-domain experiment to demonstrate that the proposed approach is more robust than such baselines.",
"It should be noted that stylistic dialogue generation is different from the task of text style transfer. Text style transfer aims to rewrite the input sentences such that they possess certain language styles, while rigorously preserving their semantic meaning BIBREF21. On the other hand, stylistic dialogue generation does not aim at preserving the semantic meaning of the input sentences. Instead, it aims at generating sentences that are adequate and relevant responses to the input sentences, while expressing the prespecified language styles.",
"In summary, the contributions of this work are: (1) We propose a novel framework that tackles the challenge of stylistic dialogue generation by leveraging useful information contained in the retrieved responses; (2) We propose a new stylistic response generator by making proper adaptations to a large-scale pre-trained language model. We train our model with a new style-aware learning objective in a de-noising manner. Experiments show that the proposed model outperforms many strong baselines on three benchmark datasets on both in-domain and cross-domain evaluations."
],
[
"We summarize three categories of relevant work in the following."
],
[
"The task of text style transfer aims to transfer the style contained in a sentence while preserving its meaning. BIBREF22 proposed a DRG framework to tackle this task with the help of external knowledge. Recently, based on the pre-trained language model, BIBREF23 further improved the system performance under the same DRG framework."
],
[
"Many prior works BIBREF24, BIBREF25, BIBREF26, BIBREF27 proposed to leverage information from the retrieved responses to improve the system performance on non-task oriented dialogue generation. It should be noted that all these approaches aim to improve the content quality of the generated responses but do not take the style aspect into consideration."
],
[
"Extensive research has tried to tackle the task of stylistic dialogue generation. BIBREF16 proposed to represent the user's personality with embeddings and incorporated them into the decoder structure to control the response generation process. BIBREF15 used reinforcement learning to train the generation model via the interaction with a pre-trained classifier to generate responses with specified attitude. BIBREF11, BIBREF12, BIBREF13, BIBREF14 incorporated external knowledge into the model architecture either via embeddings or internal and external memory modules, such that during the generation process, emotion-based styles can be dynamically controlled. BIBREF28 proposed to use a shared latent space for stylistic dialogue generation."
],
[
"The proposed framework leverages the results acquired from an IR system, A major challenge is that the retrieved response is not guaranteed to express the desired language style. At the first step, a neutral response prototype is extracted by masking all stylistic words contained in the retrieved response. A stylistic response generator then takes the desired language style and the extracted prototype as additional input to generate an adequate and stylistic response to the input query. To better emphasize the generation of stylistic expressions, we propose a style-aware learning objective. Finally, to prevent the model from learning to uncritically copy the prototype, we adopt a de-noising learning strategy BIBREF19, BIBREF20 to train the generator."
],
[
"The response prototype is constructed from the retrieved response by masking the stylistic words. To determine whether a word is stylistic, we use the pointwise mutual information (PMI) BIBREF29 metric. The relevance between the word $x$ and the style $s$ is measured as",
"where $p(x, s)$ is the frequency that the word $x$ appears in a response with style $s$ in the training corpus. And a word $x$ is stylistic given the style $s$ if $\\textup {PMI}(x,s)\\ge t_s$. In our experiments, we empirically set $t_s$ as $t_s = \\frac{3}{4}\\times \\max _{v\\in \\mathcal {V}}\\textup {PMI}(v; s)$, where $\\mathcal {V}$ is the vocabulary set of the training corpus. Given the set of all possible language styles $\\mathcal {S}$, the stylistic vocabulary $\\mathcal {SV}$ is defined as all words that express any style $s\\in \\mathcal {S}$. An example is provided in Figure FIGREF2 where the prototype: “That's _ . I will go with my _ together !” is extracted from the retrieved response by masking the stylistic words great, bro and buddies."
],
[
"The proposed Stylistic Response Generator inherits from the GPT-2 BIBREF18 model which consists of a 12-layer decoder-only Transformer BIBREF30. To make use of the GPT-2 model, the input tokens must be a consecutive natural sequence (e.g. sentence, document). Based on the input sequence, the input representation is constructed by adding up the token embeddings and the corresponding position embeddings.",
"To achieve the goal of adapting the GPT-2 model under the proposed PS framework, we first make modifications to the form of the input sequence. As shown in Figure FIGREF6, we construct the input sequence as the concatenation of the input query, the response prototype and the reference response. Then we introduce a special token $[B]$ to indicate the boundary between these three parts. To further ensure the model can identify the different parts of the input sequence, we introduce a new segment level input which consists of three learnable segment embeddings $E_Q$, $E_P$ and $E_R$ to indicate the positions of the input query, the response prototype and the response history. To control the language style of the generated response, we propose to incorporate learnable style embeddings into the input representation. Specifically, we add the style embeddings to the entire part of the response history. This way, the model is constantly aware of the desired language style through the entire generation process."
],
[
"We propose to use a new style-aware learning objective to train the stylistic response generator. Consider a training instance consists of the input query ${\\bf X} = (x_1, ..., x_N)$, the reference response ${\\bf Y} = (y_1, ..., y_T)$, the reference language style $s$ and the response prototype ${\\bf C} = (c_1, ..., c_T)$, the proposed objective is defined as",
"where $\\theta $ are the model parameters and $\\mathcal {SV}$ is the stylistic vocabulary introduced in SV. By increasing $\\alpha $, the proposed objective encodes more knowledge about stylistic expressions into the model parameters.",
"We find that including the language model as an auxiliary objective in addition to the supervised style-aware learning objective helps to improve generalization as well as accelerate convergence. This observation is in line with BIBREF31, BIBREF32. In this work, the language model objective is defined as the reconstruction loss of the input query based on itself:",
"The final learning objective is then defined as",
"where $\\beta $ regulates the importance of the auxiliary objective."
],
[
"We use a de-noising training strategy similar to DBLP:conf/nips/JainS08, DBLP:conf/cvpr/KrullBJ19 for training data construction, as shown in Figure FIGREF17. Specifically, during training, the response prototype is extracted from the reference response by the following steps. First, we mask all the stylistic words in the reference response. Second, we randomly select some words (40%) and replace it with a special token [MASK] or a random word drawn from the vocabulary.",
"The second step is necessary otherwise the model will learn to generate a response by uncritically copying the response prototype, since the prototype after the first step is always an integral part of the golden response. This copy mechanism is undesirable since during testing the retrieved response is likely to contain information that is irrelevant to the input query. Thus, we deliberately train the response generator with noisy input to let the model learn to filter out the inappropriate information contained in the response prototype."
],
[
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
[
"We use a publicly available gender-specific dialogue dataset BIBREF33. In this dataset, each response contains one specific gender preference including Female, Male and Neutral."
],
[
"We use a publicly available emotion-specific dataset BIBREF11 which contains responses with 6 different emotions including Like, Disgust, Happy, Anger, Sad and Other."
],
[
"To construct this dataset, we first build a classifier on the basis of BERT BIBREF34 and finetuned it on the the SemEval-2017 Subtask A dataset BIBREF35. This dataset consists of twitter instances with different sentiments including Positive, Negative and Neutral.",
"The sentiment classifier attains 81.4% classification accuracy which is further used to annotate the OpenSubtitles dataset BIBREF36. The data statistic of the resulting sentiment-specific dialogue dataset is shown in Table TABREF21."
],
[
"As there is no off-the-shelf pre-trained word-level language model in Chinese, we manually pre-trained one. The corpus collection and model pre-training details are presented in the supplementary material. For the English pre-trained language model, we use the PyTorch adaptation released by the HuggingFace team.",
"To optimize the model, we use the Adam optimizer BIBREF37 with a batch size of 64 and learning rate of 2e-5. During inference, the retrieval system is built from the training corpus, and the retrieved responses are selected using the Jaccard similarity BIBREF38 between queries.",
"During the inference stage, we retrieve the candidates from the training set. Specifically, we employ Jacquard Similarity to calculate the similarity between the input query q and queries in training set and find the most similar query q$^\\prime $. Then we directly adopt the response of the retrieved query q$^\\prime $ to construct the response prototype."
],
[
"We compare the proposed approach with several competitive baselines that can be categorized into two classes: generative approaches and retrieval-based approaches."
],
[
"Standard sequence-to-sequence model with attention mechanism BIBREF39, BIBREF40."
],
[
"To examine the effect of leveraging the pre-trained language model for the task of dialogue generation, we directly fine-tune the GPT-2 model on the dialogue data without any designed adaptations."
],
[
"Model proposed by BIBREF16 which incorporates distributed style embeddings into the structure of decoding cells to control the generation process."
],
[
"Model proposed by BIBREF11 which uses memory modules to control the stylistic expressions in the generated responses."
],
[
"Model proposed by BIBREF27 which modifies the retrieved response based on the lexical difference between the input and the retrieved query. This approach does not take the style aspect into consideration."
],
[
"For this approach, we apply the state-of-the-art style transfer BIBREF23 model on the retrieved response. This approach does not consider the input query information during the transfer process."
],
[
"Given the input query, a style classifier is used to rerank the top 10 retrieved responses. The response with the highest score on the desired style is selected."
],
[
"The full model proposed in this work."
],
[
"In the ablated model, we examine how the retrieved prototype effects our model's performance. To this end, we remove the response prototype from the input representation."
],
[
"The quality of dialogue responses is known to be difficult to measure automatically BIBREF41; we therefore rely on human evaluation. To evaluate the responses, we hire five annotators from a commercial annotation company. To prevent introducing potential bias to the annotators, all results are randomly shuffled before being evaluated. All results are evaluated by the annotators following the metrics below."
],
[
"This metric evaluates the content quality of the generated responses. The annotators are asked to give a score within 5-point scale where 5 means perfectly human-like response (relevant, fluent and informative), 3 means marginally acceptable and 1 means unreadable and impossible to understand."
],
[
"This metric measures how well the generated responses express the desired style. The annotators give a score ranging from 1 to 5 to this metric, where 5 means very strong style, 3 means no obvious style and 1 means very conflicted style. The style conflict means the generated style is conflicted to the desired one (e.g. female to male, positive to negative emotion)."
],
[
"The annotators are further asked to jointly evaluate the content quality and the style expression of the generated responses from different approaches. Then the annotators give a ranking to each result where top 1 means the best."
],
[
"Both human and automatic evaluation results on the three benchmark datasets are shown in Table TABREF25, TABREF26 and TABREF27. For each dataset, we present results on individual styles as well as the overall results. We observe that the proposed model achieves the top performance results on most of the metrics. It generates responses with both intense style and high response quality. In addition, we also measure the diversity of the generated responses with two automatic metrics: Distinct-1 and Distinct-2 BIBREF16. The results show that the proposed model achieves the closest performance to that of the RRe approach whose responses are all written by human. On the ranking metric which jointly evaluates the content quality and the style expression, the proposed model outperforms other approaches by a substantial margin.",
"From the results in Table TABREF26 and TABREF27, we can observe that ECM obtains the highest style expression scores on the emotion and sentiment dialogue datasets. This is because ECM directly incorporates the style information into its model architecture to force the generation of stylistic expressions. However, as shown in the quality scores, this behavior also undermines the quality of the generated responses. Therefore, the overall performance of ECM is not optimal as shown in the results of the ranking metric.",
"From the experiment results, we observe that removing retrieved information (PS w/o R) from the proposed model causes a drastic drop on the quality score. This demonstrates that the retrieved information is indispensable for the model to generate a stylistic response and maintain a high response quality. In addition, comparing with GPT2-FT baseline, the ablated model (PS w/o R) shows similar content quality and much stronger stylistic expression, which is gained from the model architectural design and the new training strategy."
],
[
"We present further discussions and empirical analysis of the proposed approach."
],
[
"In practice, a satisfactory stylistic dialogue system should express the desired style on the premise of the response quality. Based on the criterion of human evaluation metric, 3 is the marginal score of acceptance. So we deem a response as marginally acceptable by actual users when both quality and style expression scores are greater or equal to 3. On the other hand, 4 is the score that well satisfies the users, so responses with both scores greater or equal to 4 are deemed as satisfying to actual users.",
"The ratios of both scores $\\ge 3$ and $\\ge 4$ are shown in Figure FIGREF47, from which we can see that the proposed approach outperforms all other approaches on $\\ge 3$-ratio and $\\ge 4$-ratio. The proposed model best balances the trade-off between the response quality and style expression and therefore generating most acceptable and satisfying responses."
],
[
"To evaluate the robustness of different approaches, we further analyze their performances when there is a notable difference between the data distribution of the training and testing set. Specifically, we use the models trained on gender-specific dataset to conduct inference on the test set of emotion-specific dataset and vise versa, which is regarded as domain variation. In Figure FIGREF50, we show the data distributions of these two datasets from which we can observe a notable distribution discrepancy. For evaluation, all results are evaluated with the same metrics as in the previous experiments. The averages response quality scores before and after domain variation are shown in Figure FIGREF55. For a direct comparison, the in-domain performance of each model can be found in Table TABREF25 and TABREF26.",
"As shown in Figure FIGREF55, some of the strong baselines exhibit a drastic drop in response quality after domain variation such as GPT2-FT and PS w/o R. In contrast, the PS model successfully maintains high response quality in spite of domain variation. The model seems to benefit from leveraging retrieved results to bridge the gap between the two different domains. This can also be observed in the results of RST and RRe which also use the retrieved results and get a even higher performance when facing domain variation."
],
[
"We present several examples of generated responses by the proposed PS approach. Table TABREF51 shows responses with different gender and emotion styles, and Table TABREF52 shows responses with different sentiments. Examples in Table TABREF51 show that the proposed approach is able to extract informative details such as “have nightmares” and “higher salary” that are relevant to the queries from the retrieved responses. By taking the desired style as input, the proposed model generates adequate and stylistic responses while producing the informative details. Examples in Table TABREF52 also demonstrate that the proposed model is able to generate responses with desired sentiments based on the informative details (e.g. “_ want us to target _ ones _”, “_ can make _ decision.” and “_ sound _ to me _”) contained in the retrieved response."
],
[
"In this work, we propose a novel PS framework to tackle the task of stylistic dialogue generation. Additionally, we propose a new stylistic response generator which works coherently with the proposed framework. We conduct extensive experiments on three benchmark datasets from two languages. Results of human and automatic evaluation show that the proposed approach outperforms many strong baselines by a substantial margin."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Text Style Transfer:",
"Related Work ::: Retrieval Guided Dialogue Generation:",
"Related Work ::: Stylistic Dialogue Generation:",
"Methodology",
"Methodology ::: Prototype Extraction",
"Methodology ::: Stylistic Response Generator",
"Methodology ::: Learning ::: Style-Aware Learning Objective",
"Methodology ::: Learning ::: De-noising Training",
"Datasets",
"Datasets ::: Gender-Specific Dialogue Dataset",
"Datasets ::: Emotion-Specific Dialogue Dataset",
"Datasets ::: Sentiment-Specific Dialogue Dataset",
"Experiments ::: Pretraining and Implementation Details",
"Experiments ::: Model Comparison",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Seq2seq:",
"Experiments ::: Model Comparison ::: Generative Approaches ::: GPT2-FT:",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Speaker:",
"Experiments ::: Model Comparison ::: Generative Approaches ::: ECM:",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Skeleton-to-Response (SR):",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST):",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Reranking (RRe):",
"Experiments ::: Model Comparison ::: Ablation Study ::: PS:",
"Experiments ::: Model Comparison ::: Ablation Study ::: PS w/o R:",
"Experiments ::: Evaluation Metrics",
"Experiments ::: Evaluation Metrics ::: Quality:",
"Experiments ::: Evaluation Metrics ::: Style Expression:",
"Experiments ::: Evaluation Metrics ::: Ranking:",
"Experiments ::: Main Results",
"Experiments ::: Further Analysis",
"Experiments ::: Further Analysis ::: Balance between Quality and Style",
"Experiments ::: Further Analysis ::: Cross-Domain Evaluation",
"Experiments ::: Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"537431564082f12904a3128720f82bfffb7443f9",
"899ffcc65bb4c50bad43e4b0b6d7bf317e8de3dc",
"9dcce15d1ed49451b8a220408f6a6d9fff9b7d0e"
],
"answer": [
{
"evidence": [
"Experiments ::: Evaluation Metrics ::: Ranking:",
"The annotators are further asked to jointly evaluate the content quality and the style expression of the generated responses from different approaches. Then the annotators give a ranking to each result where top 1 means the best."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Ranking:\nThe annotators are further asked to jointly evaluate the content quality and the style expression of the generated responses from different approaches. Then the annotators give a ranking to each result where top 1 means the best."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Experiments ::: Evaluation Metrics ::: Style Expression:",
"This metric measures how well the generated responses express the desired style. The annotators give a score ranging from 1 to 5 to this metric, where 5 means very strong style, 3 means no obvious style and 1 means very conflicted style. The style conflict means the generated style is conflicted to the desired one (e.g. female to male, positive to negative emotion)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Evaluation Metrics ::: Style Expression:\nThis metric measures how well the generated responses express the desired style. The annotators give a score ranging from 1 to 5 to this metric, where 5 means very strong style, 3 means no obvious style and 1 means very conflicted style. The style conflict means the generated style is conflicted to the desired one (e.g. female to male, positive to negative emotion)."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Experiments ::: Evaluation Metrics ::: Style Expression:",
"This metric measures how well the generated responses express the desired style. The annotators give a score ranging from 1 to 5 to this metric, where 5 means very strong style, 3 means no obvious style and 1 means very conflicted style. The style conflict means the generated style is conflicted to the desired one (e.g. female to male, positive to negative emotion)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Evaluation Metrics ::: Style Expression:\nThis metric measures how well the generated responses express the desired style. The annotators give a score ranging from 1 to 5 to this metric, where 5 means very strong style, 3 means no obvious style and 1 means very conflicted style. The style conflict means the generated style is conflicted to the desired one (e.g. female to male, positive to negative emotion)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2c563cd5e1881f182a17a5f2e6ff4897fe89beb4",
"4ac45443c98583e655f1296c48c9e52897cc8724"
],
"answer": [
{
"evidence": [
"We compare the proposed approach with several competitive baselines that can be categorized into two classes: generative approaches and retrieval-based approaches.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Seq2seq:",
"Standard sequence-to-sequence model with attention mechanism BIBREF39, BIBREF40.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: GPT2-FT:",
"To examine the effect of leveraging the pre-trained language model for the task of dialogue generation, we directly fine-tune the GPT-2 model on the dialogue data without any designed adaptations.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Speaker:",
"Model proposed by BIBREF16 which incorporates distributed style embeddings into the structure of decoding cells to control the generation process.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: ECM:",
"Model proposed by BIBREF11 which uses memory modules to control the stylistic expressions in the generated responses.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Skeleton-to-Response (SR):",
"Model proposed by BIBREF27 which modifies the retrieved response based on the lexical difference between the input and the retrieved query. This approach does not take the style aspect into consideration.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST):",
"For this approach, we apply the state-of-the-art style transfer BIBREF23 model on the retrieved response. This approach does not consider the input query information during the transfer process.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Reranking (RRe):",
"Given the input query, a style classifier is used to rerank the top 10 retrieved responses. The response with the highest score on the desired style is selected."
],
"extractive_spans": [
"Seq2seq",
"GPT2-FT",
"Speaker",
"ECM",
"Skeleton-to-Response (SR)",
"Retrieval + Style Transfer (RST)",
"Retrieval + Reranking (RRe)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the proposed approach with several competitive baselines that can be categorized into two classes: generative approaches and retrieval-based approaches.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: Seq2seq:\nStandard sequence-to-sequence model with attention mechanism BIBREF39, BIBREF40.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: GPT2-FT:\nTo examine the effect of leveraging the pre-trained language model for the task of dialogue generation, we directly fine-tune the GPT-2 model on the dialogue data without any designed adaptations.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: Speaker:\nModel proposed by BIBREF16 which incorporates distributed style embeddings into the structure of decoding cells to control the generation process.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: ECM:\nModel proposed by BIBREF11 which uses memory modules to control the stylistic expressions in the generated responses.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Skeleton-to-Response (SR):\nModel proposed by BIBREF27 which modifies the retrieved response based on the lexical difference between the input and the retrieved query. This approach does not take the style aspect into consideration.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST):\nFor this approach, we apply the state-of-the-art style transfer BIBREF23 model on the retrieved response. This approach does not consider the input query information during the transfer process.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Reranking (RRe):\nGiven the input query, a style classifier is used to rerank the top 10 retrieved responses. The response with the highest score on the desired style is selected."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare the proposed approach with several competitive baselines that can be categorized into two classes: generative approaches and retrieval-based approaches.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Seq2seq:",
"Standard sequence-to-sequence model with attention mechanism BIBREF39, BIBREF40.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: GPT2-FT:",
"To examine the effect of leveraging the pre-trained language model for the task of dialogue generation, we directly fine-tune the GPT-2 model on the dialogue data without any designed adaptations.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: Speaker:",
"Model proposed by BIBREF16 which incorporates distributed style embeddings into the structure of decoding cells to control the generation process.",
"Experiments ::: Model Comparison ::: Generative Approaches ::: ECM:",
"Model proposed by BIBREF11 which uses memory modules to control the stylistic expressions in the generated responses.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Skeleton-to-Response (SR):",
"Model proposed by BIBREF27 which modifies the retrieved response based on the lexical difference between the input and the retrieved query. This approach does not take the style aspect into consideration.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST):",
"For this approach, we apply the state-of-the-art style transfer BIBREF23 model on the retrieved response. This approach does not consider the input query information during the transfer process.",
"Experiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Reranking (RRe):",
"Given the input query, a style classifier is used to rerank the top 10 retrieved responses. The response with the highest score on the desired style is selected."
],
"extractive_spans": [
"Generative Approaches ::: Seq2seq",
"Generative Approaches ::: GPT2-FT:",
"Generative Approaches ::: Speaker:",
"Generative Approaches ::: ECM:",
"Retrieval-Based Approaches ::: Skeleton-to-Response (SR)",
"Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST)",
"Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST)",
"Retrieval-Based Approaches ::: Retrieval + Reranking (RRe)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the proposed approach with several competitive baselines that can be categorized into two classes: generative approaches and retrieval-based approaches.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: Seq2seq:\nStandard sequence-to-sequence model with attention mechanism BIBREF39, BIBREF40.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: GPT2-FT:\nTo examine the effect of leveraging the pre-trained language model for the task of dialogue generation, we directly fine-tune the GPT-2 model on the dialogue data without any designed adaptations.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: Speaker:\nModel proposed by BIBREF16 which incorporates distributed style embeddings into the structure of decoding cells to control the generation process.\n\nExperiments ::: Model Comparison ::: Generative Approaches ::: ECM:\nModel proposed by BIBREF11 which uses memory modules to control the stylistic expressions in the generated responses.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Skeleton-to-Response (SR):\nModel proposed by BIBREF27 which modifies the retrieved response based on the lexical difference between the input and the retrieved query. This approach does not take the style aspect into consideration.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Style Transfer (RST):\nFor this approach, we apply the state-of-the-art style transfer BIBREF23 model on the retrieved response. This approach does not consider the input query information during the transfer process.\n\nExperiments ::: Model Comparison ::: Retrieval-Based Approaches ::: Retrieval + Reranking (RRe):\nGiven the input query, a style classifier is used to rerank the top 10 retrieved responses. The response with the highest score on the desired style is selected"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"35cbe2695ec1bbfa9f3af7df29b823d75a6d95c3",
"9eef91715b10afbe6b78eca7bbe86dc761ae886b",
"b3178f884cd117ab48bd087611f6922995832f7d"
],
"answer": [
{
"evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
"extractive_spans": [],
"free_form_answer": "Chinese and English",
"highlighted_evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
"extractive_spans": [
"Chinese",
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
"extractive_spans": [],
"free_form_answer": "English and Chinese",
"highlighted_evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"852665b7f6e5aa0b646a6eed25269e25613444ac",
"de32f78361ae220924351d77ea51ca087abf59ac"
],
"answer": [
{
"evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation."
],
"extractive_spans": [
"gender-specific (Chinese) dataset",
"emotion-specific (Chinese) dataset",
"sentiment-specific (English) dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation.",
"Datasets ::: Gender-Specific Dialogue Dataset",
"We use a publicly available gender-specific dialogue dataset BIBREF33. In this dataset, each response contains one specific gender preference including Female, Male and Neutral.",
"Datasets ::: Emotion-Specific Dialogue Dataset",
"We use a publicly available emotion-specific dataset BIBREF11 which contains responses with 6 different emotions including Like, Disgust, Happy, Anger, Sad and Other.",
"Datasets ::: Sentiment-Specific Dialogue Dataset",
"To construct this dataset, we first build a classifier on the basis of BERT BIBREF34 and finetuned it on the the SemEval-2017 Subtask A dataset BIBREF35. This dataset consists of twitter instances with different sentiments including Positive, Negative and Neutral."
],
"extractive_spans": [
"Gender-Specific Dialogue Dataset",
"Emotion-Specific Dialogue Dataset",
"Sentiment-Specific Dialogue Dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct extensive experiments on three dialogue datasets: gender-specific (Chinese) dataset, emotion-specific (Chinese) dataset, and sentiment-specific (English) dataset. For each dataset, we randomly select 200 instances as a held-out test set for evaluation.\n\nDatasets ::: Gender-Specific Dialogue Dataset\nWe use a publicly available gender-specific dialogue dataset BIBREF33. In this dataset, each response contains one specific gender preference including Female, Male and Neutral.\n\nDatasets ::: Emotion-Specific Dialogue Dataset\nWe use a publicly available emotion-specific dataset BIBREF11 which contains responses with 6 different emotions including Like, Disgust, Happy, Anger, Sad and Other.\n\nDatasets ::: Sentiment-Specific Dialogue Dataset\nTo construct this dataset, we first build a classifier on the basis of BERT BIBREF34 and finetuned it on the the SemEval-2017 Subtask A dataset BIBREF35. This dataset consists of twitter instances with different sentiments including Positive, Negative and Neutral."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Is there a metric that also rewards good stylistic response?",
"What are existing baseline models on these benchmark datasets?",
"On what two languages is experimented on?",
"What three benchmark datasets are used?"
],
"question_id": [
"f7a27de3eb6447377eb48ef6d2201205ff943751",
"2df3cd12937591481e85cf78c96a24190ad69e50",
"fcb0ac1934e2fd9f58f4b459e6853999a27844f9",
"fc9aa04de4018b7d55e19a39663a2e9837328de7"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Prototype-to-Style Framework: It first constructs a neutral response prototype by masking the stylistic words from the retrieved response. The stylistic response generator then takes the extracted prototype and the desired language style information to generate an adequate and stylistic response.",
"Figure 2: Illustration of the proposed Stylistic Response Generator: The input representation is constructed by adding up four different level embeddings. By specifying different style embeddings, the model can generate responses with different language styles.",
"Figure 3: Illustration of de-noising training strategy.",
"Table 1: Data Statistic of Sentiment-Specific Dataset",
"Table 2: Evaluation Results on Gender-Specific Dialogue Generation: ↑ means the higher the better and ↓ means the lower the better, bold font denotes the best scores for each metric. Sign tests on evaluation scores show that the proposed model significantly outperforms other models with p-value < 0.05 with the only exception marked by †.",
"Table 3: Evaluation Results on Emotional-Specific Dialogue Generation",
"Table 4: Evaluation Results on Sentiment-Specific Dialogue Generation",
"Figure 4: Balance between Quality and Style: The≥ 3- ratio means the ratio of responses whose both scores are greater or equal to 3; ≥ 4-ratio means the ratio of responses whose both scores are greater or equal to 4.",
"Table 5: Examples of generated responses with different gender and emotion styles. The words in red color are the informative details that the model extracts from the retrieved response.",
"Figure 5: Blue and red dots represent the words in gender-specific and emotion-specific dataset. Each word wd is denoted as (xd, yd, zd) where (xd, yd) is TSNE representation of its pretrained Glove embeddings (Pennington et al., 2014) and zd is the word frequency in the corresponding dataset. A notable distribution discrepancy between two domains can be observed.",
"Figure 6: In-domain and cross-domain evaluations on the quality of generated responses. The red column represents the averaged quality score on in-domain test set, and the blue column denotes the averaged quality score after domain variation",
"Table 1: Examples of gender response classification dataset : Both Chinese and translated versions are provided.",
"Table 2: Data Statistic of Gender-Specific Dialogue Dataset",
"Table 3: Data Statistic of Sentiment-Specific Dialogue Dataset",
"Table 4: Cross-Domain Evaluation Results on Gender-Specific Dialogue Generation: (↑ means the higher the better",
"Table 5: Cross-Domain Evaluation Results on Emotional-Specific Dialogue Generation: (↑ means the higher the",
"Figure 1: In-domain and cross-domain evaluations on the quality of the generated responses. The red column represents the averaged quality score on in-domain test set, and the blue column denotes the averaged quality score after domain variation.",
"Figure 2: In-domain and cross-domain evaluations on the style expression of the generated responses. The red column represents the averaged ranking on in-domain test set, and the blue column denotes the averaged style expression score after domain variation.",
"Figure 3: In-domain and cross-domain evaluations on the ranking of generated responses. The red column represents the averaged ranking on in-domain test set, and the blue column denotes the averaged ranking after domain variation."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure4-1.png",
"8-Table5-1.png",
"8-Figure5-1.png",
"8-Figure6-1.png",
"12-Table1-1.png",
"12-Table2-1.png",
"13-Table3-1.png",
"13-Table4-1.png",
"14-Table5-1.png",
"14-Figure1-1.png",
"14-Figure2-1.png",
"15-Figure3-1.png"
]
} | [
"On what two languages is experimented on?"
] | [
[
"2004.02214-Datasets-0"
]
] | [
"English and Chinese"
] | 176 |
1911.05652 | Relative contributions of Shakespeare and Fletcher in Henry VIII: An Analysis Based on Most Frequent Words and Most Frequent Rhythmic Patterns | The versified play Henry VIII is nowadays widely recognized to be a collaborative work not written solely by William Shakespeare. We employ combined analysis of vocabulary and versification together with machine learning techniques to determine which authors also took part in the writing of the play and what were their relative contributions. Unlike most previous studies, we go beyond the attribution of particular scenes and use the rolling attribution approach to determine the probabilities of authorship of pieces of texts, without respecting the scene boundaries. Our results highly support the canonical division of the play between William Shakespeare and John Fletcher proposed by James Spedding, but also bring new evidence supporting the modifications proposed later by Thomas Merriam. | {
"paragraphs": [
[
"In the first collection of William Shakespeare’s works published in 1623 (the so-called First Folio) a play appears entitled The Famous History of the Life of King Henry the Eight for the very first time. Nowadays it is widely recognized that along with Shakespeare, other authors were involved in the writing of this play, yet there are different opinions as to who these authors were and what the precise shares were of their authorial contributions. This article aims to contribute to the question of the play’s authorship using combined analysis of vocabulary and versification and modern machine learning techniques (as proposed in BIBREF0, BIBREF1)."
],
[
"While the stylistic dissimilarity of Henry VIII (henceforth H8) to Shakespeare’s other plays had been pointed out before BIBREF2, it was not until the mid-nineteenth century that Shakespeare’s sole authorship was called into question. In 1850 British scholar James Spedding published an article BIBREF3 attributing several scenes to John Fletcher. Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal.",
"Since then many scholars have brought new evidence supporting Spedding’s division of the play based both on versification and linguistic features. This includes e.g. frequencies of enjambment BIBREF4, frequencies of particular types of unstressed line endings BIBREF5, BIBREF6, frequencies of contractions BIBREF7, vocabulary richness BIBREF8, phrase length measured by the number of words BIBREF9, or complex versification analysis BIBREF10, BIBREF11. From the very beginning, beside advocates of Shakespeare’s sole authorship (e.g. BIBREF13, BIBREF14), there were also those who supported alternative hypotheses concerning mixed authorship of either Shakespeare, Fletcher, and Philip Massinger BIBREF15, BIBREF16, BIBREF17, Fletcher and Massinger only BIBREF18, BIBREF19, Shakespeare and an unknown author BIBREF20, Shakespeare, Fletcher, Massinger, and an unknown author BIBREF21, BIBREF22 or Shakespeare and Fletcher with different shares than those proposed by Spedding BIBREF23.",
"More recent articles usually fall in the last mentioned category and attribute the play to Shakespeare and Fletcher (although the shares proposed by them differ). Thomas Horton BIBREF24 employed discriminant analysis of three sets of function words and on this basis attributed most of the scenes to Shakespeare or left them undecided. Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. Eisen, Riberio, Segarra, and Egan BIBREF28 used Word adjacency networks BIBREF29 to analyze the frequencies of collocations of selected function words in particular scenes of the play. In contrast to Spedding, they reattribute several scenes back to Shakespeare. Details on Spedding’s attribution as well as the ones mentioned in this paragraph are given in Table TABREF3.",
"In the present study, with regard to the aforementioned studies, Shakespeare, Fletcher, and Massinger are considered as candidates to the authorship of H8."
],
[
"In the first experiment we perform an attribution of individual scenes of H8 using the Support Vector Machine as a classifier and the frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words as a feature set. As training samples, individual scenes of plays written by Shakespeare, Fletcher, and Massinger are used that come roughly from the period when H8 was supposedly written, namely:",
"Shakespeare: The Tragedy of Coriolanus (5 scenes), The Tragedy of Cymbeline (27 scenes), The Winter’s Tale (12 scenes), The Tempest (9 scenes)",
"Fletcher: Valentinian (21 scenes), Monsieur Thomas (28 scenes), The Woman’s Prize (23 scenes), Bonduca (18 scenes)",
"Massinger: The Duke of Milan (10 scenes), The Unnatural Combat (11 scenes), The Renegado (25 scenes)",
"Altogether there are thus 53 training samples for Shakespeare, 90 training samples for Fletcher and 46 training samples for Massinger. In order to estimate the accuracy of the model, cross-validation is performed in the following way:",
"To avoid the risk of overfitting which may be caused by testing the model on the scenes from the same play as it was trained on, we do not perform a standard k-fold cross validation. Instead, we classify scenes of each play by a model trained on the rest, i.e. 5 scenes of Shakespeare’s Coriolanus are classified by a model trained on the scenes from the remaining 3 plays by Shakespeare, 4 plays by Fletcher and 5 plays by Massinger, 27 scenes of Cymbeline are classified in the same way and so on.",
"Since the training data are imbalanced (which may bias the results), we level the number of training samples per author by random selection.",
"To obtain more representative results, the entire process is repeated 30 times (with a new random selection in each iteration) thus resulting in 30 classifications of each scene.",
"For the sake of comparison of the attribution power of both feature subsets, cross-validations are performed not only of the combined models (500 words $\\cup $ 500 rhythmic types), but also of the words-based models (500 words) and versification-based models (500 rhythmic types) alone.",
"As shown in Table TABREF14, the versification-based models yield a very high accuracy with the recognition of Shakespeare and Fletcher (0.97 to 1 with the exception of Valentinian), yet slightly lower accuracy with the recognition of Massinger (0.81 to 0.88). The accuracy of words-based models remains very high across all three authors (0.95 to 1); in three cases it is nevertheless outperformed by the combined model. We thus may conclude that combined models provide a reliable discriminator between Shakespeare’s, Fletcher’s and Massinger’s styles.",
"Table TABREF19 gives the results of the classifiers when applied to the individual scenes of H8 on the basis of which we may conclude:",
"It is very unlikely that Massinger took part in the text of H8. Out of 17 scenes only 2 are attributed to Massinger by any of the models (2.1, 4.2), and in both cases by a mere minority of votes.",
"The probability that the text of H8 is a result of collaboration between Shakespeare and Fletcher is very high: with 7 scenes all the 30 models agree upon Shakespeare’s authorship, with 5 scenes all the 30 models agree upon Fletcher’s authorship.",
"Our results correspond to the Spedding’s attribution to a high extent. With the exception of two scenes, the majority of models always predict the same author to which it is attributed by Spedding. The two exceptions are the second scene of act 3, where Spedding supposed mixed authorship, and the first scene of act 4, which was originally attributed to Fletcher."
],
[
"Even though the classification of individual scenes clearly indicates that H8 is a result of collaboration between Shakespeare and Fletcher, we should not accept it as the final result since most studies suggest that—at least in the case of the second scene of act 3—the shift of authorship did not happen on the scenes’ boundaries (as shown in Table TABREF3). To get a more reliable picture of the authors’ shares, we’ve employed so called rolling attribution.",
"Rolling attribution was originally introduced by Maciej Eder BIBREF31 as a technique designed for cases involving mixed authorship. Unlike common tasks, in rolling attribution neither the entire text nor its logical parts (chapters, scenes etc.) are being classified but instead its overlapping parts of fixed length. Assume a text which one supposes to be a result of a collaboration between two (or more) authors consisting of $n$ lines $l_1, l_2, l_3, \\ldots , l_{n}$. Let $k$ and $d$ be arbitrarily chosen values so that $k \\in \\mathbb {N}$, $k < n$ and $d \\in \\mathbb {N}$, $d < n - k$, $d \\le k$. For each $i; i \\in \\lbrace 0, d, 2d, 3d, \\ldots \\rbrace , i < n - k$ a battery of attributions is performed of all the sections s consisting of lines $l_{i+1}, l_{i+2}, l_{i+3}, \\ldots , l_{i+k}$. To achieve a better sensitivity to authorship transitions Eder suggests not to work with simple predictions (labeling the section as being written by a single author) but—if it’s possible with a given classifier—rather a probability distribution over candidate authors.",
"We first test the performance of rolling attribution on 4 plays by Shakespeare and 4 plays by Fletcher contained in the training set. For each play we train 30 models on the remaining data with number of training samples randomly leveled in each iteration. Each target play is segmented into overlapping parts with $k = 100$ and $d = 5$ (each successive series of five lines except for the initial 19 and final 19 ones are thus classified 600 times—30 times within 20 different parts). The output of classification of each part is transformed to probability distribution using Platt’s scaling BIBREF32.",
"Fig. FIGREF21 gives the results for each of the eight plays. Each data point corresponds to a group of five lines and gives the mean probability of Shakespeare’s and Fletcher’s authorship. For the sake of clarity, the values for Fletcher are displayed as negative. The distance between Shakespeare’s data point and Fletcher’s data point thus always equals 1. The black curve gives the average of both values. The results suggest the rolling attribution method with combined versification and lexical features to be very reliable: (1) Probability of Fletcher’s authorship is very low for vast majority of Shakespeare’s work. The only place where Fletcher is assigned higher probability than Shakespeare is the sequence of 10 five-line groups in the second act of scene 2 of the Tempest. (2) Probability of Shakespeare’s authorship is very low for vast majority of Fletcher’s work. The only place where Shakespeare comes closer to Fletcher’s values is the first scene of act 5 of Bonduca. Having only 10 groups misattributed out of 4412 we may estimate the accuracy of rolling attribution to be as high as 0.9977 when distinguishing between Shakespeare and Fletcher.",
"After validation of the method we proceed to H8. Fig. FIGREF30 gives the results of rolling attribution based on a combined vector of most frequent types and most frequent words, and additionally for each of these feature subsets alone. Models were trained on all 8 plays in the training set with the same setting as above ($k = 100; d = 5$). It once again supports Spedding’s attribution to a high extent:",
"For scenes 1.1 and 1.2 rhythmic types, words as well as the combined model indicate Shakespeare to be the author. All three sets of models indicate that the shift of authorship happened at the end of scene 1.2.",
"For scenes 1.3, 1.4, 2.1 and 2.2 all three sets of models indicate Fletcher to be the author. Rhythmic types indicate that the shift of authorship happened at the end of 2.2, while word-based models indicate that the shift happened before the end of the scene. (Recall that the shift of authorship within 2.2 is proposed also by Thomas Merriam (cf. Table TABREF3) even though a little bit further at line 1164.)",
"Scenes 2.3 and 2.4 are according to all sets of models authored by Shakespeare. All three sets of models indicate that the shift happened at the end of scene 2.4.",
"According to all sets of models, scene 3.1 was written by Fletcher. All three sets of models indicate that the shift happened at the scene’s end.",
"Scene 3.2 is usually attributed to both Shakespeare and Fletcher. All three sets of models support this. While Spedding and other authors locate the shift to line 2081, all our sets of models indicate that it occurred later. Combined models locate it precisely at line 2200 (in agreement with earlier studies by Merriam BIBREF25, BIBREF26. A certain decrease in the probability of Shakespeare’s authorship found in the neighborhood of line 2081 in word-based models and combined models may support Merriam’s later attributions BIBREF27, i.e. mixed authorship even after the line 2081.",
"For scenes 4.1 and 4.2 the rhythmic types indicate Shakespeare’s authorship of the first (contrary to Spedding) and Fletcher’s authorship of the latter. Location of the shift does not however fully correspond to the scene boundaries. Probabilities extracted from word-based models and combined models are close to 0.5 for both authors which may support Merriam’s attribution (mixed authorship).",
"Scene 5.1 is according to all sets of models authored by Shakespeare. Rhythmic types and combined models locate the shift at its end; word-based models locate it a little later on.",
"Scenes 5.2, 5.3, 5.4 and 5.5 are Fletcher’s according to word-based models and combined models. Rhythmic types indicate the possibility of Shakespeare’s share in 5.4."
],
[
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely.",
"The rolling attribution method suggests that particular scenes are indeed mostly a work of a single author and that their contributions roughly correspond to what has been proposed by James Spedding BIBREF3. The main differences between our results and Spedding’s attribution are the ambivalent outputs of models for both scenes of act 4. However, it is worth noting that Spedding himself expressed some doubts about the authorship of these scenes. Other differences are rather marginal and usually support the modifications of Spedding’s original attribution, as proposed by Thomas Merriam BIBREF25, BIBREF26, BIBREF27."
]
],
"section_name": [
"Introduction",
"History and related works",
"Attribution of Particular Scenes",
"Rolling attribution of the play",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"9b3364bd681cb1e550b3f1a104bbacc1571ed4d6",
"aa2b540950267a1efa248f6eb5610b4868de83ff",
"fb10f8c9d4adb157b1a539122ecfb70768f436b1"
],
"answer": [
{
"evidence": [
"While the stylistic dissimilarity of Henry VIII (henceforth H8) to Shakespeare’s other plays had been pointed out before BIBREF2, it was not until the mid-nineteenth century that Shakespeare’s sole authorship was called into question. In 1850 British scholar James Spedding published an article BIBREF3 attributing several scenes to John Fletcher. Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"extractive_spans": [
"the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"While the stylistic dissimilarity of Henry VIII (henceforth H8) to Shakespeare’s other plays had been pointed out before BIBREF2, it was not until the mid-nineteenth century that Shakespeare’s sole authorship was called into question. In 1850 British scholar James Spedding published an article BIBREF3 attributing several scenes to John Fletcher. Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"extractive_spans": [
"the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"free_form_answer": "",
"highlighted_evidence": [
"Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"965d2107ed0935846365b6709010cc5cf0dd2a5a",
"fb27111db3c3d39a70652f08d7e53bc63aa4d827"
],
"answer": [
{
"evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"extractive_spans": [
"high reliability"
],
"free_form_answer": "",
"highlighted_evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"extractive_spans": [],
"free_form_answer": "very",
"highlighted_evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"4fcc24fb31ad69dac38871b3736402d09fc19c55",
"e65e97e6cb01a2aab6ca3934206c3df306b0c3ba",
"ec5087417c63ae2350aed4bbde9e96249d441b1b"
],
"answer": [
{
"evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Combined versification-based and word-based models trained on 17th century English drama yield a high accuracy of authorship recognition. We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We can thus state with high reliability that H8 is a result of collaboration between William Shakespeare and John Fletcher, while the participation of Philip Massinger is rather unlikely."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"While the stylistic dissimilarity of Henry VIII (henceforth H8) to Shakespeare’s other plays had been pointed out before BIBREF2, it was not until the mid-nineteenth century that Shakespeare’s sole authorship was called into question. In 1850 British scholar James Spedding published an article BIBREF3 attributing several scenes to John Fletcher. Spedding supported this with data from the domain of versification, namely the ratios of iambic lines ending with a stressed syllable (“The view of earthly glory: men might say”) to lines ending with an extra unstressed one (“Till this time pomp was single, but now married”), pointing out that the distribution of values across scenes is strongly bimodal."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"While the stylistic dissimilarity of Henry VIII (henceforth H8) to Shakespeare’s other plays had been pointed out before BIBREF2, it was not until the mid-nineteenth century that Shakespeare’s sole authorship was called into question."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"38f2d3e2492f54b6cce474f05f19361a67d70c93",
"4aba22fa8a7103ed8df31f4f7d2c7791a3200a87"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In the first experiment we perform an attribution of individual scenes of H8 using the Support Vector Machine as a classifier and the frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words as a feature set. As training samples, individual scenes of plays written by Shakespeare, Fletcher, and Massinger are used that come roughly from the period when H8 was supposedly written, namely:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the first experiment we perform an attribution of individual scenes of H8 using the Support Vector Machine as a classifier and the frequencies of 500 most frequent rhythmic types and the frequencies of 500 most frequent words as a feature set."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"a5664fb4d80831b5184c39026567876512218b13",
"c965f1d144c4b652b26af844e57d1dbf14865cc8",
"fec778e1a81b81ffbb210ac195908595d62b4e19"
],
"answer": [
{
"evidence": [
"More recent articles usually fall in the last mentioned category and attribute the play to Shakespeare and Fletcher (although the shares proposed by them differ). Thomas Horton BIBREF24 employed discriminant analysis of three sets of function words and on this basis attributed most of the scenes to Shakespeare or left them undecided. Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. Eisen, Riberio, Segarra, and Egan BIBREF28 used Word adjacency networks BIBREF29 to analyze the frequencies of collocations of selected function words in particular scenes of the play. In contrast to Spedding, they reattribute several scenes back to Shakespeare. Details on Spedding’s attribution as well as the ones mentioned in this paragraph are given in Table TABREF3."
],
"extractive_spans": [
"Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa."
],
"free_form_answer": "",
"highlighted_evidence": [
"Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. Eisen, Riberio, Segarra, and Egan BIBREF28 used Word adjacency networks BIBREF29 to analyze the frequencies of collocations of selected function words in particular scenes of the play. In contrast to Spedding, they reattribute several scenes back to Shakespeare. Details on Spedding’s attribution as well as the ones mentioned in this paragraph are given in Table TABREF3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"More recent articles usually fall in the last mentioned category and attribute the play to Shakespeare and Fletcher (although the shares proposed by them differ). Thomas Horton BIBREF24 employed discriminant analysis of three sets of function words and on this basis attributed most of the scenes to Shakespeare or left them undecided. Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. Eisen, Riberio, Segarra, and Egan BIBREF28 used Word adjacency networks BIBREF29 to analyze the frequencies of collocations of selected function words in particular scenes of the play. In contrast to Spedding, they reattribute several scenes back to Shakespeare. Details on Spedding’s attribution as well as the ones mentioned in this paragraph are given in Table TABREF3."
],
"extractive_spans": [
"a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa."
],
"free_form_answer": "",
"highlighted_evidence": [
"Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"More recent articles usually fall in the last mentioned category and attribute the play to Shakespeare and Fletcher (although the shares proposed by them differ). Thomas Horton BIBREF24 employed discriminant analysis of three sets of function words and on this basis attributed most of the scenes to Shakespeare or left them undecided. Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. Eisen, Riberio, Segarra, and Egan BIBREF28 used Word adjacency networks BIBREF29 to analyze the frequencies of collocations of selected function words in particular scenes of the play. In contrast to Spedding, they reattribute several scenes back to Shakespeare. Details on Spedding’s attribution as well as the ones mentioned in this paragraph are given in Table TABREF3."
],
"extractive_spans": [
"measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Thomas Merriam proposed a modification to Spedding’s original attribution concerning re-attribution of several parts of supposedly Fletcher’s scenes back to Shakespeare and vice versa. This was based on measuring the confidence intervals and principal component analysis of frequencies of selected function words in Shakespeare’s and Fletcher’s plays BIBREF25, controversial CUSUM technique concerning the occurrences of another set of selected function words and lines ending with an extra unstressed syllable BIBREF26 or principal component analysis of 64 most frequent words BIBREF27. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"4c76c354dc36c292e8b7c829d9d2cd6886fac0d0",
"8aeee059f13728d5ce97eebf43e7de0af6f97539"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What IS versification?",
"How confident is the conclusion about Shakespeare vs Flectcher?",
"Is Henry VIII reflective of Shakespeare in general?",
"Is vocabulary or versification more important for the analysis?",
"What are the modifications by Thomas Merriam?",
"What are stop words in Shakespeare?"
],
"question_id": [
"044cb5ef850c0a2073682bb31d919d504667f907",
"c845110efee2f633d47f5682573bc6091e8f5023",
"2301424672cb79297cf7ad95f23b58515e4acce8",
"6c05376cd0f011e00d1ada0254f6db808f33c3b7",
"9925e7d8757e8fd7411bcb5250bc08158a244fb3",
"fa468c31dd0f9095d7cec010f2262eeed565a7d2"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Selected attributions of Henry VIII. S denotes attribution of the scene to Shakespeare, F denotes Fletcher, N denotes unassigned, S* denotes Shakespeare with “mere Fletcherian interpolation”. Where the attribution gives precise division of the scene, the subscripted number indicates the last line of a given passage (Through Line Numbering as used in the Norton Facsimile of the First Folio).",
"Table 2: Accuracy of authorship recognition provided by the models based on (1) 500 most frequent rhythmic types, (2) 500 most frequent words, (3) 1000- dimensional vectors combining features (1) and (2). The number gives the share of correctly classified scenes through all 30 iterations.",
"Table 3: Classification of individual scenes of H8. The number indicates how many times out of 30 iterations the author has been predicted to a given scene. The highest value in each row is printed in bold. The rightmost column indicates to which author the scene is attributed by Spedding. Where Spedding differs from our results, we use a bold face.",
"Fig. 1: Rolling attribution of 4 plays by Shakespeare and 4 plays by Fletcher based on 500 most frequent rhythmic types and 500 most frequent words. Vertical lines indicate scene boundaries.",
"Fig. 2: Rolling attribution of H8 based on 500 most frequent rhythmic types and 500 most frequent words. Vertical lines indicate scene boundaries (label on top) or other landmark indicated in other articles (label on bottom giving the line number according to TLN as used in the Norton Facsimile of the First Folio). Dashed line indicates results of rolling attribution based solely on 500 most frequent rhythmic types, dotted line indicates results of rolling attribution based solely on 500 most frequent words."
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Figure1-1.png",
"8-Figure2-1.png"
]
} | [
"How confident is the conclusion about Shakespeare vs Flectcher?"
] | [
[
"1911.05652-Conclusions-0"
]
] | [
"very"
] | 177 |
1703.10090 | A Short Review of Ethical Challenges in Clinical Natural Language Processing | Clinical NLP has an immense potential in contributing to how clinical practice will be revolutionized by the advent of large scale processing of clinical records. However, this potential has remained largely untapped due to slow progress primarily caused by strict data access policies for researchers. In this paper, we discuss the concern for privacy and the measures it entails. We also suggest sources of less sensitive data. Finally, we draw attention to biases that can compromise the validity of empirical research and lead to socially harmful applications. | {
"paragraphs": [
[
"The use of notes written by healthcare providers in the clinical settings has long been recognized to be a source of valuable information for clinical practice and medical research. Access to large quantities of clinical reports may help in identifying causes of diseases, establishing diagnoses, detecting side effects of beneficial treatments, and monitoring clinical outcomes BIBREF0 , BIBREF1 , BIBREF2 . The goal of clinical natural language processing (NLP) is to develop and apply computational methods for linguistic analysis and extraction of knowledge from free text reports BIBREF3 , BIBREF4 , BIBREF5 . But while the benefits of clinical NLP and data mining have been universally acknowledged, progress in the development of clinical NLP techniques has been slow. Several contributing factors have been identified, most notably difficult access to data, limited collaboration between researchers from different groups, and little sharing of implementations and trained models BIBREF6 . For comparison, in biomedical NLP, where the working data consist of biomedical research literature, these conditions have been present to a much lesser degree, and the progress has been more rapid BIBREF7 . The main contributing factor to this situation has been the sensitive nature of data, whose processing may in certain situations put patient's privacy at risk.",
"The ethics discussion is gaining momentum in general NLP BIBREF8 . We aim in this paper to gather the ethical challenges that are especially relevant for clinical NLP, and to stimulate discussion about those in the broader NLP community. Although enhancing privacy through restricted data access has been the norm, we do not only discuss the right to privacy, but also draw attention to the social impact and biases emanating from clinical notes and their processing. The challenges we describe here are in large part not unique to clinical NLP, and are applicable to general data science as well."
],
[
"Because of legal and institutional concerns arising from the sensitivity of clinical data, it is difficult for the NLP community to gain access to relevant data BIBREF9 , BIBREF10 . This is especially true for the researchers not connected with a healthcare organization. Corpora with transparent access policies that are within reach of NLP researchers exist, but are few. An often used corpus is MIMICII(I) BIBREF11 , BIBREF12 . Despite its large size (covering over 58,000 hospital admissions), it is only representative of patients from a particular clinical domain (the intensive care in this case) and geographic location (a single hospital in the United States). Assuming that such a specific sample is representative of a larger population is an example of sampling bias (we discuss further sources of bias in section \"Social impact and biases\" ). Increasing the size of a sample without recognizing that this sample is atypical for the general population (e.g. not all patients are critical care patients) could also increase sampling bias BIBREF13 . We need more large corpora for various medical specialties, narrative types, as well as languages and geographic areas.",
"Related to difficult access to raw clinical data is the lack of available annotated datasets for model training and benchmarking. The reality is that annotation projects do take place, but are typically constrained to a single healthcare organization. Therefore, much of the effort put into annotation is lost afterwards due to impossibility of sharing with the larger research community BIBREF6 , BIBREF14 . Again, exceptions are either few—e.g. THYME BIBREF15 , a corpus annotated with temporal information—or consist of small datasets resulting from shared tasks like the i2b2 and ShARe/CLEF. In addition, stringent access policies hamper reproduction efforts, impede scientific oversight and limit collaboration, not only between institutions but also more broadly between the clinical and NLP communities.",
"There are known cases of datasets that had been used in published research (including reproduction) in its full form, like MiPACQ, Blulab, EMC Dutch Clinical Corpus and 2010 i2b2/VA BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , but were later trimmed down or made unavailable, likely due to legal issues. Even if these datasets were still available in full, their small size is still a concern, and the comments above regarding sampling bias certainly apply. For example, a named entity recognizer trained on 2010 i2b2/VA data, which consists of 841 annotated patient records from three different specialty areas, will due to its size only contain a small portion of possible named entities. Similarly, in linking clinical concepts to an ontology, where the number of output classes is larger BIBREF20 , the small amount of training data is a major obstacle to deployment of systems suitable for general use."
],
[
"Clinical notes contain detailed information about patient-clinician encounters in which patients confide not only their health complaints, but also their lifestyle choices and possibly stigmatizing conditions. This confidential relationship is legally protected in US by the HIPAA privacy rule in the case of individuals' medical data. In EU, the conditions for scientific usage of health data are set out in the General Data Protection Regulation (GDPR). Sanitization of sensitive data categories and individuals' informed consent are in the forefront of those legislative acts and bear immediate consequences for the NLP research.",
"The GDPR lists general principles relating to processing of personal data, including that processing must be lawful (e.g. by means of consent), fair and transparent; it must be done for explicit and legitimate purposes; and the data should be kept limited to what is necessary and as long as necessary. This is known as data minimization, and it includes sanitization. The scientific usage of health data concerns “special categories of personal data\". Their processing is only allowed when the data subject gives explicit consent, or the personal data is made public by the data subject. Scientific usage is defined broadly and includes technological development, fundamental and applied research, as well as privately funded research.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Sanitization Sanitization techniques are often seen as the minimum requirement for protecting individuals' privacy when collecting data BIBREF21 , BIBREF22 . The goal is to apply a procedure that produces a new version of the dataset that looks like the original for the purposes of data analysis, but which maintains the privacy of those in the dataset to a certain degree, depending on the technique. Documents can be sanitized by replacing, removing or otherwise manipulating the sensitive mentions such as names and geographic locations. A distinction is normally drawn between anonymization, pseudonymization and de-identification. We refer the reader to Polonetsky et al. PolonetskyEtAl2016 for an excellent overview of these procedures.",
"Although it is a necessary first step in protecting the privacy of patients, sanitization has been criticized for several reasons. First, it affects the integrity of the data, and as a consequence, their utility BIBREF23 . Second, although sanitization in principle promotes data access and sharing, it may often not be sufficient to eliminate the need for consent. This is largely due to the well-known fact that original sensitive data can be re-identified through deductive disclosure BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 . Finally, sanitization focuses on protecting the individual, whereas ethical harms are still possible on the group level BIBREF30 , BIBREF31 . Instead of working towards increasingly restrictive sanitization and access measures, another course of action could be to work towards heightening the perception of scientific work, emphasizing professionalism and existence of punitive measures for illegal actions BIBREF32 , BIBREF33 .",
"paragraph4 0.9ex plus1ex minus.2ex-1em Consent Clinical NLP typically requires a large amount of clinical records describing cases of patients with a particular condition. Although obtaining consent is a necessary first step, obtaining explicit informed consent from each patient can also compromise the research in several ways. First, obtaining consent is time consuming by itself, and it results in financial and bureaucratic burdens. It can also be infeasible due to practical reasons such as a patient's death. Next, it can introduce bias as those willing to grant consent represent a skewed population BIBREF34 . Finally, it can be difficult to satisfy the informedness criterion: Information about the experiment sometimes can not be communicated in an unambiguous way, or experiments happen at speed that makes enacting informed consent extremely hard BIBREF35 .",
"The alternative might be a default opt-in policy with a right to withdraw (opt-out). Here, consent can be presumed either in a broad manner—allowing unspecified future research, subject to ethical restrictions—or a tiered manner—allowing certain areas of research but not others BIBREF33 , BIBREF36 . Since the information about the intended use is no longer uniquely tied to each research case but is more general, this could facilitate the reuse of datasets by several research teams, without the need to ask for consent each time. The success of implementing this approach in practice is likely to depend on public trust and awareness about possible risks and opportunities. We also believe that a distinction between academic research and commercial use of clinical data should be implemented, as the public is more willing to allow research than commercial exploitation BIBREF37 , BIBREF38 .",
"Yet another possibility is open consent, in which individuals make their data publicly available. Initiatives like Personal Genome Project may have an exemplary role, however, they can only provide limited data and they represent a biased population sample BIBREF33 .",
"paragraph4 0.9ex plus1ex minus.2ex-1em Secure access Since withholding data from researchers would be a dubious way of ensuring confidentiality BIBREF21 , the research has long been active on secure access and storage of sensitive clinical data, and the balance between the degree of privacy loss and the degree of utility. This is a broad topic that is outside the scope of this article. The interested reader can find the relevant information in Dwork and Pottenger DworkAndPottenger2013, Malin et al. MalinEtAL2013 and Rindfleisch Rindfleisch1997.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Promotion of knowledge and application of best-of-class approaches to health data is seen as one of the ethical duties of researchers BIBREF23 , BIBREF37 . But for this to be put in practice, ways need to be guaranteed (e.g. with government help) to provide researchers with access to the relevant data. Researchers can also go to the data rather than have the data sent to them. It is an open question though whether medical institutions—especially those with less developed research departments—can provide the infrastructure (e.g. enough CPU and GPU power) needed in statistical NLP. Also, granting access to one healthcare organization at a time does not satisfy interoperability (cross-organizational data sharing and research), which can reduce bias by allowing for more complete input data. Interoperability is crucial for epidemiology and rare disease research, where data from one institution can not yield sufficient statistical power BIBREF13 .",
"paragraph4 0.9ex plus1ex minus.2ex-1em Are there less sensitive data? One criterion which may have influence on data accessibility is whether the data is about living subjects or not. The HIPAA privacy rule under certain conditions allows disclosure of personal health information of deceased persons, without the need to seek IRB agreement and without the need for sanitization BIBREF39 . It is not entirely clear though how often this possibility has been used in clinical NLP research or broader.",
"Next, the work on surrogate data has recently seen a surge in activity. Increasingly more health-related texts are produced in social media BIBREF40 , and patient-generated data are available online. Admittedly, these may not resemble the clinical discourse, yet they bear to the same individuals whose health is documented in the clinical reports. Indeed, linking individuals' health information from online resources to their health records to improve documentation is an active line of research BIBREF41 . Although it is generally easier to obtain access to social media data, the use of social media still requires similar ethical considerations as in the clinical domain. See for example the influential study on emotional contagion in Facebook posts by Kramer et al. KramerEtAl2014, which has been criticized for not properly gaining prior consent from the users who were involved in the study BIBREF42 .",
"Another way of reducing sensitivity of data and improving chances for IRB approval is to work on derived data. Data that can not be used to reconstruct the original text (and when sanitized, can not directly re-identify the individual) include text fragments, various statistics and trained models. Working on randomized subsets of clinical notes may also improve the chances of obtaining the data. When we only have access to trained models from disparate sources, we can refine them through ensembling and creation of silver standard corpora, cf. Rebholz-Schuhmann et al. RebholzSchuhmannEtAl2011.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data. As an example, around 40M clinical documents from different veterinary clinics in UK and Australia are stored centrally in the VetCompass repository. First NLP steps in this direction were described in the invited talk at the Clinical NLP 2016 workshop BIBREF43 ."
],
[
"Unlocking knowledge from free text in the health domain has a tremendous societal value. However, discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models. The question is therefore what the most important biases are and how to overcome them, not only out of ethical but also legal responsibility. Related to the question of bias is so-called algorithm transparency BIBREF44 , BIBREF45 , as this right to explanation requires that influences of bias in training data are charted. In addition to sampling bias, which we introduced in section 2, we discuss in this section further sources of bias. Unlike sampling bias, which is a corpus-level bias, these biases here are already present in documents, and therefore hard to account for by introducing larger corpora.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Data quality Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them. As language is situated, a lot of information may be implicit, such as the circumstances in which treatment decisions are made BIBREF47 . If we fail to detect a medical concept during automated processing, this can not necessarily be a sign of negative evidence. Work on identifying and imputing missing values holds promise for reducing incompleteness, see Lipton et al. LiptonEtAl2016 for an example in sequential modeling applied to diagnosis classification.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Reporting bias Clinical texts may include bias coming from both patient's and clinician's reporting. Clinicians apply their subjective judgments to what is important during the encounter with patients. In other words, there is separation between, on the one side, what is observed by the clinician and communicated by the patient, and on the other, what is noted down. Cases of more serious illness may be more accurately documented as a result of clinician's bias (increased attention) and patient's recall bias. On the other hand, the cases of stigmatized diseases may include suppressed information. In the case of traffic injuries, documentation may even be distorted to avoid legal consequences BIBREF48 .",
"We need to be aware that clinical notes may reflect health disparities. These can originate from prejudices held by healthcare practitioners which may impact patients' perceptions; they can also originate from communication difficulties in the case of ethnic differences BIBREF49 . Finally, societal norms can play a role. Brady et al. BradyEtAl2016 find that obesity is often not documented equally well for both sexes in weight-addressing clinics. Young males are less likely to be recognized as obese, possibly due to societal norms seeing them as “stocky\" as opposed to obese. Unless we are aware of such bias, we may draw premature conclusions about the impact of our results.",
"It is clear that during processing of clinical texts, we should strive to avoid reinforcing the biases. It is difficult to give a solution on how to actually reduce the reporting bias after the fact. One possibility might be to model it. If we see clinical reports as noisy annotations for the patient story in which information is left-out or altered, we could try to decouple the bias from the reports. Inspiration could be drawn, for example, from the work on decoupling reporting bias from annotations in visual concept recognition BIBREF50 .",
"paragraph4 0.9ex plus1ex minus.2ex-1em Observational bias Although variance in health outcome is affected by social, environmental and behavioral factors, these are rarely noted in clinical reports BIBREF13 . The bias of missing explanatory factors because they can not be identified within the given experimental setting is also known as the streetlight effect. In certain cases, we could obtain important prior knowledge (e.g. demographic characteristics) from data other than clinical notes.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Dual use We have already mentioned linking personal health information from online texts to clinical records as a motivation for exploring surrogate data sources. However, this and many other applications also have potential to be applied in both beneficial and harmful ways. It is easy to imagine how sensitive information from clinical notes can be revealed about an individual who is present in social media with a known identity. More general examples of dual use are when the NLP tools are used to analyze clinical notes with a goal of determining individuals' insurability and employability."
],
[
"In this paper, we reviewed some challenges that we believe are central to the work in clinical NLP. Difficult access to data due to privacy concerns has been an obstacle to progress in the field. We have discussed how the protection of privacy through sanitization measures and the requirement for informed consent may affect the work in this domain. Perhaps, it is time to rethink the right to privacy in health in the light of recent work in ethics of big data, especially its uneasy relationship to the right to science, i.e. being able to benefit from science and participate in it BIBREF51 , BIBREF52 . We also touched upon possible sources of bias that can have an effect on the application of NLP in the health domain, and which can ultimately lead to unfair or harmful treatment."
],
[
"We would like to thank Madhumita and the anonymous reviewers for useful comments. Part of this research was carried out in the framework of the Accumulate IWT SBO project, funded by the government agency for Innovation by Science and Technology (IWT). "
]
],
"section_name": [
"Introduction",
"Sensitivity of data and privacy",
"Protecting the individual",
"Social impact and biases",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"476412f0f96437196074ec65a58141a0df0c1fd0",
"82db5b5bc35bcfbd125dab6c49e887d878a96442",
"d37953a8c60f334e7a38bb0902a19ec12f6d3584"
],
"answer": [
{
"evidence": [
"Because of legal and institutional concerns arising from the sensitivity of clinical data, it is difficult for the NLP community to gain access to relevant data BIBREF9 , BIBREF10 . This is especially true for the researchers not connected with a healthcare organization. Corpora with transparent access policies that are within reach of NLP researchers exist, but are few. An often used corpus is MIMICII(I) BIBREF11 , BIBREF12 . Despite its large size (covering over 58,000 hospital admissions), it is only representative of patients from a particular clinical domain (the intensive care in this case) and geographic location (a single hospital in the United States). Assuming that such a specific sample is representative of a larger population is an example of sampling bias (we discuss further sources of bias in section \"Social impact and biases\" ). Increasing the size of a sample without recognizing that this sample is atypical for the general population (e.g. not all patients are critical care patients) could also increase sampling bias BIBREF13 . We need more large corpora for various medical specialties, narrative types, as well as languages and geographic areas.",
"Related to difficult access to raw clinical data is the lack of available annotated datasets for model training and benchmarking. The reality is that annotation projects do take place, but are typically constrained to a single healthcare organization. Therefore, much of the effort put into annotation is lost afterwards due to impossibility of sharing with the larger research community BIBREF6 , BIBREF14 . Again, exceptions are either few—e.g. THYME BIBREF15 , a corpus annotated with temporal information—or consist of small datasets resulting from shared tasks like the i2b2 and ShARe/CLEF. In addition, stringent access policies hamper reproduction efforts, impede scientific oversight and limit collaboration, not only between institutions but also more broadly between the clinical and NLP communities.",
"There are known cases of datasets that had been used in published research (including reproduction) in its full form, like MiPACQ, Blulab, EMC Dutch Clinical Corpus and 2010 i2b2/VA BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , but were later trimmed down or made unavailable, likely due to legal issues. Even if these datasets were still available in full, their small size is still a concern, and the comments above regarding sampling bias certainly apply. For example, a named entity recognizer trained on 2010 i2b2/VA data, which consists of 841 annotated patient records from three different specialty areas, will due to its size only contain a small portion of possible named entities. Similarly, in linking clinical concepts to an ontology, where the number of output classes is larger BIBREF20 , the small amount of training data is a major obstacle to deployment of systems suitable for general use.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data. As an example, around 40M clinical documents from different veterinary clinics in UK and Australia are stored centrally in the VetCompass repository. First NLP steps in this direction were described in the invited talk at the Clinical NLP 2016 workshop BIBREF43 ."
],
"extractive_spans": [],
"free_form_answer": "MIMICII(I), THYME, results from i2b2 and ShARe/CLEF shared task, MiPACQ, Blulab, EMC Dutch Clinical Corpus, 2010 i2b2/VA, VetCompass",
"highlighted_evidence": [
"An often used corpus is MIMICII(I)",
"Again, exceptions are either few—e.g. THYME BIBREF15 , a corpus annotated with temporal information—or consist of small datasets resulting from shared tasks like the i2b2 and ShARe/CLEF.",
"There are known cases of datasets that had been used in published research (including reproduction) in its full form, like MiPACQ, Blulab, EMC Dutch Clinical Corpus and 2010 i2b2/VA BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , but were later trimmed down or made unavailable, likely due to legal issues",
"As an example, around 40M clinical documents from different veterinary clinics in UK and Australia are stored centrally in the VetCompass repository"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"paragraph4 0.9ex plus1ex minus.2ex-1em Are there less sensitive data? One criterion which may have influence on data accessibility is whether the data is about living subjects or not. The HIPAA privacy rule under certain conditions allows disclosure of personal health information of deceased persons, without the need to seek IRB agreement and without the need for sanitization BIBREF39 . It is not entirely clear though how often this possibility has been used in clinical NLP research or broader.",
"Next, the work on surrogate data has recently seen a surge in activity. Increasingly more health-related texts are produced in social media BIBREF40 , and patient-generated data are available online. Admittedly, these may not resemble the clinical discourse, yet they bear to the same individuals whose health is documented in the clinical reports. Indeed, linking individuals' health information from online resources to their health records to improve documentation is an active line of research BIBREF41 . Although it is generally easier to obtain access to social media data, the use of social media still requires similar ethical considerations as in the clinical domain. See for example the influential study on emotional contagion in Facebook posts by Kramer et al. KramerEtAl2014, which has been criticized for not properly gaining prior consent from the users who were involved in the study BIBREF42 .",
"Another way of reducing sensitivity of data and improving chances for IRB approval is to work on derived data. Data that can not be used to reconstruct the original text (and when sanitized, can not directly re-identify the individual) include text fragments, various statistics and trained models. Working on randomized subsets of clinical notes may also improve the chances of obtaining the data. When we only have access to trained models from disparate sources, we can refine them through ensembling and creation of silver standard corpora, cf. Rebholz-Schuhmann et al. RebholzSchuhmannEtAl2011.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data. As an example, around 40M clinical documents from different veterinary clinics in UK and Australia are stored centrally in the VetCompass repository. First NLP steps in this direction were described in the invited talk at the Clinical NLP 2016 workshop BIBREF43 ."
],
"extractive_spans": [
"deceased persons",
"surrogate data",
"derived data",
"veterinary texts"
],
"free_form_answer": "",
"highlighted_evidence": [
"One criterion which may have influence on data accessibility is whether the data is about living subjects or not. The HIPAA privacy rule under certain conditions allows disclosure of personal health information of deceased persons, without the need to seek IRB agreement and without the need for sanitization BIBREF39 .",
"Next, the work on surrogate data has recently seen a surge in activity. Increasingly more health-related texts are produced in social media BIBREF40 , and patient-generated data are available online. Admittedly, these may not resemble the clinical discourse, yet they bear to the same individuals whose health is documented in the clinical reports.",
"Another way of reducing sensitivity of data and improving chances for IRB approval is to work on derived data. Data that can not be used to reconstruct the original text (and when sanitized, can not directly re-identify the individual) include text fragments, various statistics and trained models.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"paragraph4 0.9ex plus1ex minus.2ex-1em Are there less sensitive data? One criterion which may have influence on data accessibility is whether the data is about living subjects or not. The HIPAA privacy rule under certain conditions allows disclosure of personal health information of deceased persons, without the need to seek IRB agreement and without the need for sanitization BIBREF39 . It is not entirely clear though how often this possibility has been used in clinical NLP research or broader.",
"Next, the work on surrogate data has recently seen a surge in activity. Increasingly more health-related texts are produced in social media BIBREF40 , and patient-generated data are available online. Admittedly, these may not resemble the clinical discourse, yet they bear to the same individuals whose health is documented in the clinical reports. Indeed, linking individuals' health information from online resources to their health records to improve documentation is an active line of research BIBREF41 . Although it is generally easier to obtain access to social media data, the use of social media still requires similar ethical considerations as in the clinical domain. See for example the influential study on emotional contagion in Facebook posts by Kramer et al. KramerEtAl2014, which has been criticized for not properly gaining prior consent from the users who were involved in the study BIBREF42 .",
"Another way of reducing sensitivity of data and improving chances for IRB approval is to work on derived data. Data that can not be used to reconstruct the original text (and when sanitized, can not directly re-identify the individual) include text fragments, various statistics and trained models. Working on randomized subsets of clinical notes may also improve the chances of obtaining the data. When we only have access to trained models from disparate sources, we can refine them through ensembling and creation of silver standard corpora, cf. Rebholz-Schuhmann et al. RebholzSchuhmannEtAl2011.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data. As an example, around 40M clinical documents from different veterinary clinics in UK and Australia are stored centrally in the VetCompass repository. First NLP steps in this direction were described in the invited talk at the Clinical NLP 2016 workshop BIBREF43 ."
],
"extractive_spans": [
"personal health information of deceased persons",
"surrogate data",
"derived data. Data that can not be used to reconstruct the original text",
"veterinary texts"
],
"free_form_answer": "",
"highlighted_evidence": [
"One criterion which may have influence on data accessibility is whether the data is about living subjects or not. The HIPAA privacy rule under certain conditions allows disclosure of personal health information of deceased persons, without the need to seek IRB agreement and without the need for sanitization ",
"Next, the work on surrogate data has recently seen a surge in activity. Increasingly more health-related texts are produced in social media BIBREF40 , and patient-generated data are available online.",
"Another way of reducing sensitivity of data and improving chances for IRB approval is to work on derived data. Data that can not be used to reconstruct the original text (and when sanitized, can not directly re-identify the individual) include text fragments, various statistics and trained models.",
"Finally, clinical NLP is also possible on veterinary texts. Records of companion animals are perhaps less likely to involve legal issues, while still amounting to a large pool of data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"08a77843a23b7ce3870829ca2e66177ccc043d30"
]
},
{
"annotation_id": [
"473c1dd99342a2d0a2d834e39b1db5c0dcb941ff",
"d9d9da75c47cafea26475fdb50c83ef9e62967a1"
],
"answer": [
{
"evidence": [
"Unlocking knowledge from free text in the health domain has a tremendous societal value. However, discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models. The question is therefore what the most important biases are and how to overcome them, not only out of ethical but also legal responsibility. Related to the question of bias is so-called algorithm transparency BIBREF44 , BIBREF45 , as this right to explanation requires that influences of bias in training data are charted. In addition to sampling bias, which we introduced in section 2, we discuss in this section further sources of bias. Unlike sampling bias, which is a corpus-level bias, these biases here are already present in documents, and therefore hard to account for by introducing larger corpora.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Data quality Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them. As language is situated, a lot of information may be implicit, such as the circumstances in which treatment decisions are made BIBREF47 . If we fail to detect a medical concept during automated processing, this can not necessarily be a sign of negative evidence. Work on identifying and imputing missing values holds promise for reducing incompleteness, see Lipton et al. LiptonEtAl2016 for an example in sequential modeling applied to diagnosis classification.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Reporting bias Clinical texts may include bias coming from both patient's and clinician's reporting. Clinicians apply their subjective judgments to what is important during the encounter with patients. In other words, there is separation between, on the one side, what is observed by the clinician and communicated by the patient, and on the other, what is noted down. Cases of more serious illness may be more accurately documented as a result of clinician's bias (increased attention) and patient's recall bias. On the other hand, the cases of stigmatized diseases may include suppressed information. In the case of traffic injuries, documentation may even be distorted to avoid legal consequences BIBREF48 .",
"We need to be aware that clinical notes may reflect health disparities. These can originate from prejudices held by healthcare practitioners which may impact patients' perceptions; they can also originate from communication difficulties in the case of ethnic differences BIBREF49 . Finally, societal norms can play a role. Brady et al. BradyEtAl2016 find that obesity is often not documented equally well for both sexes in weight-addressing clinics. Young males are less likely to be recognized as obese, possibly due to societal norms seeing them as “stocky\" as opposed to obese. Unless we are aware of such bias, we may draw premature conclusions about the impact of our results.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Observational bias Although variance in health outcome is affected by social, environmental and behavioral factors, these are rarely noted in clinical reports BIBREF13 . The bias of missing explanatory factors because they can not be identified within the given experimental setting is also known as the streetlight effect. In certain cases, we could obtain important prior knowledge (e.g. demographic characteristics) from data other than clinical notes.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Dual use We have already mentioned linking personal health information from online texts to clinical records as a motivation for exploring surrogate data sources. However, this and many other applications also have potential to be applied in both beneficial and harmful ways. It is easy to imagine how sensitive information from clinical notes can be revealed about an individual who is present in social media with a known identity. More general examples of dual use are when the NLP tools are used to analyze clinical notes with a goal of determining individuals' insurability and employability."
],
"extractive_spans": [
"Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them. As language is situated, a lot of information may be implicit, such as the circumstances in which treatment decisions are made",
"discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models.",
"Clinical texts may include bias coming from both patient's and clinician's reporting.",
"prejudices held by healthcare practitioners which may impact patients' perceptions",
"communication difficulties in the case of ethnic differences",
"Observational bias Although variance in health outcome is affected by social, environmental and behavioral factors, these are rarely noted in clinical reports",
"Dual use"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models.",
"Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them. As language is situated, a lot of information may be implicit, such as the circumstances in which treatment decisions are made",
"Clinical texts may include bias coming from both patient's and clinician's reporting. Clinicians apply their subjective judgments to what is important during the encounter with patients.",
"We need to be aware that clinical notes may reflect health disparities. These can originate from prejudices held by healthcare practitioners which may impact patients' perceptions; they can also originate from communication difficulties in the case of ethnic differences BIBREF49 . Finally, societal norms can play a role.",
"Observational bias Although variance in health outcome is affected by social, environmental and behavioral factors, these are rarely noted in clinical reports BIBREF13 . ",
"Dual use",
" More general examples of dual use are when the NLP tools are used to analyze clinical notes with a goal of determining individuals' insurability and employability."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Because of legal and institutional concerns arising from the sensitivity of clinical data, it is difficult for the NLP community to gain access to relevant data BIBREF9 , BIBREF10 . This is especially true for the researchers not connected with a healthcare organization. Corpora with transparent access policies that are within reach of NLP researchers exist, but are few. An often used corpus is MIMICII(I) BIBREF11 , BIBREF12 . Despite its large size (covering over 58,000 hospital admissions), it is only representative of patients from a particular clinical domain (the intensive care in this case) and geographic location (a single hospital in the United States). Assuming that such a specific sample is representative of a larger population is an example of sampling bias (we discuss further sources of bias in section \"Social impact and biases\" ). Increasing the size of a sample without recognizing that this sample is atypical for the general population (e.g. not all patients are critical care patients) could also increase sampling bias BIBREF13 . We need more large corpora for various medical specialties, narrative types, as well as languages and geographic areas.",
"Unlocking knowledge from free text in the health domain has a tremendous societal value. However, discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models. The question is therefore what the most important biases are and how to overcome them, not only out of ethical but also legal responsibility. Related to the question of bias is so-called algorithm transparency BIBREF44 , BIBREF45 , as this right to explanation requires that influences of bias in training data are charted. In addition to sampling bias, which we introduced in section 2, we discuss in this section further sources of bias. Unlike sampling bias, which is a corpus-level bias, these biases here are already present in documents, and therefore hard to account for by introducing larger corpora.",
"paragraph4 0.9ex plus1ex minus.2ex-1em Data quality Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them. As language is situated, a lot of information may be implicit, such as the circumstances in which treatment decisions are made BIBREF47 . If we fail to detect a medical concept during automated processing, this can not necessarily be a sign of negative evidence. Work on identifying and imputing missing values holds promise for reducing incompleteness, see Lipton et al. LiptonEtAl2016 for an example in sequential modeling applied to diagnosis classification.",
"We need to be aware that clinical notes may reflect health disparities. These can originate from prejudices held by healthcare practitioners which may impact patients' perceptions; they can also originate from communication difficulties in the case of ethnic differences BIBREF49 . Finally, societal norms can play a role. Brady et al. BradyEtAl2016 find that obesity is often not documented equally well for both sexes in weight-addressing clinics. Young males are less likely to be recognized as obese, possibly due to societal norms seeing them as “stocky\" as opposed to obese. Unless we are aware of such bias, we may draw premature conclusions about the impact of our results."
],
"extractive_spans": [],
"free_form_answer": "sampling bias, unfair treatment due to biased data, incomplete clinical stories, and reflection of health disparities.",
"highlighted_evidence": [
" Assuming that such a specific sample is representative of a larger population is an example of sampling bias (we discuss further sources of bias in section \"Social impact and biases\" )",
"However, discrimination can occur when individuals or groups receive unfair treatment as a result of automated processing, which might be a result of biases in the data that were used to train models",
"Texts produced in the clinical settings do not always tell a complete or accurate patient story (e.g. due to time constraints or due to patient treatment in different hospitals), yet important decisions can be based on them",
"We need to be aware that clinical notes may reflect health disparities. These can originate from prejudices held by healthcare practitioners which may impact patients' perceptions; they can also originate from communication difficulties in the case of ethnic difference"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08a77843a23b7ce3870829ca2e66177ccc043d30",
"08f81a5d78e451df16193028defb70150c4201c9"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What sources of less sensitive data are available?",
"Other than privacy, what are the other major ethical challenges in clinical data?"
],
"question_id": [
"8c89f1d1b3c2a45c0254c4c8d6e700ab9a4b4ffb",
"f5bc07df5c61dcb589a848bd36f4ce9c22abd46a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"What sources of less sensitive data are available?",
"Other than privacy, what are the other major ethical challenges in clinical data?"
] | [
[
"1703.10090-Sensitivity of data and privacy-1",
"1703.10090-Protecting the individual-12",
"1703.10090-Sensitivity of data and privacy-0",
"1703.10090-Protecting the individual-9",
"1703.10090-Sensitivity of data and privacy-2",
"1703.10090-Protecting the individual-10",
"1703.10090-Protecting the individual-11"
],
[
"1703.10090-Sensitivity of data and privacy-0",
"1703.10090-Social impact and biases-6",
"1703.10090-Social impact and biases-3",
"1703.10090-Social impact and biases-2",
"1703.10090-Social impact and biases-0",
"1703.10090-Social impact and biases-5",
"1703.10090-Social impact and biases-1"
]
] | [
"MIMICII(I), THYME, results from i2b2 and ShARe/CLEF shared task, MiPACQ, Blulab, EMC Dutch Clinical Corpus, 2010 i2b2/VA, VetCompass",
"sampling bias, unfair treatment due to biased data, incomplete clinical stories, and reflection of health disparities."
] | 178 |
1905.10039 | Outline Generation: Understanding the Inherent Content Structure of Documents | In this paper, we introduce and tackle the Outline Generation (OG) task, which aims to unveil the inherent content structure of a multi-paragraph document by identifying its potential sections and generating the corresponding section headings. Without loss of generality, the OG task can be viewed as a novel structured summarization task. To generate a sound outline, an ideal OG model should be able to capture three levels of coherence, namely the coherence between context paragraphs, that between a section and its heading, and that between context headings. The first one is the foundation for section identification, while the latter two are critical for consistent heading generation. In this work, we formulate the OG task as a hierarchical structured prediction problem, i.e., to first predict a sequence of section boundaries and then a sequence of section headings accordingly. We propose a novel hierarchical structured neural generation model, named HiStGen, for the task. Our model attempts to capture the three-level coherence via the following ways. First, we introduce a Markov paragraph dependency mechanism between context paragraphs for section identification. Second, we employ a section-aware attention mechanism to ensure the semantic coherence between a section and its heading. Finally, we leverage a Markov heading dependency mechanism and a review mechanism between context headings to improve the consistency and eliminate duplication between section headings. Besides, we build a novel WIKIOG dataset, a public collection which consists of over 1.75 million document-outline pairs for research on the OG task. Experimental results on our benchmark dataset demonstrate that our model can significantly outperform several state-of-the-art sequential generation models for the OG task. | {
"paragraphs": [
[
"Document understanding is one of the critical and challenging tasks in information processing. There have been many related research topics in this direction, such as keyword detection BIBREF0 , BIBREF1 , topic modeling BIBREF2 , BIBREF3 , headline generation BIBREF4 , BIBREF5 and text summarization BIBREF6 , BIBREF7 . Keyword detection and topic modeling aim to describe a document by a few important words or topics (i.e., distributions of words) for concise representation; While headline generation and text summarization attempt to compress the document into one or a few sentences to capture the key information. As we can see, most existing research on document understanding has focused on the coarse-grained understanding of documents by capturing its global semantics. In this paper, we attempt to provide fine-grained understanding of documents by unveiling its inhere content structure BIBREF8 , BIBREF9 , i.e., to understand how the document is organized and what it talks about in each part .",
"We thus introduce the Outline Generation (OG) task in this work. Given a multi-paragraph document, the OG task aims to identify its potential sections and generate the corresponding section headings. Figure FIGREF3 shows some typical outline of articles, where Figure FIGREF3 (a) depicts the outline of a Wikipedia article with a two-level hierarchy, and Figure FIGREF3 (b) depicts a typical outline of a research paper. As we can see, the outline can clearly capture the content structure of a document with concise text descriptions (i.e., section headings), which can not only help navigate the reading but also significantly reduce the cognitive burden over the document. Moreover, outlines can also facilitate a variety of text analysis applications such as text clustering and topic survey.",
"In a conceptual level, the OG task could be viewed as a kind of summarization task. However, from the examples shown in Figure FIGREF3 , we can find clear differences between the OG task and traditional summarization tasks. Firstly, the OG task produces a structured output with short descriptions (i.e., keywords or key phrases), while the output of traditional summarization is usually a set of unstructured sentences. Secondly, the OG task needs to summarize the paragraphs (into sections) in a strict sequential order, while the sentences in traditional summarization usually do not map to the paragraphs linearly. Thirdly, the section headings in one outline usually follow a similar style (e.g., topical headings as in Figure FIGREF3 (a) and functional headings as in Figure FIGREF3 (b)), while there is no such requirements in traditional summarization. Therefore, the OG task is actually a novel structured summarization task with its own special challenges.",
"If we take a further look at the OG task, we can find there are actually two structured prediction problem within it, i.e., to identify a sequence of sections (i.e., paragraphs with coherent information/topics), and to generate a sequence of section headings (i.e., short descriptions that summarize the sections) accordingly. Both problems are non-trivial. For section identification, it is unknown how many sections there are in a document. For section heading generation, headings should be able to reflect the section content in a consistent style. To achieve these two goals, an ideal OG model should be able to capture three levels of coherence, namely the coherence between context paragraphs, that between a section and its heading, and that between context headings. The first one is the foundation for section identification, while the latter two are critical for consistent heading generation.",
"In this work, we formulate the OG task as a hierarchical structured prediction problem and introduce a novel hierarchical structured neural generation model, named HiStGen, to solve it. In this model, we view the section boundary prediction problem as a first-level sequential labeling process, and the section heading generation as a second-level structured prediction which depends on the predicted boundary labels from the lower level. For section identification, we employ a Markov paragraph dependency mechanism to model the coherence in adjacent paragraphs to help decide the section boundaries. For section heading generation, we leverage a section-aware attention mechanism BIBREF10 to allow the decoder to focus on the most informative content within a section for heading generation. Furthermore, we introduce a Markov heading dependency mechanism and a review mechanism BIBREF11 between context headings. The Markov heading dependency mechanism is used for modeling the consistency between adjacent headings, while the review mechanism is employed to avoid the repetition in the generated headings.",
"To facilitate the study and evaluation of the OG task, we build a new benchmark dataset based on Wikipedia articles. As we can see, in most multi-paragraph Wikipedia articles, human editors would segment the article into several sections and provide the outline as an overview of the content structure. Therefore, we can directly leverage these articles to build the benchmark. Specifically, we collect Wikipedia articles with outlines under “celebrity\", “cities” and “music” category, and obtain hundreds of thousands of articles respectively. We remove the outlines from Wikipedia articles to form the raw text input. The task is to recover the sections and section headings simultaneously. We call this benchmark dataset as WIKIOG.",
"For evaluation, we compare with several state-of-the-art methods to verify the effectiveness of our model. Empirical results demonstrate that outline generation for capturing the inherent content structure is feasible and our proposed method can outperform all the baselines significantly. We also provide detailed analysis on the proposed model, and conduct case studies to provide better understanding on the learned content structure.",
"The main contributions of this paper include:"
],
[
"To the best of our knowledge, outline generation over a multi-paragraph document is a new task in the natural language processing community. The most closely related tasks to the OG task are keyword extraction, headline generation, text summarization and storyline generation tasks, which have been studied extensively in the past decades.",
"Keyword extraction aims to automatically extract some keywords from a document. Most of the existing keyword extraction methods have addressed this problem through two steps. The first step is to acquire a list of keyword candidates (e.g., n-grams or chunks) with heuristic methods BIBREF12 , BIBREF13 . The second step is to rank candidates on their importance to the document, either with supervised machine learning methods BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 or unsupervised machine learning methods BIBREF18 , BIBREF19 , BIBREF20 , BIBREF0 . However, these approaches could neither identify keywords that do not appear in the text, nor capture the real semantic meaning behind the text. Recently, natural language generation models are used to automatically generate keywords. BIBREF21 BIBREF21 applied an encoder-decoder framework BIBREF22 with a copy mechanism BIBREF23 to this task, achieving state-of-the-art performance. BIBREF11 BIBREF11 modeled correlation among multiple keywords in an end-to-end fashion to eliminate duplicate keywords and improve result coherence.",
"Headline generation aims to describe a document by a compact and informative headline, with the constraint that only a short sequence of words is allowed to generate BIBREF4 . Early work has pointed out that a purely extractive approach is not appropriate to generate headlines from the document text BIBREF24 . This is due to two major reasons: (1) The single sentence extracted from the document is often longer than the desired headline size; (2) Sometimes the most important information is distributed across several sentences in the document. Hence, many studies have focused on either extracting and reordering n-grams from the document BIBREF24 , or selecting one or two informative sentences from the document, and then reducing them to the target headline size BIBREF4 . Recently, the task is formulated as a Seq2Seq learning problem and neural encoder-decoder architectures have been widely adopted to solve it. BIBREF25 BIBREF25 trained an encoder-decoder recurrent neural network with attention for generating news headlines using the news articles from the English Gigaword corpus. BIBREF26 BIBREF26 proposed to generate the headline from multiple summaries using a hierarchical attention model for the New York Times corpus.",
"Text summarization is the process of automatically generating one or more natural summaries from an input document that retain the most important information. Most summarization models studied in the past are extractive in nature BIBREF27 , BIBREF28 , BIBREF29 , which try to extract the most important sentences in the document and rearranging them into a new summary. Recent abstractive summarization models have shown better flexibility and can generate more novel summaries. Many abstractive models BIBREF30 , BIBREF5 , BIBREF31 are based on the neural encoder-decoder architecture. To facilitate the research, a set of summarization tasks have been proposed in the Document Understanding Conference (DUC). These tasks often provide multiple human-generated reference summaries of the document for evaluation.",
"Storyline generation aims to summarize the development of certain events and understand how events evolve over time. BIBREF32 BIBREF32 formalized different types of sub-events into local and global aspects. Some studies have been conducted in storyline generation with Bayesian networks to detect storylines BIBREF33 , BIBREF34 . BIBREF35 BIBREF35 firstly obtained relevant tweets and then generate storylines via graph optimization for the Tweets2011 corpus.",
"The OG task introduced in our work is related to the keyword extraction, headline generation, text summarization and storyline generation tasks but with some clear differences. Firstly, the output of keyword extraction is usually a set of unstructured keywords, while the OG task produces a structured output with short descriptions. Secondly, the output of the headline generation task is a single heading at the document-level with coarse-grained semantics, while the output of our OG task is a sequence of headings at the section-level with fine-grained semantics. Thirdly, text summarization aims to capture the major content of a document by producing a few unstructured sentences, while our OG task attempts to unveil the inherent content structure of a document by identifying its potential sections and generating the corresponding section headings. Finally, storyline generation is based on the multiple sub-events along the timeline, while the OG task focuses on the multiple sections. Therefore, most existing methods applied for these related tasks may not fit the OG task directly."
],
[
"In this section, we introduce the OG task, and describe the benchmark dataset WIKIOG in detail. A summary of key notations in this work is presented in Table TABREF7 ."
],
[
"Given a multi-paragraph document, the OG task aims to unveil its inherent content structure, i.e., to identify the potential sections (i.e., sequential paragraphs with coherent information/topics) of the document, as well as to generate the section headings (i.e., a short description that summarizes the section) correctly. Specifically, headings over different sections should be consistent in style and exclusive on topics, i.e., they should cover different aspects in a similar style. For example, as shown in Figure FIGREF3 (b), headings in a research paper might include introduction, related work, method and so on. These headings are exclusive to each other and mainly describe the function of each section in the paper.",
"Formally, given a document INLINEFORM0 composed of a sequence of paragraphs INLINEFORM1 , the OG task is to learn a structured prediction model INLINEFORM2 for INLINEFORM3 to identify a sequence of sections INLINEFORM4 and produce the corresponding section headings INLINEFORM5 simultaneously, DISPLAYFORM0 ",
"where INLINEFORM0 ."
],
[
"In order to study and evaluate the OG task, we build a new benchmark dataset WIKIOG. We take Wikipedia articles as our source articles since (1) Wikipedia is publicly available and easy to collect; (2) Most multi-paragraph Wikipedia articles contain outlines as an overview of the article, which are constructed by professional human editors. Specifically, we collect English Wikipedia articles under three categories, i.e., “celebrity”, “cities” and “music”. We only make use of the first-level headings as our ground-truth, and leave the deeper-level headings (e.g., second-level headings) generation for the future study. Articles with no headings or more than ten first-level headings are removed, leaving us roughly INLINEFORM0 million articles in total. Table TABREF9 shows the overall statistics of our WIKIOG benchmark dataset.",
"For the OG task, we remove the outlines from Wikipedia articles, and concatenate all the paragraphs together to form the raw text input INLINEFORM0 . We record all the sections by their boundaries INLINEFORM1 as well as all the corresponding section headings INLINEFORM2 . In this way, we obtain the INLINEFORM3 paragraph, section boundary label, section heading INLINEFORM4 triples, i.e., INLINEFORM5 , as ground-truth data for training/validation/testing."
],
[
"In this section, we introduce our proposed approach for the OG task in detail. We first give an overview of the problem formulation and the model architecture. We then describe each component of our model as well as the learning procedure specifically."
],
[
"Without loss of generality, the OG task can be decomposed into two structured prediction problems: 1) Section Identification: a sequential labeling process to identify the section boundaries; and 2) Section Heading Generation: a sequential generation process to produce short text descriptions for each identified section. These two structured prediction problems are coupled in the sense that the section heading prediction is dependent on the section prediction results. Therefore, in this work, we formulate the OG task as a hierarchical structured prediction problem and introduce a novel hierarchical structured neural generation model, named HiStGen for short, to solve it. The overall architecture of HiStGen is illustrated in Figure FIGREF8 .",
"Basically, the HiStGen employs the encoder-decoder framework. In the encoding phase, to obtain the representation of a multi-paragraph document, HiStGen utilizes the hierarchical encoder framework BIBREF36 to obtain the document representation. The decoding phase is hierarchical, where we exploit three-level coherence for better OG prediction. Specifically, we employ a Markov paragraph dependency mechanism between context paragraphs for the section boundary prediction problem. Moreover, HiStGen employs a section-aware attention mechanism between a section and its heading, and a Markov heading dependency mechanism and a review mechanism between context headings for the heading generation problem whenever a new section is identified. We will discuss the details of these model designs in the following sections."
],
[
"The goal of the encoder is to map the input document to a vector representation. In HiStGen, we adopt a hierarchical encoder framework, where we use a word encoder to encode the words of a paragraph INLINEFORM0 , and use a paragraph encoder to encode the paragraphs of a document INLINEFORM1 .",
"As depicted in Figure FIGREF8 , each word INLINEFORM0 in each paragraph INLINEFORM1 is represented by its distributed representation INLINEFORM2 . We use a bi-directional GRU as both the word and paragraph encoder, which summarizes not only the preceding words/paragraphs, but also the following words/paragraphs. The forward GRU in word encoder reads the words in the INLINEFORM3 -th paragraph INLINEFORM4 in the left-to-right direction, resulting in a sequence of hidden states INLINEFORM5 . The backward GRU reads INLINEFORM6 in the reversed direction and outputs INLINEFORM7 . We obtain the hidden state for a given word INLINEFORM8 by concatenating the forward and backward hidden states, i.e., INLINEFORM9 . Then, we concatenate the last hidden states of the forward and backward passes as the embedding representation of the paragraph INLINEFORM10 , denoted as INLINEFORM11 . A paragraph encoder is used to sequentially receive the embeddings of paragraphs INLINEFORM12 in a similar way. The hidden representation of each paragraph is given by INLINEFORM13 , where INLINEFORM14 and INLINEFORM15 are the forward and backward hidden states of the paragraph encoder respectively."
],
[
"The goal of the hierarchical decoder is to produce an outline for an input article, which could be decomposed into two dependent steps: (1) Section Boundary Prediction: to predict a sequence of section boundary labels over the paragraphs; and (2) Section Heading Generation: to generate the section heading for a newly detected section.",
"This step is to break up a multi-paragraph document INLINEFORM0 into multiple successive sections INLINEFORM1 by predicting the section boundary labels INLINEFORM2 , where INLINEFORM3 . If INLINEFORM4 , INLINEFORM5 is the inner paragraph of a section and the section prediction continues. If INLINEFORM6 , INLINEFORM7 is the last paragraph of one section and the corresponding heading should be generated. Note that a section is a sequence of information coherent paragraphs, while the coherence modeling is non-trivial in nature. In this paper, we introduce a Markov paragraph dependency mechanism for modeling the coherence between context paragraphs and identifying section boundaries.",
"[leftmargin=*]",
"Markov Paragraph Dependency Mechanism. The key assumption of the Markov paragraph dependency mechanism is that the coherence between paragraphs has a Markov property. Therefore, we can identify a section, i.e., to decide whether a target paragraph is a last paragraph of a section, by looking at its previous and successive paragraph. As shown in Figure FIGREF8 , we utilize the hidden representation of the current paragraph INLINEFORM0 , the previous paragraph INLINEFORM1 , and the next paragraph INLINEFORM2 to predict the section boundary label INLINEFORM3 . Specifically, the section boundary label INLINEFORM4 is modeled with binary output: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 stands for the sigmoid function, INLINEFORM1 , and INLINEFORM2 are learned parameters.",
"This step executes when a new section is detected, i.e., INLINEFORM0 . Based on the detected section INLINEFORM1 , to generate the heading INLINEFORM2 , we employ 1) a section-aware attention mechanism: maintaining a section-aware context vector to make sure more important content in the target section is attended; 2) a Markov heading dependency mechanism: maintaining the representation of the previously generated heading for new heading generation to improve the consistency between headings; and 3) a review mechanism: maintaining a heading-aware context vector to utilize contextual information of generated headings to eliminate duplication between headings. The first one is used to capture the coherence between a section and its heading, and the latter two are used to capture the coherence between context headings.",
"Afterwards, the section-aware context vector INLINEFORM0 and the heading-aware context vector INLINEFORM1 are provided as extra inputs to derive the hidden state INLINEFORM2 of the INLINEFORM3 -th word INLINEFORM4 in INLINEFORM5 , and later the probability distribution for choosing the word INLINEFORM6 .",
"Concretely, INLINEFORM0 is defined as DISPLAYFORM0 ",
"where INLINEFORM0 is a GRU unit, INLINEFORM1 is the predicted word from vocabulary at INLINEFORM2 -th step when decoding the heading INLINEFORM3 . The probability distribution for choosing the word INLINEFORM4 is defined as DISPLAYFORM0 ",
"where INLINEFORM0 is a nonlinear function that computes the probability vector for all legal output words at each output time. We now describe the specific mechanism in the follows.",
"[leftmargin=*]",
"Section-Aware Attention Mechanism. The key idea of the section-aware attention mechanism is to make the generation of a section heading focusing on the target section. Concretely, as shown in Figure FIGREF21 , we maintain a section-aware context vector INLINEFORM0 for generating the INLINEFORM1 -th word INLINEFORM2 in the INLINEFORM3 -th heading INLINEFORM4 . Based on the INLINEFORM5 -th section INLINEFORM6 , INLINEFORM7 is a weighted sum of the hidden representations of all the paragraphs in INLINEFORM8 : DISPLAYFORM0 ",
"where INLINEFORM0 indicates how much the INLINEFORM1 -th paragraph INLINEFORM2 from the source section INLINEFORM3 contributes to generating the INLINEFORM4 -th word in target heading INLINEFORM5 , and is usually computed as: DISPLAYFORM0 ",
"where INLINEFORM0 represents the hidden state (just before emitting the INLINEFORM1 -th word INLINEFORM2 in INLINEFORM3 -th heading INLINEFORM4 ) of the decoder.",
"Markov Heading Dependency Mechanism. The headings in an outline should be consistent in style and it is necessary to capture the dependence between context headings. To achieve this purpose, we introduce a Markov heading dependency mechanism, for the section heading generation process. Note that different from the Markov paragraph dependency mechanism, the Markov heading dependency mechanism only looks at the previous generated heading since there is no successive heading generated yet.",
"Concretely, as shown in Figure FIGREF21 , the Markov heading dependency mechanism uses the accumulation of all the hidden states of the previous decoder and pass it to the next decoder. In this way, the generation of a new heading is decided by both the section content and the previous generated heading.",
"As we can see, the Markov heading dependency mechanism conveys strong dependency requirement between headings by involving all the previous states. The initial hidden state of the decoder INLINEFORM0 of heading INLINEFORM1 is the “mixture” of probabilities: DISPLAYFORM0 ",
"where INLINEFORM0 are learned parameters. INLINEFORM1 is the representation of paragraph INLINEFORM2 , where INLINEFORM3 is the last paragraph of the section INLINEFORM4 . The passed information INLINEFORM5 is the average of all the output states of the decoder for the heading INLINEFORM6 and defined as: DISPLAYFORM0 ",
"where INLINEFORM0 is the output state of the decoder for the heading INLINEFORM1 at the INLINEFORM2 -th step.",
"Review Mechanism. Headings should cover all topics in the source document and be exclusive to each other. To avoid duplicate generation, we incorporate a review mechanism BIBREF11 between context headings as shown in Figure FIGREF21 . It models the correlation between the headings that have been generated and the heading that is going to be generated to generate a heading to cover topics that have not been summarized by previous headings.",
"Specifically, we construct a heading-aware review set which contains contextual information of generated headings. The heading-aware review set is defined as INLINEFORM0 , which is the collection of all the decoder hidden states before generating the INLINEFORM1 -th word INLINEFORM2 in the INLINEFORM3 -th heading INLINEFORM4 . When decoding the word INLINEFORM5 , the heading-aware review set INLINEFORM6 is integrated into the heading-aware context vector INLINEFORM7 : DISPLAYFORM0 ",
"where INLINEFORM0 indicated how much the INLINEFORM1 -word in the INLINEFORM2 -th heading contributed to generating the INLINEFORM3 -th word in target heading INLINEFORM4 , and is computed as: DISPLAYFORM0 ",
"where INLINEFORM0 is defined as DISPLAYFORM0 ",
"where INLINEFORM0 are learned parameters. The heading-aware review set gets updated consequently as INLINEFORM1 in the decoding process."
],
[
"In the training phase, we employ maximum likelihood estimation (MLE) to learn our HiStGen model in an end-to-end way. Specifically, the training objective is a probability over the training corpus INLINEFORM0 with decomposition into the ordered conditionals: DISPLAYFORM0 ",
"We apply stochastic gradient decent method Adam BIBREF37 to learn the model parameters INLINEFORM0 and INLINEFORM1 . Note that, during the training, we provide the model with the specific section boundary label INLINEFORM2 , and thus we do not have to sample.",
"In the testing phase, given a new multi-paragraph document, we compute Eqn. ( EQREF19 ) and ( EQREF20 ) to predict the section boundary label for each paragraph, and then pick the word with the highest probability using Eqn. ( EQREF24 ) to generate the heading for each identified section."
],
[
"In this section, we conduct experiments to verify the effectiveness of our proposed model."
],
[
"To evaluate the performance of our model, we conducted experiments on our WIKIOG benchmark dataset. In preprocessing, all the words in documents and headings are white-space tokenized and lower-cased, and pure digit words and non-English characters are removed. Beyond the three separate datasets (i.e., “celebrity”, “cities” and “music”), we also mix them together to form a “mixture” dataset. For each dataset in WIKIOG, we randomly divide it into a training set (80%), a development set (10%), and a test set (10%).",
"We construct two separate vocabularies for input documents and target headings by using 130000 and 16000 most frequent words on each side in the training data. All the other words outside the vocabularies are replaced by a special token INLINEFORM0 UNK INLINEFORM1 symbol. We implement our models in Tensorflow. Specifically, we use a bi-directional GRU for the word/paragraph encoder respectively and another forward GRU for the heading decoder, with the GRU hidden unit size set as 300 in both the encoder and decoder. The dimension of word embeddings in documents and headings is 300. The learning rate of Adam algorithm is set as INLINEFORM2 . The learnable parameters (e.g., the parameters INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are uniformly initialized in the range of INLINEFORM6 . The mini-batch size for the update is set as 64. We clip the gradient when its norm exceeds 5.",
"We run our model on a Tesla K80 GPU card, and we run the training for up to 12 epochs, which takes approximately two days. We select the model that achieves the lowest perplexity on the development set, and report results on the test set."
],
[
"Here, we first employ some degraded HiStGen models to investigate the effect of our proposed mechanisms, namely",
"[leftmargin=*]",
"HiStGen INLINEFORM0 removes the Markov paragraph dependency mechanism between context paragraphs, and the section boundary label is only decided by the representation of current paragraph.",
"HiStGen INLINEFORM0 removes the section-aware attention mechanism between a section and its heading.",
"HiStGen INLINEFORM0 removes the Markov heading dependency mechanism between context headings, and the initial hidden state of the decoder is only decided by the representation of last paragraph in the section.",
"HiStGen INLINEFORM0 removes the review mechanism between context headings.",
"HiStGen INLINEFORM0 removes all the mechanisms and reduces to a vanilla hierarchical sequence-to-sequence generation model.",
"We also apply two types of step-wise process for the OG task.",
"[leftmargin=*]",
"First-Identify-then-Generate (IG). The first step is to identify the potential sections, and the second step is to generate the heading for each section. For the section identification step, based on the hidden representations of the input paragraphs (described in Section SECREF15 ), we employ two methods:",
"[leftmargin=*]",
"Conditional random field (CRF) is a well-known sequential labeling model. Here we follow the work BIBREF38 where the CRF model is built upon the hierarchical encoder, and use the representation of the target paragraph and meanwhile take a chain dependence assumption between the labels, for section boundary prediction.",
"Global paragraph dependency mechanism (GPD) considers all the context paragraphs in a document, not just the previous and successive paragraph as in our Markov paragraph dependency mechanism, to predict the section boundary label for a target paragraph.",
"For the heading generation step, we employ both extractive (TextRank and TopicRank) and generative (Hier and GHD) methods over the detected sections:",
"[leftmargin=*]",
"TextRank BIBREF18 is a graph-based method inspired by the PageRank algorithm.",
"TopicRank BIBREF20 represents a document as a complete graph depending on a topical representation of the document.",
"Hier BIBREF36 takes the section as input using a hierarchical encoder structure (words form paragraph, paragraphs form section) and employs the section-aware attention (described in Section UID22 ) in the decoding phase.",
"GHD further employs a global heading dependency mech- anism based on the Hier, where all the previous generated headings are taken into account to initialize the hidden state of the current decoder, not just the previous one as in our Markov heading dependency mechanism.",
"By combining these two-step methods, we obtain eight types of IG methods denoted as IG INLINEFORM0 , IG INLINEFORM1 , IG INLINEFORM2 , IG INLINEFORM3 , IG INLINEFORM4 , IG INLINEFORM5 , IG INLINEFORM6 and IG INLINEFORM7 .",
"First-Generate-then-Aggregate (GA). The first step is to generate the heading for each paragraph, and the second step is to aggregate the paragraph with respect to their headings. For the heading generation step, we also employ the TextRank, TopicRank, Hier and GHD method over the paragraphs. For the heading aggregation step, we combine successive paragraphs with the same heading into one section. Similarly, we refer to these four types of GA process as GA INLINEFORM0 , GA INLINEFORM1 , GA INLINEFORM2 and GA INLINEFORM3 ."
],
[
"To measure the quality of outline generated by our model and the baselines, we employ three automatic metrics, namely",
"[leftmargin=*]",
"EM INLINEFORM0 : evaluates the overall accuracy of the generated outline based on exact matching. That is, if both the predicted section boundaries and the generated section headings in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.",
"EM INLINEFORM0 : evaluates the accuracy of the section boundary prediction based on exact matching. Namely, if the predicted section boundaries in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.",
"Rouge INLINEFORM0 evaluates the similarities between generated headings and referenced headings only for the correctly predicted sections. Specifically, we employ Rouge-1 BIBREF39 to measure the uni-gram recall on the reference headings."
],
[
"We conduct ablation analysis to investigate the effect of proposed mechanisms in our HiStGen model. As shown in table TABREF55 , we can observe that: (1) By removing the Markov paragraph dependence mechanism, the performance of INLINEFORM0 in terms of EM INLINEFORM1 has a significant drop as compared with INLINEFORM2 . The results indicate that modeling the dependency between adjacent paragraphs does help decide the section boundaries. (2) INLINEFORM3 performs worse than INLINEFORM4 and INLINEFORM5 in terms of Rouge INLINEFORM6 , showing that the coherence between a section and its heading (captured by the section-aware attention mechanism) has much bigger impact than that between context headings (captured by the Markov heading dependency mechanism and review mechanism) for heading generation. (3) HiStGen INLINEFORM7 gives the worst performance, indicating that traditional seq2seq model without considering three-level coherence is not suitable for the OG task. (4) By including all the mechanisms, INLINEFORM8 achieves the best performance in terms of all the evaluation metrics."
],
[
"The overall performance comparisons between our HiStGen and the step-wise baselines are shown in Table TABREF61 . We have the following observations: (1) The INLINEFORM0 process (i.e., INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 ) performs very poorly. By looking at the results of the INLINEFORM5 methods, we find that INLINEFORM6 tends to segment the document into too much sections since it usually generates different headings even for paragraphs that should belong to a same section. (2) For the INLINEFORM7 process, the methods based on INLINEFORM8 perform better than that based on INLINEFORM9 . For example, the relative improvement of INLINEFORM10 over INLINEFORM11 is about INLINEFORM12 in terms of EM INLINEFORM13 on the mixture set. We analyze the results and find that using INLINEFORM14 can obtain better section prediction results, showing that the dependency on the context labels is more important than that on all the paragraphs for section identification. Moreover, for the INLINEFORM15 process, the generative methods can achieve significantly better results than the extractive methods, since those extractive methods are unsupervised in nature. (3) Our INLINEFORM16 model can outperform all the step-wise baselines significantly (p-value INLINEFORM17 0.01). As compared with the best-performing baseline INLINEFORM18 , the relative improvement of INLINEFORM19 over INLINEFORM20 is about INLINEFORM21 in terms of EM INLINEFORM22 on the mixture set. The results demonstrate the effectiveness of our end-to-end learning model.",
"We further compare the section boundary prediction performance between our Markov paragraph dependency mechanism (MPD for short) and the two baseline methods, i.e., INLINEFORM0 and INLINEFORM1 , by keeping the rest components the same. The results are shown in Figure FIGREF65 . We can find that: (1) The improvements of INLINEFORM2 over INLINEFORM3 , showing that the consideration of the previous and successive paragraph is better than the consideration of all the paragraphs in a document for section boundary prediction. The reason might be by considering all the paragraphs, INLINEFORM4 tends to bring noisy information that may hurt the prediction on section boundaries. Moreover, INLINEFORM5 leads to much higher computing complexity than INLINEFORM6 (i.e., INLINEFORM7 ). (2) INLINEFORM8 performs better than INLINEFORM9 , demonstrating that depending on the semantic representations of the previous and successive paragraph is more beneficial than only depending on the labels of the previous and successive paragraph in section boundary prediction. All the improvements over the baselines are statistically significant (p-value < 0.01).",
"We evaluate the section heading generation ability to demonstrate the effectiveness of our Markov heading dependency mechanism and review mechanism. Here we suppose that sections in an article are already given, and only need to predict the corresponding headings for each section. We consider two generative baselines INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 is an extension of INLINEFORM3 by employing a global heading dependency mechanism. We then introduce our Markov heading dependency mechanism based on the INLINEFORM4 , named Hier INLINEFORM5 , and further employ the review mechanism, named Hier INLINEFORM6 . All these methods employ the section-aware attention in generation. The performance under Rouge INLINEFORM7 is shown in Table TABREF68 . We can find that: (1) Hier performs worst among all the methods, showing that the independence between context headings is not good for section heading generation. (2) By incorporating all the previous generated headings to model the dependence between context headings, INLINEFORM8 shows slight improvements on the heading generation performance. It indicates that the global dependency may not be effective in heading generation by involving too much context information, and also leads to high computing complexity. (3) The improvements of INLINEFORM9 over INLINEFORM10 indicate that the dependency between adjacent headings is sufficient for generating good and consistent section headings. (4) The improvements of INLINEFORM11 over INLINEFORM12 demonstrate that the review mechanism is also helpful in improving the quality of section heading generation. All the improvements over the baselines are statistically significant (p-value INLINEFORM13 0.01)."
],
[
"To better understand how different models perform, we conduct some case studies. We take one Wikipedia article from the “celebrity” test data as an example. As shown in Figure FIGREF62 , there are 15 paragraphs in this article, which are segmented into 7 sections. We show the identified sections and generated headings from our model as well as that from the baseline model INLINEFORM0 . We can find that: (1) The number of sections predicted by INLINEFORM1 is larger than the ground-truth (i.e., INLINEFORM2 ) and the segmentation is totally wrong. The results show that using current paragraph representation and context label dependency, CRF may not be able to make correct section boundary prediction. (2) Without considering the coherence between context headings, INLINEFORM3 generates repetitive headings (e.g., “career” repeats twice) and the heading with inconsistent style (e.g., “citizen political” is not suitable for the description of a celebrity). (3) Our INLINEFORM4 can generate right section boundaries and consistent headings. Note that INLINEFORM5 generates “family” for the third section whose true heading is “personal life”. As we look at that section, we found that “family” is actually a very proper heading and INLINEFORM6 did not generate the “personal life” as the heading possibly due to the review mechanism by avoiding partial duplication with the “early life” heading."
],
[
"In this paper we introduced a challenging OG task to unveil the inherent content structure of a multi-paragraph document by identifying its potential sections and generating the corresponding section headings. To tackle the problem, we formulated the OG task as a hierarchical structured prediction problem and developed a novel hierarchical structured neural generation model to capture the three levels of coherence. Furthermore, we built a new benchmark dataset WIKIOG to study and evaluate the OG task. The experimental results demonstrated that our model can well capture the inherent content structure of documents. In the future work, we would like to extend our model to produce hierarchical outlines for documents."
],
[
"This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 61425016, 61722211, 61773362, and 61872338, the Youth Innovation Promotion Association CAS under Grants No. 20144310, and 2016102, the National Key R&D Program of China under Grants No. 2016QY02D0405, and the Foundation and Frontier Research Key Program of Chongqing Science and Technology Commission (No. cstc2017jcyjBX0059)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Problem Statement",
"Task Description",
"Benchmark Construction",
"Our Approach",
"Overview",
"Encoder",
"Hierarchical Decoder",
"Model Training and Testing",
"Experiments",
"Experimental Settings",
"Baselines",
"Evaluation Metrics",
"Model Ablation",
"Baseline Comparison",
"Case Study",
"Conclusion and future work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"af42f242d18c991ed72564fb7b20724d0283beba",
"b9ce75593029c2d7c5efeda532619d018e6fde55",
"e4b4548cbc47998c276aea88cedb033979c3a9f5"
],
"answer": [
{
"evidence": [
"EM INLINEFORM0 : evaluates the overall accuracy of the generated outline based on exact matching. That is, if both the predicted section boundaries and the generated section headings in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.",
"EM INLINEFORM0 : evaluates the accuracy of the section boundary prediction based on exact matching. Namely, if the predicted section boundaries in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample."
],
"extractive_spans": [],
"free_form_answer": "EM-outline, EM-sec, Rouge",
"highlighted_evidence": [
"EM INLINEFORM0 : evaluates the overall accuracy of the generated outline based on exact matching. ",
"EM INLINEFORM0 : evaluates the accuracy of the section boundary prediction based on exact matching."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Model analysis of our HiStGen model under the automatic evaluation. Two-tailed t-tests demonstrate the improvements of HiStGen to the variants are statistically significant (‡ indicates p-value < 0.01)."
],
"extractive_spans": [],
"free_form_answer": "EMoutline, EMsec, Rougehead",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Model analysis of our HiStGen model under the automatic evaluation. Two-tailed t-tests demonstrate the improvements of HiStGen to the variants are statistically significant (‡ indicates p-value < 0.01)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To measure the quality of outline generated by our model and the baselines, we employ three automatic metrics, namely",
"[leftmargin=*]",
"EM INLINEFORM0 : evaluates the overall accuracy of the generated outline based on exact matching. That is, if both the predicted section boundaries and the generated section headings in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.",
"EM INLINEFORM0 : evaluates the accuracy of the section boundary prediction based on exact matching. Namely, if the predicted section boundaries in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.",
"Rouge INLINEFORM0 evaluates the similarities between generated headings and referenced headings only for the correctly predicted sections. Specifically, we employ Rouge-1 BIBREF39 to measure the uni-gram recall on the reference headings."
],
"extractive_spans": [
"EM INLINEFORM0 ",
"EM INLINEFORM0",
"Rouge INLINEFORM0"
],
"free_form_answer": "",
"highlighted_evidence": [
"To measure the quality of outline generated by our model and the baselines, we employ three automatic metrics, namely\n\n[leftmargin=*]\n\nEM INLINEFORM0 : evaluates the overall accuracy of the generated outline based on exact matching. That is, if both the predicted section boundaries and the generated section headings in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.\n\nEM INLINEFORM0 : evaluates the accuracy of the section boundary prediction based on exact matching. Namely, if the predicted section boundaries in a document exactly match with the ground-truth, we treat the document as a positive sample. Otherwise the document is a negative sample.\n\nRouge INLINEFORM0 evaluates the similarities between generated headings and referenced headings only for the correctly predicted sections. Specifically, we employ Rouge-1 BIBREF39 to measure the uni-gram recall on the reference headings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"53fd30cd891a7d6705fb7692ada6a708cc3f9362",
"f26ca0b221d6465995008876bc2dbf15ee4ff59c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Comparisons between our HiStGen and step-wise baselines in terms of EMoutline (%)."
],
"extractive_spans": [],
"free_form_answer": "IG CRF+GHD",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Comparisons between our HiStGen and step-wise baselines in terms of EMoutline (%)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Model analysis of our HiStGen model under the automatic evaluation. Two-tailed t-tests demonstrate the improvements of HiStGen to the variants are statistically significant (‡ indicates p-value < 0.01).",
"FLOAT SELECTED: Table 4: Comparisons between our HiStGen and step-wise baselines in terms of EMoutline (%)."
],
"extractive_spans": [],
"free_form_answer": "HiStGen_P, HiStGen_S, HiStGen_H, HiStGen_R, HiStGen_PSHR, IGCRF+TextRank, IGCRF+TopicRank, IGCRF+Hier, IGCRF+GHD, IGGPD+TextRank, IGGPD+TopicRank, IGGPD+Hier, IGGPD+GHD, GATextRank, GATopicRank, GAHier, GAGHD",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Model analysis of our HiStGen model under the automatic evaluation. Two-tailed t-tests demonstrate the improvements of HiStGen to the variants are statistically significant (‡ indicates p-value < 0.01).",
"FLOAT SELECTED: Table 4: Comparisons between our HiStGen and step-wise baselines in terms of EMoutline (%)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what evaluation metrics were used?",
"what state of the art models did they compare with?"
],
"question_id": [
"8126c6b8a0cab3e22661d3d71d96aa57360da65c",
"2f01d3e5120d1fef4b01028536cb5fe0abad1968"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Examples of outlines in different types of documents.",
"Table 1: A summary of key notations in this work.",
"Table 2: Data statistics: #s denotes the number of sections, #p denotes the number of paragraphs, and #w denotes the number of words.",
"Figure 2: The basic architecture of hierarchical structured neural generationmodel (HiStGen). The detail of the section heading generation step in the hierarchical decoder is shown in Figure 3.",
"Figure 3: The detail of the section heading generation step in the HiStGen model.",
"Table 3: Model analysis of our HiStGen model under the automatic evaluation. Two-tailed t-tests demonstrate the improvements of HiStGen to the variants are statistically significant (‡ indicates p-value < 0.01).",
"Table 4: Comparisons between our HiStGen and step-wise baselines in terms of EMoutline (%).",
"Figure 4: An example from the test WIKIOG data. p1 to p15 are the paragraphs in the article. Red colored arrows stand for the section boundaries, and texts below brackets stand for the section headings. The two results below are the outputs of the IGCRF+Hier and HiStGen model.",
"Figure 5: Performance comparison of the section boundary prediction under EMsec metric.",
"Table 5: Evaluation results(%) of the section heading generation under Rougehead metric when the real sections are given aforehead."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png",
"9-Table5-1.png"
]
} | [
"what evaluation metrics were used?",
"what state of the art models did they compare with?"
] | [
[
"1905.10039-Evaluation Metrics-2",
"1905.10039-Evaluation Metrics-3",
"1905.10039-Evaluation Metrics-4",
"1905.10039-Hierarchical Decoder-2",
"1905.10039-Evaluation Metrics-0",
"1905.10039-8-Table3-1.png"
],
[
"1905.10039-8-Table4-1.png",
"1905.10039-8-Table3-1.png"
]
] | [
"EMoutline, EMsec, Rougehead",
"HiStGen_P, HiStGen_S, HiStGen_H, HiStGen_R, HiStGen_PSHR, IGCRF+TextRank, IGCRF+TopicRank, IGCRF+Hier, IGCRF+GHD, IGGPD+TextRank, IGGPD+TopicRank, IGGPD+Hier, IGGPD+GHD, GATextRank, GATopicRank, GAHier, GAGHD"
] | 179 |
1704.06851 | Affect-LM: A Neural Language Model for Customizable Affective Text Generation | Human verbal communication includes affective messages which are conveyed through use of emotionally colored words. There has been a lot of research in this direction but the problem of integrating state-of-the-art neural language models with affective information remains an area ripe for exploration. In this paper, we propose an extension to an LSTM (Long Short-Term Memory) language model for generating conversational text, conditioned on affect categories. Our proposed model, Affect-LM enables us to customize the degree of emotional content in generated sentences through an additional design parameter. Perception studies conducted using Amazon Mechanical Turk show that Affect-LM generates naturally looking emotional sentences without sacrificing grammatical correctness. Affect-LM also learns affect-discriminative word representations, and perplexity experiments show that additional affective information in conversational text can improve language model prediction. | {
"paragraphs": [
[
"Affect is a term that subsumes emotion and longer term constructs such as mood and personality and refers to the experience of feeling or emotion BIBREF0 . BIBREF1 picard1997affective provides a detailed discussion of the importance of affect analysis in human communication and interaction. Within this context the analysis of human affect from text is an important topic in natural language understanding, examples of which include sentiment analysis from Twitter BIBREF2 , affect analysis from poetry BIBREF3 and studies of correlation between function words and social/psychological processes BIBREF4 . People exchange verbal messages which not only contain syntactic information, but also information conveying their mental and emotional states. Examples include the use of emotionally colored words (such as furious and joy) and swear words. The automated processing of affect in human verbal communication is of great importance to understanding spoken language systems, particularly for emerging applications such as dialogue systems and conversational agents.",
"Statistical language modeling is an integral component of speech recognition systems, with other applications such as machine translation and information retrieval. There has been a resurgence of research effort in recurrent neural networks for language modeling BIBREF5 , which have yielded performances far superior to baseline language models based on n-gram approaches. However, there has not been much effort in building neural language models of text that leverage affective information. Current literature on deep learning for language understanding focuses mainly on representations based on word semantics BIBREF6 , encoder-decoder models for sentence representations BIBREF7 , language modeling integrated with symbolic knowledge BIBREF8 and neural caption generation BIBREF9 , but to the best of our knowledge there has been no work on augmenting neural language modeling with affective information, or on data-driven approaches to generate emotional text.",
"Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM. Our model is trained on conversational speech corpora, common in language modeling for speech recognition applications BIBREF10 . Figure 1 provides an overview of our Affect-LM and its ability to generate emotionally colored conversational text in a number of affect categories with varying affect strengths. While these parameters can be manually tuned to generate conversational text, the affect category can also be automatically inferred from preceding context words. Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool BIBREF11 . Our primary research questions in this paper are:",
"Q1:Can Affect-LM be used to generate affective sentences for a target emotion with varying degrees of affect strength through a customizable model parameter?",
"Q2:Are these generated sentences rated as emotionally expressive as well as grammatically correct in an extensive crowd-sourced perception experiment?",
"Q3:Does the automatic inference of affect category from the context words improve language modeling performance of the proposed Affect-LM over the baseline as measured by perplexity?",
"The remainder of this paper is organized as follows. In Section \"Related Work\" , we discuss prior work in the fields of neural language modeling, and generation of affective conversational text. In Section \"LSTM Language Model\" we describe the baseline LSTM model and our proposed Affect-LM model. Section \"Experimental Setup\" details the experimental setup, and in Section \"Results\" , we discuss results for customizable emotional text generation, perception studies for each affect category, and perplexity improvements over the baseline model before concluding the paper in Section \"Conclusions and Future Work\" ."
],
[
"Language modeling is an integral component of spoken language systems, and traditionally n-gram approaches have been used BIBREF12 with the shortcoming that they are unable to generalize to word sequences which are not in the training set, but are encountered in unseen data. BIBREF13 bengio2003neural proposed neural language models, which address this shortcoming by generalizing through word representations. BIBREF5 mikolov2010recurrent and BIBREF14 sundermeyer2012lstm extend neural language models to a recurrent architecture, where a target word $w_t$ is predicted from a context of all preceding words $w_1, w_2,..., w_{t-1}$ with an LSTM (Long Short-Term Memory) neural network. There also has been recent effort on building language models conditioned on other modalities or attributes of the data. For example, BIBREF9 Vinyals2015CVPR introduced the neural image caption generator, where representations learnt from an input image by a CNN (Convolutional Neural Network) are fed to an LSTM language model to generate image captions. BIBREF15 kiros2014multimodal used an LBL model (Log-Bilinear language model) for two applications - image retrieval given sentence queries, and image captioning. Lower perplexity was achieved on text conditioned on images rather than language models trained only on text.",
"In contrast, previous literature on affective language generation has not focused sufficiently on customizable state-of-the-art neural network techniques to generate emotional text, nor have they quantitatively evaluated their models on multiple emotionally colored corpora. BIBREF16 mahamood2011generating use several NLG (natural language generation) strategies for producing affective medical reports for parents of neonatal infants undergoing healthcare. While they study the difference between affective and non-affective reports, their work is limited only to heuristic based systems and do not include conversational text. BIBREF17 mairesse2007personage developed PERSONAGE, a system for dialogue generation conditioned on extraversion dimensions. They trained regression models on ground truth judge's selections to automatically determine which of the sentences selected by their model exhibit appropriate extroversion attributes. In BIBREF18 keshtkar2011pattern, the authors use heuristics and rule-based approaches for emotional sentence generation. Their generation system is not training on large corpora and they use additional syntactic knowledge of parts of speech to create simple affective sentences. In contrast, our proposed approach builds on state-of-the-art approaches for neural language modeling, utilizes no syntactic prior knowledge, and generates expressive emotional text."
],
[
"Prior to providing a formulation for our proposed model, we briefly describe a LSTM language model. We have chosen this model as a baseline since it has been reported to achieve state-of-the-art perplexities compared to other approaches, such as n-gram models with Kneser-Ney smoothing BIBREF19 . Unlike an ordinary recurrent neural network, an LSTM network does not suffer from the vanishing gradient problem which is more pronounced for very long sequences BIBREF20 . Formally, by the chain rule of probability, for a sequence of $M$ words $w_1, w_2,..., w_M$ , the joint probability of all words is given by: ",
"$$P(w_1, w_2,..., w_M) = \\prod _{t=1}^{t=M} P(w_t|w_1, w_2,...., w_{t-1})$$ (Eq. 4) ",
"If the vocabulary consists of $V$ words, the conditional probability of word $w_t$ as a function of its context $\\mathbf {c_{t-1}}=(w_1, w_2,...., w_{t-1})$ is given by: ",
"$$P(w_t=i|\\mathbf {c_{t-1}})=\\frac{\\exp (\\mathbf {U_i}^T\\mathbf {f(c_{t-1})}+b_i)}{\\sum _{i=1}^{V} \\exp (\\mathbf {U_i}^T\\mathbf {f(c_{t-1})}+b_i)}$$ (Eq. 5) ",
" $\\mathbf {f(.)}$ is the output of an LSTM network which takes in the context words $w_1, w_2,...,w_{t-1}$ as inputs through one-hot representations, $\\mathbf {U}$ is a matrix of word representations which on visualization we have found to correspond to POS (Part of Speech) information, while $\\mathbf {b_i}$ is a bias term capturing the unigram occurrence of word $i$ . Equation 5 expresses the word $w_t$ as a function of its context for a LSTM language model which does not utilize any additional affective information."
],
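To make Eq. (5) concrete, the following PyTorch sketch computes the next-word distribution of the baseline LSTM language model; it is an illustrative reconstruction rather than the authors' implementation, and the module names and layer sizes simply reuse hyper-parameters reported later in the experimental setup.

```python
# Illustrative sketch of Eq. (5): P(w_t | c_{t-1}) from an LSTM over the context.
# Module names (embed, lstm, U) and sizes are assumptions, not the released code.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=200, hidden_dim=200, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.U = nn.Linear(hidden_dim, vocab_size)      # U_i and bias b_i of Eq. (5)

    def forward(self, context_ids):                     # (batch, t-1) word indices
        h, _ = self.lstm(self.embed(context_ids))
        f_c = h[:, -1, :]                               # f(c_{t-1}): last hidden state
        return torch.log_softmax(self.U(f_c), dim=-1)   # log P(w_t = i | c_{t-1})
```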
[
"The proposed model Affect-LM has an additional energy term in the word prediction, and can be described by the following equation: ",
"$$\\begin{split}\n\\small {P(w_t=i|\\mathbf {c_{t-1}},\\mathbf {e_{t-1}})= \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad } \\\\\n\\small {\\frac{\\exp { (\\mathbf {U_i}^T\\mathbf {f(c_{t-1})}+\\beta \\mathbf {V_i}^T\\mathbf {g(e_{t-1})}+b_i) }}{\\sum _{i=1}^{V} \\exp (\\mathbf {U_i}^T\\mathbf {f(c_{t-1})}+\\beta \\mathbf {V_i}^T\\mathbf {g(e_{t-1})}+b_i)}}\n\\end{split}$$ (Eq. 7) ",
" $\\mathbf {e_{t-1}}$ is an input vector which consists of affect category information obtained from the words in the context during training, and $\\mathbf {g(.)}$ is the output of a network operating on $\\mathbf {e_{t-1}}$ . $\\mathbf {V_i}$ is an embedding learnt by the model for the $i$ -th word in the vocabulary and is expected to be discriminative of the affective information conveyed by each word. In Figure 4 we present a visualization of these affective representations.",
"The parameter $\\beta $ defined in Equation 7 , which we call the affect strength defines the influence of the affect category information (frequency of emotionally colored words) on the overall prediction of the target word $w_t$ given its context. We can consider the formulation as an energy based model (EBM), where the additional energy term captures the degree of correlation between the predicted word and the affective input BIBREF13 ."
],
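A minimal sketch of the extra energy term in Eq. (7): the affect descriptor is passed through g(.), projected onto the vocabulary by V, scaled by the affect strength beta, and added to the usual logits. Shapes, module names, and the default beta value are illustrative assumptions, not the authors' code.

```python
# Sketch of Eq. (7): softmax over U^T f(c_{t-1}) + beta * V^T g(e_{t-1}) + b.
import torch
import torch.nn as nn

class AffectLMHead(nn.Module):
    def __init__(self, vocab_size=10000, hidden_dim=200, affect_dim=5, beta=1.75):
        super().__init__()
        self.beta = beta
        self.U = nn.Linear(hidden_dim, vocab_size)           # word bias b_i folded in
        self.V = nn.Linear(hidden_dim, vocab_size, bias=False)
        self.g = nn.Sequential(nn.Linear(affect_dim, 100), nn.Sigmoid(),
                               nn.Linear(100, hidden_dim))   # MLP g(.) on e_{t-1}

    def forward(self, f_c, e):           # f_c: (batch, hidden), e: (batch, 5) binary
        logits = self.U(f_c) + self.beta * self.V(self.g(e))
        return torch.softmax(logits, dim=-1)                 # P(w_t = i | c_{t-1}, e_{t-1})
```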
[
"Our proposed model learns a generative model of the next word $w_t$ conditioned not only on the previous words $w_1,w_2,...,w_{t-1}$ but also on the affect category $\\mathbf {e_{t-1}}$ which is additional information about emotional content. During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. Introduced by BIBREF11 pennebaker2001linguistic, LIWC is based on a dictionary, where each word is assigned to a predefined LIWC category. The categories are chosen based on their association with social, affective, and cognitive processes. For example, the dictionary word worry is assigned to LIWC category anxiety. In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor $\\mathbf {e_{t-1}}$ has five features with each feature denoting presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. For example, the affective representation of the sentence i will fight in the war is $\\mathbf {e_{t-1}}=$ {“sad\":0, “angry\":1, “anxiety\":0, “negative emotion\":1, “positive emotion\":0}."
],
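The binary affect descriptor can be reproduced with simple keyword spotting, as in the toy example below; the mini word lists are invented stand-ins, since the actual LIWC dictionary is a licensed resource.

```python
# Toy keyword spotting for the 5-dimensional binary descriptor e_{t-1}.
# The word lists are illustrative stand-ins for the real LIWC categories.
TOY_LIWC = {
    "positive_emotion": {"joy", "happy", "love"},
    "angry": {"fight", "furious", "hate"},
    "sad": {"cry", "grief", "lost"},
    "anxiety": {"worry", "afraid", "nervous"},
    "negative_emotion": {"fight", "cry", "worry", "hate"},
}

def affect_descriptor(sentence):
    tokens = set(sentence.lower().split())
    return {cat: int(bool(tokens & words)) for cat, words in TOY_LIWC.items()}

print(affect_descriptor("i will fight in the war"))
# {'positive_emotion': 0, 'angry': 1, 'sad': 0, 'anxiety': 0, 'negative_emotion': 1}
```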
[
"Affect-LM can be used to generate sentences conditioned on the input affect category, the affect strength $\\beta $ , and the context words. For our experiments, we have chosen the following affect categories - positive emotion, anger, sad, anxiety, and negative emotion (which is a superclass of anger, sad and anxiety). As described in Section \"Conclusions and Future Work\" , the affect strength $\\beta $ defines the degree of dominance of the affect-dependent energy term on the word prediction in the language model, consequently after model training we can change $\\beta $ to control the degree of how “emotionally colored\" a generated utterance is, varying from $\\beta =0$ (neutral; baseline model) to $\\beta =\\infty $ (the generated sentences only consist of emotionally colored words, with no grammatical structure). When Affect-LM is used for generation, the affect categories could be either (1) inferred from the context using LIWC (this occurs when we provide sentence beginnings which are emotionally colored themselves), or (2) set to an input emotion descriptor $\\mathbf {e}$ (this is obtained by setting $\\mathbf {e}$ to a binary vector encoding the desired emotion and works even for neutral sentence beginnings). Given an initial starting set of $M$ words $w_1,w_2,...,w_M$ to complete, affect strength $\\beta $ , and the number of words $\\beta $0 to generate each $\\beta $1 -th generated word is obtained by sampling from $\\beta $2 for $\\beta $3 ."
],
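Putting the two sketches above together, generation reduces to repeatedly sampling from the affect-conditioned distribution; `lm` and `head` refer to the illustrative classes defined earlier, and the emotion descriptor `e` is the binary vector described in the previous section.

```python
# Illustrative sampling loop conditioned on a fixed emotion descriptor e;
# the affect strength beta is the one stored inside `head`.
import torch

@torch.no_grad()
def generate(lm, head, prompt_ids, e, num_words=20):
    ids = list(prompt_ids)
    for _ in range(num_words):
        h, _ = lm.lstm(lm.embed(torch.tensor([ids])))
        probs = head(h[:, -1, :], e)                  # P(w_t | c_{t-1}, e_{t-1})
        ids.append(torch.multinomial(probs, 1).item())
    return ids

# e.g. an "angry / negative emotion" descriptor (feature order is an assumption):
# e = torch.tensor([[0., 1., 0., 0., 1.]])
```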
[
"In Section \"Introduction\" , we have introduced three primary research questions related to the ability of the proposed Affect-LM model to generate emotionally colored conversational text without sacrificing grammatical correctness, and to obtain lower perplexity than a baseline LSTM language model when evaluated on emotionally colored corpora. In this section, we discuss our experimental setup to address these questions, with a description of Affect-LM's architecture and the corpora used for training and evaluating the language models."
],
[
"The Fisher English Training Speech Corpus is the main corpus used for training the proposed model, in addition to which we have chosen three emotionally colored conversational corpora. A brief description of each corpus is given below, and in Table 1 , we report relevant statistics, such as the total number of words, along with the fraction of emotionally colored words (those belonging to the LIWC affective word categories) in each corpus.",
"Fisher English Training Speech Parts 1 & 2: The Fisher dataset BIBREF21 consists of speech from telephonic conversations of 10 minutes each, along with their associated transcripts. Each conversation is between two strangers who are requested to speak on a randomly selected topic from a set. Examples of conversation topics are Minimum Wage, Time Travel and Comedy.",
"Distress Assessment Interview Corpus (DAIC): The DAIC corpus introduced by BIBREF22 gratch2014distress consists of 70+ hours of dyadic interviews between a human subject and a virtual human, where the virtual human asks questions designed to diagnose symptoms of psychological distress in the subject such as depression or PTSD (Post Traumatic Stress Disorder).",
"SEMAINE dataset: SEMAINE BIBREF23 is a large audiovisual corpus consisting of interactions between subjects and an operator simulating a SAL (Sensitive Artificial Listener). There are a total of 959 conversations which are approximately 5 minutes each, and are transcribed and annotated with affective dimensions.",
"Multimodal Opinion-level Sentiment Intensity Dataset (CMU-MOSI): BIBREF24 This is a multimodal annotated corpus of opinion videos where in each video a speaker expresses his opinion on a commercial product. The corpus consist of speech from 93 videos from 89 distinct speakers (41 male and 48 female speakers). This corpus differs from the others since it contains monologues rather than conversations.",
"While we find that all corpora contain spoken language, they have the following characteristics different from the Fisher corpus: (1) More emotional content as observed in Table 1 , since they have been generated through a human subject's spontaneous replies to questions designed to generate an emotional response, or from conversations on emotion-inducing topics (2) Domain mismatch due to recording environment (for example, the DAIC corpus was created in a mental health setting, while the CMU-MOSI corpus consisted of opinion videos uploaded online). (3) Significantly smaller than the Fisher corpus, which is 25 times the size of the other corpora combined. Thus, we perform training in two separate stages - training of the baseline and Affect-LM models on the Fisher corpus, and subsequent adaptation and fine-tuning on each of the emotionally colored corpora."
],
[
"For our experiments, we have implemented a baseline LSTM language model in Tensorflow BIBREF25 , which follows the non-regularized implementation as described in BIBREF26 zaremba2014recurrent and to which we have added a separate energy term for the affect category in implementing Affect-LM. We have used a vocabulary of 10000 words and an LSTM network with 2 hidden layers and 200 neurons per hidden layer. The network is unrolled for 20 time steps, and the size of each minibatch is 20. The affect category $\\mathbf {e_{t-1}}$ is processed by a multi-layer perceptron with a single hidden layer of 100 neurons and sigmoid activation function to yield $\\mathbf {g(e_{t-1})}$ . We have set the output layer size to 200 for both $\\mathbf {f(c_{t-1})}$ and $\\mathbf {g(e_{t-1})}$ . We have kept the network architecture constant throughout for ease of comparison between the baseline and Affect-LM."
],
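For reference, the architecture hyper-parameters stated above can be collected as a single config; the field names are ours, the values are taken directly from the text.

```python
# Hyper-parameters as reported in the text (field names are illustrative).
AFFECT_LM_CONFIG = {
    "vocab_size": 10000,
    "lstm_layers": 2,
    "lstm_hidden": 200,
    "unroll_steps": 20,
    "batch_size": 20,
    "affect_dim": 5,            # binary LIWC affect categories
    "affect_mlp_hidden": 100,   # sigmoid activation
    "output_dim": 200,          # for both f(c_{t-1}) and g(e_{t-1})
}
```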
[
"Affect-LM can also be used as a language model where the next predicted word is estimated from the words in the context, along with an affect category extracted from the context words themselves (instead of being encoded externally as in generation). To evaluate whether additional emotional information could improve the prediction performance, we train the corpora detailed in Section \"Speech Corpora\" in two stages as described below:",
"(1) Training and validation of the language models on Fisher dataset- The Fisher corpus is split in a 75:15:10 ratio corresponding to the training, validation and evaluation subsets respectively, and following the implementation in BIBREF26 zaremba2014recurrent, we train the language models (both the baseline and Affect-LM) on the training split for 13 epochs, with a learning rate of 1.0 for the first four epochs, and the rate decreasing by a factor of 2 after every subsequent epoch. The learning rate and neural architecture are the same for all models. We validate the model over the affect strength $\\beta \\in [1.0, 1.5, 1.75, 2.0, 2.25, 2.5, 3.0]$ . The best performing model on the Fisher validation set is chosen and used as a seed for subsequent adaptation on the emotionally colored corpora.",
"(2) Fine-tuning the seed model on other corpora- Each of the three corpora - CMU-MOSI, DAIC and SEMAINE are split in a 75:15:10 ratio to create individual training, validation and evaluation subsets. For both the baseline and Affect-LM, the best performing model from Stage 1 (the seed model) is fine-tuned on each of the training corpora, with a learning rate of 0.25 which is constant throughout, and a validation grid of $\\beta \\in [1.0, 1.5, 1.75, 2.0]$ . For each model adapted on a corpus, we compare the perplexities obtained by Affect-LM and the baseline model when evaluated on that corpus."
],
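The stage-1 learning-rate rule can be written down directly; only the decay rule and the number of epochs come from the text, and everything else (such as the optimizer) is left unspecified here.

```python
def stage1_lr(epoch):
    """Stage-1 schedule: 1.0 for the first four epochs, then halved after every
    subsequent epoch, for 13 epochs in total (epochs numbered from 0)."""
    return 1.0 if epoch < 4 else 0.5 ** (epoch - 3)

assert [stage1_lr(e) for e in range(6)] == [1.0, 1.0, 1.0, 1.0, 0.5, 0.25]
```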
[
"We assess Affect-LM's ability to generate emotionally colored text of varying degrees without severely deteriorating grammatical correctness, by conducting an extensive perception study on Amazon's Mechanical Turk (MTurk) platform. The MTurk platform has been successfully used in the past for a wide range of perception experiments and has been shown to be an excellent resource to collect human ratings for large studies BIBREF27 . Specifically, we generated more than 200 sentences for four sentence beginnings (namely the three sentence beginnings listed in Table 2 as well as an end of sentence token indicating that the model should generate a new sentence) in five affect categories happy(positive emotion), angry, sad, anxiety, and negative emotion. The Affect-LM model trained on the Fisher corpus was used for sentence generation. Each sentence was evaluated by two human raters that have a minimum approval rating of 98% and are located in the United States. The human raters were instructed that the sentences should be considered to be taken from a conversational rather than a written context: repetitions and pause fillers (e.g., um, uh) are common and no punctuation is provided. The human raters evaluated each sentence on a seven-point Likert scale for the five affect categories, overall affective valence as well as the sentence's grammatical correctness and were paid 0.05USD per sentence. We measured inter-rater agreement using Krippendorff’s $\\alpha $ and observed considerable agreement between raters across all categories (e.g., for valence $\\alpha = 0.510$ and grammatical correctness $\\alpha = 0.505$ ).",
"For each target emotion (i.e., intended emotion of generated sentences) we conducted an initial MANOVA, with human ratings of affect categories the DVs (dependent variables) and the affect strength parameter $\\beta $ the IV (independent variable). We then conducted follow-up univariate ANOVAs to identify which DV changes significantly with $\\beta $ . In total we conducted 5 MANOVAs and 30 follow-up ANOVAs, which required us to update the significance level to p $<$ 0.001 following a Bonferroni correction."
],
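The corrected threshold follows from the number of tests mentioned above; the arithmetic below is our worked reading of the Bonferroni correction, not code from the study.

```python
# 5 MANOVAs + 30 follow-up ANOVAs = 35 tests in total.
n_tests = 5 + 30
corrected_alpha = 0.05 / n_tests
print(round(corrected_alpha, 4))   # 0.0014, in line with the reported p < 0.001 threshold
```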
[
"In Section \"Affect-LM for Emotional Text Generation\" we have described the process of sampling text from the model conditioned on input affective information (research question Q1). Table 2 shows three sentences generated by the model for input sentence beginnings I feel so ..., Why did you ... and I told him to ... for each of five affect categories - happy(positive emotion), angry, sad anxiety, and neutral(no emotion). They have been selected from a pool of 20 generated sentences for each category and sentence beginning."
],
[
"In the following we address research question Q2 by reporting the main statistical findings of our MTurk study, which are visualized in Figures 2 and 3 .",
"Positive Emotion Sentences. The multivariate result was significant for positive emotion generated sentences (Pillai's Trace $=$ .327, F(4,437) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for all DVs except angry with p $<$ .0001, indicating that both affective valence and happy DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (a). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). However, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Negative Emotion Sentences. The multivariate result was significant for negative emotion generated sentences (Pillai's Trace $=$ .130, F(4,413) $=$ 2.30, p $<$ .0005). Follow up ANOVAs revealed significant results for affective valence and happy DVs with p $<$ .0005, indicating that the affective valence DV was successfully manipulated with $\\beta $ , as seen in Figure 2 (b). Further, as intended there were no significant differences for DVs angry, sad and anxious, indicating that the negative emotion DV refers to a more general affect related concept rather than a specific negative emotion. This finding is in concordance with the intended LIWC category of negative affect that forms a parent category above the more specific emotions, such as angry, sad, and anxious BIBREF11 . Grammatical correctness was also significantly influenced by the affect strength $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). As for positive emotion, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Angry Sentences. The multivariate result was significant for angry generated sentences (Pillai's Trace $=$ .199, F(4,433) $=$ 3.76, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy, and angry DVs with p $<$ .0001, indicating that both affective valence and angry DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (c). Grammatical correctness was not significantly influenced by the affect strength parameter $\\beta $ , which indicates that angry sentences are highly stable across a wide range of $\\beta $ (see Figure 3 ). However, it seems that human raters could not successfully distinguish between angry, sad, and anxious affect categories, indicating that the generated sentences likely follow a general negative affect dimension.",
"Sad Sentences. The multivariate result was significant for sad generated sentences (Pillai's Trace $=$ .377, F(4,425) $=$ 7.33, p $<$ .0001). Follow up ANOVAs revealed significant results only for the sad DV with p $<$ .0001, indicating that while the sad DV can be successfully manipulated with $\\beta $ , as seen in Figure 2 (d). The grammatical correctness deteriorates significantly with $\\beta $ . Specifically, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). A post-hoc Tukey test for sad reveals that $\\beta =3$ is optimal for this DV, since it leads to a significant jump in the perceived sadness scores at p $<$ .005 for $=$0 .",
"Anxious Sentences. The multivariate result was significant for anxious generated sentences (Pillai's Trace $=$ .289, F(4,421) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy and anxious DVs with p $<$ .0001, indicating that both affective valence and anxiety DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (e). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ . Similarly for sad, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). Again, a post-hoc Tukey test for anxious reveals that $\\beta =3$ is optimal for this DV, since it leads to a",
"significant jump in the perceived anxiety scores at p $<$ .005 for $\\beta \\in \\lbrace 0,1,2\\rbrace $ ."
],
[
"In Table 3 , we address research question Q3 by presenting the perplexity scores obtained by the baseline model and Affect-LM, when trained on the Fisher corpus and subsequently adapted on three emotional corpora (each adapted model is individually trained on CMU-MOSI, DAIC and SEMAINE). The models trained on Fisher are evaluated on all corpora while each adapted model is evaluated only on it's respective corpus. For all corpora, we find that Affect-LM achieves lower perplexity on average than the baseline model, implying that affect category information obtained from the context words improves language model prediction. The average perplexity improvement is 1.44 (relative improvement 1.94%) for the model trained on Fisher, while it is 0.79 (1.31%) for the adapted models. We note that larger improvements in perplexity are observed for corpora with higher content of emotional words. This is supported by the results in Table 3 , where Affect-LM obtains a larger reduction in perplexity for the CMU-MOSI and SEMAINE corpora, which respectively consist of 2.76% and 2.75% more emotional words than the Fisher corpus."
],
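The perplexities in Table 3 follow the standard definition, sketched below; this is generic evaluation code rather than the authors' pipeline.

```python
import math

def perplexity(target_log_probs):
    """Perplexity from per-word natural-log probabilities assigned to the targets."""
    return math.exp(-sum(target_log_probs) / len(target_log_probs))

print(perplexity([math.log(0.1)] * 50))   # 10.0 for a model that always assigns 0.1
```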
[
"In Equation 7 , Affect-LM learns a weight matrix $\\mathbf {V}$ which captures the correlation between the predicted word $w_t$ , and the affect category $\\mathbf {e_{t-1}}$ . Thus, each row of the matrix $\\mathbf {V_i}$ is an emotionally meaningful embedding of the $i$ -th word in the vocabulary. In Figure 4 , we present a visualization of these embeddings, where each data point is a separate word, and words which appear in the LIWC dictionary are colored based on which affect category they belong to (we have labeled only words in categories positive emotion, negative emotion, anger, sad and anxiety since these categories contain the most frequent words). Words colored grey are those not in the LIWC dictionary. In Figure 4 , we observe that the embeddings contain affective information, where the positive emotion is highly separated from the negative emotions (sad, angry, anxiety) which are clustered together."
],
[
" In this paper, we have introduced a novel language model Affect-LM for generating affective conversational text conditioned on context words, an affective category and an affective strength parameter. MTurk perception studies show that the model can generate expressive text at varying degrees of emotional strength without affecting grammatical correctness. We also evaluate Affect-LM as a language model and show that it achieves lower perplexity than a baseline LSTM model when the affect category is obtained from the words in the context. For future work, we wish to extend this model by investigating language generation conditioned on other modalities such as facial images and speech, and to applications such as dialogue generation for virtual agents."
],
[
" This material is based upon work supported by the U.S. Army Research Laboratory under contract number W911NF-14-D-0005. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Government, and no official endorsement should be inferred. Sayan Ghosh also acknowledges the Viterbi Graduate School Fellowship for funding his graduate studies."
]
],
"section_name": [
"Introduction",
"Related Work",
"LSTM Language Model",
"Proposed Model: Affect-LM",
"Descriptors for Affect Category Information",
"Affect-LM for Emotional Text Generation",
"Experimental Setup",
"Speech Corpora",
"Affect-LM Neural Architecture",
"Language Modeling Experiments",
"Sentence Generation Perception Study",
"Generation of Emotional Text",
"MTurk Perception Experiments",
"Language Modeling Results",
"Word Representations",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"6b823cbac99a596dcf0bd1a622b4768f8bc8061c",
"bd773b875f19f6fb54bae6812577d2740511f049"
],
"answer": [
{
"evidence": [
"Positive Emotion Sentences. The multivariate result was significant for positive emotion generated sentences (Pillai's Trace $=$ .327, F(4,437) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for all DVs except angry with p $<$ .0001, indicating that both affective valence and happy DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (a). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). However, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Negative Emotion Sentences. The multivariate result was significant for negative emotion generated sentences (Pillai's Trace $=$ .130, F(4,413) $=$ 2.30, p $<$ .0005). Follow up ANOVAs revealed significant results for affective valence and happy DVs with p $<$ .0005, indicating that the affective valence DV was successfully manipulated with $\\beta $ , as seen in Figure 2 (b). Further, as intended there were no significant differences for DVs angry, sad and anxious, indicating that the negative emotion DV refers to a more general affect related concept rather than a specific negative emotion. This finding is in concordance with the intended LIWC category of negative affect that forms a parent category above the more specific emotions, such as angry, sad, and anxious BIBREF11 . Grammatical correctness was also significantly influenced by the affect strength $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). As for positive emotion, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Angry Sentences. The multivariate result was significant for angry generated sentences (Pillai's Trace $=$ .199, F(4,433) $=$ 3.76, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy, and angry DVs with p $<$ .0001, indicating that both affective valence and angry DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (c). Grammatical correctness was not significantly influenced by the affect strength parameter $\\beta $ , which indicates that angry sentences are highly stable across a wide range of $\\beta $ (see Figure 3 ). However, it seems that human raters could not successfully distinguish between angry, sad, and anxious affect categories, indicating that the generated sentences likely follow a general negative affect dimension.",
"Sad Sentences. The multivariate result was significant for sad generated sentences (Pillai's Trace $=$ .377, F(4,425) $=$ 7.33, p $<$ .0001). Follow up ANOVAs revealed significant results only for the sad DV with p $<$ .0001, indicating that while the sad DV can be successfully manipulated with $\\beta $ , as seen in Figure 2 (d). The grammatical correctness deteriorates significantly with $\\beta $ . Specifically, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). A post-hoc Tukey test for sad reveals that $\\beta =3$ is optimal for this DV, since it leads to a significant jump in the perceived sadness scores at p $<$ .005 for $=$0 .",
"Anxious Sentences. The multivariate result was significant for anxious generated sentences (Pillai's Trace $=$ .289, F(4,421) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy and anxious DVs with p $<$ .0001, indicating that both affective valence and anxiety DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (e). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ . Similarly for sad, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). Again, a post-hoc Tukey test for anxious reveals that $\\beta =3$ is optimal for this DV, since it leads to a"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The multivariate result was significant for positive emotion generated sentences (Pillai's Trace $=$ .327, F(4,437) $=$ 6.44, p $<$ .0001). ",
"The multivariate result was significant for negative emotion generated sentences (Pillai's Trace $=$ .130, F(4,413) $=$ 2.30, p $<$ .0005). ",
"The multivariate result was significant for angry generated sentences (Pillai's Trace $=$ .199, F(4,433) $=$ 3.76, p $<$ .0001). ",
"The multivariate result was significant for sad generated sentences (Pillai's Trace $=$ .377, F(4,425) $=$ 7.33, p $<$ .0001).",
"The multivariate result was significant for anxious generated sentences (Pillai's Trace $=$ .289, F(4,421) $=$ 6.44, p $<$ .0001). "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Positive Emotion Sentences. The multivariate result was significant for positive emotion generated sentences (Pillai's Trace $=$ .327, F(4,437) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for all DVs except angry with p $<$ .0001, indicating that both affective valence and happy DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (a). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). However, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Negative Emotion Sentences. The multivariate result was significant for negative emotion generated sentences (Pillai's Trace $=$ .130, F(4,413) $=$ 2.30, p $<$ .0005). Follow up ANOVAs revealed significant results for affective valence and happy DVs with p $<$ .0005, indicating that the affective valence DV was successfully manipulated with $\\beta $ , as seen in Figure 2 (b). Further, as intended there were no significant differences for DVs angry, sad and anxious, indicating that the negative emotion DV refers to a more general affect related concept rather than a specific negative emotion. This finding is in concordance with the intended LIWC category of negative affect that forms a parent category above the more specific emotions, such as angry, sad, and anxious BIBREF11 . Grammatical correctness was also significantly influenced by the affect strength $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ (see Figure 3 ). As for positive emotion, a post-hoc Tukey test revealed that only the highest $\\beta $ value shows a significant drop in grammatical correctness at p $<$ .05.",
"Angry Sentences. The multivariate result was significant for angry generated sentences (Pillai's Trace $=$ .199, F(4,433) $=$ 3.76, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy, and angry DVs with p $<$ .0001, indicating that both affective valence and angry DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (c). Grammatical correctness was not significantly influenced by the affect strength parameter $\\beta $ , which indicates that angry sentences are highly stable across a wide range of $\\beta $ (see Figure 3 ). However, it seems that human raters could not successfully distinguish between angry, sad, and anxious affect categories, indicating that the generated sentences likely follow a general negative affect dimension.",
"Sad Sentences. The multivariate result was significant for sad generated sentences (Pillai's Trace $=$ .377, F(4,425) $=$ 7.33, p $<$ .0001). Follow up ANOVAs revealed significant results only for the sad DV with p $<$ .0001, indicating that while the sad DV can be successfully manipulated with $\\beta $ , as seen in Figure 2 (d). The grammatical correctness deteriorates significantly with $\\beta $ . Specifically, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). A post-hoc Tukey test for sad reveals that $\\beta =3$ is optimal for this DV, since it leads to a significant jump in the perceived sadness scores at p $<$ .005 for $=$0 .",
"Anxious Sentences. The multivariate result was significant for anxious generated sentences (Pillai's Trace $=$ .289, F(4,421) $=$ 6.44, p $<$ .0001). Follow up ANOVAs revealed significant results for affective valence, happy and anxious DVs with p $<$ .0001, indicating that both affective valence and anxiety DVs were successfully manipulated with $\\beta $ , as seen in Figure 2 (e). Grammatical correctness was also significantly influenced by the affect strength parameter $\\beta $ and results show that the correctness deteriorates with increasing $\\beta $ . Similarly for sad, a post-hoc Tukey test revealed that only the two highest $\\beta $ values show a significant drop in grammatical correctness at p $<$ .05 (see Figure 3 ). Again, a post-hoc Tukey test for anxious reveals that $\\beta =3$ is optimal for this DV, since it leads to a"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The multivariate result was significant for positive emotion generated sentences (Pillai's Trace $=$ .327, F(4,437) $=$ 6.44, p $<$ .0001).",
"The multivariate result was significant for negative emotion generated sentences (Pillai's Trace $=$ .130, F(4,413) $=$ 2.30, p $<$ .0005).",
"The multivariate result was significant for angry generated sentences (Pillai's Trace $=$ .199, F(4,433) $=$ 3.76, p $<$ .0001). ",
"The multivariate result was significant for sad generated sentences (Pillai's Trace $=$ .377, F(4,425) $=$ 7.33, p $<$ .0001).",
"The multivariate result was significant for anxious generated sentences (Pillai's Trace $=$ .289, F(4,421) $=$ 6.44, p $<$ .0001)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"8fd136ed230905a639b08e348d77cd67d98a9674",
"9b019965c576d75c7f6734c73bd13d6090cce57c",
"be90686012f253791525e4333b92464e9998ae4f"
],
"answer": [
{
"evidence": [
"Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM. Our model is trained on conversational speech corpora, common in language modeling for speech recognition applications BIBREF10 . Figure 1 provides an overview of our Affect-LM and its ability to generate emotionally colored conversational text in a number of affect categories with varying affect strengths. While these parameters can be manually tuned to generate conversational text, the affect category can also be automatically inferred from preceding context words. Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool BIBREF11 . Our primary research questions in this paper are:",
"Our proposed model learns a generative model of the next word $w_t$ conditioned not only on the previous words $w_1,w_2,...,w_{t-1}$ but also on the affect category $\\mathbf {e_{t-1}}$ which is additional information about emotional content. During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. Introduced by BIBREF11 pennebaker2001linguistic, LIWC is based on a dictionary, where each word is assigned to a predefined LIWC category. The categories are chosen based on their association with social, affective, and cognitive processes. For example, the dictionary word worry is assigned to LIWC category anxiety. In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor $\\mathbf {e_{t-1}}$ has five features with each feature denoting presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. For example, the affective representation of the sentence i will fight in the war is $\\mathbf {e_{t-1}}=$ {“sad\":0, “angry\":1, “anxiety\":0, “negative emotion\":1, “positive emotion\":0}."
],
"extractive_spans": [],
"free_form_answer": "Using a dictionary of emotional words, LIWC, they perform keyword spotting.",
"highlighted_evidence": [
"Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool BIBREF11 . ",
"During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. ",
"In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our proposed model learns a generative model of the next word $w_t$ conditioned not only on the previous words $w_1,w_2,...,w_{t-1}$ but also on the affect category $\\mathbf {e_{t-1}}$ which is additional information about emotional content. During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. Introduced by BIBREF11 pennebaker2001linguistic, LIWC is based on a dictionary, where each word is assigned to a predefined LIWC category. The categories are chosen based on their association with social, affective, and cognitive processes. For example, the dictionary word worry is assigned to LIWC category anxiety. In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor $\\mathbf {e_{t-1}}$ has five features with each feature denoting presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. For example, the affective representation of the sentence i will fight in the war is $\\mathbf {e_{t-1}}=$ {“sad\":0, “angry\":1, “anxiety\":0, “negative emotion\":1, “positive emotion\":0}."
],
"extractive_spans": [],
"free_form_answer": "A sentence is represented by five features that each mark presence or absence of an emotion: positive emotion, angry, sad, anxious, and negative emotion.",
"highlighted_evidence": [
"In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor $\\mathbf {e_{t-1}}$ has five features with each feature denoting presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Affect-LM can be used to generate sentences conditioned on the input affect category, the affect strength $\\beta $ , and the context words. For our experiments, we have chosen the following affect categories - positive emotion, anger, sad, anxiety, and negative emotion (which is a superclass of anger, sad and anxiety). As described in Section \"Conclusions and Future Work\" , the affect strength $\\beta $ defines the degree of dominance of the affect-dependent energy term on the word prediction in the language model, consequently after model training we can change $\\beta $ to control the degree of how “emotionally colored\" a generated utterance is, varying from $\\beta =0$ (neutral; baseline model) to $\\beta =\\infty $ (the generated sentences only consist of emotionally colored words, with no grammatical structure). When Affect-LM is used for generation, the affect categories could be either (1) inferred from the context using LIWC (this occurs when we provide sentence beginnings which are emotionally colored themselves), or (2) set to an input emotion descriptor $\\mathbf {e}$ (this is obtained by setting $\\mathbf {e}$ to a binary vector encoding the desired emotion and works even for neutral sentence beginnings). Given an initial starting set of $M$ words $w_1,w_2,...,w_M$ to complete, affect strength $\\beta $ , and the number of words $\\beta $0 to generate each $\\beta $1 -th generated word is obtained by sampling from $\\beta $2 for $\\beta $3 .",
"Our proposed model learns a generative model of the next word $w_t$ conditioned not only on the previous words $w_1,w_2,...,w_{t-1}$ but also on the affect category $\\mathbf {e_{t-1}}$ which is additional information about emotional content. During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. Introduced by BIBREF11 pennebaker2001linguistic, LIWC is based on a dictionary, where each word is assigned to a predefined LIWC category. The categories are chosen based on their association with social, affective, and cognitive processes. For example, the dictionary word worry is assigned to LIWC category anxiety. In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor $\\mathbf {e_{t-1}}$ has five features with each feature denoting presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. For example, the affective representation of the sentence i will fight in the war is $\\mathbf {e_{t-1}}=$ {“sad\":0, “angry\":1, “anxiety\":0, “negative emotion\":1, “positive emotion\":0}."
],
"extractive_spans": [
"either (1) inferred from the context using LIWC (this occurs when we provide sentence beginnings which are emotionally colored themselves), or (2) set to an input emotion descriptor $\\mathbf {e}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"When Affect-LM is used for generation, the affect categories could be either (1) inferred from the context using LIWC (this occurs when we provide sentence beginnings which are emotionally colored themselves), or (2) set to an input emotion descriptor $\\mathbf {e}$ (this is obtained by setting $\\mathbf {e}$ to a binary vector encoding the desired emotion and works even for neutral sentence beginnings).",
"For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"Is the performance improvement (with and without affect attributes) statistically significant?",
"How to extract affect attributes from the sentence?"
],
"question_id": [
"b78bb6fe817c2d4bc69236df998f546e94c3ee21",
"1a419468d255d40ae82ed7777618072a48f0091b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"language model",
"language model"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Affect-LM is capable of generating emotionally colored conversational text in five specific affect categories (et−1) with varying affect strengths (β). Three generated example sentences for happy affect category are shown in three distinct affect strengths.",
"Table 1: Summary of corpora used in this paper. CMU-MOSI and SEMAINE are observed to have higher emotional content than Fisher and DAIC corpora.",
"Table 2: Example sentences generated by the model conditioned on different affect categories",
"Figure 2: Amazon Mechanical Turk study results for generated sentences in the target affect categories positive emotion, negative emotion, angry, sad, and anxious (a)-(e). The most relevant human rating curve for each generated emotion is highlighted in red, while less relevant rating curves are visualized in black. Affect categories are coded via different line types and listed in legend below figure.",
"Figure 3: Mechanical Turk study results for grammatical correctness for all generated target emotions. Perceived grammatical correctness for each affect categories are color-coded.",
"Table 3: Evaluation perplexity scores obtained by the baseline and Affect-LM models when trained on Fisher and subsequently adapted on DAIC, SEMAINE and CMU-MOSI corpora",
"Figure 4: Embeddings learnt by Affect-LM"
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"6-Table2-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"8-Table3-1.png",
"8-Figure4-1.png"
]
} | [
"How to extract affect attributes from the sentence?"
] | [
[
"1704.06851-Descriptors for Affect Category Information-0",
"1704.06851-Introduction-2",
"1704.06851-Affect-LM for Emotional Text Generation-0"
]
] | [
"A sentence is represented by five features that each mark presence or absence of an emotion: positive emotion, angry, sad, anxious, and negative emotion."
] | 180 |
1911.06815 | Experiments in Detecting Persuasion Techniques in the News | Many recent political events, like the 2016 US Presidential elections or the 2018 Brazilian elections have raised the attention of institutions and of the general public on the role of Internet and social media in influencing the outcome of these events. We argue that a safe democracy is one in which citizens have tools to make them aware of propaganda campaigns. We propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines. | {
"paragraphs": [
[
"Journalistic organisations, such as Media Bias/Fact Check, provide reports on news sources highlighting the ones that are propagandistic. Obviously, such analysis is time-consuming and possibly biased and it cannot be applied to the enormous amount of news that flood social media and the Internet. Research on detecting propaganda has focused primarily on classifying entire articles as propagandistic/non-propagandistic BIBREF0, BIBREF1, BIBREF2. Such learning systems are trained using gold labels obtained by transferring the label of the media source, as per Media Bias/Fact Check judgment, to each of its articles. Such distant supervision setting inevitably introduces noise in the learning process BIBREF3 and the resulting systems tend to lack explainability.",
"We argue that in order to study propaganda in a sound and reliable way, we need to rely on high-quality trusted professional annotations and it is best to do so at the fragment level, targeting specific techniques rather than using a label for an entire document or an entire news outlet. Therefore, we propose a novel task: identifying specific instances of propaganda techniques used within an article. In particular, we design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.",
"Our corpus could enable research in propagandistic and non-objective news, including the development of explainable AI systems. A system that can detect instances of use of specific propagandistic techniques would be able to make it explicit to the users why a given article was predicted to be propagandistic. It could also help train the users to spot the use of such techniques in the news."
],
[
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9.",
"The total number of technique instances found in the articles, after the consolidation phase, is $7,485$, out of a total number of $21,230$ sentences (35.2%). The distribution of the techniques in the corpus is also uneven: while there are $2,547$ occurrences of loaded language, there are only 15 instances of straw man (more statistics about the corpus can be found in BIBREF10). We define two tasks based on the corpus described in Section SECREF2: (i) SLC (Sentence-level Classification), which asks to predict whether a sentence contains at least one propaganda technique, and (ii) FLC (Fragment-level classification), which asks to identify both the spans and the type of propaganda technique. Note that these two tasks are of different granularity, $g_1$ and $g_2$, namely tokens for FLC and sentences for SLC. We split the corpus into training, development and test, each containing 293, 57, 101 articles and 14,857, 2,108, 4,265 sentences, respectively.",
"Our task requires specific evaluation measures that give credit for partial overlaps of fragments. Thus, in our precision and recall versions, we give partial credit to imperfect matches at the character level, as in plagiarism detection BIBREF11.",
"Let $s$ and $t$ be two fragments, i.e., sequences of characters. We measure the overlap of two annotated fragments as $ C(s,t,h) = \\frac{|(s\\cap t)|}{h}\\delta \\left(l(s), l(t) \\right)$, where $h$ is a normalizing factor, $l(a)$ is the labelling of fragment $a$, and $\\delta (a,b)=1$ if $a=b$, and 0 otherwise.",
"We now define variants of precision and recall able to account for the imbalance in the corpus:",
"In eq. (DISPLAY_FORM4), we define $P(S,T)$ to be zero if $|S|=0$ and $R(S,T)$ to be zero if $|T|=0$. Finally, we compute the harmonic mean of precision and recall in Eq. (DISPLAY_FORM4) and we obtain an F$_1$-measure. Having a separate function $C$ for comparing two annotations gives us additional flexibility compared to standard NER measures that operate at the token/character level, e.g., we can change the factor that gives credit for partial overlaps and be more forgiving when only a few characters are wrong."
],
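The character-level overlap credit C(s, t, h) can be implemented directly from its definition; the fragment representation below (start, end, label) and the way the two normalizations map onto precision and recall are our illustrative reading of the text, not the official scorer.

```python
# C(s, t, h) = |s ∩ t| / h * delta(l(s), l(t)), with fragments as character spans.
def C(s, t, h):
    (s_start, s_end, s_label), (t_start, t_end, t_label) = s, t
    if s_label != t_label:
        return 0.0
    overlap = max(0, min(s_end, t_end) - max(s_start, t_start))
    return overlap / h

pred = (10, 30, "loaded_language")
gold = (15, 40, "loaded_language")
print(C(pred, gold, h=pred[1] - pred[0]),   # 0.75: credit normalized by the predicted span
      C(pred, gold, h=gold[1] - gold[0]))   # 0.6:  credit normalized by the gold span
```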
[
"We depart from BERT BIBREF12, and we design three baselines.",
"BERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c).",
"Multi-Granularity Network. We propose a model that can drive the higher-granularity task (FLC) on the basis of the lower-granularity information (SLC), rather than simply using low-granularity information directly. Figure FIGREF7-d shows the architecture of this model.",
"More generally, suppose there are $k$ tasks of increasing granularity, e.g., document-level, paragraph-level, sentence-level, word-level, subword-level, character-level. Each task has a separate classification layer $L_{g_k}$ that receives the feature representation of the specific level of granularity $g_k$ and outputs $o_{g_k}$. The dimension of the representation depends on the embedding layer, while the dimension of the output depends on the number of classes in the task. The output $o_{g_k}$ is used to generate a weight for the next granularity task $g_{k+1}$ through a trainable gate $f$:",
"The gate $f$ consists of a projection layer to one dimension and an activation function. The resulting weight is multiplied by each element of the output of layer $L_{g_{k+1}}$ to produce the output for task $g_{k+1}$:",
"If $w_{g_{k}}=0$ for a given example, the output of the next granularity task $o_{g_{k+1}}$ would be 0 as well. In our setting, this means that, if the sentence-level classifier is confident that the sentence does not contain propaganda, i.e., $w_{g_{k}}=0$, then $o_{g_{k+1}}=0$ and there would be no propagandistic technique predicted for any span within that sentence. Similarly, when back-propagating the error, if $w_{g_{k}}=0$ for a given example, the final entropy loss would become zero, i.e., the model would not get any information from that example. As a result, only examples strongly classified as negative in a lower-granularity task would be ignored in the high-granularity task. Having the lower-granularity as the main task means that higher-granularity information can be selectively used as additional information to improve the performance, but only if the example is not considered as highly negative.",
"For the loss function, we use a cross-entropy loss with sigmoid activation for every layer, except for the highest-granularity layer $L_{g_K}$, which uses a cross-entropy loss with softmax activation. Unlike softmax, which normalizes over all dimensions, the sigmoid allows each output component of layer $L_{g_k}$ to be independent from the rest. Thus, the output of the sigmoid for the positive class increases the degree of freedom by not affecting the negative class, and vice versa. As we have two tasks, we use sigmoid activation for $L_{g_1}$ and softmax activation for $L_{g_2}$. Moreover, we use a weighted sum of losses with a hyper-parameter $\\alpha $:",
"Again, we use BERT BIBREF12 for the contextualized embedding layer and we place the multi-granularity network on top of it."
],
[
"We used the PyTorch framework and the pretrained BERT model, which we fine-tuned for our tasks. To deal with class imbalance, we give weight to the binary cross-entropy according to the proportion of positive samples. For the $\\alpha $ in the joint loss function, we use 0.9 for sentence classification, and 0.1 for word-level classification. In order to reduce the effect of random fluctuations for BERT, all the reported numbers are the average of three experimental runs with different random seeds. As it is standard, we tune our models on the dev partition and we report results on the test partition.",
"The left side of Table TABREF12 shows the performance for the three baselines and for our multi-granularity network on the FLC task. For the latter, we vary the degree to which the gate function is applied: using ReLU is more aggressive compared to using the Sigmoid, as the ReLU outputs zero for a negative input. Table TABREF12 (right) shows that using additional information from the sentence-level for the token-level classification (BERT-Granularity) yields small improvements. The multi-granularity models outperform all baselines thanks to their higher precision. This shows the effect of the model excluding sentences that it determined to be non-propagandistic from being considered for token-level classification.",
"The right side of Table TABREF12 shows the results for the SLC task. We apply our multi-granularity network model to the sentence-level classification task to see its effect on low granularity when we train the model with a high granularity task. Interestingly, it yields huge performance improvements on the sentence-level classification result. Compared to the BERT baseline, it increases the recall by 8.42%, resulting in a 3.24% increase of the F$_1$ score. In this case, the result of token-level classification is used as additional information for the sentence-level task, and it helps to find more positive samples. This shows the opposite effect of our model compared to the FLC task."
],
[
"We have argued for a new way to study propaganda in news media: by focusing on identifying the instances of use of specific propaganda techniques. Going at this fine-grained level can yield more reliable systems and it also makes it possible to explain to the user why an article was judged as propagandistic by an automatic system.",
"We experimented with a number of BERT-based models and devised a novel architecture which outperforms standard BERT-based baselines. Our fine-grained task can complement document-level judgments, both to come out with an aggregated decision and to explain why a document —or an entire news outlet— has been flagged as potentially propagandistic by an automatic system.",
"In future work, we plan to include more media sources, especially from non-English-speaking media and regions. We further want to extend the tool to support other propaganda techniques."
],
[
"This research is part of the Propaganda Analysis Project, which is framed within the Tanbih project. The Tanbih project aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. The project is developed in collaboration between the Qatar Computing Research Institute (QCRI), HBKU and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)."
]
],
"section_name": [
"Introduction",
"Corpus Annotated with Propaganda Techniques",
"Models",
"Experiments and Evaluation",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"51b7a7bb6c79091d1321e4bfb836f35abc3334a7",
"cd303a9210fd01c51bcb24e7955f80149b8a6ff7"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"07049463ebd75ca41650292a9b559d92fabbe40e",
"7f4ec0eef40e0b5db6aafd7bfa1afa273db0504a",
"eb6fe6c4f6063c90e783427199b7d321f4919be0"
],
"answer": [
{
"evidence": [
"We depart from BERT BIBREF12, and we design three baselines.",
"BERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c)."
],
"extractive_spans": [
"BERT. We add a linear layer on top of BERT and we fine-tune it",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC"
],
"free_form_answer": "",
"highlighted_evidence": [
"We depart from BERT BIBREF12, and we design three baselines.\n\nBERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.\n\nBERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).\n\nBERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We depart from BERT BIBREF12, and we design three baselines.",
"BERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c)."
],
"extractive_spans": [
"BERT",
"BERT-Joint",
"BERT-Granularity"
],
"free_form_answer": "",
"highlighted_evidence": [
"We depart from BERT BIBREF12, and we design three baselines.\n\nBERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a).",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.",
"BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).",
"BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c)."
],
"extractive_spans": [],
"free_form_answer": "BERT with one separately trained linear layer for each of the two tasks, BERT-Joint, which trains a layer for both tasks jointly, BERT-Granularity, a modification of BERT-Joint which transfers information from the less granular task to the more granular task. ",
"highlighted_evidence": [
"BERT. We add a linear layer on top of BERT and we fine-tune it, as suggested in BIBREF12. For the FLC task, we feed the final hidden representation for each token to a layer $L_{g_2}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure FIGREF7-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_{g_1}$ to make a binary classification.\n\nBERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{g_1}$ and $L_{g_2}$, and we train for both FLC and SLC jointly (cf. Figure FIGREF7-b).\n\nBERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{g_2}$ layer for FLC, we concatenate $L_{g_1}$ and $L_{g_2}$, and we add an extra 19-dimensional classification layer $L_{g_{1,2}}$ on top of that concatenation to perform the prediction for FLC (cf. Figure FIGREF7-c)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"c7baea5f2d8e244434a3c14ece75eb679c7e9f6e",
"e4df103c52265e3fd6437d000d7625ce3f3e5acd"
],
"answer": [
{
"evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"extractive_spans": [
"annotated according to eighteen persuasion techniques BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"extractive_spans": [],
"free_form_answer": "Although not all of the 18 types are listed, they include using loaded language or appeal to authority and slogans, using logical fallacies such as strawmen, hidden ad-hominen fallacies ad red herrings. ",
"highlighted_evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"569557b58d8dad1785d4ddf94de7730b636bcf7f",
"e77a737c4a4c1fbe058d12b955f2d32cb2e83f84"
],
"answer": [
{
"evidence": [
"In future work, we plan to include more media sources, especially from non-English-speaking media and regions. We further want to extend the tool to support other propaganda techniques."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In future work, we plan to include more media sources, especially from non-English-speaking media and regions."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"8804b740ce661171d1aad26bf8fecdcf7bbb6121",
"ae28daadd0e48a438ea8eeea04e2ab9cf7e2c06e",
"f69417fde0a5656c611cd5563f7ba956c370ea2c"
],
"answer": [
{
"evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"extractive_spans": [
"retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques"
],
"free_form_answer": "",
"highlighted_evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"extractive_spans": [],
"free_form_answer": "A dataset of news articles from different news outlets collected by the authors.",
"highlighted_evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). Some of these techniques weren studied in tasks such as hate speech detection and computational argumentation BIBREF9."
],
"extractive_spans": [
"451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check, which professionals annotators annotated according to eighteen persuasion techniques BIBREF4, ranging from leveraging on the emotions of the audience —such as using loaded language or appeal to authority BIBREF5 and slogans BIBREF6— to using logical fallacies —such as straw men BIBREF7 (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring BIBREF8 (presenting irrelevant data). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How many layers does the neural network have?",
"Which BERT-based baselines do they compare to?",
"What are the propaganda types?",
"Do they look at various languages?",
"What datasets did they use in their experiment?"
],
"question_id": [
"52f5249a9a2cb7210eeb8e52cb29d18912f6c3aa",
"baad4b6f834d5944f61bd12f30908e3cf3739dcd",
"37b972a3afae04193411dc569f672d802c16ad71",
"a01af34c7f630ba0e79e0a0120d2e1c92d022df5",
"0c4e419fe57bf01d58a44f3e263777c22cdd90dc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The architecture of the baseline models (a-c), and of our multi-granularity network (d)."
],
"file": [
"3-Figure1-1.png"
]
} | [
"Which BERT-based baselines do they compare to?",
"What are the propaganda types?",
"What datasets did they use in their experiment?"
] | [
[
"1911.06815-Models-2",
"1911.06815-Models-3",
"1911.06815-Models-0",
"1911.06815-Models-1"
],
[
"1911.06815-Corpus Annotated with Propaganda Techniques-0"
],
[
"1911.06815-Corpus Annotated with Propaganda Techniques-0"
]
] | [
"BERT with one separately trained linear layer for each of the two tasks, BERT-Joint, which trains a layer for both tasks jointly, BERT-Granularity, a modification of BERT-Joint which transfers information from the less granular task to the more granular task. ",
"Although not all of the 18 types are listed, they include using loaded language or appeal to authority and slogans, using logical fallacies such as strawmen, hidden ad-hominen fallacies ad red herrings. ",
"A dataset of news articles from different news outlets collected by the authors."
] | 181 |
1704.08390 | Duluth at SemEval-2017 Task 6: Language Models in Humor Detection | This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs. | {
"paragraphs": [
[
"Humor is an expression of human uniqueness and intelligence and has drawn attention in diverse areas such as linguistics, psychology, philosophy and computer science. Computational humor draws from all of these fields and is a relatively new area of study. There is some history of systems that are able to generate humor (e.g., BIBREF0 , BIBREF1 ). However, humor detection remains a less explored and challenging problem (e.g., BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ).",
"SemEval-2017 Task 6 BIBREF6 also focuses on humor detection by asking participants to develop systems that learn a sense of humor from the Comedy Central TV show, @midnight with Chris Hardwick. Our system ranks tweets according to how funny they are by training N-gram language models on two different corpora. One consisting of funny tweets provided by the task organizers, and the other on a freely available research corpus of news data. The funny tweet data is made up of tweets that are intended to be humorous responses to a hashtag given by host Chris Hardwick during the program."
],
[
"Training Language Models (LMs) is a straightforward way to collect a set of rules by utilizing the fact that words do not appear in an arbitrary order; we in fact can gain useful information about a word by knowing the company it keeps BIBREF7 . A statistical language model estimates the probability of a sequence of words or an upcoming word. An N-gram is a contiguous sequence of N words: a unigram is a single word, a bigram is a two-word sequence, and a trigram is a three-word sequence. For example, in the tweet",
"tears in Ramen #SingleLifeIn3Words",
"“tears”, “in”, “Ramen” and “#SingleLifeIn3Words” are unigrams; “tears in”, “in Ramen” and “Ramen #SingleLifeIn3Words” are bigrams and “tears in Ramen” and “in Ramen #SingleLifeIn3Words” are trigrams.",
"An N-gram model can predict the next word from a sequence of N-1 previous words. A trigram Language Model (LM) predicts the conditional probability of the next word using the following approximation: DISPLAYFORM0 ",
"The assumption that the probability of a word depends only on a small number of previous words is called a Markov assumption BIBREF8 . Given this assumption the probability of a sentence can be estimated as follows: DISPLAYFORM0 ",
"In a study on how phrasing affects memorability, BIBREF9 take a language model approach to measure the distinctiveness of memorable movie quotes. They do this by evaluating a quote with respect to a “common language” model built from the newswire sections of the Brown corpus BIBREF10 . They find that movie quotes which are less like “common language” are more distinctive and therefore more memorable. The intuition behind our approach is that humor should in some way be memorable or distinct, and so tweets that diverge from a “common language” model would be expected to be funnier.",
"In order to evaluate how funny a tweet is, we train language models on two datasets: the tweet data and the news data. Tweets that are more probable according to the tweet data language model are ranked as being funnier. However, tweets that have a lower probability according to the news language model are considered the funnier since they are the least like the (unfunny) news corpus. We relied on both bigrams and trigrams when training our models.",
"We use KenLM BIBREF11 as our language modeling tool. Language models are estimated using modified Kneser-Ney smoothing without pruning. KenLM also implements a back-off technique so if an N-gram is not found, KenLM applies the lower order N-gram's probability along with its back-off weights."
],
[
"Our system estimated tweet probability using N-gram LMs. Specifically, it solved the comparison (Subtask A) and semi-ranking (Subtask B) subtasks in four steps:"
],
[
"The tweet data was provided by the task organizers. It consists of 106 hashtag files made up of about 21,000 tokens. The hashtag files were further divided into a development set trial_dir of 6 hashtags and a training set of 100 hashtags train_dir. We also obtained 6.2 GB of English news data with about two million tokens from the News Commentary Corpus and the News Crawl Corpus from 2008, 2010 and 2011. Each tweet and each sentence from the news data is found on a single line in their respective files.",
"During the development of our system we trained our language models solely on the 100 hashtag files from train_dir and then evaluated our performance on the 6 hashtag files found in trial_dir. That data was formatted such that each tweet was found on a single line.",
"Pre-processing consists of two steps: filtering and tokenization. The filtering step was only for the tweet training corpus. We experimented with various filtering and tokenziation combinations during the development stage to determine the best setting.",
"Filtering removes the following elements from the tweets: URLs, tokens starting with the “@” symbol (Twitter user names), and tokens starting with the “#” symbol (Hashtags).",
"Tokenization: Text in all training data was split on white space and punctuation"
],
[
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus. We trained using both bigrams and trigrams on the tweet and news data. Our language models accounted for unknown words and were built both with and without considering sentence or tweet boundaries."
],
[
"After training the N-gram language models, the next step was scoring. For each hashtag file that needed to be evaluated, the logarithm of the probability was assigned to each tweet in the hashtag file based on the trained language model. The larger the probability, the more likely that tweet was according to the language model. Table 1 shows an example of two scored tweets from hashtag file Bad_Job_In_5_Words.tsv based on the tweet data trigram language model. Note that KenLM reports the log of the probability of the N-grams rather than the actual probabilities so the value closer to 0 (-19) has the higher probability and is associated with the tweet judged to be funnier."
],
[
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first. If the scores are based on the tweet language model then they are sorted in ascending order since the log probability value closest to 0 indicates the tweet that is most like the (funny) tweets model. However, if the log probability scores are based on the news data then they are sorted in descending order since the largest value will have the smallest probability associated with it and is therefore least like the (unfunny) news model.",
"For Subtask A, the system goes through the sorted list of tweets in a hashtag file and compares each pair of tweets. For each pair, if the first tweet was funnier than the second, the system would output the tweet_ids for the pair followed by a “1”. If the second tweet is funnier it outputs the tweet_ids followed by a “0”. For Subtask B, the system outputs all the tweet_ids for a hashtag file starting from the funniest."
],
[
"In this section we present the results from our development stage (Table 2), the evaluation stage (Table 3), and two post-evaluation results (Table 3). Since we implemented both bigram and trigam language models during the development stage but only results from trigram language models were submitted to the task, we evaluated bigram language models in the post-evaluation stage. Note that the accuracy and distance measurements listed in Table 2 and Table 3 are defined by the task organizers BIBREF6 .",
"Table 2 shows results from the development stage. These results show that for the tweet data the best setting is to keep the # and @, omit sentence boundaries, be case sensitive, and ignore tokenization. While using these settings the trigram language model performed better on Subtask B (.887) and the bigram language model performed better on Subtask A (.548). We decided to rely on trigram language models for the task evaluation since the advantage of bigrams on Subtask A was very slight (.548 versus .543). For the news data, we found that the best setting was to perform tokenization, omit sentence boundaries, and to be case sensitive. Given that trigrams performed most effectively in the development stage, we decided to use those during the evaluation.",
"Table 3 shows the results of our system during the task evaluation. We submitted two runs, one with a trigram language model trained on the tweet data, and another with a trigram language model trained on the news data. In addition, after the evaluation was concluded we also decided to run the bigram language models as well. Contrary to what we observed in the development data, the bigram language model actually performed somewhat better than the trigram language model. In addition, and also contrary to what we observed with the development data, the news data proved generally more effective in the post–evaluation runs than the tweet data."
],
[
"We relied on bigram and trigram language models because tweets are short and concise, and often only consist of just a few words.",
"The performance of our system was not consistent when comparing the development to the evaluation results. During development language models trained on the tweet data performed better. However during the evaluation and post-evaluation stage, language models trained on the news data were significantly more effective. We also observed that bigram language models performed slightly better than trigram models on the evaluation data. This suggests that going forward we should also consider both the use of unigram and character–level language models.",
"These results suggest that there are only slight differences between bigram and trigram models, and that the type and quantity of corpora used to train the models is what really determines the results.",
"The task description paper BIBREF6 reported system by system results for each hashtag. We were surprised to find that our performance on the hashtag file #BreakUpIn5Words in the evaluation stage was significantly better than any other system on both Subtask A (with accuracy of 0.913) and Subtask B (with distance score of 0.636). While we still do not fully understand the cause of these results, there is clearly something about the language used in this hashtag that is distinct from the other hashtags, and is somehow better represented or captured by a language model. Reaching a better understanding of this result is a high priority for future work.",
"The tweet data was significantly smaller than the news data, and so certainly we believe that this was a factor in the performance during the evaluation stage, where the models built from the news data were significantly more effective. Going forward we plan to collect more tweet data, particularly those that participate in #HashtagWars. We also intend to do some experiments where we cut the amount of news data and then build models to see how those compare.",
"While our language models performed well, there is some evidence that neural network models can outperform standard back-off N-gram models BIBREF12 . We would like to experiment with deep learning methods such as recurrent neural networks, since these networks are capable of forming short term memory and may be better suited for dealing with sequence data."
]
],
"section_name": [
"Introduction",
"Background",
"Method",
"Corpus Preparation and Pre-processing",
"Language Model Training",
"Tweet Scoring",
"Tweet Prediction",
"Experiments and Results",
"Discussion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"10ca0822d3f53a7ca6d1432302de12891472e670",
"11e8216528a1f82f93fecf7b3749d5fe16f5ee8d",
"432307ebb792ba356ec02ff3523346e7a24fc60a"
],
"answer": [
{
"evidence": [
"Table 3 shows the results of our system during the task evaluation. We submitted two runs, one with a trigram language model trained on the tweet data, and another with a trigram language model trained on the news data. In addition, after the evaluation was concluded we also decided to run the bigram language models as well. Contrary to what we observed in the development data, the bigram language model actually performed somewhat better than the trigram language model. In addition, and also contrary to what we observed with the development data, the news data proved generally more effective in the post–evaluation runs than the tweet data."
],
"extractive_spans": [
"bigram "
],
"free_form_answer": "",
"highlighted_evidence": [
"Contrary to what we observed in the development data, the bigram language model actually performed somewhat better than the trigram language model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 2 shows results from the development stage. These results show that for the tweet data the best setting is to keep the # and @, omit sentence boundaries, be case sensitive, and ignore tokenization. While using these settings the trigram language model performed better on Subtask B (.887) and the bigram language model performed better on Subtask A (.548). We decided to rely on trigram language models for the task evaluation since the advantage of bigrams on Subtask A was very slight (.548 versus .543). For the news data, we found that the best setting was to perform tokenization, omit sentence boundaries, and to be case sensitive. Given that trigrams performed most effectively in the development stage, we decided to use those during the evaluation."
],
"extractive_spans": [
"the trigram language model performed better on Subtask B",
"the bigram language model performed better on Subtask A"
],
"free_form_answer": "",
"highlighted_evidence": [
" While using these settings the trigram language model performed better on Subtask B (.887) and the bigram language model performed better on Subtask A (.548)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 2 shows results from the development stage. These results show that for the tweet data the best setting is to keep the # and @, omit sentence boundaries, be case sensitive, and ignore tokenization. While using these settings the trigram language model performed better on Subtask B (.887) and the bigram language model performed better on Subtask A (.548). We decided to rely on trigram language models for the task evaluation since the advantage of bigrams on Subtask A was very slight (.548 versus .543). For the news data, we found that the best setting was to perform tokenization, omit sentence boundaries, and to be case sensitive. Given that trigrams performed most effectively in the development stage, we decided to use those during the evaluation."
],
"extractive_spans": [
"advantage of bigrams on Subtask A was very slight"
],
"free_form_answer": "",
"highlighted_evidence": [
"We decided to rely on trigram language models for the task evaluation since the advantage of bigrams on Subtask A was very slight (.548 versus .543)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0c65ee843661cc0875cf8e874f9a441fa68cea21",
"16d74dfa4d4607455723285bc3340512f0d2305e",
"d8c80df40920a4bda8b6a47de848d2552b299fc5"
],
"answer": [
{
"evidence": [
"An N-gram model can predict the next word from a sequence of N-1 previous words. A trigram Language Model (LM) predicts the conditional probability of the next word using the following approximation: DISPLAYFORM0",
"After training the N-gram language models, the next step was scoring. For each hashtag file that needed to be evaluated, the logarithm of the probability was assigned to each tweet in the hashtag file based on the trained language model. The larger the probability, the more likely that tweet was according to the language model. Table 1 shows an example of two scored tweets from hashtag file Bad_Job_In_5_Words.tsv based on the tweet data trigram language model. Note that KenLM reports the log of the probability of the N-grams rather than the actual probabilities so the value closer to 0 (-19) has the higher probability and is associated with the tweet judged to be funnier."
],
"extractive_spans": [],
"free_form_answer": "The n-gram models were used to calculate the logarithm of the probability for each tweet",
"highlighted_evidence": [
"A trigram Language Model (LM) predicts the conditional probability of the next word using the following approximation: DISPLAYFORM0",
"For each hashtag file that needed to be evaluated, the logarithm of the probability was assigned to each tweet in the hashtag file based on the trained language model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first. If the scores are based on the tweet language model then they are sorted in ascending order since the log probability value closest to 0 indicates the tweet that is most like the (funny) tweets model. However, if the log probability scores are based on the news data then they are sorted in descending order since the largest value will have the smallest probability associated with it and is therefore least like the (unfunny) news model."
],
"extractive_spans": [
"system sorts all the tweets for each hashtag and orders them based on their log probability score"
],
"free_form_answer": "",
"highlighted_evidence": [
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After training the N-gram language models, the next step was scoring. For each hashtag file that needed to be evaluated, the logarithm of the probability was assigned to each tweet in the hashtag file based on the trained language model. The larger the probability, the more likely that tweet was according to the language model. Table 1 shows an example of two scored tweets from hashtag file Bad_Job_In_5_Words.tsv based on the tweet data trigram language model. Note that KenLM reports the log of the probability of the N-grams rather than the actual probabilities so the value closer to 0 (-19) has the higher probability and is associated with the tweet judged to be funnier.",
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first. If the scores are based on the tweet language model then they are sorted in ascending order since the log probability value closest to 0 indicates the tweet that is most like the (funny) tweets model. However, if the log probability scores are based on the news data then they are sorted in descending order since the largest value will have the smallest probability associated with it and is therefore least like the (unfunny) news model."
],
"extractive_spans": [
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first"
],
"free_form_answer": "",
"highlighted_evidence": [
"After training the N-gram language models, the next step was scoring. For each hashtag file that needed to be evaluated, the logarithm of the probability was assigned to each tweet in the hashtag file based on the trained language model. ",
"The system sorts all the tweets for each hashtag and orders them based on their log probability score, where the funniest tweet should be listed first."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5c38b3c5aea2dca772a0d62501e37c67c7b4416d",
"84ead7489e895c4b49e7951bff73b8bba07919b5",
"caba2fcb074a9f31637d5203be52c30c1836a1b4"
],
"answer": [
{
"evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus. We trained using both bigrams and trigrams on the tweet and news data. Our language models accounted for unknown words and were built both with and without considering sentence or tweet boundaries."
],
"extractive_spans": [
"KenLM Toolkit"
],
"free_form_answer": "",
"highlighted_evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus. We trained using both bigrams and trigrams on the tweet and news data. Our language models accounted for unknown words and were built both with and without considering sentence or tweet boundaries."
],
"extractive_spans": [
"KenLM Toolkit"
],
"free_form_answer": "",
"highlighted_evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus. We trained using both bigrams and trigrams on the tweet and news data. Our language models accounted for unknown words and were built both with and without considering sentence or tweet boundaries."
],
"extractive_spans": [
"KenLM Toolkit"
],
"free_form_answer": "",
"highlighted_evidence": [
"Once we had the corpora ready, we used the KenLM Toolkit to train the N-gram language models on each corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"607877c8a6c0cf644fabaf09a44fd7e9689264d3",
"7fe1303ccadc96ac594a7424ac3eb44ffeb99565"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Evaluation results (bold) and post-evaluation results based on evaluation dir data. The trigram LM trained on the news data ranked 4th place on Subtask A and 1st place on Subtask B."
],
"extractive_spans": [],
"free_form_answer": "4th place on SubtaskA; 1st place on Subtask B",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Evaluation results (bold) and post-evaluation results based on evaluation dir data. The trigram LM trained on the news data ranked 4th place on Subtask A and 1st place on Subtask B."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9aee059f57a3803b62e18b5a32d53a745881965b"
],
"answer": [
{
"evidence": [
"For Subtask A, the system goes through the sorted list of tweets in a hashtag file and compares each pair of tweets. For each pair, if the first tweet was funnier than the second, the system would output the tweet_ids for the pair followed by a “1”. If the second tweet is funnier it outputs the tweet_ids followed by a “0”. For Subtask B, the system outputs all the tweet_ids for a hashtag file starting from the funniest."
],
"extractive_spans": [
"For Subtask A, the system goes through the sorted list of tweets in a hashtag file and compares each pair of tweets.",
"For Subtask B, the system outputs all the tweet_ids for a hashtag file starting from the funniest."
],
"free_form_answer": "",
"highlighted_evidence": [
"For Subtask A, the system goes through the sorted list of tweets in a hashtag file and compares each pair of tweets. ",
"For Subtask B, the system outputs all the tweet_ids for a hashtag file starting from the funniest."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What size ngram models performed best? e.g. bigram, trigram, etc.",
"How were the ngram models used to generate predictions on the data?",
"What package was used to build the ngram language models?",
"What rank did the language model system achieve in the task evaluation?",
"What were subtasks A and B?"
],
"question_id": [
"7b76b8b69246525a48c0a8ca0c42db3319cd10a5",
"8b1af67e3905244653b4cf66ba0acec8d6bff81f",
"9a7aeecbecf5e30ffa595c233fca31719c9b429f",
"3605ea281e72e9085a0ac0a7270cef25fc23063f",
"21f6cb3819c85312364dd17dd4091df946591ef0"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"humor",
"humor",
"humor",
"humor",
"humor"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: Scored tweets according to the trigram LM. The log probability scores computed based on the trigram LM are shown in the third column.",
"Table 2: Development results based on trial dir data. The settings we chose to train LMs are in bold.",
"Table 3: Evaluation results (bold) and post-evaluation results based on evaluation dir data. The trigram LM trained on the news data ranked 4th place on Subtask A and 1st place on Subtask B."
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"How were the ngram models used to generate predictions on the data?",
"What rank did the language model system achieve in the task evaluation?"
] | [
[
"1704.08390-Tweet Scoring-0",
"1704.08390-Tweet Prediction-0"
],
[
"1704.08390-4-Table3-1.png"
]
] | [
"The n-gram models were used to calculate the logarithm of the probability for each tweet",
"4th place on SubtaskA; 1st place on Subtask B"
] | 182 |
1911.01214 | A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking | Automated fact-checking based on machine learning is a promising approach to identify false information distributed on the web. In order to achieve satisfactory performance, machine learning methods require a large corpus with reliable annotations for the different tasks in the fact-checking process. Having analyzed existing fact-checking corpora, we found that none of them meets these criteria in full. They are either too small in size, do not provide detailed annotations, or are limited to a single domain. Motivated by this gap, we present a new substantially sized mixed-domain corpus with annotations of good quality for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation. To aid future corpus construction, we describe our methodology for corpus creation and annotation, and demonstrate that it results in substantial inter-annotator agreement. As baselines for future research, we perform experiments on our corpus with a number of model architectures that reach high performance in similar problem settings. Finally, to support the development of future models, we provide a detailed error analysis for each of the tasks. Our results show that the realistic, multi-domain setting defined by our data poses new challenges for the existing models, providing opportunities for considerable improvement by future systems. | {
"paragraphs": [
[
"The ever-increasing role of the Internet as a primary communication channel is arguably the single most important development in the media over the past decades. While it has led to unprecedented growth in information coverage and distribution speed, it comes at a cost. False information can be shared through this channel reaching a much wider audience than traditional means of disinformation BIBREF0.",
"While human fact-checking still remains the primary method to counter this issue, the amount and the speed at which new information is spread makes manual validation challenging and costly. This motivates the development of automated fact-checking pipelines BIBREF1, BIBREF2, BIBREF3 consisting of several consecutive tasks. The following four tasks are commonly included in the pipeline. Given a controversial claim, document retrieval is applied to identify documents that contain important information for the validation of the claim. Evidence extraction aims at retrieving text snippets or sentences from the identified documents that are related to the claim. This evidence can be further processed via stance detection to infer whether it supports or refutes the claim. Finally, claim validation assesses the validity of the claim given the evidence.",
"Automated fact-checking has received significant attention in the NLP community in the past years. Multiple corpora have been created to assist the development of fact-checking models, varying in quality, size, domain, and range of annotated phenomena. Importantly, the successful development of a full-fledged fact-checking system requires that the underlying corpus satisfies certain characteristics. First, training data needs to contain a large number of instances with high-quality annotations for the different fact-checking sub-tasks. Second, the training data should not be limited to a particular domain, since potentially wrong information sources can range from official statements to blog and Twitter posts.",
"We analyzed existing corpora regarding their adherence to the above criteria and identified several drawbacks. The corpora introduced by BIBREF4, BIBREF5, BIBREF6 are valuable for the analysis of the fact-checking problem and provide annotations for stance detection. However, they contain only several hundreds of validated claims and it is therefore unlikely that deep learning models can generalize to unobserved claims if trained on these datasets.",
"A corpus with significantly more validated claims was introduced by BIBREF2. Nevertheless, for each claim, the corpus provides 30 documents which are retrieved from the web using the Google search engine instead of a document collection aggregated by fact-checkers. Thus, many of the documents are unrelated to the claim and important information for the validation may be missing.",
"The FEVER corpus constructed by BIBREF1 is the largest corpus available for the development of automated fact-checking systems. It consists of 185,445 validated claims with annotated documents and evidence for each of them. The corpus therefore allows training deep neural networks for automated fact-checking, which reach higher performance than shallow machine learning techniques. However, the corpus is based on synthetic claims derived from Wikipedia sentences rather than natural claims that originate from heterogeneous web sources.",
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation.",
"The contributions of our work are as follows:",
"1) We provide a substantially sized mixed-domain corpus of natural claims with annotations for different fact-checking tasks. We publish a web crawler that reconstructs our dataset including all annotations. For research purposes, we are allowed to share the original corpus.",
"2) To support the creation of further fact-checking corpora, we present our methodology for data collection and annotation, which allows for the efficient construction of large-scale corpora with a substantial inter-annotator agreement.",
"3) For evidence extraction, stance detection, and claim validation we evaluate the performance of high-scoring systems from the FEVER shared task BIBREF7 and the Fake News Challenge BIBREF8 as well as the Bidirectional Transformer model BERT BIBREF9 on our data. To facilitate the development of future fact-checking systems, we release the code of our experiments.",
"4) Finally, we conduct a detailed error analysis of the systems trained and evaluated on our data, identifying challenging fact-checking instances which need to be addressed in future research."
],
[
"Below, we give a comprehensive overview of existing fact-checking corpora, summarized in Table TABREF7. We focus on their key parameters: fact-checking sub-task coverage, annotation quality, corpus size, and domain. It must be acknowledged that a fair comparison between the datasets is difficult to accomplish since the length of evidence and documents, as well as the annotation quality, significantly varies between the corpora.",
"PolitiFact14 BIBREF4 analyzed the fact-checking problem and constructed a corpus on the basis of the fact-checking blog of Channel 4 and the Truth-O-Meter from PolitiFact. The corpus includes additional evidence, which has been used by fact-checkers to validate the claims, as well as metadata including the speaker ID and the date when the claim was made. This is early work in automated fact-checking and BIBREF4 mainly focused on the analysis of the task. The corpus therefore only contains 106 claims, which is not enough to train high-performing machine learning systems.",
"Emergent16 A more comprehensive corpus for automated fact-checking was introduced by BIBREF5. The dataset is based on the project Emergent which is a journalist initiative for rumor debunking. It consists of 300 claims that have been validated by journalists. The corpus provides 2,595 news articles that are related to the claims. Each article is summarized into a headline and is annotated with the article's stance regarding the claim. The corpus is well suited for training stance detection systems in the news domain and it was therefore chosen in the Fake News Challenge BIBREF8 for training and evaluation of competing systems. However, the number of claims in the corpus is relatively small, thus it is unlikely that sophisticated claim validation systems can be trained using this corpus.",
"PolitiFact17 BIBREF10 extracted 12,800 validated claims made by public figures in various contexts from Politifact. For each statement, the corpus provides a verdict and meta information, such as the name and party affiliation of the speaker or subject of the debate. Nevertheless, the corpus does not include evidence and thus the models can only be trained on the basis of the claim, the verdict, and meta information.",
"RumourEval17 BIBREF6 organized the RumourEval shared task, for which they provided a corpus of 297 rumourous threads from Twitter, comprising 4,519 tweets. The shared task was divided into two parts, stance detection and veracity prediction of the rumors, which is similar to claim validation. The large number of stance-annotated tweets allows for training stance detection systems reaching a relatively high score of about 0.78 accuracy. However, since the number of claims (rumours) is relatively small, and the corpus is only based on tweets, this dataset alone is not suitable to train generally applicable claim validation systems.",
"Snopes17 A corpus featuring a substantially larger number of validated claims was introduced by BIBREF2. It contains 4,956 claims annotated with verdicts which have been extracted from the Snopes website as well as the Wikipedia collections of proven hoaxes and fictitious people. For each claim, the authors extracted about 30 associated documents using the Google search engine, resulting in a collection of 136,085 documents. However, since the documents were not annotated by fact-checkers, irrelevant information is present and important information for the claim validation might be missing.",
"CLEF-2018 Another corpus concerned with political debates was introduced by BIBREF11 and used for the CLEF-2018 shared task. The corpus consists of transcripts of political debates in English and Arabic and provides annotations for two tasks: identification of check-worthy statements (claims) in the transcripts, and validation of 150 statements (claims) from the debates. However, as for the corpus PolitiFact17, no evidence for the validation of these claims is available.",
"FEVER18 The FEVER corpus introduced by BIBREF1 is the largest available fact-checking corpus, consisting of 185,445 validated claims. The corpus is based on about 50k popular Wikipedia articles. Annotators modified sentences in these articles to create the claims and labeled other sentences in the articles, which support or refute the claim, as evidence. The corpus is large enough to train deep learning systems able to retrieve evidence from Wikipedia. Nevertheless, since the corpus only covers Wikipedia and the claims are created synthetically, the trained systems are unlikely to be able to extract evidence from heterogeneous web-sources and validate claims on the basis of evidence found on the Internet.",
"As our analysis shows, while multiple fact-checking corpora are already available, no single existing resource provides full fact-checking sub-task coverage backed by a substantially-sized and validated dataset spanning across multiple domains. To eliminate this gap, we have created a new corpus as detailed in the following sections."
],
[
"This section describes the original data from the Snopes platform, followed by a detailed report on our corpus annotation methodology."
],
[
"Snopes is a large-scale fact-checking platform that employs human fact-checkers to validate claims. A simple fact-checking instance from the Snopes website is shown in Figure FIGREF14. At the top of the page, the claim and the verdict (rating) are given. The fact-checkers additionally provide a resolution (origin), which backs up the verdict. Evidence in the resolution, which we call evidence text snippets (ETSs), is marked with a yellow bar. As additional validation support, Snopes fact-checkers provide URLs for original documents (ODCs) from which the ETSs have been extracted or which provide additional information.",
"Our crawler extracts the claims, verdicts, ETSs, the resolution, as well as ODCs along with their URLs, thereby enriching the ETSs with useful contextual information. Snopes is almost entirely focused on claims made on English speaking websites. Our corpus therefore only features English fact-checking instances."
],
[
"While ETSs express a stance towards the claim, which is useful information for the fact-checking process, this stance is not explicitly stated on the Snopes website. Moreover, the ETSs given by fact-checkers are quite coarse and often contain detailed background information that is not directly related to the claim and consequently not useful for its validation. In order to obtain an informative, high-quality collection of evidence, we asked crowd-workers to label the stance of ETSs and to extract sentence-level evidence from the ETSs that are directly relevant for the validation of the claim. We further refer to these sentences as fine grained evidence (FGE).",
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance.",
"FGE annotation. We filtered out ETSs with no stance, as they do not contain supporting or refuting FGE. If an ETS was annotated as supporting the claim, the crowd workers selected only supporting sentences; if the ETS was annotated as refuting the claim, only refuting sentences were selected. Table TABREF18 shows two examples of ETSs with annotated FGE. As can be observed, not all information given in the original ETS is directly relevant for validating the claim. For example, sentence (1c) in the first example's ETS simply provides additional background information and is therefore not considered FGE."
],
[
"Stance annotation. Every ETS was annotated by at least six crowd workers. We evaluate the inter-annotator agreement between groups of workers as proposed by BIBREF12, i.e. by randomly dividing the workers into two equal groups and determining the aggregate annotation for each group using MACE BIBREF13. The final inter-annotator agreement score is obtained by comparing the aggregate annotation of the two groups. Using this procedure, we obtain a Cohen's Kappa of $\\kappa = 0.7$ BIBREF14, indicating a substantial agreement between the crowd workers BIBREF15. The gold annotations of the ETS stances were computed with MACE, using the annotations of all crowd workers. We have further assessed the quality of the annotations performed by crowd workers by comparing them to expert annotations. Two experts labeled 200 ETSs, reaching the same agreement as the crowd workers, i.e. $\\kappa = 0.7$. The agreement between the experts' annotations and the computed gold annotations from the crowd workers is also substantial, $\\kappa = 0.683$.",
"FGE Annotation. Similar to the stance annotation, we used the approach of BIBREF12 to compute the agreement. The inter-annotator agreement between the crowd workers in this case is $\\kappa = 0.55$ Cohen's Kappa. We compared the annotations of FGE in 200 ETSs by experts with the annotations by crowd workers, reaching an agreement of $\\kappa = 0.56$. This is considered as moderate inter-annotator agreement BIBREF15.",
"In fact, the task is significantly more difficult than stance annotation as sentences may provide only partial evidence for or against the claim. In such cases, it is unclear how large the information overlap between sentence and claim should be for a sentence to be FGE. The sentence (1a) in Table TABREF18, for example, only refers to one part of the claim without mentioning the time of the shutdown. We can further modify the example in order to make the problem more obvious: (a) The channel announced today that it is planing a shutdown. (b) Fox News made an announcement today.",
"As the example illustrates, there is a gradual transition between sentences that can be considered as essential for the validation of the claim and those which just provide minor negligible details or unrelated information. Nevertheless, even though the inter-annotator agreement for the annotation of FGE is lower than for the annotation of ETS stance, compared to other annotation problems BIBREF16, BIBREF17, BIBREF18 that are similar to the annotation of FGE, our framework leads to a better agreement."
],
[
"Table TABREF21 displays the main statistics of the corpus. In the table, FGE sets denotes groups of FGE extracted from the same ETS. Many of the ETSs have been annotated as no stance (see Table TABREF23) and, following our annotation study setup, are not used for FGE extraction. Therefore, the number of FGE sets is much lower than that of ETSs. We have found that, on average, an ETS consists of 6.5 sentences. For those ETSs that have support/refute stance, on average, 2.3 sentences are selected as FGE. For many of the ETSs, no original documents (ODCs) have been provided (documents from which they have been extracted). On the other hand, in many instances, links to ODCs are given that provide additional information, but from which no ETSs have been extracted.",
"The distribution of verdicts in Table TABREF22 shows that the dataset is unbalanced in favor of false claims. The label other refers to a collocation of verdicts that do not express a tendency towards declaring the claim as being false or true, such as mixture, unproven, outdated, legend, etc.",
"Table TABREF23 shows the stance distribution for ETSs. Here, supporting ETSs and ETSs that do not express any stance are dominating.",
"For supporting and refuting ETSs annotators identified FGE sets for 8,291 out of 8,998 ETSs. ETSs with a stance but without FGE sets often miss a clear connection to the claim, so the annotators did not annotate any sentences in these cases. The class distribution of the FGE sets in Table TABREF23 shows that supporting ETSs are more dominant.",
"To identify potential biases in our new dataset, we investigated which topics are prevalent by grouping the fact-checking instances (claims with their resolutions) into categories defined by Snopes. According to our analysis, the four categories Fake News, Political News, Politics and Fauxtography are dominant in the corpus ranging from more than 700 to about 900 instances. A significant number of instances are present in the categories Inboxer Rebellion (Email hoax), Business, Medical, Entertainment and Crime.",
"We further investigated the sources of the collected documents (ODCs) and grouped them into a number of classes. We found that 38% of the articles are from different news websites ranging from mainstream news like CNN to tabloid press and partisan news. The second largest group of documents are false news and satirical articles with 30%. Here, the majority of articles are from the two websites thelastlineofdefense.org and worldnewsdailyreport.com. The third class of documents, with a share of 11%, are from social media like Facebook and Twitter. The remaining 21% of documents come from diverse sources, such as debate blogs, governmental domains, online retail, or entertainment websites."
],
[
"I this subsection, we briefly discuss the differences of our corpus to the FEVER dataset as the most comprehensive dataset introduced so far. Due to the way the FEVER dataset was constructed, the claim validation problem defined by this corpus is different compared to the problem setting defined by our corpus. The verdict of a claim for FEVER depends on the stance of the evidence, that is, if the stance of the evidence is agree the claim is necessarily true, and if the stance is disagree the claim is necessarily false. As a result, the claim validation problem can be reduced to stance detection. Such a transformation is not possible for our corpus, as the evidence might originate from unreliable sources and a claim may have both supporting and refuting ETSs. The stance of ETSs is therefore not necessarily indicative of the veracity of the claim. In order to investigate how the stance is related to the verdict of the claim for our dataset, we computed their correlation. In the correlation analysis, we considered how a claims' verdict, represented by the classes false, mostly false, other, mostly true, true, correlates with the number of supporting ETSs minus the number of refuting ETSs. More precisely, the verdicts of the claims are considered as one variable, which can take 5 discreet values ranging from false to true, and the stance is considered as the other variable, which is represented by the difference between the number of supporting versus the number of refuting evidence. We found that the verdict is only weakly correlated with the stance, as indicated by the Pearson correlation coefficient of 0.16. This illustrates that the fact-checking problem setting for our corpus is more challenging than for the FEVER dataset."
],
[
"The annotation of the corpus described in the previous section provides supervision for different fact-checking sub-tasks. In this paper, we perform experiments for the following sub-tasks: (1) detection of the stance of the ETSs with respect to the claim, (2) identification of FGE in the ETSs, and (3) prediction of a claim's verdict given FGE.",
"There are a number of experiments beyond the scope of this paper, which are left for future work: (1) retrieval of the original documents (ODCs) given a claim, (2) identification of ETSs in ODCs, and (3) prediction of a claim's verdict on the basis of FGE, the stance of FGE, and their sources.",
"Moreover, in this paper, we consider the three tasks independent of each other rather than as a pipeline. In other words, we always take the gold standard from the preceding task instead of the output of the preceding model in the pipeline. For the three independent tasks, we use recently suggested models that achieved high performance in similar problem settings. In addition, we provide the human agreement bound, which is determined by comparing expert annotations for 200 ETSs to the gold standard derived from crowd worker annotations (Section SECREF19)."
],
[
"In the stance detection task, models need to determine whether an ETS supports or refutes a claim, or expresses no stance with respect to the claim."
],
[
"We report the performance of the following models: AtheneMLP is a feature-based multi-layer perceptron BIBREF19, which has reached the second rank in the Fake News Challenge. DecompAttent BIBREF20 is a neural network with a relatively small number of parameters that uses decomposable attention, reaching good results on the Stanford Natural Language Inference task BIBREF21. USE+Attent is a model which uses the Universal Sentence Encoder (USE) BIBREF22 to extract representations for the sentences of the ETSs and the claim. For the classification of the stance, an attention mechanism and a MLP is used.",
"The results in Table TABREF27 show that AtheneMLP scores highest. Similar to the outcome of the Fake News Challenge, feature-based models outperform neural networks based on word embeddings BIBREF19. As the comparison to the human agreement bound suggests, there is still substantial room for improvement."
],
[
"We performed an error analysis for the best-scoring model AtheneMLP. The error analysis has shown that supporting ETSs are mostly classified correctly if there is a significant lexical overlap between the claim and the ETS. If the claim and the ETSs use different wording, or if the ETS implies the validity of the claim without explicitly referring to it, the model often misclassifies the snippets (see example in the Appendix SECREF41). This is not surprising, as the model is based on bag-of-words, topic models, and lexica.",
"Moreover, as the distribution of the classes in Table TABREF23 shows, support and no stance are more dominant than the refute class. The model is therefore biased towards these classes and is less likely to predict refute (see confusion matrix in the Appendix Table TABREF42). An analysis of the misclassified refute ETSs has shown that the contradiction is often expressed in difficult terms, which the model could not detect, e.g. “the myth originated”, “no effect can be observed”, “The short answer is no”."
],
[
"We define evidence extraction as the identification of fine-grained evidence (FGE) in the evidence text snippets (ETSs). The problem can be approached in two ways, either as a classification problem, where each sentence from the ETSs is classified as to whether it is an evidence for a given claim, or as a ranking problem, in the way defined in the FEVER shared task. For FEVER, sentences in introductory sections of Wikipedia articles need to be ranked according to their relevance for the validation of the claim and the 5 highest ranked sentences are taken as evidence."
],
[
"We consider the task as a ranking problem, but also provide the human agreement bound, the random baseline and the majority vote for evidence extraction as a classification problem for future reference in Table TABREF39 in the Appendix.",
"To evaluate the performance of the models in the ranking setup, we measure the precision and recall on five highest ranked ETS sentences (precision @5 and recall @5), similar to the evaluation procedure used in the FEVER shared task. Table TABREF31 summarizes the performance of several models on our corpus. The rankingESIM BIBREF23 was the best performing model on the FEVER evidence extraction task. The Tf-Idf model BIBREF1 served as a baseline in the FEVER shared task. We also evaluate the performance of DecompAttent and a simple BiLSTM BIBREF24 architecture. To adjust the latter two models to the ranking problem setting, we used the hinge loss objective function with negative sampling as implemented in the rankingESIM model. As in the FEVER shared task, we consider the recall @5 as a metric for the evaluation of the systems.",
"The results in Table TABREF31 illustrate that, in terms of recall, the neural networks with a small number of parameters, BiLSTM and DecompAttent, perform best. The Tf-Idf model reaches best results in terms of precision. The rankingESIM reaches a relatively low score and is not able to beat the random baseline. We assume this is because the model has a large number of parameters and requires many training instances."
],
[
"We performed an error analysis for the BiLSTM and the Tf-Idf model, as they reach the highest recall and precision, respectively. Tf-Idf achieves the best precision because it only predicts a small set of sentences, which have lexical overlap with the claim. The model therefore misses FGE that paraphrase the claim. The BiLSTM is better able to capture the semantics of the sentences. We believe that it was therefore able to take related word pairs, such as “Israel” - “Jewish”, “price”-“sold”, “pointed”-“pointing”, “broken\"-\"injured”, into account during the ranking process. Nevertheless, the model fails when the relationship between the claim and the potential FGE is more elaborate, e.g. if the claim is not paraphrased, but reasons for it being true are provided. An example of a misclassified sentence is given in the Appendix SECREF43."
],
[
"We formulate the claim validation problem in such a way that we can compare it to the FEVER recognizing textual entailment task. Thus, as illustrated in Table TABREF34, we compress the different verdicts present on the Snopes webpage into three categories of the FEVER shared task. In order to form the not enough information (NEI) class, we compress the three verdicts mixture, unproven, and undetermined. We entirely omit all the other verdicts like legend, outdated, miscaptioned, as these cases are ambiguous and difficult to classify. For the classification of the claims, we provide only the FGE as they contain the most important information from ETSs."
],
[
"For the claim validation, we consider models of different complexity: BertEmb is an MLP classifier which is based on BERT pre-trained embeddings BIBREF9; DecompAttent was used in the FEVER shared task as baseline; extendedESIM is an extended version of the ESIM model BIBREF23 reaching the third rank in the FEVER shared task; BiLSTM is a simple BiLSTM architecture; USE+MLP is the Universal Sentence Encoder combined with a MLP; SVM is an SVM classifier based on bag-of-words, unigrams, and topic models.",
"The results illustrated in Table TABREF36 show that BertEmb, USE+MLP, BiLSTM, and extendedESIM reach similar performance, with BertEmb being the best. However, compared to the FEVER claim validation problem, where systems reach up to 0.7 F1 macro, the scores are relatively low. Thus, there is ample opportunity for improvement by future systems."
],
[
"We performed an error analysis for the best-scoring model BertEmb. The class distribution for claim validation is highly biased towards refuted (false) claims and, therefore, claims are frequently labeled as refuted even though they belong to one of the other two classes (see confusion matrix in the Appendix in Table TABREF45).",
"We have also found that it is often difficult to classify the claims as the provided FGE in many cases are contradicting (e.g. Appendix SECREF44). Although the corpus is biased towards false claims (Table TABREF23), there is a large number of ETSs that support those false claims (Table TABREF22). As discussed in Section SECREF20, this is because many of the retrieved ETSs originate from false news websites.",
"Another possible reason for the lower performance is that our data is heterogeneous and, therefore, it is more challenging for a machine learning model to generalize. In fact, we have performed additional experiments in which we pre-trained a model on the FEVER corpus and fine-tuned the parameters on our corpus and vice versa. However, no significant performance gain could be observed in both experiments",
"Based on our analysis, we conclude that heterogeneous data and FGE from unreliable sources, as found in our corpus and in the real world, make it difficult to correctly classify the claims. Thus, in future experiments, not just FGE need to be taken into account, but also additional information from our newly constructed corpus, that is, the stance of the FGE, FGE sources, and documents from the Snopes website which provide additional information about the claim. Taking all this information into account would enable the system to find a consistent configuration of these labels and thus potentially help to improve performance. For instance, a claim that is supported by evidence coming from an unreliable source is most likely false. In fact, we believe that modeling the meta-information about the evidence and the claim more explicitly represents an important step in making progress in automated fact-checking."
],
[
"In this paper, we have introduced a new richly annotated corpus for training machine learning models for the core tasks in the fact-checking process. The corpus is based on heterogeneous web sources, such as blogs, social media, and news, where most false claims originate. It includes validated claims along with related documents, evidence of two granularity levels, the sources of the evidence, and the stance of the evidence towards the claim. This allows training machine learning systems for document retrieval, stance detection, evidence extraction, and claim validation.",
"We have described the structure and statistics of the corpus, as well as our methodology for the annotation of evidence and the stance of the evidence. We have also presented experiments for stance detection, evidence extraction, and claim validation with models that achieve high performance in similar problem settings. In order to support the development of machine learning approaches that go beyond the presented models, we provided an error analysis for each of the three tasks, identifying difficulties with each.",
"Our analysis has shown that the fact-checking problem defined by our corpus is more difficult than for other datasets. Heterogeneous data and evidence from unreliable sources, as found in our corpus and in the real world, make it difficult to correctly classify the claims. We conclude that more elaborate approaches are required to achieve higher performance in this challenging setting."
],
[
"This work has been supported by the German Research Foundation as part of the Research Training Group ”Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1.",
"",
"."
],
[
"Below we give an instance of a misclassified ETS. Even though the ETS supports the claim, the lexical overlap is relatively low. Most likely, for this reason, the model predicts refute.",
"Example:",
"",
"Claim: The Reuters news agency has proscribed the use of the word 'terrorists' to describe those who pulled off the September 11 terrorist attacks on America.",
"ETS: Reuters' approach doesn't sit well with some journalists, who say it amounts to self-censorship. “Journalism should be about telling the truth. And when you don't call this a terrorist attack, you're not telling the truth,” says Rich Noyes, director of media analysis at the conservative Media Research Center. ...",
""
],
[
"The model wrongly predicts sentences when the topic of the sentences is similar to the topic of the claim, but the sentence is not relevant for the validation of the claim:",
"Example:",
"",
"Claim: The Department of Homeland Security uncovered a terrorist plot to attack Black Friday shoppers in several locations.",
"FGE: Bhakkar Fatwa is a small, relatively unknown group of Islamic militants and fanatics that originated in Bhakkar Pakistan as the central leadership of Al Qaeda disintegrated under the pressures of U.S. military operations in Afghanistan and drone strikes conducted around the world."
],
[
"The FGE are contradicting and the classifier predicts refuted instead of supported.",
"Example:",
"",
"Gold standard: supported; Prediction: refuted",
"Claim: As a teenager, U.S. Secretary of State Colin Powell learned to speak Yiddish while working in a Jewish-owned baby equipment store.",
"FGE: As a boy whose friends and employers at the furniture store were Jewish, Powell picked up a smattering of Yiddish. He kept working at Sickser's through his teens, ... picking up a smattering of Yiddish ... A spokesman for Mr. Powell said he hadn't heard about the spoof ...",
""
]
],
"section_name": [
"Introduction",
"Related work",
"Corpus construction",
"Corpus construction ::: Source data",
"Corpus construction ::: Corpus annotation",
"Corpus analysis ::: Inter-annotator agreement",
"Corpus analysis ::: Corpus statistics",
"Corpus analysis ::: Discussion",
"Experiments and error analysis",
"Experiments and error analysis ::: Stance detection",
"Experiments and error analysis ::: Stance detection ::: Models and Results",
"Experiments and error analysis ::: Stance detection ::: Error analysis",
"Experiments and error analysis ::: Evidence extraction",
"Experiments and error analysis ::: Evidence extraction ::: Models and Results",
"Experiments and error analysis ::: Evidence extraction ::: Error analysis",
"Experiments and error analysis ::: Claim validation",
"Experiments and error analysis ::: Claim validation ::: Experiments",
"Experiments and error analysis ::: Claim validation ::: Error analysis",
"Conclusion",
"Acknowledgements",
"Appendix ::: Error analysis ::: Stance detection",
"Appendix ::: Error analysis ::: Evidence extraction",
"Appendix ::: Error analysis ::: Claim validation"
]
} | {
"answers": [
{
"annotation_id": [
"04ee55eb20fe32c6b5fa3a58be573f2e9514502a",
"0ecbbcec35551396303618958b69aee88941ccfb",
"cde9e05b3247fbd755e7fb989deaca26c02ffa0c"
],
"answer": [
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"extractive_spans": [
"Amazon Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
" We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"extractive_spans": [
"Amazon Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
"We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"extractive_spans": [
" Amazon Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"871f36674634996ce2fae53f98a95805cf0860ef",
"9eb04d07381ec3900400126ce3d7f74a2ffab22b",
"c524058e1f45f1d2d2237bdd899c49dc178f57f9"
],
"answer": [
{
"evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"42628a1d0e9fbdbb7e3fe6cdbe2d1927343df1a0",
"4afb2080dd6423c275986353b2f5fdca07f03974",
"f4be02dfd54f989cc4604fe036ddd32f68fd9d24"
],
"answer": [
{
"evidence": [
"Snopes is a large-scale fact-checking platform that employs human fact-checkers to validate claims. A simple fact-checking instance from the Snopes website is shown in Figure FIGREF14. At the top of the page, the claim and the verdict (rating) are given. The fact-checkers additionally provide a resolution (origin), which backs up the verdict. Evidence in the resolution, which we call evidence text snippets (ETSs), is marked with a yellow bar. As additional validation support, Snopes fact-checkers provide URLs for original documents (ODCs) from which the ETSs have been extracted or which provide additional information.",
"Our crawler extracts the claims, verdicts, ETSs, the resolution, as well as ODCs along with their URLs, thereby enriching the ETSs with useful contextual information. Snopes is almost entirely focused on claims made on English speaking websites. Our corpus therefore only features English fact-checking instances."
],
"extractive_spans": [
"Snopes"
],
"free_form_answer": "",
"highlighted_evidence": [
"Snopes is a large-scale fact-checking platform that employs human fact-checkers to validate claims.",
"Our crawler extracts the claims, verdicts, ETSs, the resolution, as well as ODCs along with their URLs, thereby enriching the ETSs with useful contextual information."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Snopes is a large-scale fact-checking platform that employs human fact-checkers to validate claims. A simple fact-checking instance from the Snopes website is shown in Figure FIGREF14. At the top of the page, the claim and the verdict (rating) are given. The fact-checkers additionally provide a resolution (origin), which backs up the verdict. Evidence in the resolution, which we call evidence text snippets (ETSs), is marked with a yellow bar. As additional validation support, Snopes fact-checkers provide URLs for original documents (ODCs) from which the ETSs have been extracted or which provide additional information.",
"Our crawler extracts the claims, verdicts, ETSs, the resolution, as well as ODCs along with their URLs, thereby enriching the ETSs with useful contextual information. Snopes is almost entirely focused on claims made on English speaking websites. Our corpus therefore only features English fact-checking instances."
],
"extractive_spans": [
"Snopes "
],
"free_form_answer": "",
"highlighted_evidence": [
"Snopes is a large-scale fact-checking platform that employs human fact-checkers to validate claims. A simple fact-checking instance from the Snopes website is shown in Figure FIGREF14. At the top of the page, the claim and the verdict (rating) are given. The fact-checkers additionally provide a resolution (origin), which backs up the verdict. Evidence in the resolution, which we call evidence text snippets (ETSs), is marked with a yellow bar. As additional validation support, Snopes fact-checkers provide URLs for original documents (ODCs) from which the ETSs have been extracted or which provide additional information.\n\nOur crawler extracts the claims, verdicts, ETSs, the resolution, as well as ODCs along with their URLs, thereby enriching the ETSs with useful contextual information. Snopes is almost entirely focused on claims made on English speaking websites. Our corpus therefore only features English fact-checking instances."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation."
],
"extractive_spans": [
"Snopes fact-checking website"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"74932f8e5955aae5ed3143731db26b2b46c4412f",
"c78178a66261f36222e72de4fe931227621a0fd4"
],
"answer": [
{
"evidence": [
"3) For evidence extraction, stance detection, and claim validation we evaluate the performance of high-scoring systems from the FEVER shared task BIBREF7 and the Fake News Challenge BIBREF8 as well as the Bidirectional Transformer model BERT BIBREF9 on our data. To facilitate the development of future fact-checking systems, we release the code of our experiments."
],
"extractive_spans": [
"FEVER shared task BIBREF7 and the Fake News Challenge BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"For evidence extraction, stance detection, and claim validation we evaluate the performance of high-scoring systems from the FEVER shared task BIBREF7 and the Fake News Challenge BIBREF8 as well as the Bidirectional Transformer model BERT BIBREF9 on our data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Related work",
"Below, we give a comprehensive overview of existing fact-checking corpora, summarized in Table TABREF7. We focus on their key parameters: fact-checking sub-task coverage, annotation quality, corpus size, and domain. It must be acknowledged that a fair comparison between the datasets is difficult to accomplish since the length of evidence and documents, as well as the annotation quality, significantly varies between the corpora.",
"PolitiFact14 BIBREF4 analyzed the fact-checking problem and constructed a corpus on the basis of the fact-checking blog of Channel 4 and the Truth-O-Meter from PolitiFact. The corpus includes additional evidence, which has been used by fact-checkers to validate the claims, as well as metadata including the speaker ID and the date when the claim was made. This is early work in automated fact-checking and BIBREF4 mainly focused on the analysis of the task. The corpus therefore only contains 106 claims, which is not enough to train high-performing machine learning systems.",
"Emergent16 A more comprehensive corpus for automated fact-checking was introduced by BIBREF5. The dataset is based on the project Emergent which is a journalist initiative for rumor debunking. It consists of 300 claims that have been validated by journalists. The corpus provides 2,595 news articles that are related to the claims. Each article is summarized into a headline and is annotated with the article's stance regarding the claim. The corpus is well suited for training stance detection systems in the news domain and it was therefore chosen in the Fake News Challenge BIBREF8 for training and evaluation of competing systems. However, the number of claims in the corpus is relatively small, thus it is unlikely that sophisticated claim validation systems can be trained using this corpus.",
"PolitiFact17 BIBREF10 extracted 12,800 validated claims made by public figures in various contexts from Politifact. For each statement, the corpus provides a verdict and meta information, such as the name and party affiliation of the speaker or subject of the debate. Nevertheless, the corpus does not include evidence and thus the models can only be trained on the basis of the claim, the verdict, and meta information.",
"RumourEval17 BIBREF6 organized the RumourEval shared task, for which they provided a corpus of 297 rumourous threads from Twitter, comprising 4,519 tweets. The shared task was divided into two parts, stance detection and veracity prediction of the rumors, which is similar to claim validation. The large number of stance-annotated tweets allows for training stance detection systems reaching a relatively high score of about 0.78 accuracy. However, since the number of claims (rumours) is relatively small, and the corpus is only based on tweets, this dataset alone is not suitable to train generally applicable claim validation systems.",
"Snopes17 A corpus featuring a substantially larger number of validated claims was introduced by BIBREF2. It contains 4,956 claims annotated with verdicts which have been extracted from the Snopes website as well as the Wikipedia collections of proven hoaxes and fictitious people. For each claim, the authors extracted about 30 associated documents using the Google search engine, resulting in a collection of 136,085 documents. However, since the documents were not annotated by fact-checkers, irrelevant information is present and important information for the claim validation might be missing.",
"CLEF-2018 Another corpus concerned with political debates was introduced by BIBREF11 and used for the CLEF-2018 shared task. The corpus consists of transcripts of political debates in English and Arabic and provides annotations for two tasks: identification of check-worthy statements (claims) in the transcripts, and validation of 150 statements (claims) from the debates. However, as for the corpus PolitiFact17, no evidence for the validation of these claims is available.",
"FEVER18 The FEVER corpus introduced by BIBREF1 is the largest available fact-checking corpus, consisting of 185,445 validated claims. The corpus is based on about 50k popular Wikipedia articles. Annotators modified sentences in these articles to create the claims and labeled other sentences in the articles, which support or refute the claim, as evidence. The corpus is large enough to train deep learning systems able to retrieve evidence from Wikipedia. Nevertheless, since the corpus only covers Wikipedia and the claims are created synthetically, the trained systems are unlikely to be able to extract evidence from heterogeneous web-sources and validate claims on the basis of evidence found on the Internet."
],
"extractive_spans": [
"PolitiFact14",
"Emergent16",
"PolitiFact17",
"RumourEval17",
"Snopes17",
"CLEF-2018",
"FEVER18"
],
"free_form_answer": "",
"highlighted_evidence": [
"Related work\nBelow, we give a comprehensive overview of existing fact-checking corpora, summarized in Table TABREF7.",
"PolitiFact14 BIBREF4 analyzed the fact-checking problem and constructed a corpus on the basis of the fact-checking blog of Channel 4 and the Truth-O-Meter from PolitiFact.",
"Emergent16 A more comprehensive corpus for automated fact-checking was introduced by BIBREF5. ",
"PolitiFact17 BIBREF10 extracted 12,800 validated claims made by public figures in various contexts from Politifact.",
"RumourEval17 BIBREF6 organized the RumourEval shared task, for which they provided a corpus of 297 rumourous threads from Twitter, comprising 4,519 tweets.",
"Snopes17 A corpus featuring a substantially larger number of validated claims was introduced by BIBREF2. ",
"CLEF-2018 Another corpus concerned with political debates was introduced by BIBREF11 and used for the CLEF-2018 shared task. ",
"FEVER18 The FEVER corpus introduced by BIBREF1 is the largest available fact-checking corpus, consisting of 185,445 validated claims. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5598e76f43aec5d6d2579e6436688f6f1702b15a",
"dcd1e86160fbf684afae06c5136ad3d9900a09a5"
],
"answer": [
{
"evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation."
],
"extractive_spans": [
"6,422"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF21 displays the main statistics of the corpus. In the table, FGE sets denotes groups of FGE extracted from the same ETS. Many of the ETSs have been annotated as no stance (see Table TABREF23) and, following our annotation study setup, are not used for FGE extraction. Therefore, the number of FGE sets is much lower than that of ETSs. We have found that, on average, an ETS consists of 6.5 sentences. For those ETSs that have support/refute stance, on average, 2.3 sentences are selected as FGE. For many of the ETSs, no original documents (ODCs) have been provided (documents from which they have been extracted). On the other hand, in many instances, links to ODCs are given that provide additional information, but from which no ETSs have been extracted.",
"FLOAT SELECTED: Table 3: Overall statistics of the corpus"
],
"extractive_spans": [],
"free_form_answer": "Corpus has 6422 claims, 16509 ETSs, 8291 FGE sets and 14296 ODCs.",
"highlighted_evidence": [
"Table TABREF21 displays the main statistics of the corpus.",
"FLOAT SELECTED: Table 3: Overall statistics of the corpus"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7e21af0ccebd32da6aa8ca6b462dce6d3fd79612",
"a86fa6f82585408d3276dc2984109c013077c84c"
],
"answer": [
{
"evidence": [
"Experiments and error analysis ::: Stance detection ::: Models and Results",
"We report the performance of the following models: AtheneMLP is a feature-based multi-layer perceptron BIBREF19, which has reached the second rank in the Fake News Challenge. DecompAttent BIBREF20 is a neural network with a relatively small number of parameters that uses decomposable attention, reaching good results on the Stanford Natural Language Inference task BIBREF21. USE+Attent is a model which uses the Universal Sentence Encoder (USE) BIBREF22 to extract representations for the sentences of the ETSs and the claim. For the classification of the stance, an attention mechanism and a MLP is used.",
"Experiments and error analysis ::: Evidence extraction ::: Models and Results",
"To evaluate the performance of the models in the ranking setup, we measure the precision and recall on five highest ranked ETS sentences (precision @5 and recall @5), similar to the evaluation procedure used in the FEVER shared task. Table TABREF31 summarizes the performance of several models on our corpus. The rankingESIM BIBREF23 was the best performing model on the FEVER evidence extraction task. The Tf-Idf model BIBREF1 served as a baseline in the FEVER shared task. We also evaluate the performance of DecompAttent and a simple BiLSTM BIBREF24 architecture. To adjust the latter two models to the ranking problem setting, we used the hinge loss objective function with negative sampling as implemented in the rankingESIM model. As in the FEVER shared task, we consider the recall @5 as a metric for the evaluation of the systems.",
"Experiments and error analysis ::: Claim validation ::: Experiments",
"For the claim validation, we consider models of different complexity: BertEmb is an MLP classifier which is based on BERT pre-trained embeddings BIBREF9; DecompAttent was used in the FEVER shared task as baseline; extendedESIM is an extended version of the ESIM model BIBREF23 reaching the third rank in the FEVER shared task; BiLSTM is a simple BiLSTM architecture; USE+MLP is the Universal Sentence Encoder combined with a MLP; SVM is an SVM classifier based on bag-of-words, unigrams, and topic models."
],
"extractive_spans": [],
"free_form_answer": "For stance detection they used MLP, for evidence extraction they used Tf-idf and BiLSTM, for claim validation they used MLP, BiLSTM and SVM",
"highlighted_evidence": [
"Experiments and error analysis ::: Stance detection ::: Models and Results",
"For the classification of the stance, an attention mechanism and a MLP is used.",
"Experiments and error analysis ::: Evidence extraction ::: Models and Results",
"The Tf-Idf model BIBREF1 served as a baseline in the FEVER shared task. We also evaluate the performance of DecompAttent and a simple BiLSTM BIBREF24 architecture. ",
"Experiments and error analysis ::: Claim validation ::: Experiments\nFor the claim validation, we consider models of different complexity: BertEmb is an MLP classifier which is based on BERT pre-trained embeddings BIBREF9; DecompAttent was used in the FEVER shared task as baseline; extendedESIM is an extended version of the ESIM model BIBREF23 reaching the third rank in the FEVER shared task; BiLSTM is a simple BiLSTM architecture; USE+MLP is the Universal Sentence Encoder combined with a MLP; SVM is an SVM classifier based on bag-of-words, unigrams, and topic models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We report the performance of the following models: AtheneMLP is a feature-based multi-layer perceptron BIBREF19, which has reached the second rank in the Fake News Challenge. DecompAttent BIBREF20 is a neural network with a relatively small number of parameters that uses decomposable attention, reaching good results on the Stanford Natural Language Inference task BIBREF21. USE+Attent is a model which uses the Universal Sentence Encoder (USE) BIBREF22 to extract representations for the sentences of the ETSs and the claim. For the classification of the stance, an attention mechanism and a MLP is used."
],
"extractive_spans": [
"AtheneMLP",
"DecompAttent BIBREF20",
"USE+Attent"
],
"free_form_answer": "",
"highlighted_evidence": [
"We report the performance of the following models: AtheneMLP is a feature-based multi-layer perceptron BIBREF19, which has reached the second rank in the Fake News Challenge. DecompAttent BIBREF20 is a neural network with a relatively small number of parameters that uses decomposable attention, reaching good results on the Stanford Natural Language Inference task BIBREF21. USE+Attent is a model which uses the Universal Sentence Encoder (USE) BIBREF22 to extract representations for the sentences of the ETSs and the claim."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"095eab2bedd04a3976317635c59066504485ab6c",
"f7b65e8b6af27be06df40b69fd718397999817fb"
],
"answer": [
{
"evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation."
],
"extractive_spans": [
"corpus covers multiple domains, including discussion blogs, news, and social media"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. In addition to validated claims, the corpus comprises over 14k documents annotated with evidence on two granularity levels and with the stance of the evidence with respect to the claims. Our data allows training machine learning models for the four steps of the automated fact-checking process described above: document retrieval, evidence extraction, stance detection, and claim validation."
],
"extractive_spans": [
"discussion blogs",
"news",
"social media"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to address the drawbacks of existing datasets, we introduce a new corpus based on the Snopes fact-checking website. Our corpus consists of 6,422 validated claims with comprehensive annotations based on the data collected by Snopes fact-checkers and our crowd-workers. The corpus covers multiple domains, including discussion blogs, news, and social media, which are often found responsible for the creation and distribution of unreliable information. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"49866181e7853c119640e3d73b0f0da5f0534dbd",
"b9d8c81e90f6f23073f035759f822fad88dcc645"
],
"answer": [
{
"evidence": [
"Stance annotation. We asked crowd workers on Amazon Mechanical Turk to annotate whether an ETS agrees with the claim, refutes it, or has no stance towards the claim. An ETS was only considered to express a stance if it explicitly referred to the claim and either expressed support for it or refuted it. In all other cases, the ETS was considered as having no stance.",
"Stance annotation. Every ETS was annotated by at least six crowd workers. We evaluate the inter-annotator agreement between groups of workers as proposed by BIBREF12, i.e. by randomly dividing the workers into two equal groups and determining the aggregate annotation for each group using MACE BIBREF13. The final inter-annotator agreement score is obtained by comparing the aggregate annotation of the two groups. Using this procedure, we obtain a Cohen's Kappa of $\\kappa = 0.7$ BIBREF14, indicating a substantial agreement between the crowd workers BIBREF15. The gold annotations of the ETS stances were computed with MACE, using the annotations of all crowd workers. We have further assessed the quality of the annotations performed by crowd workers by comparing them to expert annotations. Two experts labeled 200 ETSs, reaching the same agreement as the crowd workers, i.e. $\\kappa = 0.7$. The agreement between the experts' annotations and the computed gold annotations from the crowd workers is also substantial, $\\kappa = 0.683$.",
"FGE Annotation. Similar to the stance annotation, we used the approach of BIBREF12 to compute the agreement. The inter-annotator agreement between the crowd workers in this case is $\\kappa = 0.55$ Cohen's Kappa. We compared the annotations of FGE in 200 ETSs by experts with the annotations by crowd workers, reaching an agreement of $\\kappa = 0.56$. This is considered as moderate inter-annotator agreement BIBREF15."
],
"extractive_spans": [],
"free_form_answer": "For stance annotation the inter-annotator agreement was 0.7, for FGE annotation inter-annotator agreement was 0.55",
"highlighted_evidence": [
"Stance annotation.",
"We evaluate the inter-annotator agreement between groups of workers as proposed by BIBREF12, i.e. by randomly dividing the workers into two equal groups and determining the aggregate annotation for each group using MACE BIBREF13. ",
"Using this procedure, we obtain a Cohen's Kappa of $\\kappa = 0.7$ BIBREF14, indicating a substantial agreement between the crowd workers BIBREF15. ",
"FGE Annotation. ",
"The inter-annotator agreement between the crowd workers in this case is $\\kappa = 0.55$ Cohen's Kappa."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Stance annotation. Every ETS was annotated by at least six crowd workers. We evaluate the inter-annotator agreement between groups of workers as proposed by BIBREF12, i.e. by randomly dividing the workers into two equal groups and determining the aggregate annotation for each group using MACE BIBREF13. The final inter-annotator agreement score is obtained by comparing the aggregate annotation of the two groups. Using this procedure, we obtain a Cohen's Kappa of $\\kappa = 0.7$ BIBREF14, indicating a substantial agreement between the crowd workers BIBREF15. The gold annotations of the ETS stances were computed with MACE, using the annotations of all crowd workers. We have further assessed the quality of the annotations performed by crowd workers by comparing them to expert annotations. Two experts labeled 200 ETSs, reaching the same agreement as the crowd workers, i.e. $\\kappa = 0.7$. The agreement between the experts' annotations and the computed gold annotations from the crowd workers is also substantial, $\\kappa = 0.683$.",
"FGE Annotation. Similar to the stance annotation, we used the approach of BIBREF12 to compute the agreement. The inter-annotator agreement between the crowd workers in this case is $\\kappa = 0.55$ Cohen's Kappa. We compared the annotations of FGE in 200 ETSs by experts with the annotations by crowd workers, reaching an agreement of $\\kappa = 0.56$. This is considered as moderate inter-annotator agreement BIBREF15."
],
"extractive_spans": [
"Cohen's Kappa of $\\kappa = 0.7$ BIBREF14",
"$\\kappa = 0.55$ Cohen's Kappa"
],
"free_form_answer": "",
"highlighted_evidence": [
"Stance annotation. Every ETS was annotated by at least six crowd workers. We evaluate the inter-annotator agreement between groups of workers as proposed by BIBREF12, i.e. by randomly dividing the workers into two equal groups and determining the aggregate annotation for each group using MACE BIBREF13. The final inter-annotator agreement score is obtained by comparing the aggregate annotation of the two groups. Using this procedure, we obtain a Cohen's Kappa of $\\kappa = 0.7$ BIBREF14, indicating a substantial agreement between the crowd workers BIBREF15. ",
"FGE Annotation. Similar to the stance annotation, we used the approach of BIBREF12 to compute the agreement. The inter-annotator agreement between the crowd workers in this case is $\\kappa = 0.55$ Cohen's Kappa."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"what crowdsourcing platform did they use?",
"did they crowdsource annotations?",
"where does their data come from?",
"which existing corpora do they compare with?",
"what is the size of their corpus?",
"which architectures did they experiment with?",
"what domains are present in the corpus?",
"what was the inter-annotator agreement?"
],
"question_id": [
"cd82bdaa0c94330f8cccfb1c59b4e6761a5a4f4d",
"753a187c1dd8d96353187fbb193b5f86293a796c",
"29794bda61665a1fbe736111e107fd181eacba1b",
"dd80a38e578443496d3720d883ad194ce82c5f39",
"9a9774eacb8f75bcfa07a4e60ed5eb02646467e3",
"4ed58d828cd6bb9beca1471a9fa9f5e77488b1d1",
"de580e43614ee38d2d9fc6263ff96e6ca2b54eb5",
"ae89eed483c11ccd70a34795e9fe416af8a35da2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Overview of corpora for automated fact-checking. docs: documents related to the claims; evid.: evidence in form of sentence or text snippets; stance: stance of the evidence; sources: sources of the evidence; rater agr.: whether or not the inter-annotator agreement is reported; domain: the genre of the corpus",
"Figure 1: Snopes fact-checking data example",
"Table 3: Overall statistics of the corpus",
"Table 4: Distribution of verdicts for claims",
"Table 5: Class distribution of ETSs the FGE sets",
"Table 6: Stance detection results (F1m = F1 macro)",
"Table 8: Compression of Snopes verdicts",
"Table 7: Evidence extraction: ranking setting",
"Table 9: Claim validation results (m = macro)",
"Table 10: Evidence extraction classification problem: baselines and agreement bound (m = macro)",
"Table 12: Confusion matrix for claim validation BertEmb (NEI: not enough information)",
"Table 11: Stance detection confusion matrix (AtheneMLP)"
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"8-Table8-1.png",
"8-Table7-1.png",
"9-Table9-1.png",
"12-Table10-1.png",
"12-Table12-1.png",
"12-Table11-1.png"
]
} | [
"what is the size of their corpus?",
"which architectures did they experiment with?",
"what was the inter-annotator agreement?"
] | [
[
"1911.01214-6-Table3-1.png",
"1911.01214-Corpus analysis ::: Corpus statistics-0",
"1911.01214-Introduction-6"
],
[
"1911.01214-Experiments and error analysis ::: Evidence extraction ::: Models and Results-1",
"1911.01214-Experiments and error analysis ::: Stance detection ::: Models and Results-0",
"1911.01214-Experiments and error analysis ::: Claim validation ::: Experiments-0"
],
[
"1911.01214-Corpus analysis ::: Inter-annotator agreement-0",
"1911.01214-Corpus analysis ::: Inter-annotator agreement-1",
"1911.01214-Corpus construction ::: Corpus annotation-1"
]
] | [
"Corpus has 6422 claims, 16509 ETSs, 8291 FGE sets and 14296 ODCs.",
"For stance detection they used MLP, for evidence extraction they used Tf-idf and BiLSTM, for claim validation they used MLP, BiLSTM and SVM",
"For stance annotation the inter-annotator agreement was 0.7, for FGE annotation inter-annotator agreement was 0.55"
] | 184 |
1701.03578 | Efficient Transfer Learning Schemes for Personalized Language Modeling using Recurrent Neural Network | In this paper, we propose efficient transfer learning methods for training a personalized language model using a recurrent neural network with long short-term memory architecture. With our proposed fast transfer learning schemes, a general language model is updated to a personalized language model with a small amount of user data and a limited computing resource. These methods are especially useful for a mobile device environment where the data is prevented from transferring out of the device for privacy purposes. Through experiments on dialogue data in a drama, it is verified that our transfer learning methods have successfully generated the personalized language model, whose output is more similar to the personal language style in both qualitative and quantitative aspects. | {
"paragraphs": [
[
"Recently there has been a considerable interest in language modeling due to various academic and commercial demands. Academically, many studies have investigated this domain such as machine translation, chat-bot, message generation, image tagging and other language-related areas. Commercially, it can be used as a core technology for providing a new application on consumer products or services. For instance, an automatic message-reply prediction service can be launched in mobile devices, thus helping a user to send a reply message when he/she is not provided with a proper input interface.",
"To model the language of human dialogue, a recurrent neural network (RNN) structure is known to show the state of the arts performance with its ability to learn a sequential pattern of the data BIBREF0 . Among the RNN structures, a Long Short-Term Memory RNN (LSTM-RNN) and its variants are successfully used for language modeling tasks BIBREF1 , BIBREF2 . However, as a kind of deep learning technique, the LSTM-RNN and the RNN structure requires both a large number of data and huge computing power to train the model properly. Hence any attempts for applying the RNN structure to personalized language modeling are mainly constrained by the following two limitations. First, personal mobile devices contain private message data among close acquaintances, so users seldom agree to transfer their log out of the devices. This causes a limitation of gathering the whole user data to common computing spaces, where high-performance machines are available. Second, in relatively small computing machines, i.e., smart phone, it is not always-guaranteed to have enough resources to train a deep model within the devices.",
"To resolve these limitations, we propose fast transfer learning schemes. It trains a base model with a large dataset and copies its first n-many layers to the first n-many layers of a target model. Then the target model is fine-tuned with relatively small target data. Several learning schemes such as freezing a certain layer or adding a surplus layer are proposed for achieving the result. In experiments, we trained a general language model with huge corpus such as an Workshop on Statistical Machine Translation (WMT) data and a movie script data by using powerful computing machines, and then transferred the model to target environment for updating to be a personalized language model. With this approach, the final model can mimic target user's language style with proper syntax.",
"In the experiments, we trained the general language model with literary-style data and applied the transfer learning with spoken-style data. Then we evaluated the model output for sentence completion task in a qualitative and a quantitative manner. The test result showed that the model learned the style of the target language properly. Another test was conducted by training the general language model with the script of the drama, “Friends,\" and by applying transfer learning with main character corpora from the script to generate the personalized language model. The message-reply prediction task was evaluated with this model. The test result shows higher similarity between the output of the personalized language model and the same user dialogue than the one between the output of the personalized language model and other users' dialogues.",
"The contributions of this paper are as follows. First, we propose efficient transfer learning schemes for personalized language modeling, which is the first research on transfer learning for RNN based language models with privacy preserving. Second, we show the applicability of our research to the target scenario in the short message reply application by training the model in the similar environment to that of the mobile device, and highlight its test results."
],
[
"As we are focusing on a personalized language modeling with the preservation of user data, we generate two types of language models. First is a sentence completion language model, which can complete sentences with a given n-many sequence of words. Second is a message-reply prediction language model, which can generate a response sentence for a given message. The output of both models implies user characteristics such as preferable vocabulary, sentence length, and other language-related patterns.",
"To achieve this result, we trained the language model with a large amount of general data in powerful computing environments, and then applied the transfer learning in relatively small computing environments. We assume that this method would be applied to mobile devices. As we are taking the preservation of privacy into consideration, the transferred model is retrained within the local environments such as mobile devices, and no personal data is sent out of the devices. This could have been accomplished using the proposed transfer learning schemes in RNN-LSTM architecture."
],
[
"A sentence completion model completes a sentence with the given word sequence $X= \\lbrace x_1,x_2, \\dots , x_T\\rbrace $ , where $x_N$ is a word ( $N=1, 2, \\dots , T$ ). The model can predict the next word $x_{N+1}$ with given word sequence $x_{1:N}$ . By repeating the prediction until the output word reaches the end-of-sentence signal, “ $<eos>$ ,\" the whole sentence can be generated.",
"The model is similar to that of BIBREF3 , and we put the 1,000-dimension word-embedding layer right after the input layer. Then 3 deep LSTM layers with 100 LSTM cells each and without peephole connection are used for learning the sequential pattern of the sentences.",
"The output probability to the input sequence $X$ and the training objective are ",
"$$\\begin{aligned}\n& p(Y|X)=\\prod _{t=1}^{T}p(y_t|x_{1:t-1}) \\\\\n& \\textit {L}= -\\dfrac{1}{|T|}\\sum \\limits _{t=1}^{T} x_{t+1}\\log p(y_t|x_{1:t-1}),\n\\end{aligned}$$ (Eq. 3) ",
"where $X$ is a word sequence in the sentence, $Y$ is a model output sequence $Y=\\lbrace y_1,y_2, \\dots , y_{T}\\rbrace $ "
],
[
"A message-reply prediction model generates a response sentence for a given message. It is similar to the sentence completion language model except that the message sentence is encoded and used as a context information when the model generates a response word sequence. Our approach is inspired by the sequence-to-sequence learning research BIBREF0 that is successfully applied to a machine translation task. The message word sequence $X=\\lbrace x_1, x_2, \\dots , x_T\\rbrace $ is fed into the model, and the last hidden state is used as context information $c_T$ . With this context information, the next sequence word is predicted similarly to that in the sentence completion language model case. During implementation, we used 1,000-dimension word embedding and 3-deep LSTM layers with 100 LSTM cells in each layer. The output probability and the training objective are ",
"$$\\begin{aligned}\n& p(Y|X)=\\prod _{t=1}^{T^{\\prime }}p(y_t|c_T,y_{1:t-1})\\\\\n& L = -\\dfrac{1}{|T^{\\prime }|}|\\sum \\limits _{t=1}^{T^{\\prime }} z_t\\log p(y_t|c_T, y_{1:t-1}),\n\\end{aligned}$$ (Eq. 5) ",
"where $X$ is a word sequence in the message sentence, $Z$ is a target word sequence in the response sentence $Z = \\lbrace z_1,z_2, \\dots , z_{T^{\\prime }}\\rbrace $ , $Y$ is a model output sequence $Y=\\lbrace y_1,y_2, \\dots , y_{T^{\\prime }}\\rbrace $ , $c_T$ is the encoding vector for the message sentence."
],
[
"To generate a personalized language model with a small amount of user data and limited computing resources, transfer learning is essential. In the private data preservation scenario, we investigate three fast transfer learning schemes. Each scheme is described below:",
"Scheme 1, relearn the whole layer: As a baseline, we retrain the whole model with private data only and compare the result with the two other schemes below. Because of the retraining of the LSTM layers in their entirety, this scheme requires more computing power than the other two schemes.",
"Scheme 2, surplus layer: After the training of the model with general data, a surplus layer is inserted between the output layer and the last of the deep LSTM layers. Then, with private data, we update only the parameters of the surplus layer in the transfer learning phase. We assume that a user's parlance could be modeled by learning additional features in the user's private data.",
"Scheme 3, fixed first n layers: After training the model with general data, we fix the parameters in the first n LSTM layers (layer 1 and layer 2 in our experiments) and train remaining parameters in the transfer learning phase. We assume that the user's parlance is a subset of the general pattern and the last layer plays the key role in determining this pattern."
],
[
"The perplexity is one of the popular measures for a language model. It measures how well the language model predicts a sample. However, it is not good at measuring how well the output of the language model matches a target language style. Another measure, the BLEU score algorithm BIBREF4 , has been widely used for the automatic evaluation of the model output. However, it cannot be applied directly to measuring a quality of the personalized model output because it considers the similarity between one language and the target language. Other research was conducted on proving authorship and fraud in literature, for instance, Jane Austen's left-over novel with partially completed BIBREF5 . This research counted the occurrence of several words in the literature, compared their relative frequencies with those of the words in the target literature, and concluded that the target literature was a forgery. This approach could be applied to a text evaluation where a large amount of data is available and certain words are used more frequently. In spoken language, such as in the message-reply case, however, whole word distribution must be considered instead of considering the occurrence of several words, because the data is usually not enough than the literature case. So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation. ",
"$$\\begin{aligned}\n& Y_1=g( f_{LM}( M_i ) ), Y_2=g( T_i ) \\\\\n& measure = Cross~Entropy(Y_1, Y_2), \\\\\n\\end{aligned}$$ (Eq. 11) ",
"where $M_i$ is a message $\\in {D_{test}}$ , $T_i$ is a corpus $\\in {D_{target}}$ , $f_{LM}$ is a language model, $g(\\cdot )$ calculates word distribution with given corpus, CrossEntropy(p, q) is $- \\sum _{x} p(x) \\log q(x)$ .",
"The characteristics of a user speech can mainly be distinguished by the word dictionary. Thus, this metric tries to measure the differences of the word dictionary among the comparing set. Table 1 shows the quantitative measure results from the dialogue set of the main characters in drama data from “Friends,\" a famous American television sitcom. In the figures, “character_1\" to “character_6\" are the main characters of the drama (Chandler, Joey, Monica, Phoebe, Rachel, and Ross, respectively). The dialogues were measured against one another by using the cross entropy metric. As shown in the table, the lower cross entropy value among the same character's dialogue was calculated, and the higher value was calculated among the different character's dialogues as expected. This result demonstrates that the cross entropy metric can be used to measure the similarities among the members of the set."
],
[
"We mainly conduct two types of experiments. The first one is a sentence completion experiment, and the other one is a message-reply prediction experiment. In the former case, we train a general language model with literary-style data and apply a proposed transfer learning scheme with spoken-style data to achieve a personalized language model. With this setting, the difference between general and personalized language models can be measured in a quantitative and a qualitative manner. For the latter case, we use dialogue-style data such as drama scripts to train a general language model. From the drama scripts, some characters' data are taken apart and are used to train the personalized language model. With this setting, the output of the personalized model is compared to the original dialogue of the same character."
],
[
"We train a general language model of literary-style with the WMT'14 corpus. We then apply a transfer learning scheme with “Friends\" drama data for the model to learn the spoken-style language. Training the general language model took about 10 days then we spent another 4 hours training the personalized language model in each scheme. A “titan-X GPU\" and a “GeForce GT 730 GPU\" were used for these experiments. The latter GPU is one of the low-end GPU series of which computing power was similar to that of latest mobile GPUs such as “Qualcomm Adreno 530\" in “Samsung Galaxy S7\" or “NVIDIA Tegra K1\" in “Google Nexus 9\". For a vocabulary setting, we construct our dictionary as 50,002 words, including “ $<eos>$ \" to mark ends of sentence and “**unknown**\" to replace unconsidered vocabulary in the data. The out-of-vocabulary rate is about 3.5%.",
"The “general language model\" in Table 2 shows the sample output of the general language model trained with document-style data, and the “personal language model 1\" in Table 2 shows the sample output of the personalized language model trained with human-dialogue-style data. Scheme_1 to scheme_3 are relearn-whole, surplus layer, and fixed-n layer, respectively. Given input word sequence for the test was, “It is possible, however.\" As can be seen in the table, both outputs differ in length and style. The sentence completed using the general language model tends to be longer than that of obtained using the personalized language model. This result indicates that the personalized language model is properly trained with the spoken language characteristics because human dialogue is usually briefer than the language in official documents.",
"We also apply the transfer learning schemes with some of the English bible data. The same general language model, which involved previously training with the WMT'14 corpus for 10 days, is used. English bible data is added and employed in training for another 4 hours using proposed transfer learning schemes.",
"The “personalized language model 2\" in Table 2 shows the sample output of the personalized language model trained with another style of document data, English bible data. As shown in Table 2, the output of the personalized language model contains more bible-like vocabulary and sentence styles."
],
[
"We simulate the message-reply prediction scenario using the drama corpus. The script of the drama, “Friends,\" is used to train a general language model, and two main character corpora are used to generate a personalized language model. For this message-reply prediction experiment, we use a vocabulary size of 18,107, and the out-of-vocabulary rate is about 3.5%. In the message-reply prediction case, pairwise data is generated by extracting the drama corpus of each character and concatenating two consecutive sentences of different characters to form one single message-reply sentence data. We insert the word “ $<eos>$ \" between the message and reply to mark the border separating them. This pairwise data is used for the training, and only the message part of the pairwise data is used for the message-reply prediction. During implementation, it took about a day to train the general language model with the “Friends\" corpus and another 4 hours to train the personalized language model with two main character corpora. The “titan-X GPU\" and the “GeForce GT 730 GPU\" was used for these experiments. Validation messages-reply sentences of 1,281 are randomly sampled from the “Friends\" corpus for tracking validation curve and another 753 test messages are prepared for predicting the responses. These data remained unseen from training phase. The word distributions of the model output from the test messages and the target corpus data are calculated to measure their similarity.",
"Figure 1 shows the validation curve while training. Perplexity values from various model output are plotted. The perplexity of baseline model, “scheme_1\", decreases until around epoch 10, and then it starts to increase because model is over-fitted to training data. The proposed “scheme_2\" and “scheme_3\", however, show continuous decreasing tendency and reach lower perplexity values compared to that of the baseline model. It is interesting that proposed methods achieve lower perplexity than baseline while saving computing power with reduced parameters.",
"Table 3 shows the performances of various models measured with the same validation dataset used in Figure 1. An unpruned n-gram language models using modified Kneser-Ney smoothing are used for performance comparisons BIBREF7 . The n-gram models were trained by using KenLM software package BIBREF8 . The chandler n-gram model was trained with “Chandler” corpus and the friends n-gram model was trained with “Friends” corpus. The proposed scheme_1 to scheme_3 were trained with “Chandler” corpus from “Friends” general language model. We see that our proposed schemes outperform the n-gram models (n=3 and 5).",
"To check the influence of training data size (number of sentences) in personalized language model, we trained the general language model (trained with “Friends\" corpus, message-reply prediction model) with different sizes of personal (“chandler\" and “rachel\") dataset. The proposed scheme_2 method was used for this test. Table 4 shows evaluation results of the trained models. Dataset '0' means the model is not trained with personal dataset. The perplexity shows lower value as we use more dataset in training, and it outperforms “friends 5-gram” model from the 2,000 dataset cases.",
"Table 5 indicates the cross entropy measure between the output of “scheme_1\" to “scheme_3\" model and that of the target corpus, the “friends\" drama corpus, the “chandler\" corpus, and the “bible\" corpus. It shows the similarity between the personalized model output and the target corpus as the number of training epoch increasing. The general model was pre-trained with the “Friends” corpus and the “Chandler” corpus was used training personalized model. Each Model is selected from various training epoch (0, 10, 20 and 40) and schemes, and test messages of 753 are used for the reply generation with the selected model used. As the table shows, the cross entropy measure has the highest value when the target corpus is the “bible” as expected because it is written in different style than dialogues in drama script. For the drama script case, the cross entropy measured with the “chandler\" corpus shows the lowest value among schemes. This result reveals that the personalized language model is trained properly from the general language model. Thus it is more similar in style to the target data corpus than the general language model. The “epoch 0\" case means the initial model state trained from general language corpus, “friends\" corpus. Thus cross entropy with “friends\" target corpus shows lower value than that of “chandler\" and “bible\" target corpus cases."
],
[
"Researchers have proposed language models using RNN, which learns the probability of next sequence data at the character or word level BIBREF9 , BIBREF3 . The proposed language models were tested on web corpora (i.e. Wikipedia, news articles) and qualitative examples showed their applicability. BIBREF0 proposed a sequence-to-sequence learning algorithm with RNN and long short-term memory (LSTM) architecture BIBREF1 , and BIBREF2 proposed RNN encoder-decoder architecture. Those studies were applied to the machine translation problem.",
"Recently, the RNN machine translation approach was extended to the short message generation problem BIBREF10 . Considering the message and response as a translation problem, the Neural Responding Machine achieved 40% accuracy for both contextually and syntactically proper response generations with twitter-like micro-blogging data BIBREF11 . Those studies were similar to our research in the sense that both target message-reply prediction language model using RNN. Our research, however, differs in that it updates a general language model to a personalized language model with user data separately, whereas the previous research trained a language model with the data, as a whole, in same place.",
"In the commercial sphere, Google recently released a smart-reply service that could generate a response to a given email by using a sequence-to-sequence learning model BIBREF12 . There was another trial on the generation of responses in technical troubleshooting discourses BIBREF13 . This research also required complete data in one place and did not provide a personalized model.",
"Moreover, many researchers have conducted studies on transfer learning. BIBREF14 , BIBREF15 suggested that a base-trained model with general data could be transferred to another domain. Recently, BIBREF16 showed, through experiments, that the lower layers tended to have general features whereas the higher layer tended to have specific features. However, none of this research was applied to an RNN language model.",
"To adapt a neural network model to an embedded system with limited resources, BIBREF17 BIBREF18 reduced the size of the model by pruning the unnecessary connections within it. It repeatedly tried to reduce the model size without accuracy degradation. This research inspired us to a considerable extent. It applied a neural model to mobile devices. However, the research focused on reducing the model size using a powerful machine and releasing the final model to an embedded system, whereas ours investigated how to train a model within mobile devices so that private user data could be kept."
],
[
"We propose an efficient method for training a personalized model using the LSTM-RNN model. To preserve users' privacy, we suggest various transfer learning schemes so that the personalized language model can be generated within the user's local environment. The proposed schemes “surplus layer' and “fixed-n layer' shows higher generalization performance whereas it trains only reduced number of parameters than baseline model. The quantitative and qualitative test result indicate that the output of the model is similar to that of the user's style.",
"It is certain that our proposed method reveals the applicability of the RNN-based language model in a user device with the preservation of privacy. Furthermore, with our method the personalized language model can be generated with a smaller amount of user data than the huge amount of training data that is usually required in the traditional deep neural network discipline. In the future work, we aim to visualize the deep neural network and to investigate the specific relationship among users' language styles and the LSTM cells in the network. This approach seems likely to uncover enhanced learning schemes that require less data than was previously necessary."
]
],
"section_name": [
"Introduction",
"Architecture for Personalized Language Model",
"Sentence Completion Language Model",
"Message-Reply Prediction Language Model",
"Fast Transfer Learning Schemes",
"Measures",
"Experiments",
"Literary-Style to Spoken-Style Sentence Completion",
"General-Style to Personal-Style Message-Reply Prediction",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0b7e8b0887855815632612b62b989b42c3e03e6d",
"804a348ad07f273bcfcea562188e6d0e4ef24abd",
"c73376b68ac9b57347557921e25a1831a7f79fa9"
],
"answer": [
{
"evidence": [
"Figure 1 shows the validation curve while training. Perplexity values from various model output are plotted. The perplexity of baseline model, “scheme_1\", decreases until around epoch 10, and then it starts to increase because model is over-fitted to training data. The proposed “scheme_2\" and “scheme_3\", however, show continuous decreasing tendency and reach lower perplexity values compared to that of the baseline model. It is interesting that proposed methods achieve lower perplexity than baseline while saving computing power with reduced parameters.",
"The characteristics of a user speech can mainly be distinguished by the word dictionary. Thus, this metric tries to measure the differences of the word dictionary among the comparing set. Table 1 shows the quantitative measure results from the dialogue set of the main characters in drama data from “Friends,\" a famous American television sitcom. In the figures, “character_1\" to “character_6\" are the main characters of the drama (Chandler, Joey, Monica, Phoebe, Rachel, and Ross, respectively). The dialogues were measured against one another by using the cross entropy metric. As shown in the table, the lower cross entropy value among the same character's dialogue was calculated, and the higher value was calculated among the different character's dialogues as expected. This result demonstrates that the cross entropy metric can be used to measure the similarities among the members of the set."
],
"extractive_spans": [
"perplexity",
"cross entropy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure 1 shows the validation curve while training. Perplexity values from various model output are plotted. ",
"The dialogues were measured against one another by using the cross entropy metric. As shown in the table, the lower cross entropy value among the same character's dialogue was calculated, and the higher value was calculated among the different character's dialogues as expected. This result demonstrates that the cross entropy metric can be used to measure the similarities among the members of the set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The perplexity is one of the popular measures for a language model. It measures how well the language model predicts a sample. However, it is not good at measuring how well the output of the language model matches a target language style. Another measure, the BLEU score algorithm BIBREF4 , has been widely used for the automatic evaluation of the model output. However, it cannot be applied directly to measuring a quality of the personalized model output because it considers the similarity between one language and the target language. Other research was conducted on proving authorship and fraud in literature, for instance, Jane Austen's left-over novel with partially completed BIBREF5 . This research counted the occurrence of several words in the literature, compared their relative frequencies with those of the words in the target literature, and concluded that the target literature was a forgery. This approach could be applied to a text evaluation where a large amount of data is available and certain words are used more frequently. In spoken language, such as in the message-reply case, however, whole word distribution must be considered instead of considering the occurrence of several words, because the data is usually not enough than the literature case. So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"extractive_spans": [],
"free_form_answer": "Cross entropy between the trained model and models trained on different corpora.",
"highlighted_evidence": [
"So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.\n\nAn output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The perplexity is one of the popular measures for a language model. It measures how well the language model predicts a sample. However, it is not good at measuring how well the output of the language model matches a target language style. Another measure, the BLEU score algorithm BIBREF4 , has been widely used for the automatic evaluation of the model output. However, it cannot be applied directly to measuring a quality of the personalized model output because it considers the similarity between one language and the target language. Other research was conducted on proving authorship and fraud in literature, for instance, Jane Austen's left-over novel with partially completed BIBREF5 . This research counted the occurrence of several words in the literature, compared their relative frequencies with those of the words in the target literature, and concluded that the target literature was a forgery. This approach could be applied to a text evaluation where a large amount of data is available and certain words are used more frequently. In spoken language, such as in the message-reply case, however, whole word distribution must be considered instead of considering the occurrence of several words, because the data is usually not enough than the literature case. So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation.",
"$$\\begin{aligned} & Y_1=g( f_{LM}( M_i ) ), Y_2=g( T_i ) \\\\ & measure = Cross~Entropy(Y_1, Y_2), \\\\ \\end{aligned}$$ (Eq. 11)",
"where $M_i$ is a message $\\in {D_{test}}$ , $T_i$ is a corpus $\\in {D_{target}}$ , $f_{LM}$ is a language model, $g(\\cdot )$ calculates word distribution with given corpus, CrossEntropy(p, q) is $- \\sum _{x} p(x) \\log q(x)$ ."
],
"extractive_spans": [],
"free_form_answer": "a measure that calculates the cross entropy between the word distribution of the model output and that of the target data",
"highlighted_evidence": [
"So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation.",
"$$\\begin{aligned} & Y_1=g( f_{LM}( M_i ) ), Y_2=g( T_i ) \\\\ & measure = Cross~Entropy(Y_1, Y_2), \\\\ \\end{aligned}$$ (Eq. 11)",
"where $M_i$ is a message $\\in {D_{test}}$ , $T_i$ is a corpus $\\in {D_{target}}$ , $f_{LM}$ is a language model, $g(\\cdot )$ calculates word distribution with given corpus, CrossEntropy(p, q) is $- \\sum _{x} p(x) \\log q(x)$ ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9b253a1f26aaf983aca556df025083a4a2fa4ab9",
"c7d4a630661cd719ea504dba56393f78278b296b",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"416e19991c70165de8dcd71f714e0c76277fb249",
"7c90f8c329e8d904f537f041f3d4d486a99aec46",
"edca21e8106f5c4d7ddd2ead7a2df17bb179ee50"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"4a8113580d8288d7127ebb26178e625b79d5661f",
"8b4d888e86f9e0545ceb90bbc1b67e00ea920d23"
],
"answer": [
{
"evidence": [
"To resolve these limitations, we propose fast transfer learning schemes. It trains a base model with a large dataset and copies its first n-many layers to the first n-many layers of a target model. Then the target model is fine-tuned with relatively small target data. Several learning schemes such as freezing a certain layer or adding a surplus layer are proposed for achieving the result. In experiments, we trained a general language model with huge corpus such as an Workshop on Statistical Machine Translation (WMT) data and a movie script data by using powerful computing machines, and then transferred the model to target environment for updating to be a personalized language model. With this approach, the final model can mimic target user's language style with proper syntax.",
"In the experiments, we trained the general language model with literary-style data and applied the transfer learning with spoken-style data. Then we evaluated the model output for sentence completion task in a qualitative and a quantitative manner. The test result showed that the model learned the style of the target language properly. Another test was conducted by training the general language model with the script of the drama, “Friends,\" and by applying transfer learning with main character corpora from the script to generate the personalized language model. The message-reply prediction task was evaluated with this model. The test result shows higher similarity between the output of the personalized language model and the same user dialogue than the one between the output of the personalized language model and other users' dialogues.",
"We also apply the transfer learning schemes with some of the English bible data. The same general language model, which involved previously training with the WMT'14 corpus for 10 days, is used. English bible data is added and employed in training for another 4 hours using proposed transfer learning schemes."
],
"extractive_spans": [
"Workshop on Statistical Machine Translation (WMT) data",
"script of the drama, “Friends,\"",
"English bible data"
],
"free_form_answer": "",
"highlighted_evidence": [
"In experiments, we trained a general language model with huge corpus such as an Workshop on Statistical Machine Translation (WMT) data and a movie script data by using powerful computing machines, and then transferred the model to target environment for updating to be a personalized language model. ",
"Another test was conducted by training the general language model with the script of the drama, “Friends,\" and by applying transfer learning with main character corpora from the script to generate the personalized language model. ",
"We also apply the transfer learning schemes with some of the English bible data. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We train a general language model of literary-style with the WMT'14 corpus. We then apply a transfer learning scheme with “Friends\" drama data for the model to learn the spoken-style language. Training the general language model took about 10 days then we spent another 4 hours training the personalized language model in each scheme. A “titan-X GPU\" and a “GeForce GT 730 GPU\" were used for these experiments. The latter GPU is one of the low-end GPU series of which computing power was similar to that of latest mobile GPUs such as “Qualcomm Adreno 530\" in “Samsung Galaxy S7\" or “NVIDIA Tegra K1\" in “Google Nexus 9\". For a vocabulary setting, we construct our dictionary as 50,002 words, including “ $<eos>$ \" to mark ends of sentence and “**unknown**\" to replace unconsidered vocabulary in the data. The out-of-vocabulary rate is about 3.5%.",
"We also apply the transfer learning schemes with some of the English bible data. The same general language model, which involved previously training with the WMT'14 corpus for 10 days, is used. English bible data is added and employed in training for another 4 hours using proposed transfer learning schemes.",
"We simulate the message-reply prediction scenario using the drama corpus. The script of the drama, “Friends,\" is used to train a general language model, and two main character corpora are used to generate a personalized language model. For this message-reply prediction experiment, we use a vocabulary size of 18,107, and the out-of-vocabulary rate is about 3.5%. In the message-reply prediction case, pairwise data is generated by extracting the drama corpus of each character and concatenating two consecutive sentences of different characters to form one single message-reply sentence data. We insert the word “ $<eos>$ \" between the message and reply to mark the border separating them. This pairwise data is used for the training, and only the message part of the pairwise data is used for the message-reply prediction. During implementation, it took about a day to train the general language model with the “Friends\" corpus and another 4 hours to train the personalized language model with two main character corpora. The “titan-X GPU\" and the “GeForce GT 730 GPU\" was used for these experiments. Validation messages-reply sentences of 1,281 are randomly sampled from the “Friends\" corpus for tracking validation curve and another 753 test messages are prepared for predicting the responses. These data remained unseen from training phase. The word distributions of the model output from the test messages and the target corpus data are calculated to measure their similarity."
],
"extractive_spans": [],
"free_form_answer": "WMT'14, English bible corpus, Drama corpus, and main character corpora",
"highlighted_evidence": [
"We train a general language model of literary-style with the WMT'14 corpus. We then apply a transfer learning scheme with “Friends\" drama data for the model to learn the spoken-style language. ",
"We also apply the transfer learning schemes with some of the English bible data. ",
"We simulate the message-reply prediction scenario using the drama corpus. The script of the drama, “Friends,\" is used to train a general language model, and two main character corpora are used to generate a personalized language model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9b253a1f26aaf983aca556df025083a4a2fa4ab9",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"03a196b2ea9c1d7713a477e0893a0bf422324b64",
"e60923f5a6e13890c1344a0b77cafef4eb015c90"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Sample model output of general language model and personalized language model. The general language model used WMT’14 data, personalized language model 1 used “Friends” drama data, and personalized language model 2 used the English bible data. Scheme 1 to scheme 3 are relearn whole, surplus layer and fixed-n layer, respectively. The output was generated with the given input sequence, “It is possible, however”"
],
"extractive_spans": [
"Sample model output"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Sample model output of general language model and personalized language model. The general language model used WMT’14 data, personalized language model 1 used “Friends” drama data, and personalized language model 2 used the English bible data. Scheme 1 to scheme 3 are relearn whole, surplus layer and fixed-n layer, respectively. The output was generated with the given input sequence, “It is possible, however”"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The “general language model\" in Table 2 shows the sample output of the general language model trained with document-style data, and the “personal language model 1\" in Table 2 shows the sample output of the personalized language model trained with human-dialogue-style data. Scheme_1 to scheme_3 are relearn-whole, surplus layer, and fixed-n layer, respectively. Given input word sequence for the test was, “It is possible, however.\" As can be seen in the table, both outputs differ in length and style. The sentence completed using the general language model tends to be longer than that of obtained using the personalized language model. This result indicates that the personalized language model is properly trained with the spoken language characteristics because human dialogue is usually briefer than the language in official documents."
],
"extractive_spans": [],
"free_form_answer": "length and style of sample output",
"highlighted_evidence": [
"The “general language model\" in Table 2 shows the sample output of the general language model trained with document-style data, and the “personal language model 1\" in Table 2 shows the sample output of the personalized language model trained with human-dialogue-style data. Scheme_1 to scheme_3 are relearn-whole, surplus layer, and fixed-n layer, respectively. Given input word sequence for the test was, “It is possible, however.\" As can be seen in the table, both outputs differ in length and style. The sentence completed using the general language model tends to be longer than that of obtained using the personalized language model. This result indicates that the personalized language model is properly trained with the spoken language characteristics because human dialogue is usually briefer than the language in official documents."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"9b253a1f26aaf983aca556df025083a4a2fa4ab9"
]
},
{
"annotation_id": [
"78d20ddfac3c95ea7248348f51bf7b79a2011f92",
"ceb968d95883aaf0ec39277e7596d1aab8d523ec"
],
"answer": [
{
"evidence": [
"Figure 1 shows the validation curve while training. Perplexity values from various model output are plotted. The perplexity of baseline model, “scheme_1\", decreases until around epoch 10, and then it starts to increase because model is over-fitted to training data. The proposed “scheme_2\" and “scheme_3\", however, show continuous decreasing tendency and reach lower perplexity values compared to that of the baseline model. It is interesting that proposed methods achieve lower perplexity than baseline while saving computing power with reduced parameters.",
"FLOAT SELECTED: Table 3: Performances of models measured with the same validation dataset used in Figure 1. The chandler n-gram model was trained with “Chandler” corpus and the friends n-gram model was trained with “Friends” corpus. The scheme 1 model is over-fitted to training data (see Figure 1), and the lowest value is 48.17."
],
"extractive_spans": [],
"free_form_answer": "perplexity",
"highlighted_evidence": [
"Figure 1 shows the validation curve while training. Perplexity values from various model output are plotted. ",
"FLOAT SELECTED: Table 3: Performances of models measured with the same validation dataset used in Figure 1. The chandler n-gram model was trained with “Chandler” corpus and the friends n-gram model was trained with “Friends” corpus. The scheme 1 model is over-fitted to training data (see Figure 1), and the lowest value is 48.17."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The perplexity is one of the popular measures for a language model. It measures how well the language model predicts a sample. However, it is not good at measuring how well the output of the language model matches a target language style. Another measure, the BLEU score algorithm BIBREF4 , has been widely used for the automatic evaluation of the model output. However, it cannot be applied directly to measuring a quality of the personalized model output because it considers the similarity between one language and the target language. Other research was conducted on proving authorship and fraud in literature, for instance, Jane Austen's left-over novel with partially completed BIBREF5 . This research counted the occurrence of several words in the literature, compared their relative frequencies with those of the words in the target literature, and concluded that the target literature was a forgery. This approach could be applied to a text evaluation where a large amount of data is available and certain words are used more frequently. In spoken language, such as in the message-reply case, however, whole word distribution must be considered instead of considering the occurrence of several words, because the data is usually not enough than the literature case. So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"extractive_spans": [],
"free_form_answer": "Cross entropy between word distribution of model output and word distribution of target data.",
"highlighted_evidence": [
"So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.\n\nAn output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"5d16b353320ddcbb93a54869a1a50695bfcb2ac3",
"90e0a9058b586cba200c0f77c56c7344f5f9f970"
],
"answer": [
{
"evidence": [
"The perplexity is one of the popular measures for a language model. It measures how well the language model predicts a sample. However, it is not good at measuring how well the output of the language model matches a target language style. Another measure, the BLEU score algorithm BIBREF4 , has been widely used for the automatic evaluation of the model output. However, it cannot be applied directly to measuring a quality of the personalized model output because it considers the similarity between one language and the target language. Other research was conducted on proving authorship and fraud in literature, for instance, Jane Austen's left-over novel with partially completed BIBREF5 . This research counted the occurrence of several words in the literature, compared their relative frequencies with those of the words in the target literature, and concluded that the target literature was a forgery. This approach could be applied to a text evaluation where a large amount of data is available and certain words are used more frequently. In spoken language, such as in the message-reply case, however, whole word distribution must be considered instead of considering the occurrence of several words, because the data is usually not enough than the literature case. So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.",
"An output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"extractive_spans": [],
"free_form_answer": "Cross entropy between word distribution of model output and word distribution of target data.",
"highlighted_evidence": [
"So, we use a simple and efficient metric to measure the similarity between the user style and the output of the personalized model.\n\nAn output of a personalized language model can be measured by calculating the cross entropy between the word distribution of the model output and that of the target data. Word distribution can be acquired by normalizing a word histogram which is calculated based on word counts in the target corpus. Equation (3) shows the metric formulation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 5 indicates the cross entropy measure between the output of “scheme_1\" to “scheme_3\" model and that of the target corpus, the “friends\" drama corpus, the “chandler\" corpus, and the “bible\" corpus. It shows the similarity between the personalized model output and the target corpus as the number of training epoch increasing. The general model was pre-trained with the “Friends” corpus and the “Chandler” corpus was used training personalized model. Each Model is selected from various training epoch (0, 10, 20 and 40) and schemes, and test messages of 753 are used for the reply generation with the selected model used. As the table shows, the cross entropy measure has the highest value when the target corpus is the “bible” as expected because it is written in different style than dialogues in drama script. For the drama script case, the cross entropy measured with the “chandler\" corpus shows the lowest value among schemes. This result reveals that the personalized language model is trained properly from the general language model. Thus it is more similar in style to the target data corpus than the general language model. The “epoch 0\" case means the initial model state trained from general language corpus, “friends\" corpus. Thus cross entropy with “friends\" target corpus shows lower value than that of “chandler\" and “bible\" target corpus cases."
],
"extractive_spans": [
"cross entropy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table 5 indicates the cross entropy measure between the output of “scheme_1\" to “scheme_3\" model and that of the target corpus, the “friends\" drama corpus, the “chandler\" corpus, and the “bible\" corpus. It shows the similarity between the personalized model output and the target corpus as the number of training epoch increasing. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"two",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which metrics are used for quantitative analysis?",
"Is their data open sourced?",
"What dataset did they use?",
"What metric did they use for qualitative evaluation?",
"What metric did they use for quantitative evaluation?",
"Which similarity metrics are used for quantitative analysis?"
],
"question_id": [
"fc62549a8f0922c09996a119b2b6a8b5e829e989",
"e2a507749a4a3201edd6413c77ad0d4c23e9c6ce",
"a3a867f7b3557c168d05c517c468ff6c7337bff9",
"8bb2280483af8013a32e0d294e97d44444f08ab0",
"a68acd8364764d5601dc12e4b31d9102fb7d5f7e",
"6d55e377335815b7ad134d1a2977d231ad34a25b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"efficient",
"transfer",
"transfer",
"transfer",
"transfer",
"recurrent network"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Quantitative measure result of dialogues among main characters. Character 1 to character 6 are Chandler, Joey, Monica, Phoebe, Rachel, and Ross, respectively. A lower value indicates that the two sets compared have similar distributions and are, thus, similar in style.",
"Table 2: Sample model output of general language model and personalized language model. The general language model used WMT’14 data, personalized language model 1 used “Friends” drama data, and personalized language model 2 used the English bible data. Scheme 1 to scheme 3 are relearn whole, surplus layer and fixed-n layer, respectively. The output was generated with the given input sequence, “It is possible, however”",
"Table 3: Performances of models measured with the same validation dataset used in Figure 1. The chandler n-gram model was trained with “Chandler” corpus and the friends n-gram model was trained with “Friends” corpus. The scheme 1 model is over-fitted to training data (see Figure 1), and the lowest value is 48.17.",
"Figure 1: Validation curve for each schemes. Scheme 1 is re-learn whole, scheme 2 is surplus layer and scheme 3 is fixed-n layer (train 3rd layer only).",
"Table 4: Performances of models with different number of sentences in training dataset (lower is better). “Friends” corpus was used pre-training the general model, and “Chandler” and “Rachel” corpus was used training the personalized model with the proposed scheme 2 method. Dataset ’0’ means the model is not trained with personal dataset.",
"Table 5: Cross entropy measure between the language model output and the training data corpus, the “Friends” drama corpus, the“Chandler” corpus and the “Bible” corpus. Scheme 1 to scheme 3 are relearn whole, surplus layer and fixed-n layer, respectively. The “epoch 0” case means the initial model state trained from general language corpus, “friends” corpus. Thus cross entropy with “friends” target corpus shows lower value than that of “chandler” and “bible” target corpus cases. The lower value indicates that the language model output is similar in style to the compared target corpus"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Figure1-1.png",
"5-Table4-1.png",
"6-Table5-1.png"
]
} | [
"Which metrics are used for quantitative analysis?",
"What dataset did they use?",
"What metric did they use for qualitative evaluation?",
"What metric did they use for quantitative evaluation?",
"Which similarity metrics are used for quantitative analysis?"
] | [
[
"1701.03578-Measures-4",
"1701.03578-Measures-3",
"1701.03578-Measures-0",
"1701.03578-General-Style to Personal-Style Message-Reply Prediction-1"
],
[
"1701.03578-Literary-Style to Spoken-Style Sentence Completion-0",
"1701.03578-General-Style to Personal-Style Message-Reply Prediction-0",
"1701.03578-Literary-Style to Spoken-Style Sentence Completion-2",
"1701.03578-Introduction-2",
"1701.03578-Introduction-3"
],
[
"1701.03578-Literary-Style to Spoken-Style Sentence Completion-1",
"1701.03578-4-Table2-1.png"
],
[
"1701.03578-5-Table3-1.png",
"1701.03578-Measures-0",
"1701.03578-General-Style to Personal-Style Message-Reply Prediction-1"
],
[
"1701.03578-Measures-0",
"1701.03578-General-Style to Personal-Style Message-Reply Prediction-4"
]
] | [
"a measure that calculates the cross entropy between the word distribution of the model output and that of the target data",
"WMT'14, English bible corpus, Drama corpus, and main character corpora",
"length and style of sample output",
"Cross entropy between word distribution of model output and word distribution of target data.",
"Cross entropy between word distribution of model output and word distribution of target data."
] | 185 |
1802.09233 | EiTAKA at SemEval-2018 Task 1: An Ensemble of N-Channels ConvNet and XGboost Regressors for Emotion Analysis of Tweets | This paper describes our system that has been used in Task 1: Affect in Tweets. We combine two different approaches. The first one, called N-Stream ConvNets, is a deep learning approach, while the second one is an XGboost regressor based on a set of embedding and lexicon based features. Our system was evaluated on the testing sets of the tasks, outperforming all other approaches for the Arabic version of the valence intensity regression task and the valence ordinal classification task. | {
"paragraphs": [
[
"Sentiment analysis in Twitter is the problem of identifying people’s opinions expressed in tweets. It normally involves the classification of tweets into categories such as “positive”, “negative” and in some cases, “neutral”. The main challenges in designing a sentiment analysis system for Twitter are the following:",
"Most of the existing systems are inspired in the work presented in BIBREF0 . Machine Learning techniques have been used to build a classifier from a set of tweets with a manually annotated sentiment polarity. The success of the Machine Learning models is based on two main facts: a large amount of labeled data and the intelligent design of a set of features that can distinguish between the samples.",
"With this approach, most studies have focused on designing a set of efficient features to obtain a good classification performance BIBREF1 , BIBREF2 , BIBREF3 . For instance, the authors in BIBREF4 used diverse sentiment lexicons and a variety of hand-crafted features.",
"This paper proposes the representation of tweets using a novel set of features, which include the information provided by seven lexicons and a bag of negated words (BonW). The concatenation of these features with a set of basic features improves the classification performance. The polarity of tweets is determined by a classifier based on a Support Vector Machine.",
"The system has been evaluated on the Arabic and English language test sets of the Twitter Sentiment Analysis Track in SemEval 2017, subtask A (Message Polarity Classification). Our system (SiTAKA) has been ranked 8th over 36 teams in the English language test set and 2nd out of 8 teams in the Arabic language test set.",
"The rest of the paper is structured as follows. Section SECREF2 presents the tools and the resources that have been used. In Section SECREF3 we describe the system. The experiments and results are presented and discussed in Section SECREF4 . Finally, in the last section the conclusions as well as further work are presented."
],
[
"This section explains the tools and the resources that have been used in the SiTAKA system. Let us denote to its Arabic language and English language versions by Ar-SiTAKA and En-SiTAKA, respectively."
],
[
"We used for En-SiTAKA seven lexicons in this work, namely: General Inquirer BIBREF5 , Hu-Liu opinion lexicon (HL) BIBREF6 , NRC hashtags lexicon BIBREF4 , SenticNet BIBREF7 , and TS-Lex BIBREF8 . More details about each lexicon, such as how it was created, the polarity score for each term, and the statistical distribution of the lexicon, can be found in BIBREF9 .",
"In this version of the SiTAKA system, we used four lexicons created by BIBREF10 . Arabic Hashtag Lexicon, Dialectal Arabic Hashtag Lexicon, Arabic Bing Liu Lexicon and Arabic Sentiment140 Lexicon. The first two were created manually, whereas the rest were translated to Arabic from the English version using Google Translator."
],
[
"We used two pre-trained embedding models in En-SiTAKA. The first one is word2vec which is provided by Google. It is trained on part of the Google News dataset (about 100 billion words) and it contains 300-dimensional vectors for 3M words and phrases BIBREF11 . The second one is SSWEu, which has been trained to capture the sentiment information of sentences as well as the syntactic contexts of words BIBREF12 . The SSWEu model contains 50-dimensional vectors for 100K words.",
"In Ar-SiTAKA we used the model Arabic-SKIP-G300 provided by BIBREF13 . Arabic-SKIP-G300 has been trained on a large corpus of Arabic text collected from different sources such as Arabic Wikipedia, Arabic Gigaword Corpus, Ksucorpus, King Saud University Corpus, Microsoft crawled Arabic Corpus, etc. It contains 300-dimensional vectors for 6M words and phrases."
],
[
"This section explains the main steps of the SiTAKA system, the features used to describe a tweet and the classification method."
],
[
"Some standard pre-processing methods are applied on the tweets:",
"Normalization: Each tweet in English is converted to the lowercase. URLs and usernames are omitted. Non-Arabic letters are removed from each tweet in the Arabic-language sets. Words with repeated letters (i.e. elongated) are corrected.",
"Tokenization and POS tagging: All English-language tweets are tokenized and tagged using Ark Tweet NLP BIBREF14 , while all Arabic-language tweets are tokenized and tagged using Stanford Tagger BIBREF15 .",
"Negation: A negated context can be defined as a segment of tweet that starts with a negation word (e.g. no, don't for English-language, لا و ليس > for Arabic-language) and ends with a punctuation mark BIBREF0 . Each tweet is negated by adding a suffix (\"_NEG\" and \"_منفي>\") to each word in the negated context.",
"It is necessary to mention that in Ar-SiTAKA we did not use all the Arabic negation words due to the ambiguity of some of them. For example, the first word ما>, is a question mark in the following \"ما رأيك في ما حدث؟>-What do you think about what happened?\" and it means \"which/that\" in the following example \"إن ما حدث اليوم سيء جدا> - The matter that happened today was very bad\".",
"As shown in BIBREF16 , stopwords tend to carry sentiment information; thus, note that they were not removed from the tweets."
],
[
"SiTAKA uses five types of features: basic text, syntactic, lexicon, cluster and Word Embeddings. These features are described in the following subsections:",
"These basic features are extracted from the text. They are the following:",
"Bag of Words (BoW): Bag of words or n-grams features introduce some contextual information. The presence or absence of contiguous sequences of 1, 2, 3, and 4 tokens are used to represent the tweets.",
"Bag of Negated Words (BonW): Negated contexts are important keys in the sentiment analysis problem. Thus, we used the presence or absence of contiguous sequences of 1, 2, 3 and 4 tokens in the negated contexts as a set of features to represent the tweets.",
"Syntactic features are useful to discriminate between neutral and non-neutral texts.",
"Part of Speech (POS): Subjective and objective texts have different POS tags BIBREF17 . According to BIBREF18 , non-neutral terms are more likely to exhibit the following POS tags in Twitter: nouns, adjectives, adverbs, abbreviations and interjections. The number of occurrences of each part of speech tag is used to represent each tweet.",
"Bi-tagged: Bi-tagged features are extracted by combining the tokens of the bi-grams with their POS tag e.g. \"feel_VBP good_JJ\" \"جميل>_JJ جداً>_VBD\". It has been shown in the literature that adjectives and adverbs are subjective in nature and they help to increase the degree of expressiveness BIBREF19 , BIBREF0 .",
"Opinion lexicons play an important role in sentiment analysis systems, and the majority of the existing systems rely heavily on them BIBREF20 . For each of the seven chosen lexicons, a tweet is represented by calculating the following features: (1) tweet polarity, (2) the average polarity of the positive terms, (3) the average polarity of the negative terms, (4) the score of the last positive term, (5) the score of the last negative term, (6) the maximum positive score and (7) the minimum negative score.",
"The polarity of a tweet T given a lexicon L is calculated using the equation (1). First, the tweet is tokenized. Then, the number of positive (P) and negative (N) tokens found in the lexicon are counted. Finally, the polarity measure is calculated as follows: DISPLAYFORM0 ",
"We used two set of clusters in En-SiTAKA to represent the English-language tweets by mapping each tweet to a set of clusters. The first one is the well known set of clusters provided by the Ark Tweet NLP tool which contains 1000 clusters produced with the Brown clustering algorithm from 56M English-language tweets. These 1000 clusters are used to represent each tweet by mapping each word in the tweet to its cluster. The second one is Word2vec cluster ngrams, which is provided by BIBREF21 . They used the word2vec tool to learn 40-dimensional word embeddings of 255,657 words from a Twitter dataset and the K-means algorithm to cluster them into 4960 clusters. We were not able to find publicly available semantic clusters to be used in Ar-SiTAKA.",
"Word embeddings are an approach for distributional semantics which represents words as vectors of real numbers. Such representation has useful clustering properties, since the words that are semantically and syntactically related are represented by similar vectors BIBREF22 . For example, the words \"coffee\" and \"tea\" will be very close in the created space.",
"We used sum, standard-deviation, min and max pooling functions BIBREF23 to obtain the tweet representation in the embedding space. The result is the concatenation of vectors derived from different pooling functions. More formally, let us consider an embedding matrix INLINEFORM0 and a tweet INLINEFORM1 , where INLINEFORM2 is the dimension size, INLINEFORM3 is the length of the vocabulary (i.e. the number of words in the embedding model), INLINEFORM4 is the word INLINEFORM5 in the tweet and INLINEFORM6 is the number of words. First, each word INLINEFORM7 is substituted by the corresponding vector INLINEFORM8 in the matrix INLINEFORM9 where INLINEFORM10 is the index of the word INLINEFORM11 in the vocabulary. This step ends with the matrix INLINEFORM12 . The vector INLINEFORM13 is computed using the following formula: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the concatenation operation. The pooling function is an element-wise function, and it converts texts with various lengths into a fixed-length vector allowing to capture the information throughout the entire text."
],
[
"Up to now, support vector machines (SVM) BIBREF24 have been used widely and reported as the best classifier in the sentiment analysis problem. Thus, we trained a SVM classifier on the training sets provided by the organizers. For the English-language we combined the training sets of SemEval13-16 and testing sets of SemEval13-15, and used them as a training set. Table TABREF20 shows the numerical description of the datasets used in this work. We used the linear kernel with the value 0.5 for the cost parameter C. All the parameters and the set of features have been experimentally chosen based on the development sets."
],
[
"The evaluation metrics used by the task organizers were the macroaveraged recall ( INLINEFORM0 ), the F1 averaged across the positives and the negatives INLINEFORM1 and the accuracy ( INLINEFORM2 ) BIBREF25 .",
"The system has been tested on 12,284 English-language tweets and 6100 Arabic-language tweets provided by the organizers. The golden answers of all the test tweets were omitted by the organizers. The official evaluation results of our system are reported along with the top 10 systems and the baseline results in Table 2 and 3. Our system ranks 8th among 38 systems in the English-language tweets and ranks 2nd among 8 systems in the Arabic language tweets. The baselines 1, 2 and 3 stand for case when the system classify all the tweets as positive, negative and neutral respectively."
],
[
"We have presented a new set of rich sentimental features for the sentiment analysis of the messages posted on Twitter. A Support Vector Machine classifier has been trained using a set of basic features, information extracted from seven useful and publicly available opinion lexicons, syntactic features, clusters and embeddings. We have realized that the lexicon opinions are the key point in the improvement of the performance of the classifier; thus, for the future work we plan to focus on working on the development of an efficient lexicon-based method or building a new lexicon that can be used to improve the performance of the sentiment analysis systems. Deep learning approaches have recently been used to build supervised, unsupervised or even semi-supervised methods to analyze the sentiment of texts and to build efficient opinion lexicons BIBREF26 , BIBREF27 , BIBREF12 ; thus, the authors are considering the possibility of also using this technique to build a sentiment analysis system."
],
[
"This work was partially supported by URV Research Support Funds (2015PFR-URV-B2-60, 2016PFR-URV-B2-60 and Martí i Franqués PhD grant)."
]
],
"section_name": [
"Introduction",
"Resources",
"Sentiment Lexicons",
"Embeddings",
"System Description",
"Preprocessing and Normalization",
"Features ُExtraction",
"Classifier",
"Results",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"0caa054cb0800b18a2fe4a4a156128489fe7ea2b",
"4fbb401a4101f9079adb3f02b46de2d72e6deccb",
"a46cb3eeb0514a210d6bc9fea5a0af213e70ad56"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e07b94fbff00b8bb8e7116732b073ca07a81fbb5",
"beb58acc19a9af1d1dd38507d18de5b4a239d7d9"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: EI-reg task results.",
"FLOAT SELECTED: Table 4: V-reg task results.",
"FLOAT SELECTED: Table 5: EI-oc task results.",
"FLOAT SELECTED: Table 6: V-oc task results."
],
"extractive_spans": [],
"free_form_answer": "An ensemble of N-Channels ConvNet and XGboost regressor model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: EI-reg task results.",
"FLOAT SELECTED: Table 4: V-reg task results.",
"FLOAT SELECTED: Table 5: EI-oc task results.",
"FLOAT SELECTED: Table 6: V-oc task results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: EI-reg task results.",
"FLOAT SELECTED: Table 4: V-reg task results."
],
"extractive_spans": [],
"free_form_answer": "Ensemble Model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: EI-reg task results.",
"FLOAT SELECTED: Table 4: V-reg task results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"050b85dd2e33c2024b9ca76a61ebe76238c9715d",
"1bdd687794c7c4498aa7848ff5b6a0bcba77a3c8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is the data labeled?",
"What is the best performing model?",
"How long is the dataset?"
],
"question_id": [
"0035b351df63971ec57e36d4bfc6f7594bed41ae",
"2b021e1486343d503bab26c2282f56cfdab67248",
"e801b6a6048175d3b1f3440852386adb220bcb36"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: System Architecture.",
"Figure 2: Channel Architecture.",
"Table 1: The XGBoost regressors parameters. #Est. refers to the number of estimators, S is the subsample, M is the maximum depth and O refers to the objective function.",
"Table 2: The value of α for each individual model.",
"Figure 3: An example of a decision tree classifier.",
"Table 3: EI-reg task results.",
"Table 4: V-reg task results.",
"Table 5: EI-oc task results.",
"Table 6: V-oc task results."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png"
]
} | [
"What is the best performing model?"
] | [
[
"1802.09233-7-Table5-1.png",
"1802.09233-6-Table4-1.png",
"1802.09233-7-Table6-1.png",
"1802.09233-6-Table3-1.png"
]
] | [
"Ensemble Model"
] | 186 |
1801.04433 | Detecting Offensive Language in Tweets Using Deep Learning | This paper addresses the important problem of discerning hateful content in social media. We propose a detection scheme that is an ensemble of Recurrent Neural Network (RNN) classifiers, and it incorporates various features associated with user-related information, such as the users' tendency towards racism or sexism. These data are fed as input to the above classifiers along with the word frequency vectors derived from the textual content. Our approach has been evaluated on a publicly available corpus of 16k tweets, and the results demonstrate its effectiveness in comparison to existing state of the art solutions. More specifically, our scheme can successfully distinguish racism and sexism messages from normal text, and achieve higher classification quality than current state-of-the-art algorithms. | {
"paragraphs": [
[
"Social media is a very popular way for people to express their opinions publicly and to interact with others online. In aggregation, social media can provide a reflection of public sentiment on various events. Unfortunately, many users engaging online, either on social media, forums or blogs, will often have the risk of being targeted or harassed via abusive language, which may severely impact their online experience and the community in general. The existence of social networking services creates the need for detecting user-generated hateful messages prior to publication. All published text that is used to express hatred towards some particular group with the intention to humiliate its members is considered a hateful message.",
"Although hate speech is protected under the free speech provisions in the United States, there are other countries, such as Canada, France, United Kingdom, and Germany, where there are laws prohibiting it as being promoting violence or social disorder. Social media services such as Facebook and Twitter have been criticized for not having done enough to prohibit the use of their services for attacking people belonging to some specific race, minority etc. BIBREF0 . They have announced though that they would seek to battle against racism and xenophobia BIBREF1 . Nevertheless, the current solutions deployed by them have attempted to address the problem with manual effort, relying on users to report offensive comments BIBREF2 . This not only requires a huge effort by human annotators, but it also has the risk of applying discrimination under subjective judgment. Moreover, a non-automated task by human annotators would have strong impact on system response times, since a computer-based solution can accomplish this task much faster than humans. The massive rise in the user-generated content in the above social media services, with manual filtering not being scalable, highlights the need for automating the process of on-line hate-speech detection.",
"Despite the fact that the majority of the solutions for automated detection of offensive text rely on Natural Language Processing (NLP) approaches, there is lately a tendency towards employing pure machine learning techniques like neural networks for that task. NLP approaches have the drawback of being complex, and to a large extent dependent on the language used in the text. This provides a strong motivation for employing alternative machine learning models for the classification task. Moreover, the majority of the existing automated approaches depend on using pre-trained vectors (e.g. Glove, Word2Vec) as word embeddings to achieve good performance from the classification model. That makes the detection of hatred content unfeasible in cases where users have deliberately obfuscated their offensive terms with short slang words.",
"There is a plethora of unsupervised learning models in the existing literature to deal with hate-speech BIBREF3 , as well as in detecting the sentiment polarity in tweets BIBREF4 . At the same time, the supervised learning approaches have not been explored adequately so far. While the task of sentence classification seems similar to that of sentiment analysis; nevertheless, in hate-speech even negative sentiment could still provide useful insight. Our intuition is that the task of hate-speech detection can be further benefited by the incorporation of other sources of information to be used as features into a supervised learning model. A simple statistical analysis on an existing annotated dataset of tweets by BIBREF5 , can easily reveal the existence of significant correlation between the user tendency in expressing opinions that belong to some offensive class (Racism or Sexism), and the annotation labels associated with that class. More precisely, the correlation coefficient value that describes such user tendency was found to be 0.71 for racism in the above dataset, while that value reached as high as 0.76 for sexism. In our opinion, utilizing such user-oriented behavioural data for reinforcing an existing solution is feasible, because such information is retrieva2ble in real-world use-case scenarios like Twitter. This highlights the need to explore the user features more systematically to further improve the classification accuracy of a supervised learning system.",
"Our approach employs a neural network solution composed of multiple Long-Short-Term-Memory (LSTM) based classifiers, and utilizes user behavioral characteristics such as the tendency towards racism or sexism to boost performance. Although our technique is not necessarily revolutionary in terms of the deep learning models used, we show in this paper that it is quite effective.",
"Our main contributions are: INLINEFORM0 ) a deep learning architecture for text classification in terms of hateful content, which incorporates features derived form the users' behavioural data, INLINEFORM1 ) a language agnostic solution, due to no-use of pre-trained word embeddings, for detecting hate-speech, INLINEFORM2 ) an experimental evaluation of the model on a Twitter dataset, demonstrating the top performance achieved on the classification task. Special focus is given to investigating how the additional features concerning the users' tendency to utter hate-speech, as expressed by their previous history, could leverage the performance. To the best of our knowledge, there has not been done any previous study on exploring features related to the users tendency in hatred content that used a deep learning model.",
"The rest of the paper is organized as follows. In Section SECREF2 we describe the problem of hate speech in more detail, and we refer to the existing work in the field in Section SECREF3 . In Section SECREF4 we present our proposed model, while in Section SECREF5 we refer to the dataset used, the evaluation tests we performed and we discuss the results received. Finally, in Section SECREF6 we summarize our contributions and discuss the future work."
],
[
"The problem we address in this work can be formally described as follows: Let INLINEFORM0 be an unlabeled short sentence composed of a number of words, posted by a user INLINEFORM1 . Let INLINEFORM2 , INLINEFORM3 , INLINEFORM4 be three classes denoting Neutrality, Sexism and Racism respectively in a textual content. Members of these classes are those postings with content classified as belonging to the corresponding class, for which the following holds: INLINEFORM5 . Further, given that user INLINEFORM6 has a previous history of message postings INLINEFORM7 , we assume that any previous posting INLINEFORM8 by that user is already labeled as belonging to any of the classes N,S,R. Similarly, other postings by other users have also been labeled accordingly, forming up their previous history. Based on these facts, the problem is to identify the class, which the unlabeled sentence INLINEFORM9 by user INLINEFORM10 belongs to.",
"The research question we address in this work is:",
"How to effectively identify the class of a new posting, given the identity of the posting user and the history of postings related to that user?",
"To answer this question, our main goals can be summarized as follows:",
"Note that existing solutions for automatic detection are still falling short to effectively detect abusive messages. Therefore there is a need for new algorithms which would do the job of classification of such content more effectively and efficiently. Our work is towards that direction."
],
[
"Simple word-based approaches, if used for blocking the posting of text or blacklisting users, not only fail to identify subtle offensive content, but they also affect the freedom of speech and expression. The word ambiguity problem – that is, a word can have different meanings in different contexts – is mainly responsible for the high false positive rate in such approaches. Ordinary NLP approaches on the other hand, are ineffective to detect unusual spelling, experienced in user-generated comment text. This is best known as the spelling variation problem, and it is caused either by unintentional or intentional replacement of single characters in a token, aiming to obfuscate the detectors.",
"In general, the complexity of the natural language constructs renders the task quite challenging. The employment of supervised learning classification methods for hate speech detection is not new. BIBREF6 reported performance for a simple LSTM classifier not better than an ordinary SVM, when evaluated on a small sample of Facebook data for only 2 classes (Hate, No-Hate), and 3 different levels of strength of hatred. BIBREF7 described another way of detecting offensive language in tweets, based on some supervised model. They differentiate hate speech from offensive language, using a classifier that involves naive Bayes, decision trees and SVM. Also, BIBREF8 attempted to discern abusive content with a supervised model combining various linguistic and syntactic features in the text, considered at character uni-gram and bi-gram level, and tested on Amazon data. In general, we can point out the main weaknesses of NLP-based models in their non-language agnostic nature and the low scores in detection.",
"Unsupervised learning approaches are quite common for detecting offensive messages in text by applying concepts from NLP to exploit the lexical syntactic features of sentences BIBREF9 , or using AI-solutions and bag-of-words based text-representations BIBREF10 . The latter is known to be less effective for automatic detection, since hatred users apply various obfuscation tricks, such as replacing a single character in offensive words. For instance, applying a binary classifier onto a paragraph2vec representation of words has already been attempted on Amazon data in the past BIBREF11 , but it only performed well on a binary classification problem. Another unsupervised learning based solution is the work by BIBREF12 , in which the authors proposed a set of criteria that a tweet should exhibit in order to be classified as offensive. They also showed that differences in geographic distribution of users have only marginal effect on the detection performance. Despite the above observation, we explore other features that might be possible to improve the detection accuracy in the solution outlined below.",
"The work by BIBREF5 applied a crowd-sourced solution to tackle hate-speech, with the creation of an additional dataset of annotations to extend the existing corpus. The impact of the experience of annotators in the classification performance was investigated. The work by BIBREF13 dealt with the classification problem of tweets, but their interest was on sexism alone, which they distinguished into `Hostile', `Benevolent' or `Other'. While the authors used the dataset of tweets from BIBREF12 , they treated the existing `Sexism' tweets as being of class `Hostile', while they collected their own tweets for the `Benevolent' class, on which they finally applied the FastText by BIBREF14 , and SVM classification.",
" BIBREF15 approached the issue with a supervised learning model that is based on a neural network. Their method achieved higher score over the same dataset of tweets than any unsupervised learning solution known so far. That solution uses an LSTM model, with features extracted by character n-grams, and assisted by Gradient Boosted Decision Trees. Convolution Neural Networks (CNN) has also been explored as a potential solution in the hate-speech problem in tweets, with character n-grams and word2vec pre-trained vectors being the main tools. For example, BIBREF16 transformed the classification into a 2-step problem, where abusive text first is distinguished from the non-abusive, and then the class of abuse (Sexism or Racism) is determined. BIBREF17 employed pre-trained CNN vectors in an effort to predict four classes. They achieved slightly higher F-score than character n-grams.",
"In spite of the high popularity of NLP approaches in hate-speech classification BIBREF3 , we believe there is still a high potential for deep learning models to further contribute to the issue. At this point it is also relevant to note the inherent difficulty of the challenge itself, which can be clearly noted by the fact that no solution thus far has been able to obtain an F-score above 0.93."
],
[
"The power of neural networks comes from their ability to find data representations that are useful for classification. Recurrent Neural Networks (RNN) are a special type of neural network, which can be thought of as the addition of loops to the architecture. RNNs use back propagation in the training process to update the network weights in every layer. In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble. More specifically we introduce:"
],
[
"We first elaborate into the details of the features derived to describe each user's tendency towards each class (Neutral, Racism or Sexism), as captured in their tweeting history. In total, we define the three features INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , representing a user's tendency towards posting Neutral, Racist and Sexist content, respectively. We let INLINEFORM3 denote the set of tweets by user INLINEFORM4 , and use INLINEFORM5 , INLINEFORM6 and INLINEFORM7 to denote the subsets of those tweets that have been labeled as Neutral, Racist and Sexist respectively. Now, the features are calculated as INLINEFORM8 , INLINEFORM9 ,and INLINEFORM10 .",
"Furthermore, we choose to model the input tweets in the form of vectors using word-based frequency vectorization. That is, the words in the corpus are indexed based on their frequency of appearance in the corpus, and the index value of each word in a tweet is used as one of the vector elements to describe that tweet. We note that this modelling choice provides us with a big advantage, because the model is independent of the language used for posting the message."
],
[
"To improve classification ability we employ an ensemble of LSTM-based classifiers.",
"In total the scheme comprises a number of classifiers (3 or 5), each receiving the vectorized tweets together with behavioural features (see Section SECREF5 ) as input.",
"The choice of various characteristics was done with the purpose to train the neural network with any data associations existing between the attributes for each tweet and the class label given to that tweet. In each case, the characteristic feature is attached to the already computed vectorized content for a tweet, thereby providing an input vector for one LSTM classifier. A high level view of the architecture is shown in Figure FIGREF7 , with the multiple classifiers. The ensemble has two mechanisms for aggregating the classifications from the base classifiers; namely Voting and Confidence. The preferred method is majority voting, which is employed whenever at least two of the base classifiers agrees wrt. classification of a given tweet. When all classifiers disagree, the classifier with strongest confidence in its prediction is given preference. The conflict resolution logic is implemented in the Combined Decision component.",
"Ensemble classifier [1] INLINEFORM0 INLINEFORM1 classifiers INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 decision INLINEFORM8 decision INLINEFORM9 decision for INLINEFORM10 ",
"We present the above process in Algorithm SECREF6 . Here mode denotes a function that provides the dominant value within the inputs classes INLINEFORM0 and returns NIL if there is a tie, while classifier is a function that returns the classification output in the form of a tuple (Neutral, Racism, Sexism)."
],
[
"Before training the neural network with the labeled tweets, it is necessary to apply the proper tokenization to every tweet. In this way, the text corpus is split into word elements, taking white spaces and the various punctuation symbols used in the language into account. This was done using the Moses package for machine translation.",
"We choose to limit the maximum size of each tweet to be considered during training to 30 words, and padded tweets of shorter size with zeros. Next, tweets are converted into vectors using word-based frequency, as described in Section SECREF5 . To feed the various classifiers in our evaluation, we attach the feature values onto every tweet vector.",
"In this work we experimented with various combinations of attached features INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 that express the user tendency. The details of each experiment, including the resulting size of each embedding can be found in Table TABREF10 , with the latter denoted `input dimension' in the table."
],
[
"In our evaluation of the proposed scheme, each classifier is implemented as a deep learning model having four layers, as illustrated in Figure FIGREF16 , and is described as follows:",
"The Input (a.k.a Embedding) Layer. The input layer's size is defined by the number of inputs for that classifier. This number equals the size to the word vector plus the number of additional features. The word vector dimension was set to 30 so that to be able to encode every word in the vocabulary used.",
"The hidden layer. The sigmoid activation was selected for the the hidden LSTM layer. Based on preliminary experiments the dimensionality of the output space for this layer was set to 200. This layer is fully connected to both the Input and the subsequent layer.",
"The dense layer. The output of the LSTM was run through an additional layer to improve the learning and obtain more stable output. The ReLU activation function was used. Its size was selected equal to the size of the input layer.",
"The output layer. This layer has 3 neurons to provide output in the form of probabilities for each of the three classes Neutral, Racism, and Sexism. The softmax activation function was used for this layer.",
"In total we experimented with 11 different setups of the proposed scheme, each with a different ensemble of classifiers, see Table TABREF17 ."
],
[
"We experimented with a dataset of approximately 16k short messages from Twitter, that was made available by BIBREF12 . The dataset contains 1943 tweets labeled as Racism, 3166 tweets labeled as Sexism and 10889 tweets labeled as Neutral (i.e., tweets that neither contain sexism nor racism). There is also a number of dual labeled tweets in the dataset. More particularly, we found 42 tweets labeled both as both `Neutral' and `Sexism', while six tweets were labelled as both `Racism' and `Neutral'. According to the dataset providers, the labeling was performed manually.",
"The relatively small number of tweets in the dataset makes the task more challenging. As reported by several authors already, the dataset is imbalanced, with a majority of neutral tweets. Additionally, we used the public Twitter API to retrieve additional data associated with the user identity for each tweet in the original dataset."
],
[
"To produce results in a setup comparable with the current state of the art BIBREF15 , we performed 10-fold cross validation and calculated the Precision,Recall and F-Score for every evaluated scheme. We randomly split each training fold into 15% validation and 85% training, while performance is evaluated over the remaining fold of unseen data. The model was implemented using Keras. We used categorical cross-entropy as learning objective, and selected the ADAM optimization algorithm BIBREF18 . Furthermore, the vocabulary size was set to 25000, and the batch-size during training was set to 500.",
"To avoid over-fitting, the model training was allowed to run for a maximum number of 100 epochs, out of which the optimally trained state was chosen for the model evaluation. An optimal epoch was identified so, such that the validation accuracy was maximized, while at the same time the error remained within INLINEFORM0 of the lowest ever figure within the current fold. Throughout the experiment we observed that the optimal epochs typically occurred after between the 30 and 40 epochs.",
"To achieve stability in the results produced, we ran every single classifier for 15 times and the output values were aggregated. In addition, the output from each single classifier run was combined with the output from another two single classifiers to build the input of an ensemble, producing INLINEFORM0 combinations. For the case of the ensemble that incorporates all five classifiers we restricted to using the input by only the first five runs of the single classifiers ( INLINEFORM1 combinations). That was due to the prohibitively very large number of combinations that were required."
],
[
"We now present the most interesting results from our experiments. For the evaluation we used standard metrics for classification accuracy, suitable for studying problems such as sentiment analysis. In particular we used Precision and Recall, with the former calculated as the ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class, while the latter measures the ratio of messages correctly classified to a given class over the number of messages from that class. Additionally, the F-score is the harmonic mean of precision and recall, expressed as INLINEFORM0 . For our particular case with three classes, P, R and F are computed for each class separately, with the final F value derived as the weighted mean of the separate INLINEFORM1 -scores: INLINEFORM2 ; recall that INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field. Note that the performance numbers P,R and F of the other state of the art approaches are based on the authors' reported data in the cited works. Additionally, we report the performance of each individual LSTM classifier as if used alone over the same data (that is, without the ensemble logic). The F-score for our proposed approaches shown in the last column, is the weighted average value over the 3 classes (Neutral,Sexism,Racism). Moreover, all the reported values are average values produced for a number of runs of the same tested scheme over the same data. Figure FIGREF23 shows the F-Score as a function of the number of training samples for each ensemble of classifiers. We clearly see that the models converge. For the final run the F-score has standard deviation value not larger than 0.001, for all classifiers.",
"As can be seen in Table TABREF24 , the work by BIBREF12 , in which character n-grams and gender information were used as features, obtained the quite low F-score of INLINEFORM0 . Later work by the same author BIBREF5 investigated the impact of the experience of the annotator in the performance, but still obtaining a lower F-score than ours. Furthermore, while the first part of the two step classification BIBREF16 performs quite well (reported an F-score of 0.9520), it falls short in detecting the particular class the abusive text belongs to. Finally, we observe that applying a simple LSTM classification with no use of additional features (denoted `single classifier (i)' in Table TABREF24 ), achieves an F-score that is below 0.93, something that is in line with other researchers in the field, see BIBREF15 .",
"Very interestingly, the incorporation of features related to user's behaviour into the classification has provided a significant increase in the performance vs. using the textual content alone, INLINEFORM0 vs. INLINEFORM1 .",
"Another interesting finding is the observed performance improvement by using an ensemble instead of a single classifier; some ensembles outperform the best single classifier. Furthermore, the NRS classifier, which produces the best score in relation to other single classifiers, is the one included in the best performing ensemble.",
"In comparison to the approach by BIBREF13 , which focuses on various classes of Sexism, the results show that our deep learning model is doing better as far as detecting Sexism in general, outperforming the FastText algorithm they include in their experiments (F=0.87). The inferiority of FastText over LSTM is also reported in the work by BIBREF15 , as well as being inferior over CNN in, BIBREF16 . In general, through our ensemble schemes is confirmed that deep learning can outperform any NLP-based approaches known so far in the task of abusive language detection.",
"We also present the performance of each of the tested models per class label in Table TABREF25 . Results by other researchers have not been included, as these figures are not reported in the existing literature. As can be seen, sexism is quite easy to classify in hate-speech, while racism seems to be harder; similar results were reported by BIBREF7 . This result is consistent across all ensembles.",
"For completion, the confusion matrices of the best performing approach that employs 3 classifiers (ensemble viii) as well as of the ensemble of the 5 classifiers (xi), are provided in Table TABREF26 . The presented values is the sum over multiple runs.",
"The code and results associated with this paper will be available on-line soon at: https://github.com/gpitsilis/hate-speech/"
],
[
"In this work we present an ensemble classifier that is detecting hate-speech in short text, such as tweets. The input to the base-classifiers consists of not only the standard word uni-grams, but also a set of features describing each user's historical tendency to post abusive messages. Our main innovations are: i) a deep learning architecture that uses word frequency vectorisation for implementing the above features, ii) an experimental evaluation of the above model on a public dataset of labeled tweets, iii) an open-sourced implementation built on top of Keras.",
"The results show that our approach outperforms the current state of the art, and to the best of our knowledge, no other model has achieved better performance in classifying short messages. The approach does not rely on pre-trained vectors, which provides a serious advantage when dealing with short messages of this kind. More specifically, users will often prefer to obfuscate their offensive terms using shorter slang words or create new words by `inventive' spelling and word concatenation. For instance, the word `Islamolunatic' is not available in the popular pre-trained word embeddings (Word2Vec or GloVe), even though it appears with a rather high frequency in racist postings. Hence, word frequency vectorization is preferable to the pre-trained word embeddings used in prior works if one aims to build a language-agnostic solution.",
"We believe that deep learning models have a high potential wrt. classifying text or analyzing the sentiment in general. In our opinion there is still space for further improving the classification algorithms.",
"For future work we plan to investigate other sources of information that can be utilized to detect hateful messages. In addition, we intend to generalize the output received in the current experiment, with evaluation over other datasets, including analyzing texts written in different languages."
]
],
"section_name": [
"Introduction",
"Problem Statement",
"Related Work",
"Description of our Recurrent Neural Network-based Approach",
"Features",
"Classification",
"Data Preprocessing",
"Deep learning model",
"Dataset",
"Experimental Setting",
"Results",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"24fa3a644dceb7bfcd148f4fca5330dd2ae54ead",
"43b263e24c00aad9036ea4d7fa9687703b99ede1",
"70d0c37a81998759bd353a03a45eea1bc0803255"
],
"answer": [
{
"evidence": [
"In this work we experimented with various combinations of attached features INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 that express the user tendency. The details of each experiment, including the resulting size of each embedding can be found in Table TABREF10 , with the latter denoted `input dimension' in the table.",
"FLOAT SELECTED: Table 1: Combined features in proposed schemes",
"To improve classification ability we employ an ensemble of LSTM-based classifiers."
],
"extractive_spans": [],
"free_form_answer": "LSTM classifier with no additional features, Neutral & Sexism, Neutral & Racism, Racism & Sexism and Neutral, Racism & Sexism.",
"highlighted_evidence": [
"The details of each experiment, including the resulting size of each embedding can be found in Table TABREF10 , with the latter denoted `input dimension' in the table.",
"FLOAT SELECTED: Table 1: Combined features in proposed schemes",
"To improve classification ability we employ an ensemble of LSTM-based classifiers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The power of neural networks comes from their ability to find data representations that are useful for classification. Recurrent Neural Networks (RNN) are a special type of neural network, which can be thought of as the addition of loops to the architecture. RNNs use back propagation in the training process to update the network weights in every layer. In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble. More specifically we introduce:",
"FLOAT SELECTED: Table 1: Combined features in proposed schemes"
],
"extractive_spans": [],
"free_form_answer": "experiment with combining various LSTM models enhanced with a number of novel features (O No additional features, NS Neutral & Sexism, NR Neutral & Racism, RS Racism & Sexism, NRS Neutral, Racism & Sexism) in an ensemble.",
"highlighted_evidence": [
"In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble. ",
"FLOAT SELECTED: Table 1: Combined features in proposed schemes"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The power of neural networks comes from their ability to find data representations that are useful for classification. Recurrent Neural Networks (RNN) are a special type of neural network, which can be thought of as the addition of loops to the architecture. RNNs use back propagation in the training process to update the network weights in every layer. In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble. More specifically we introduce:"
],
"extractive_spans": [
"Long Short-Term Memory Network (LSTM)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2add69398a451dfac9d3524d5e687901fec6ac95",
"44ee6b8be48b3c13b3bc9b54b59a98b76ce7cebb",
"cd6d7f62070b17f7bd2d918bdf0f414916dffce3"
],
"answer": [
{
"evidence": [
"We now present the most interesting results from our experiments. For the evaluation we used standard metrics for classification accuracy, suitable for studying problems such as sentiment analysis. In particular we used Precision and Recall, with the former calculated as the ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class, while the latter measures the ratio of messages correctly classified to a given class over the number of messages from that class. Additionally, the F-score is the harmonic mean of precision and recall, expressed as INLINEFORM0 . For our particular case with three classes, P, R and F are computed for each class separately, with the final F value derived as the weighted mean of the separate INLINEFORM1 -scores: INLINEFORM2 ; recall that INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field. Note that the performance numbers P,R and F of the other state of the art approaches are based on the authors' reported data in the cited works. Additionally, we report the performance of each individual LSTM classifier as if used alone over the same data (that is, without the ensemble logic). The F-score for our proposed approaches shown in the last column, is the weighted average value over the 3 classes (Neutral,Sexism,Racism). Moreover, all the reported values are average values produced for a number of runs of the same tested scheme over the same data. Figure FIGREF23 shows the F-Score as a function of the number of training samples for each ensemble of classifiers. We clearly see that the models converge. For the final run the F-score has standard deviation value not larger than 0.001, for all classifiers.",
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"extractive_spans": [],
"free_form_answer": "Best authors' system achieved 0.9320 F1 score.",
"highlighted_evidence": [
"The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field.",
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We now present the most interesting results from our experiments. For the evaluation we used standard metrics for classification accuracy, suitable for studying problems such as sentiment analysis. In particular we used Precision and Recall, with the former calculated as the ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class, while the latter measures the ratio of messages correctly classified to a given class over the number of messages from that class. Additionally, the F-score is the harmonic mean of precision and recall, expressed as INLINEFORM0 . For our particular case with three classes, P, R and F are computed for each class separately, with the final F value derived as the weighted mean of the separate INLINEFORM1 -scores: INLINEFORM2 ; recall that INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field. Note that the performance numbers P,R and F of the other state of the art approaches are based on the authors' reported data in the cited works. Additionally, we report the performance of each individual LSTM classifier as if used alone over the same data (that is, without the ensemble logic). The F-score for our proposed approaches shown in the last column, is the weighted average value over the 3 classes (Neutral,Sexism,Racism). Moreover, all the reported values are average values produced for a number of runs of the same tested scheme over the same data. Figure FIGREF23 shows the F-Score as a function of the number of training samples for each ensemble of classifiers. We clearly see that the models converge. For the final run the F-score has standard deviation value not larger than 0.001, for all classifiers.",
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"extractive_spans": [],
"free_form_answer": "The best model achieved a 0.9320 F-score",
"highlighted_evidence": [
"The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field. ",
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"extractive_spans": [],
"free_form_answer": "The best performing single classifier produces F1 0.9265. The best ensemble classifier (O+NS+RS+NR+NRS) produce F1 0.9320.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Evaluation Results"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4e4dc99b1625a7213dab3722af9d3c42aa877fa6"
],
"answer": [
{
"evidence": [
"As can be seen in Table TABREF24 , the work by BIBREF12 , in which character n-grams and gender information were used as features, obtained the quite low F-score of INLINEFORM0 . Later work by the same author BIBREF5 investigated the impact of the experience of the annotator in the performance, but still obtaining a lower F-score than ours. Furthermore, while the first part of the two step classification BIBREF16 performs quite well (reported an F-score of 0.9520), it falls short in detecting the particular class the abusive text belongs to. Finally, we observe that applying a simple LSTM classification with no use of additional features (denoted `single classifier (i)' in Table TABREF24 ), achieves an F-score that is below 0.93, something that is in line with other researchers in the field, see BIBREF15 .",
"In comparison to the approach by BIBREF13 , which focuses on various classes of Sexism, the results show that our deep learning model is doing better as far as detecting Sexism in general, outperforming the FastText algorithm they include in their experiments (F=0.87). The inferiority of FastText over LSTM is also reported in the work by BIBREF15 , as well as being inferior over CNN in, BIBREF16 . In general, through our ensemble schemes is confirmed that deep learning can outperform any NLP-based approaches known so far in the task of abusive language detection."
],
"extractive_spans": [
"BIBREF12 , in which character n-grams and gender information were used as features",
"BIBREF5 investigated the impact of the experience of the annotator in the performance",
"two step classification BIBREF16",
"BIBREF13 , which focuses on various classes of Sexism",
"CNN in, BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"As can be seen in Table TABREF24 , the work by BIBREF12 , in which character n-grams and gender information were used as features, obtained the quite low F-score of INLINEFORM0 . Later work by the same author BIBREF5 investigated the impact of the experience of the annotator in the performance, but still obtaining a lower F-score than ours. Furthermore, while the first part of the two step classification BIBREF16 performs quite well (reported an F-score of 0.9520), it falls short in detecting the particular class the abusive text belongs to.",
"In comparison to the approach by BIBREF13 , which focuses on various classes of Sexism, the results show that our deep learning model is doing better as far as detecting Sexism in general, outperforming the FastText algorithm they include in their experiments (F=0.87). The inferiority of FastText over LSTM is also reported in the work by BIBREF15 , as well as being inferior over CNN in, BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what rnn classifiers were used?",
"what results did their system obtain?",
"what are the existing approaches?"
],
"question_id": [
"1c8958ec50976a9b1088c51e8f73a767fb3973fa",
"363d0cb0cd5c9a0b0364d61d95f2eff7347d5a36",
"cf0b7d8a2449d04078f69ec9717a547adfb67d17"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: High level view of the system with multiple classifiers",
"Table 1: Combined features in proposed schemes",
"Figure 2: Our deep learning model",
"Table 2: Evaluated ensemble schemes",
"Figure 3: Aggregated value for F-score vs the number of experiment runs",
"Table 3: Evaluation Results",
"Table 4: Detailed Results for every Class Label",
"Table 5: Confusion Matrices of Results for the best performing approaches with 3 and 5 classifiers."
],
"file": [
"6-Figure1-1.png",
"8-Table1-1.png",
"9-Figure2-1.png",
"9-Table2-1.png",
"11-Figure3-1.png",
"12-Table3-1.png",
"14-Table4-1.png",
"14-Table5-1.png"
]
} | [
"what rnn classifiers were used?",
"what results did their system obtain?"
] | [
[
"1801.04433-8-Table1-1.png",
"1801.04433-Classification-0",
"1801.04433-Description of our Recurrent Neural Network-based Approach-0",
"1801.04433-Data Preprocessing-2"
],
[
"1801.04433-Results-0",
"1801.04433-12-Table3-1.png"
]
] | [
"experiment with combining various LSTM models enhanced with a number of novel features (O No additional features, NS Neutral & Sexism, NR Neutral & Racism, RS Racism & Sexism, NRS Neutral, Racism & Sexism) in an ensemble.",
"The best performing single classifier produces F1 0.9265. The best ensemble classifier (O+NS+RS+NR+NRS) produce F1 0.9320."
] | 189 |
2003.03131 | Morfessor EM+Prune: Improved Subword Segmentation with Expectation Maximization and Pruning | Data-driven segmentation of words into subword units has been used in various natural language processing applications such as automatic speech recognition and statistical machine translation for almost 20 years. Recently it has become more widely adopted, as models based on deep neural networks often benefit from subword units even for morphologically simpler languages. In this paper, we discuss and compare training algorithms for a unigram subword model, based on the Expectation Maximization algorithm and lexicon pruning. Using English, Finnish, North Sami, and Turkish data sets, we show that this approach is able to find better solutions to the optimization problem defined by the Morfessor Baseline model than its original recursive training algorithm. The improved optimization also leads to higher morphological segmentation accuracy when compared to a linguistic gold standard. We publish implementations of the new algorithms in the widely-used Morfessor software package. | {
"paragraphs": [
[
"Subword segmentation has become a standard preprocessing step in many neural approaches to natural language processing (NLP) tasks, e.g Neural Machine Translation (NMT) BIBREF0 and Automatic Speech Recognition (ASR) BIBREF1. Word level modeling suffers from sparse statistics, issues with Out-of-Vocabulary (OOV) words, and heavy computational cost due to a large vocabulary. Word level modeling is particularly unsuitable for morphologically rich languages, but subwords are commonly used for other languages as well. Subword segmentation is best suited for languages with agglutinative morphology.",
"While rule-based morphological segmentation systems can achieve high quality, the large amount of human effort needed makes the approach problematic, particularly for low-resource languages. The systems are language dependent, necessitating use of multiple tools in multilingual setups. As a fast, cheap and effective alternative, data-driven segmentation can be learned in a completely unsupervised manner from raw corpora. Unsupervised morphological segmentation saw much research interest until the early 2010's; for a survey on the methods, see hammarstrom2011unsupervised. Semi-supervised segmentation with already small amounts of annotated training data was found to improve the accuracy significantly when compared to a linguistic segmentation; see ruokolainen2016comparative for a survey. While this line of research has been continued in supervised and more grammatically oriented tasks BIBREF2, the more recent work on unsupervised segmentation is less focused on approximating a linguistically motivated segmentation. Instead, the aim has been to tune subword segmentations for particular applications. For example, the simple substitution dictionary based Byte Pair Encoding segmentation algorithm BIBREF3, first proposed for NMT by sennrich2015neural, has become a standard in the field. Especially in the case of multilingual models, training a single language-independent subword segmentation method is preferable to linguistic segmentation BIBREF4.",
"In this study, we compare three existing and one novel subword segmentation method, all sharing the use of a unigram language model in a generative modeling framework. The previously published methods are Morfessor Baseline BIBREF5, Greedy Unigram Likelihood BIBREF6, and SentencePiece BIBREF7. The new Morfessor variant proposed in this work is called Morfessor EM+Prune.",
"The contributions of this article are",
"a better training algorithm for Morfessor Baseline, with reduction of search error during training, and improved segmentation quality for English, Finnish and Turkish;",
"comparing four similar segmentation methods, including a close look at the SentencePiece reference implementation, highlighting details omitted from the original article BIBREF7;",
"and showing that the proposed Morfessor EM+Prune with particular hyper-parameters yields SentencePiece."
],
[
"Morphological surface segmentation is the task of splitting words into morphs, the surface forms of meaning-bearing sub-word units, morphemes. The concatenation of the morphs is the word, e.g.",
"Probabilistic generative methods for morphological segmentation model the probability $()$ of generating a sequence of morphs (a word, sentence or corpus) $= [_{0}, \\ldots , _{N}]$, as opposed to discriminative methods that model the conditional probability of the segmentation boundaries given the unsegmented data.",
"This study focuses on segmentation methods applying a unigram language model. In the unigram language model, an assumption is made that the morphs in a word occur independently of each other. Alternatively stated, it is a zero-order (memoryless) Markov model, generalized so that one observation can cover multiple characters. The probability of a sequence of morphs decomposes into the product of the probabilities of the morphs of which it consists.",
"The Expectation Maximization (EM) algorithm BIBREF8 is an iterative algorithm for finding Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimates for parameters in models with latent variables. The EM algorithm consists of two steps. In the E-step (SECREF5), the expected value of the complete data likelihood including the latent variable is taken, and in the M-step (SECREF5), the parameters are updated to maximize the expected value of the E-step: Q(, (i-1)) = y (, y ) (y , (i-1)) dy",
"i = Q(, (i-1)) .",
"When applied to a (hidden) Markov model, EM is called the forward-backward algorithm. Using instead the related Viterbi algorithm BIBREF9 is sometimes referred to as hard-EM. spitkovsky2011lateen present lateen-EM, a hybrid variant in which EM and Viterbi optimization are alternated.",
"[Section 6.4.1.3]virpioja2012learning discusses the challenges of applying EM to learning of generative morphology. Jointly optimizing both the morph lexicon and the parameters for the morphs is intractable. If, like in Morfessor Baseline, the cost function is discontinuous when morphs are added or removed from the lexicon, there is no closed form solution to the M-step. With ML estimates for morph probabilities, EM can neither add nor remove morphs from the lexicon, because it can neither change a zero probability to nonzero nor vice versa.",
"One solution to this challenge is to apply local search. Starting from the current best estimate for the parameters, small search steps are tried to explore near-lying parameter configurations. The choice that yields the lowest cost is selected as the new parameters. Greedy local search often gets stuck in local minima. Even if there are parameters yielding a better cost, the search may not find them, causing search error. The error remaining at the parameters with globally optimal cost is the model error.",
"Another solution is to combine EM with pruning (EM+Prune). The methods based on pruning begin with a seed lexicon, which is then iteratively pruned until a stopping condition is reached. Subwords cannot be added to the lexicon after initialization. As a consequence, proper initialization is important, and the methods should not prune too aggressively without reestimating parameters, as pruning decisions cannot be backtracked. For this reason, EM+Prune methods proceed iteratively, only pruning subwords up to a predefined iteration pruning quota, e.g. removing at most 20% of the remaining lexicon at a time."
],
[
"In this section we review three previously published segmentation methods that apply a unigram language model. Table summarizes the differences between these methods."
],
[
"Morfessor is a family of generative models for unsupervised morphology induction BIBREF10. Here, consider the Morfessor 2.0 implementation BIBREF11 of Morfessor Baseline method BIBREF5.",
"A point estimate for the model parameters $$ is found using MAP estimation with a Minimum Description Length (MDL) BIBREF12 inspired prior that favors lexicons containing fewer, shorter morphs. The MAP estimate yields a two-part cost function, consisting of a prior (the lexicon cost) and likelihood (the corpus cost). The model can be tuned using the hyper-parameter $\\alpha $, which is a weight applied to the likelihood BIBREF13:",
"The $\\alpha $ parameter controls the overall amount of segmentation, with higher values increasing the weight of each emitted morph in the corpus (leading to less segmentation), and lower values giving a relatively larger weight to a small lexicon (more segmentation).",
"The prior can be further divided into two parts: the prior for the morph form properties and the usage properties. The form properties encode the string representation of the morphs, while the usage properties encode their frequencies. Morfessor Baseline applies a non-informative prior for the distribution of the morph frequencies. It is derived using combinatorics from the number of ways that the total token count $\\nu $ can be divided among the $\\mu $ lexicon items:",
"Morfessor Baseline is initialized with a seed lexicon of whole words. The Morfessor Baseline training algorithm is a greedy local search. During training, in addition to storing the model parameters, the current best segmentation for the corpus is stored in a graph structure. The segmentation is iteratively refined, by looping over all the words in the corpus in a random order and resegmenting them. The resegmentation is applied by recursive binary splitting, leading to changes in other words that share intermediary units with the word currently being resegmented. The search converges to a local optimum, and is known to be sensitive to the initialization BIBREF11.",
"In the Morfessor 2.0 implementation, the likelihood weight hyper-parameter $\\alpha $ is set either with a grid search using the best evaluation score on a held-out development set, or by applying an approximate automatic tuning procedure based on a heuristic guess of which direction the $\\alpha $ parameter should be adjusted."
],
[
"varjokallio2013learning presents a subword segmentation method, particularly designed for use in ASR. It applies greedy pruning based on unigram likelihood. The seed lexicon is constructed by enumerating all substrings from a list of common words, up to a specified maximum length. Pruning proceeds in two phases, which the authors call initialization and pruning.",
"In the first phase, a character-level language model is trained. The initial probabilities of the subwords are computed using the language model. The probabilities are refined by EM, followed by hard-EM. During the hard-EM, frequency based pruning of subwords begins.",
"In the second phase, hard-EM is used for parameter estimation. At the end of each iteration, the least frequent subwords are selected as candidates for pruning. For each candidate subword, the change in likelihood when removing the subword is estimated by resegmenting all words in which the subword occurs. After each pruned subword, the parameters of the model are updated. Pruning ends when the goal lexicon size is reached or the change in likelihood no longer exceeds a given threshold."
],
[
"SentencePiece BIBREF14, BIBREF7 is a subword segmentation method aimed for use in any NLP system, particularly NMT. One of its design goals is use in multilingual systems.",
"Although BIBREF7 implies a use of maximum likelihood estimation, the reference implementation uses the implicit Dirichlet Process prior called Bayesian EM BIBREF15. In the M-step, the count normalization is modified to",
"where $\\Psi $ is the digamma function.",
"The seed lexicon is simply the e.g. one million most frequent substrings. SentencePiece uses an EM+Prune training algorithm. Each iteration consists of two sub-iterations of EM, after which the lexicon is pruned. Pruning is based on Viterbi counts (EM+Viterbi-prune). First, subwords that do not occur in the Viterbi segmentation are pre-pruned. The cost function is the estimated change in likelihood when the subword is removed, estimated using the assumption that all probability mass of the removed subword goes to its Viterbi segmentation. Subwords are sorted according to the cost, and a fixed proportion of remaining subwords are pruned each iteration. Single character subwords are never pruned. A predetermined lexicon size is used as the stopping condition."
],
[
"Morfessor EM+Prune uses the unigram language model and priors similar to Morfessor Baseline, but combines them with EM+Prune training."
],
[
"The prior must be slightly modified for use with the EM+Prune algorithm. The prior for the frequency distribution (DISPLAY_FORM10) is derived using combinatorics. When using real-valued expected counts, there are infinite assignments of counts to parameters. Despite not being theoretically motivated, it can still be desirable to compute an approximation of the Baseline frequency distribution prior, in order to use EM+Prune as an improved search to find more optimal parameters for the original cost. To do this, the real valued token count $\\nu $ is rounded to the nearest integer. Alternatively, the prior for the frequency distribution can be omitted, or a new prior with suitable properties could be formulated. We do not propose a completely new prior in this work, instead opting to remain as close as possible to Morfessor Baseline.",
"In Morfessor EM+Prune, morphs are explicitly stored in the lexicon, and morphs are removed from the lexicon only during pruning. This differs from Morfessor Baseline, in which a morph is implicitly considered to be stored in the lexicon if it has non-zero count.",
"The prior for the morph form properties does not need to be modified. During the EM parameter estimation, the prior for the morph form properties is omitted as the morph lexicon remains constant. During pruning, the standard form prior is applicable.",
"Additionally we apply the Bayesian EM implicit Dirichlet Process prior BIBREF15. We experiment with four variations of the prior:",
"the full EM+Prune prior,",
"omitting the Bayesian EM (noexp$\\Psi $),",
"omitting the approximate frequency distribution prior (nofreqdistr),",
"and omitting the prior entirely (noprior)."
],
[
"The seed lexicon consists of the one million most frequent substrings, with two restrictions on which substrings to include: pre-pruning of redundant subwords, and forcesplit. Truncating to the chosen size is performed after pre-pruning, which means that pre-pruning can make space for substrings that would otherwise have been below the threshold.",
"Pre-pruning of redundant subwords is based on occurrence counts. If a string $x$ occurs $n$ times, then any substring of $x$ will occur at least $n$ times. Therefore, if the substring has a count of exactly $n$, we know that it is not needed in any other context except as a part of $x$. Such unproductive substrings are likely to be poor candidate subwords, and can be removed to make space in the seed lexicon for more useful substrings. This pre-pruning is not a neutral optimization, but does affect segmentation results. We check all initial and final substrings for redundancy, but do not pre-prune internal substrings.",
"To achieve forced splitting before or after certain characters, e.g. hyphens, apostrophes and colons, substrings which include a forced split point can be removed from the seed lexicon. As EM+Prune is unable to introduce new subwords, this pre-pruning is sufficient to guarantee the forced splits. While Morfessor 2.0 only implements force splitting certain characters to single-character morphs, i.e. force splitting on both sides, we implement more fine-grained force splitting separately before and after the specified character."
],
[
"We experiment with three variants of the EM+Prune iteration structure:",
"EM,",
"Lateen-EM,",
"EM+Viterbi-prune",
"EM+Viterbi-prune is an intermediary mode between EM and lateen-EM in the context of pruning. The pruning decisions are made based on counts from a single iteration of Viterbi training, but these Viterbi counts are not otherwise used to update the parameters. In effect, this allows for the more aggressive pruning using the Viterbi counts, while retaining the uncertainty of the soft parameters.",
"Each iteration begins with 3 sub-iterations of EM. In the pruning phase of each iteration, the subwords in the current lexicon are sorted in ascending order according to the estimated change in the cost function if the subword is removed from the lexicon. Subwords consisting of a single character are always kept, to retain the ability to represent an open vocabulary without OOV issues. The list is then pruned according to one of three available pruning criteria:",
"($\\alpha $-weighted) MDL pruning,",
"MDL with automatic tuning of $\\alpha $ for lexicon size,",
"lexicon size with omitted prior or pretuned $\\alpha $.",
"In ($\\alpha $-weighted) Minimum Description Length (MDL) pruning, subwords are pruned until the estimated cost starts rising, or until the pruning quota for the iteration is reached, whichever comes first.",
"A subword lexicon of a predetermined size can be used as pruning criterion in two different ways. If the desired $\\alpha $ is known in advance, or if the prior is omitted, subwords are pruned until the desired lexicon size is reached, or until the pruning quota for the iteration is reached, whichever comes first.",
"To reach a subword lexicon of a predetermined size while using the Morfessor prior, the new automatic tuning procedure can be applied. For each subword, the estimated change in prior and likelihood are computed separately. These allow computing the value of $\\alpha $ that would cause the removal of each subword to be cost neutral, i.e. the value that would cause MDL pruning to terminate at that subword. For subwords with the same sign for both the change in prior and likelihood, no such threshold $\\alpha $ can be computed: if the removal decreases both costs the subword will always be removed, and if it increases both costs it will always be kept. Sorting the list of subwords according to the estimated threshold $\\alpha $ including the always kept subwords allows automatically tuning $\\alpha $ so that a subword lexicon of exactly the desired size is retained after MDL pruning. The automatic tuning is repeated before the pruning phase of each iteration, as retraining the parameters affects the estimates."
],
[
"Morfessor EM+Prune can be used in subword regularization BIBREF7, a denoising-based regularization method for neural NLP systems. Alternative segmentations can be sampled from the full data distribution using Forward-filtering backward-sampling algorithm BIBREF16 or approximatively but more efficiently from an $n$-best list."
],
[
"Table contains a comparison between all four methods discussed in this work. To recover SentencePiece, Morfessor EM+Prune should be run with the following settings: The prior should be omitted entirely, leaving only the likelihood",
"As the tuning parameter $\\alpha $ is no longer needed when the prior is omitted, the pruning criterion can be set to a predetermined lexicon size, without automatic tuning of $\\alpha $. Morfessor by default uses type-based training; to use frequency information, count dampening should be turned off. The seed lexicon should be constructed without using forced splitting. The EM+Viterbi-prune training scheme should be used, with Bayesian EM turned on."
],
[
"English, Finnish and Turkish data are from the Morpho Challenge 2010 data set BIBREF17, BIBREF18. The training sets contain ca 878k, 2.9M and 617k word types, respectively. As test sets we use the union of the 10 official test set samples. For North Sámi, we use a list of ca 691k word types extracted from Den samiske tekstbanken corpus (Sametinget, 2004) and the 796 word type test set from version 2 of the data set collected by BIBREF19, BIBREF20.",
"In most experiments we use a grid search with a development set to find a suitable value for $\\alpha $. The exception is experiments using autotuning or lexicon size criterion, and experiments using SentencePiece. We use type-based training (dampening counts to 1) with all Morfessor methods.",
"For English, we force splits before and after hyphens, and before apostrophes, e.g. women's-rights is force split into women 's - rights. For Finnish, we force splits before and after hyphens, and after colons. For North Sámi, we force splits before and after colons. For Turkish, the Morpho Challenge data is preprocessed in a way that makes force splitting ineffectual."
],
[
"The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it was a Morfessor Baseline model. We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.",
"The closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21. The boundary $F_{1}$-score (F-score for short) equals the harmonic mean of precision (the percentage of correctly assigned boundaries with respect to all assigned boundaries) and recall (the percentage of correctly assigned boundaries with respect to the reference boundaries). Precision and recall are calculated using macro-averages over the word types in the test set. In the case that a word has more than one annotated segmentation, we take the one that gives the highest score."
],
[
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. We follow the procedure used by ruokolainen2016comparative. It is based on a categorization of morphs into the categories prefix, stem, and suffix. The category labels are derived from the original morphological analysis labels in the English and Finnish gold standards, and directly correspond to the annotation scheme used in the North Sámi test set.",
"We first divide errors into two kinds, over-segmentation and under-segmentation. Over-segmentation occurs when a boundary is incorrectly assigned within a morph segment. In under-segmentation, the a correct morph boundary is omitted from the generated segmentation. We further divide the errors by the morph category in which the over-segmentation occurs, and the two morph categories surrounding the omitted boundary in under-segmentation."
],
[
"Figure compares the cost components of the Morfessor model across different $\\alpha $ parameters. The lowest costs for the mid-range settings are obtained for the EM+Prune algorithm, but for larger lexicons, the Baseline algorithm copes better. As expected, using forced splits at certain characters increase the costs, and the increase is larger than between the training algorithms. As Turkish preprocessing causes the results to be unaffected by the forced splits, we only report results without them.",
"Tables to show the Morfessor cost of the segmented training data for particular $\\alpha $ values. Again, the proposed Morfessor EM+Prune reaches a lower Morfessor cost than Morfessor Baseline. Using the lateen-EM has only minimal effect to the costs, decreasing the total cost slightly for English and increasing for the other languages. Turkish results include the “keep-redundant” setting discussed below in more detail.",
"Figure shows the Precision–Recall curves for the primary systems, for all four languages. While increasing the Morfessor cost, forced splitting improves BPR. Tables to show test set Boundary Precision, Recall and F$_{1}$-score (BPR) results at the optimal tuning point (selected using a development set) for each model, for English, Finnish, Turkish and North Sámi, respectively. The default Morfessor EM+Prune configuration (“soft” EM, full prior, forcesplit) significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi, for which there is no significant difference between the methods.",
"Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline. This is visible in the shorter lines in Figures and , although the tuning parameter takes values from the same range. In particular, EM+Prune can not easily be tuned to produce very large lexicons.",
"Pre-pruning of redundant substrings gives mixed results. For Turkish, both Morfessor cost and BPR are degraded by the pre-pruning, but for the other three languages the pre-pruning is beneficial or neutral. When tuning $\\alpha $ to very high values (less segmentation), pre-pruning of redundant substrings improves the sensitivity to tuning. The same effect may also be achievable by using a larger seed lexicon. We perform most of our experiments with pre-pruning turned on.",
"To see the effect of pre-pruning on the seed lexicon, we count the number of subwords that are used in the gold standard segmentations, but not included in seed lexicons of various sizes. Taking Finnish as an example, we see 203 subword types missing from a 1 million substring seed lexicon without pre-pruning. Turning on pre-pruning decreases the number of missing types to 120. To reach the same number without using pre-pruning, a much larger seed lexicon of 1.7M substrings must be used.",
"Omitting the frequency distribution appears to have little effect on Morfessor cost and BPR. Turning off Bayesian EM (noexp$\\Psi $) results in a less compact lexicon resulting in higher prior cost, but improves BPR for two languages: English and Turkish.",
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi)."
],
[
"We propose Morfessor EM+Prune, a new training algorithm for Morfessor Baseline. EM+Prune reduces search error during training, resulting in models with lower Morfessor costs. Lower costs also lead to improved accuracy when segmentation output is compared to linguistic morphological segmentation.",
"We compare Morfessor EM+Prune to three previously published segmentation methods applying unigram language models. We find that using the Morfessor prior is beneficial when the reference is linguistic morphological segmentation.",
"In this work we focused on model cost and linguistic segmentation. In future work the performance of Morfessor EM+Prune in applications will be evaluated. Also, a new frequency distribution prior, which is theoretically better motivated or has desirable properties, could be formulated."
],
[
"This study has been supported by the MeMAD project, funded by the European Union's Horizon 2020 research and innovation programme (grant agreement № 780069), and the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 771113) Computer resources within the Aalto University School of Science “Science-IT” project were used."
]
],
"section_name": [
"Introduction",
"Introduction ::: Morphological Segmentation with Unigram Language Models",
"Related Work",
"Related Work ::: Morfessor Baseline",
"Related Work ::: Greedy Unigram Likelihood",
"Related Work ::: SentencePiece",
"Morfessor EM+Prune",
"Morfessor EM+Prune ::: Prior",
"Morfessor EM+Prune ::: Seed Lexicon",
"Morfessor EM+Prune ::: Training Algorithm",
"Morfessor EM+Prune ::: Sampling of Segmentations",
"Morfessor EM+Prune ::: SentencePiece as a Special Case of Morfessor EM+Prune",
"Experimental Setup",
"Experimental Setup ::: Evaluation",
"Experimental Setup ::: Error Analysis",
"Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"2964944465a641f3586e258235e46bbe9887996f",
"3ee8321692c90a487cf4ece31c63d864473a393d",
"7cc7e425269b1bf5e46f83ae2d5dd3d393e12271",
"d6bd61e446ab7d3205132e343fe8234f4e9c9c81"
],
"answer": [
{
"evidence": [
"The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it was a Morfessor Baseline model. We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.",
"The closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21. The boundary $F_{1}$-score (F-score for short) equals the harmonic mean of precision (the percentage of correctly assigned boundaries with respect to all assigned boundaries) and recall (the percentage of correctly assigned boundaries with respect to the reference boundaries). Precision and recall are calculated using macro-averages over the word types in the test set. In the case that a word has more than one annotated segmentation, we take the one that gives the highest score."
],
"extractive_spans": [
"The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it was a Morfessor Baseline model. We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.\n\nThe closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21."
],
"free_form_answer": "",
"highlighted_evidence": [
"The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it was a Morfessor Baseline model. We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.\n\nThe closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Morfessor Baseline is initialized with a seed lexicon of whole words. The Morfessor Baseline training algorithm is a greedy local search. During training, in addition to storing the model parameters, the current best segmentation for the corpus is stored in a graph structure. The segmentation is iteratively refined, by looping over all the words in the corpus in a random order and resegmenting them. The resegmentation is applied by recursive binary splitting, leading to changes in other words that share intermediary units with the word currently being resegmented. The search converges to a local optimum, and is known to be sensitive to the initialization BIBREF11.",
"English, Finnish and Turkish data are from the Morpho Challenge 2010 data set BIBREF17, BIBREF18. The training sets contain ca 878k, 2.9M and 617k word types, respectively. As test sets we use the union of the 10 official test set samples. For North Sámi, we use a list of ca 691k word types extracted from Den samiske tekstbanken corpus (Sametinget, 2004) and the 796 word type test set from version 2 of the data set collected by BIBREF19, BIBREF20.",
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. We follow the procedure used by ruokolainen2016comparative. It is based on a categorization of morphs into the categories prefix, stem, and suffix. The category labels are derived from the original morphological analysis labels in the English and Finnish gold standards, and directly correspond to the annotation scheme used in the North Sámi test set.",
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi).",
"FLOAT SELECTED: Table 2: Morfessor cost results for English. α = 0.9. FS is short for forcesplit, W-sum for weighted sum of prior and likelihood. ↓means that lower values are better. The bolded method is our primary configuration.",
"FLOAT SELECTED: Table 4: Morfessor cost results for Turkish. α = 0.4",
"FLOAT SELECTED: Table 5: Morfessor cost results for North Sámi. α = 1.0",
"FLOAT SELECTED: Table 3: Morfessor cost results for Finnish. α = 0.02.",
"FLOAT SELECTED: Table 10: Error analysis for English (eng, α = 0.9), Finnish (fin, α = 0.02), and North Sámi (sme, α = 1.0). All results without forcesplit. Over-segmentation and under-segmentation errors reduce precision and recall, respectively."
],
"extractive_spans": [
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology."
],
"free_form_answer": "",
"highlighted_evidence": [
"Morfessor Baseline is initialized with a seed lexicon of whole words. The Morfessor Baseline training algorithm is a greedy local search. During training, in addition to storing the model parameters, the current best segmentation for the corpus is stored in a graph structure. The segmentation is iteratively refined, by looping over all the words in the corpus in a random order and resegmenting them. The resegmentation is applied by recursive binary splitting, leading to changes in other words that share intermediary units with the word currently being resegmented. ",
"English, Finnish and Turkish data are from the Morpho Challenge 2010 data set BIBREF17, BIBREF18. The training sets contain ca 878k, 2.9M and 617k word types, respectively. As test sets we use the union of the 10 official test set samples. For North Sámi, we use a list of ca 691k word types extracted from Den samiske tekstbanken corpus (Sametinget, 2004) and the 796 word type test set from version 2 of the data set collected by BIBREF19, BIBREF20.",
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. We follow the procedure used by ruokolainen2016comparative. ",
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi).",
"FLOAT SELECTED: Table 2: Morfessor cost results for English. α = 0.9. FS is short for forcesplit, W-sum for weighted sum of prior and likelihood. ↓means that lower values are better. The bolded method is our primary configuration.",
"FLOAT SELECTED: Table 4: Morfessor cost results for Turkish. α = 0.4",
"FLOAT SELECTED: Table 5: Morfessor cost results for North Sámi. α = 1.0",
"FLOAT SELECTED: Table 3: Morfessor cost results for Finnish. α = 0.02.",
"FLOAT SELECTED: Table 10: Error analysis for English (eng, α = 0.9), Finnish (fin, α = 0.02), and North Sámi (sme, α = 1.0). All results without forcesplit. Over-segmentation and under-segmentation errors reduce precision and recall, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it was a Morfessor Baseline model. We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.",
"The closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21. The boundary $F_{1}$-score (F-score for short) equals the harmonic mean of precision (the percentage of correctly assigned boundaries with respect to all assigned boundaries) and recall (the percentage of correctly assigned boundaries with respect to the reference boundaries). Precision and recall are calculated using macro-averages over the word types in the test set. In the case that a word has more than one annotated segmentation, we take the one that gives the highest score."
],
"extractive_spans": [
"boundary precision",
"boundary recall",
" boundary $F_{1}$-score"
],
"free_form_answer": "",
"highlighted_evidence": [
"We observe both unweighted prior and likelihood, and their $\\alpha $-weighted sum.\n\nThe closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary $F_{1}$-score BIBREF21. The boundary $F_{1}$-score (F-score for short) equals the harmonic mean of precision (the percentage of correctly assigned boundaries with respect to all assigned boundaries) and recall (the percentage of correctly assigned boundaries with respect to the reference boundaries). Precision and recall are calculated using macro-averages over the word types in the test set. In the case that a word has more than one annotated segmentation, we take the one that gives the highest score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure shows the Precision–Recall curves for the primary systems, for all four languages. While increasing the Morfessor cost, forced splitting improves BPR. Tables to show test set Boundary Precision, Recall and F$_{1}$-score (BPR) results at the optimal tuning point (selected using a development set) for each model, for English, Finnish, Turkish and North Sámi, respectively. The default Morfessor EM+Prune configuration (“soft” EM, full prior, forcesplit) significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi, for which there is no significant difference between the methods.",
"Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline. This is visible in the shorter lines in Figures and , although the tuning parameter takes values from the same range. In particular, EM+Prune can not easily be tuned to produce very large lexicons."
],
"extractive_spans": [],
"free_form_answer": "Morfessor EM+Prune configuration significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi. Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline.",
"highlighted_evidence": [
"The default Morfessor EM+Prune configuration (“soft” EM, full prior, forcesplit) significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi, for which there is no significant difference between the methods.\n\nMorfessor EM+Prune is less responsive to tuning than Morfessor Baseline.",
"Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2d78bccfdfb482ac0d72125c1188b114f3525491",
"45aefa904c992d8dd218507d40435d92d57fe911"
],
"answer": [
{
"evidence": [
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi).",
"FLOAT SELECTED: Table 10: Error analysis for English (eng, α = 0.9), Finnish (fin, α = 0.02), and North Sámi (sme, α = 1.0). All results without forcesplit. Over-segmentation and under-segmentation errors reduce precision and recall, respectively."
],
"extractive_spans": [],
"free_form_answer": "Proposed approach is best in:\n- Recall English: +3.47 (70.84 compared to next best 67.37)\n- Precision Finnish: +6.16 (68.18 compared to 62.02)\n- Recall NorthSami: +1.44 (62.84 compared to 61.40)",
"highlighted_evidence": [
"Table contains the error analysis for English, Finnish and North Sámi.",
"FLOAT SELECTED: Table 10: Error analysis for English (eng, α = 0.9), Finnish (fin, α = 0.02), and North Sámi (sme, α = 1.0). All results without forcesplit. Over-segmentation and under-segmentation errors reduce precision and recall, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. We follow the procedure used by ruokolainen2016comparative. It is based on a categorization of morphs into the categories prefix, stem, and suffix. The category labels are derived from the original morphological analysis labels in the English and Finnish gold standards, and directly correspond to the annotation scheme used in the North Sámi test set.",
"We first divide errors into two kinds, over-segmentation and under-segmentation. Over-segmentation occurs when a boundary is incorrectly assigned within a morph segment. In under-segmentation, the a correct morph boundary is omitted from the generated segmentation. We further divide the errors by the morph category in which the over-segmentation occurs, and the two morph categories surrounding the omitted boundary in under-segmentation.",
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi)."
],
"extractive_spans": [],
"free_form_answer": " For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed.",
"highlighted_evidence": [
"We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. ",
"We first divide errors into two kinds, over-segmentation and under-segmentation. Over-segmentation occurs when a boundary is incorrectly assigned within a morph segment. In under-segmentation, the a correct morph boundary is omitted from the generated segmentation. We further divide the errors by the morph category in which the over-segmentation occurs, and the two morph categories surrounding the omitted boundary in under-segmentation.",
"Table contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi).",
"For English and North Sámi, E",
" For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation",
"For Finnish these results are reversed."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ee633dc9ebbb3d67a486fc65864a49c50ffa7578"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is the model evaluated against the original recursive training algorithm?",
"What is the improvement in performance compared to the linguistic gold standard?",
"What is the improvement in performance brought by lexicon pruning on a simple EM algorithm?"
],
"question_id": [
"9186b2c5b7000ab7f15a46a47da73ea45544bace",
"d30b2fb5b29faf05cf5e04d0c587a7310a908d8c",
"526dc757a686a1fe41e77f7e3848e3507940bfc4"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"word segmentation",
"word segmentation",
"word segmentation"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Comparison of subword segmentation methods applying a unigram language model.",
"Figure 1: Unweighted Morfessor cost function components (prior and likelihood). Log scale.",
"Table 2: Morfessor cost results for English. α = 0.9. FS is short for forcesplit, W-sum for weighted sum of prior and likelihood. ↓means that lower values are better. The bolded method is our primary configuration.",
"Table 4: Morfessor cost results for Turkish. α = 0.4",
"Table 5: Morfessor cost results for North Sámi. α = 1.0",
"Table 3: Morfessor cost results for Finnish. α = 0.02.",
"Table 6: Boundary Precision (Pre), Recall (Rec) and F1score (F) results for English. ∼E indicates not significantly different (two-sided Wilcoxon signed-rank test, p < 0.05, zero splitting) from the bolded EM+Prune method, and ∼B from the bolded Baseline.",
"Table 8: Boundary Precision (Pre), Recall (Rec) and F1score (F) results for Turkish.",
"Table 9: Boundary Precision (Pre), Recall (Rec) and F1score (F) results for North Sámi.",
"Table 7: Boundary Precision (Pre), Recall (Rec) and F1score (F) results for Finnish.",
"Figure 2: Boundary Precision–Recall curve at different tuning points, The smallest and largest α-values are labeled.",
"Table 10: Error analysis for English (eng, α = 0.9), Finnish (fin, α = 0.02), and North Sámi (sme, α = 1.0). All results without forcesplit. Over-segmentation and under-segmentation errors reduce precision and recall, respectively."
],
"file": [
"2-Table1-1.png",
"4-Figure1-1.png",
"5-Table2-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Table3-1.png",
"6-Table6-1.png",
"6-Table8-1.png",
"6-Table9-1.png",
"6-Table7-1.png",
"7-Figure2-1.png",
"8-Table10-1.png"
]
} | [
"How is the model evaluated against the original recursive training algorithm?",
"What is the improvement in performance compared to the linguistic gold standard?"
] | [
[
"2003.03131-Experimental Setup ::: Error Analysis-0",
"2003.03131-Experimental Setup ::: Evaluation-1",
"2003.03131-Results-3",
"2003.03131-Experimental Setup ::: Evaluation-0",
"2003.03131-Experimental Setup-0",
"2003.03131-5-Table3-1.png",
"2003.03131-Results-2",
"2003.03131-5-Table5-1.png",
"2003.03131-5-Table4-1.png",
"2003.03131-8-Table10-1.png",
"2003.03131-Related Work ::: Morfessor Baseline-4",
"2003.03131-5-Table2-1.png",
"2003.03131-Results-7"
],
[
"2003.03131-Experimental Setup ::: Error Analysis-0",
"2003.03131-8-Table10-1.png",
"2003.03131-Experimental Setup ::: Error Analysis-1",
"2003.03131-Results-7"
]
] | [
"Morfessor EM+Prune configuration significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi. Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline.",
" For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed."
] | 191 |
2004.03090 | Interview: A Large-Scale Open-Source Corpus of Media Dialog | Existing conversational datasets consist either of written proxies for dialog or small-scale transcriptions of natural speech. We introduce 'Interview': a large-scale (105K conversations) media dialog dataset collected from news interview transcripts. Compared to existing large-scale proxies for conversational data, language models trained on our dataset exhibit better zero-shot out-of-domain performance on existing spoken dialog datasets, demonstrating its usefulness in modeling real-world conversations. 'Interview' contains speaker role annotations for each turn, facilitating the development of engaging, responsive dialog systems. In fact, experiments on two dialog tasks show that leveraging such labels improves performance over strong speaker-agnostic baselines, and enables models to generate more specific and inquisitive responses in interview-style conversations. | {
"paragraphs": [
[
"Large repositories of textual communications (e.g. forum and microblog posts) have gained recent popularity as proxies for dialog BIBREF0, BIBREF1, BIBREF2. However, conversations in these settings differ from natural dialog: turns may be sparsely scattered over a large temporal span, contain distinct syntax and vocabulary BIBREF3, and differ greatly in formality and focus BIBREF4. In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles.",
"We compare the performance of state-of-the-art language models fine-tuned on Interview and other popular conversational datasets, demonstrating that Interview contains more complex dialog and better models the characteristics of natural spoken conversations. Our dataset is an order of magnitude larger than existing high-quality natural dialog datasets and contains speaker role annotations for each turn, facilitating the development of conversational agents and assistive systems for settings involving specific speaker roles, such as doctor-patient interviews or hosted talk shows.",
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance.",
"In summary, we present Interview, the first large-scale open-domain media dialog dataset. We explore two tasks for which it serves as a promising benchmark dataset: speaker role modeling and speaker change detection. We build simple yet strong models to show quantitatively that role labels from Interview improve performance on such tasks. Interview's scale, spoken origins, role diversity, and complex utterances make it a better source for grounded open-domain conversations."
],
[
"Broadly speaking, dialog and conversation datasets can be classified as constrained (goal-oriented) or open-domain, written or spoken, and scripted or spontaneous BIBREF5. In the realm of written dialog, the closest proxy to natural dialog comes in the form of role-play-style BIBREF6 conversations, featuring two agents instructed to participate in a constrained conversation. This setup has seen recent usage to construct goal-oriented BIBREF7, BIBREF8 and grounded conversations BIBREF9, BIBREF10. These datasets are expensive to collect at scale and are heavily constrained/guided by the instructions given to participants. Several initiatives have recorded and manually transcribed natural conversations occurring in the course of normal life, resulting in small, high-quality natural dialog datasets BIBREF11, BIBREF12, BIBREF13, BIBREF14. We explore an alternative venue for collecting a large-scale dataset of natural dialog: conversations and interviews on public radio.",
"The US Defense Advanced Research Projects Agency (DARPA) has undertaken efforts to collect broadcast and informal conversation from public and private sources including messaging boards, SMS BIBREF15, and broadcast newswire content BIBREF16, BIBREF17. However, it proves difficult to adopt these datasets as widely available benchmarks on dialog modeling tasks, as they come with a substantial cost ($100-$1000 per dataset/year, covering up to a hundred hours of transcribed conversation). In this vein, we contribute an open-access large-scale corpus of cleanly annotated broadcast media dialog.",
"BIBREF18 explores the patterns and discourse within media dialog and contrast the associated speaker role dynamics with spontaneous natural conversation. The author manually annotates and investigates 24 hours of Israeli news television programs. We see an opportunity for the investigation of speaker dynamics and significance of speaker roles at scale with our dataset.",
"Dialog modeling of open-domain chit-chat predicts one turn of dialog from one or many context turn(s). Structured approaches for dialog modeling build on hierarchical RNNs BIBREF19, BIBREF20, BIBREF21, with recent work employing a simple concatenation of dialog history in a transformer-based architecture BIBREF22. We draw inspiration from recent works in dialog generation that model speakers via persistent `personas,' whose representations are learned from a set of grounding facts BIBREF23 or other non-conversational metadata BIBREF24. Our approach eschews external grounding and learns speaker embeddings via dialog modeling, similar to BIBREF25. We, however, propose to learn speaker embeddings for different roles and capture role-dependent lexical profiles in conversation."
],
[
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours. These transcripts contain a total of 3M turns comprising 7.5M sentences (127M words) from 184K speakers, of which 287 are hosts. To investigate role-play in media dialog, we curate a subset, Interview 2P, with two roles: a host and a guest, comprising 23K two-party conversations encompassing 455K turns, with 1.24M sentences and 21.7M words.",
"In these two-party conversations, each speaker takes an average of nine turns per dialog. Guests tend to speak longer on their turns, with 1.6x as many sentences spoken and 2x as many words per turn, and also use a more diverse vocabulary (1.6x size). Meanwhile, hosts ask five times as many questions as guests, with 40% of their dialog turns containing questions. When asking questions, hosts and guests use interrogative forms BIBREF26 at the same rate (65%). We note that the host and guest roles have differing discourse patterns, which support the notion of role modeling."
],
[
"To assess how well Interview represents open-domain dialog, we look to two datasets in widespread usage: DailyDialog BIBREF4, 13K short dialogs written to simulate simple conversations from daily life; and CALLHOME BIBREF11, transcriptions from 120 half-hour casual telephone conversations. We measure the language modeling performance of a pre-trained transformer model—117M-parameter GPT2 BIBREF27—both in its original form and versions fine-tuned (FT) on the training splits for Interview, DailyDialog, and CALLHOME. We evaluated the zero-shot performance of these models on the test splits of these datasets, with perplexities shown in tab:datasetcomparison.",
"While models fine-tuned on the training set performed best on each dataset as expected, we observe that 1) models trained on other datasets obtain relatively poor zero-shot performance on Interview; and 2) the model trained on Interview achieved the best out-of-domain performance on DailyDialog and CALLHOME by large margins. This suggests that language models trained on Interview can learn patterns characteristic of natural open-domain dialog in both simple daily conversation and informal long spoken exchanges. We also investigate DialoGPT, a model pre-trained on 147M Reddit threads as a proxy for dialog BIBREF22. Our results show that while Reddit threads can be used to emulate conversation, they may not resemble natural speech; DialoGPT posts by far the worst zero-shot modeling performance across all test datasets ($>$500 perplexity)—inferior to zero-shot GPT2. These experiments confirm that Interview, a dataset of real, complex conversations, is useful for modeling patterns in natural spoken dialog. We show statistics for Interview compared to other dialog datasets in tab:nprstats."
],
[
"We additionally explore two tasks that are facilitated by speaker role annotations in Interview: 1) generating appropriate responses for a specific role given a conversation history (speaker role modeling); and 2) predicting whether a new speaker will interject on the next sentence of a conversation. These tasks are crucial components to building fluent and role-specific dialog systems, for settings such as healthcare and customer service."
],
[
"We generate a response conditioned on the host speaker role, to specifically model how an interview host speaks and inquires, contrary to speaker-agnostic dialog settings BIBREF28, BIBREF29. Individual guests appear sparsely and their utterances heavily rely on external world knowledge. Thus, we model host responses, which are generally aimed towards moderating the conversation via follow-up questions and acknowledgements. Role-specific generation like this can benefit the development of assistive technologies and role-dependent dialog systems.",
"We approach speaker role modeling conditional language modeling task: generating the next response $T_{t, \\textbf {h}}$ for host $\\textbf {h}$ with the highest likelihood given a trace of prior utterances $T_{1\\dots t, g}$ and $T_{1\\dots t-1, \\textbf {h}}$. We use a transformer decoder to generate tokens $T_{1 \\dots t}$ from inputs $T_{0 \\dots t-1}$, but calculate loss only across the target sequence (gold host response). We mimic the input schema for DialoGPT, concatenating all historical turns with separator tokens, appending the host target response."
],
[
"To condition on a speaker role, we prepend each utterance in the dialog history with a role-specific speaker ID. Hosts each have one ID, while guests share a single ID, allowing us to model idiosyncrasies and interviewing patterns for individual hosts:",
"These role-specific speaker IDs are modeled by a speaker embedding layer of the same dimensions as the transformer hidden state, injected into the transformer input layer. We fine-tune GPT2 (Speaker GPT2) and DialoGPT (Speaker DialoGPT) on our dataset with speaker embeddings. We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation.",
"For training and evaluation, we provide our model with up to 512 tokens of non-truncated historical turns. We use an 80-10-10 train/dev/test split with unique conversations in each split.",
"We use GPT2-small (Transformer with 12 layers, 768 hidden size, 12 heads, and 117M parameters) as the base architecture for all of our models. We perform BPE tokenization with the GPT2Tokenizer. We use the RAdam optimizer BIBREF30 with a learning rate of $10^{-6} \\times \\text{batch size} \\times \\text{no. of GPUs}$ to utilize linear scaling in multi-GPU training. Our models are trained to convergence on 8 NVIDIA Tesla V100 GPUs, with a batch size of 5 per GPU. We use teacher-forcing to calculate perplexity for all train/dev/test splits. We avoid modeling salutations and sign-offs (which tend to be formulaic, speaker-independent, and specific to the radio station) by restricting the target turns to those with at least three prior turns and two following turns of conversation, resulting in a target training set of 87K host-only turns and 11K host-only turns for dev and test.",
"We decode the host response via top-$k$ sampling BIBREF27 with $k=5$. Results across all models on the test set are in tab:metrics."
],
[
"Speaker-conditioned models generate utterances closer to gold length than speaker-agnostic baselines, with significantly lower perplexity and higher BLEU scores. This indicates that including speaker information promotes the generation of higher fidelity responses. Our speaker models, especially Speaker GPT2, produce the most inquisitive responses (59.4% question-asking rate).",
"In an interview setting, it is also important for host utterances to be related to the conversation at hand. We evaluate the content similarity between generated responses and the dialog history. We show that our speaker-conditioned models generate responses with the most noun-phrases / topical references. These also overlap the most with topics in the dialog history, indicating topical relatedness. We note that gold responses include more noun phrases with lower historical overlap, possibly due to hosts bringing up new topics."
],
[
"To measure the conditioning effect of speaker role profiles on host response generation, we generate a dialog turn with the gold host profile and a dialog history. We then compute the likelihood of generating that response conditioned on the same context but with the gold and nine randomly sampled hosts. As in BIBREF31, we rank the likelihoods for each host and report the host matching accuracy (HMA)—proportion where the gold host is highest ranked—and Mean Reciprocal Rank (MMR) BIBREF32 of the gold host. Our speaker-conditioned models achieve much higher HMA and MRR compared to strong speaker-agnostic baselines, indicating significant conditioning on host profiles."
],
[
"Our models additionally exhibit several qualitative properties of high-quality and fluent conversation. We present a sample generation in tab:sampleconv (additional samples in the Appendix) that is indicative of broad trends across the test set. None of the models are able to introduce novel information (like Gold), but our speaker-conditioned models produce markedly better inquisitive responses. While GPT2 generates a natural-sounding short question with little relevance to the topic at hand, our Speaker DialoGPT model paraphrases previous turns and refers to existing entities to ask a substantial and coherent question. We further performed a human evaluation on a Likert scale to assess subjective dialog quality, with human raters preferring speaker model responses to speaker-agnostic models 62.5% of the time across 150 pairwise comparisons."
],
[
"We also investigate role change detection as a binary classification task for two-party dialogs. As a single turn of dialog may consist of multiple sentences, we aim to use a series of historical sentences and their speakers to classify whether a role change will occur in the next sentence of dialog. In contrast to previous textual speaker change detection tasks BIBREF33, we do not provide the target sentence for which we are predicting the role change. This setting is more realistic for a real-time assistive dialog system and online prediction in general.",
"We fine-tune BERT BIBREF34 to encode the dialog history, classifying speaker changes with a linear layer over the [CLS] representation. To understand the role of contextual speaker information in this task, we investigate representing the dialog history with and without speaker labels for each turn. This is a difficult task on our dataset, as BERT obtains a 63.2 F1 score without speaker information, struggling to predict role changes substantially better than random. While the task remains difficult, the classifier benefits from the inclusion of speaker labels, learning speaker embeddings and achieving a 66.1 F1 score. We see the potential for further research toward learning speaker representations to predict role changes and infer the structure of dialogs."
],
[
"We contribute a large-scale media dialog dataset that can act as a benchmark for complex open-domain, role-dependent grounded dialog. We present baseline model for role-conditioned dialog generation and show that they benefit from speaker information when added. In future work, we aim to perform temporal analyses of trends and biases within Interview and take advantage of the news setting to investigate external knowledge grounding in long natural conversations. These directions could potentially lead to more coherent free-form and assistive dialog systems."
],
[
"See the following tables for sample dialog histories and generated host responses from each of our baseline and speaker-conditioned dialog models."
]
],
"section_name": [
"Introduction",
"Related Works",
"Interview Dataset",
"Interview Dataset ::: Comparison with Other Datasets",
"Tasks and Experiments",
"Tasks and Experiments ::: Task 1: Role Modeling",
"Tasks and Experiments ::: Task 1: Role Modeling ::: Conditioning on Speakers",
"Tasks and Experiments ::: Task 1: Role Modeling ::: Performance",
"Tasks and Experiments ::: Task 1: Role Modeling ::: Speaker Role Ranking",
"Tasks and Experiments ::: Task 1: Role Modeling ::: Qualitative Analysis",
"Tasks and Experiments ::: Task 2: Role Change Detection",
"Conclusion",
"Generated Examples"
]
} | {
"answers": [
{
"annotation_id": [
"17f73111fab1568a02a08f644e83930a19ddb5dd",
"806054e85932d543c59fe657a4e8fbfcba895f47",
"9676513da67a7b05138f244a4650a2be39a432ac"
],
"answer": [
{
"evidence": [
"These role-specific speaker IDs are modeled by a speaker embedding layer of the same dimensions as the transformer hidden state, injected into the transformer input layer. We fine-tune GPT2 (Speaker GPT2) and DialoGPT (Speaker DialoGPT) on our dataset with speaker embeddings. We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"extractive_spans": [],
"free_form_answer": "Fine tuned DIaloGPT and GPT2 on Interview without speaker information.",
"highlighted_evidence": [
" We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"These role-specific speaker IDs are modeled by a speaker embedding layer of the same dimensions as the transformer hidden state, injected into the transformer input layer. We fine-tune GPT2 (Speaker GPT2) and DialoGPT (Speaker DialoGPT) on our dataset with speaker embeddings. We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"extractive_spans": [
"finetune (FT) DialoGPT and GPT2 on Interview without speaker information"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To assess how well Interview represents open-domain dialog, we look to two datasets in widespread usage: DailyDialog BIBREF4, 13K short dialogs written to simulate simple conversations from daily life; and CALLHOME BIBREF11, transcriptions from 120 half-hour casual telephone conversations. We measure the language modeling performance of a pre-trained transformer model—117M-parameter GPT2 BIBREF27—both in its original form and versions fine-tuned (FT) on the training splits for Interview, DailyDialog, and CALLHOME. We evaluated the zero-shot performance of these models on the test splits of these datasets, with perplexities shown in tab:datasetcomparison.",
"While models fine-tuned on the training set performed best on each dataset as expected, we observe that 1) models trained on other datasets obtain relatively poor zero-shot performance on Interview; and 2) the model trained on Interview achieved the best out-of-domain performance on DailyDialog and CALLHOME by large margins. This suggests that language models trained on Interview can learn patterns characteristic of natural open-domain dialog in both simple daily conversation and informal long spoken exchanges. We also investigate DialoGPT, a model pre-trained on 147M Reddit threads as a proxy for dialog BIBREF22. Our results show that while Reddit threads can be used to emulate conversation, they may not resemble natural speech; DialoGPT posts by far the worst zero-shot modeling performance across all test datasets ($>$500 perplexity)—inferior to zero-shot GPT2. These experiments confirm that Interview, a dataset of real, complex conversations, is useful for modeling patterns in natural spoken dialog. We show statistics for Interview compared to other dialog datasets in tab:nprstats."
],
"extractive_spans": [],
"free_form_answer": "two models (GPT2 and DialoGPT) on two datasets (DailyDialog and CALLHOME)",
"highlighted_evidence": [
"We measure the language modeling performance of a pre-trained transformer model—117M-parameter GPT2 BIBREF27—both in its original form and versions fine-tuned (FT) on the training splits for Interview, DailyDialog, and CALLHOME. ",
"We also investigate DialoGPT, a model pre-trained on 147M Reddit threads as a proxy for dialog BIBREF22. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"907717d5716b6511e152018ab85899b5c54b978a",
"bab2b60692b485be326e38931694b3e2f22a0c23",
"ef7b8dc8efa53c3bf0a948d88a5015b22d2273fb"
],
"answer": [
{
"evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"extractive_spans": [
"role modeling in media dialog ",
"role change detection "
],
"free_form_answer": "",
"highlighted_evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We additionally explore two tasks that are facilitated by speaker role annotations in Interview: 1) generating appropriate responses for a specific role given a conversation history (speaker role modeling); and 2) predicting whether a new speaker will interject on the next sentence of a conversation. These tasks are crucial components to building fluent and role-specific dialog systems, for settings such as healthcare and customer service."
],
"extractive_spans": [
"1) generating appropriate responses for a specific role given a conversation history (speaker role modeling)",
"2) predicting whether a new speaker will interject on the next sentence of a conversation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We additionally explore two tasks that are facilitated by speaker role annotations in Interview: 1) generating appropriate responses for a specific role given a conversation history (speaker role modeling); and 2) predicting whether a new speaker will interject on the next sentence of a conversation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"extractive_spans": [
"role modeling in media dialog and role change detection on Interview"
],
"free_form_answer": "",
"highlighted_evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0447f053231868756893509daefc0457e8bd8f2f",
"2d1c252a0b19f32c3c83890fae68a8dd852b5968",
"fb1c82293313baa87c922f945e6705a040e6a86b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We compare the performance of state-of-the-art language models fine-tuned on Interview and other popular conversational datasets, demonstrating that Interview contains more complex dialog and better models the characteristics of natural spoken conversations. Our dataset is an order of magnitude larger than existing high-quality natural dialog datasets and contains speaker role annotations for each turn, facilitating the development of conversational agents and assistive systems for settings involving specific speaker roles, such as doctor-patient interviews or hosted talk shows."
],
"extractive_spans": [
"annotations for each turn"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our dataset is an order of magnitude larger than existing high-quality natural dialog datasets and contains speaker role annotations for each turn, facilitating the development of conversational agents and assistive systems for settings involving specific speaker roles, such as doctor-patient interviews or hosted talk shows."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"36adcf4ef79be28aa5d81f9b83c40c9cb48a896c",
"a8b000c0ace07638635fcc0b73d4abe2f18713b4",
"f965c6f611294015b92a37314cad0d34c1599ae5"
],
"answer": [
{
"evidence": [
"Large repositories of textual communications (e.g. forum and microblog posts) have gained recent popularity as proxies for dialog BIBREF0, BIBREF1, BIBREF2. However, conversations in these settings differ from natural dialog: turns may be sparsely scattered over a large temporal span, contain distinct syntax and vocabulary BIBREF3, and differ greatly in formality and focus BIBREF4. In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles."
],
"extractive_spans": [
"natural dialog"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours. These transcripts contain a total of 3M turns comprising 7.5M sentences (127M words) from 184K speakers, of which 287 are hosts. To investigate role-play in media dialog, we curate a subset, Interview 2P, with two roles: a host and a guest, comprising 23K two-party conversations encompassing 455K turns, with 1.24M sentences and 21.7M words."
],
"extractive_spans": [
"NPR"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"2c71542ed1a689689268e4db350191ac06aea13e"
],
"answer": [
{
"evidence": [
"While models fine-tuned on the training set performed best on each dataset as expected, we observe that 1) models trained on other datasets obtain relatively poor zero-shot performance on Interview; and 2) the model trained on Interview achieved the best out-of-domain performance on DailyDialog and CALLHOME by large margins. This suggests that language models trained on Interview can learn patterns characteristic of natural open-domain dialog in both simple daily conversation and informal long spoken exchanges. We also investigate DialoGPT, a model pre-trained on 147M Reddit threads as a proxy for dialog BIBREF22. Our results show that while Reddit threads can be used to emulate conversation, they may not resemble natural speech; DialoGPT posts by far the worst zero-shot modeling performance across all test datasets ($>$500 perplexity)—inferior to zero-shot GPT2. These experiments confirm that Interview, a dataset of real, complex conversations, is useful for modeling patterns in natural spoken dialog. We show statistics for Interview compared to other dialog datasets in tab:nprstats."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This suggests that language models trained on Interview can learn patterns characteristic of natural open-domain dialog in both simple daily conversation and informal long spoken exchanges."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"87dad0adf082d002e1d354ba54e2c374bafadb74",
"e5a0d9ce51ffe514b21165a48b9baee40b6f7a0b"
],
"answer": [
{
"evidence": [
"These role-specific speaker IDs are modeled by a speaker embedding layer of the same dimensions as the transformer hidden state, injected into the transformer input layer. We fine-tune GPT2 (Speaker GPT2) and DialoGPT (Speaker DialoGPT) on our dataset with speaker embeddings. We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"extractive_spans": [
"We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"These role-specific speaker IDs are modeled by a speaker embedding layer of the same dimensions as the transformer hidden state, injected into the transformer input layer. We fine-tune GPT2 (Speaker GPT2) and DialoGPT (Speaker DialoGPT) on our dataset with speaker embeddings. We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"extractive_spans": [],
"free_form_answer": "Fine-tuned DialGPT and GPT2 on Interview without speaker information.",
"highlighted_evidence": [
" We also finetune (FT) DialoGPT and GPT2 on Interview without speaker information as strong speaker-agnostic baselines for host response generation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"17be40f133b58a2524d89d3562435fdd7ac10276",
"db9bc934ffd26f034dec722ee19bfc7592dc0baa"
],
"answer": [
{
"evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"extractive_spans": [
" role modeling in media dialog and role change detection on Interview"
],
"free_form_answer": "",
"highlighted_evidence": [
"In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We additionally explore two tasks that are facilitated by speaker role annotations in Interview: 1) generating appropriate responses for a specific role given a conversation history (speaker role modeling); and 2) predicting whether a new speaker will interject on the next sentence of a conversation. These tasks are crucial components to building fluent and role-specific dialog systems, for settings such as healthcare and customer service."
],
"extractive_spans": [
"1) generating appropriate responses for a specific role given a conversation history (speaker role modeling)",
"2) predicting whether a new speaker will interject on the next sentence of a conversation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We additionally explore two tasks that are facilitated by speaker role annotations in Interview: 1) generating appropriate responses for a specific role given a conversation history (speaker role modeling); and 2) predicting whether a new speaker will interject on the next sentence of a conversation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"43e659a8940b0d65c1826b0f507ed63300f611d1",
"cbdc63fb78c34edf9d75aaf1e177ab27d5676916"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6dff642903484365ad480a6fd8279569358b456f",
"db727c0704abc81aedb8bf71b7360f55a5e279d1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1bd93d01b01241685a9bd3786c52a05a4d50ac6d",
"9724789c9264afdc7479b7a702e703abf124a4d6"
],
"answer": [
{
"evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours. These transcripts contain a total of 3M turns comprising 7.5M sentences (127M words) from 184K speakers, of which 287 are hosts. To investigate role-play in media dialog, we curate a subset, Interview 2P, with two roles: a host and a guest, comprising 23K two-party conversations encompassing 455K turns, with 1.24M sentences and 21.7M words."
],
"extractive_spans": [
"7 programs on National Public Radio (NPR) over 20 years"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours. These transcripts contain a total of 3M turns comprising 7.5M sentences (127M words) from 184K speakers, of which 287 are hosts. To investigate role-play in media dialog, we curate a subset, Interview 2P, with two roles: a host and a guest, comprising 23K two-party conversations encompassing 455K turns, with 1.24M sentences and 21.7M words."
],
"extractive_spans": [
" 7 programs on National Public Radio (NPR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect a novel dataset of 105K multi-party interview transcripts for 7 programs on National Public Radio (NPR) over 20 years (1999–2019), total of 10k hours"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which baselines did they compare to?",
"What dialog tasks was it experimented on?",
"How was annotation done?",
"Which news outlets did they focus on?",
"Do the interviews fall under a specific news category? ",
"Which baselines did they compare to?",
"Which dialog tasks did they experiment on?",
"Did they use crowdsourcing for annotations?",
"Were annotations done manually?",
"Which news sources do the transcripts come from?"
],
"question_id": [
"25e6ba07285155266c3154d3e2ca1ae05c2f7f2d",
"d68cc9aaf0466b97354600a5646c3be4512fc096",
"d038e5d2a6f85e68422caaf8b96cb046db6599fa",
"c66e0aa86b59bbf9e6a1dc725fb9785473bfa137",
"369d7bc5351409910c7a5e05c0cbb5abab8e50ec",
"b9d9803ba24127f91ba4d7cff4da11492da20f09",
"7625068cc22a095109580b83eff48616387167c2",
"be0b438952048fe6bb91c61ba48e529d784bdcea",
"a97137318025a6642ed0634f7159255270ba3d4f",
"a24b2269b292fd0ee81d50303d1315383c594382"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparative dialog dataset statistics, including two-party (2P) and full Interview dataset",
"Table 2: Zero-shot BPE perplexity for GPT2-based models. Bold denotes best out-of-domain performance.",
"Table 3: Metrics on generated host responses on test set. NPO = Noun-phrase overlap with dialog history, HMA = Host Matching Accuracy, MRR = Mean Reciprocal Rank.",
"Table 4: Sample generated responses. Bold emphasizes specificity and topicality.",
"Table 5: Sample generated response on Syrian air strikes. Bold emphasizes specificity and topicality. Red denotes factually incorrect or inconsistent segments.",
"Table 6: Sample generated response. Bold emphasizes specificity and topicality.",
"Table 7: Sample generated response on auto spy photography. Bold emphasizes specificity and topicality."
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"9-Table7-1.png"
]
} | [
"Which baselines did they compare to?",
"Which baselines did they compare to?"
] | [
[
"2004.03090-Interview Dataset ::: Comparison with Other Datasets-0",
"2004.03090-Interview Dataset ::: Comparison with Other Datasets-1",
"2004.03090-Tasks and Experiments ::: Task 1: Role Modeling ::: Conditioning on Speakers-1"
],
[
"2004.03090-Tasks and Experiments ::: Task 1: Role Modeling ::: Conditioning on Speakers-1"
]
] | [
"two models (GPT2 and DialoGPT) on two datasets (DailyDialog and CALLHOME)",
"Fine-tuned DialGPT and GPT2 on Interview without speaker information."
] | 193 |
1901.10619 | Twitter Job/Employment Corpus: A Dataset of Job-Related Discourse Built with Humans in the Loop | We present the Twitter Job/Employment Corpus, a collection of tweets annotated by a humans-in-the-loop supervised learning framework that integrates crowdsourcing contributions and expertise on the local community and employment environment. Previous computational studies of job-related phenomena have used corpora collected from workplace social media that are hosted internally by the employers, and so lack independence from latent job-related coercion and the broader context that an open domain, general-purpose medium such as Twitter provides. Our new corpus promises to be a benchmark for the extraction of job-related topics and advanced analysis and modeling, and can potentially benefit a wide range of research communities in the future. | {
"paragraphs": [
[
"Working American adults spend more than one third of their daily time on job-related activities BIBREF0 —more than on anything else. Any attempt to understand a working individual's experiences, state of mind, or motivations must take into account their life at work. In the extreme, job dissatisfaction poses serious health risks and even leads to suicide BIBREF1 , BIBREF2 .",
"Conversely, behavioral and mental problems greatly affect employee's productivity and loyalty. 70% of US workers are disengaged at work BIBREF3 . Each year lost productivity costs between 450 and 550 billion dollars. Disengaged workers are 87% more likely to leave their jobs than their more satisfied counterparts are BIBREF3 . The deaths by suicide among working age people (25-64 years old) costs more than $44 billion annually BIBREF4 . By contrast, behaviors such as helpfulness, kindness and optimism predict greater job satisfaction and positive or pleasurable engagement at work BIBREF5 .",
"A number of computational social scientists have studied organizational behavior, professional attitudes, working mood and affect BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , but in each case: the data they investigated were collected from internal interactive platforms hosted by the workers' employers.",
"These studies are valuable in their own right, but one evident limitation is that each dataset is limited to depicting a particular company and excludes the populations who have no access to such restricted networks (e.g., people who are not employees of that company). Moreover, the workers may be unwilling to express, e.g., negative feelings about work (“I don't wanna go to work today”), unprofessional behavior (“Got drunk as hell last night and still made it to work”), or a desire to work elsewhere (“I want to go work at Disney World so bad”) on platforms controlled by their employers.",
"A major barrier to studying job-related discourse on general-purpose, public social media—one that the previous studies did not face—is the problem of determining which posts are job-related in the first place. There is no authoritative training data available to model this problem. Since the datasets used in previous work were collected in the workplace during worktime, the content is implicitly job-related. By contrast, the subject matter of public social media is much more diverse. People with various life experiences may have different criteria for what constitutes a “job” and describe their jobs differently.",
"For instance, a tweet like “@SOMEONE @SOMEONE shit manager shit players shit everything” contains the job-related signal word “manager,” yet the presence of “players” ultimately suggests this tweet is talking about a sport team. Another example “@SOMEONE anytime for you boss lol” might seem job-related, but “boss” here could also simply refer to “friend” in an informal and acquainted register.",
"Extracting job-related information from Twitter can be valuable to a range of stakeholders. For example, public health specialists, psychologists and psychiatrists could use such first-hand reportage of work experiences to monitor job-related stress at a community level and provide professional support if necessary. Employers might analyze these data and use it to improve how they manage their businesses. It could help employees to maintain better online reputations for potential job recruiters as well. It is also meaningful to compare job-related tweets against non-job-related discourse to observe and understand the linguistic and behavioral similarities and differences between on- and off-hours.",
"Our main contributions are:"
],
[
"Social media accounts for about 20% of the time spent online BIBREF10 . Online communication can embolden people to reveal their cognitive state in a natural, un-self-conscious manner BIBREF11 . Mobile phone platforms help social media to capture personal behaviors whenever and wherever possible BIBREF12 , BIBREF13 . These signals are often temporal, and can reveal how phenomena change over time. Thus, aspects about individuals or groups, such as preferences and perspectives, affective states and experiences, communicative patterns, and socialization behaviors can, to some degree, be analyzed and computationally modeled continuously and unobtrusively BIBREF12 .",
"Twitter has drawn much attention from researchers in various disciplines in large part because of the volume and granularity of publicly available social data associated with massive information. This micro-blogging website, which was launched in 2006, has attracted more than 500 million registered users by 2012, with 340 million tweets posted every day. Twitter supports directional connections (followers and followees) in its social network, and allows for geographic information about where a tweet was posted if a user enables location services. The large volume and desirable features provided by Twitter makes it a well-suited source of data for our task.",
"We focus on a broad discourse and narrative theme that touches most adults worldwide. Measures of volume, content, affect of job-related discourse on social media may help understand the behavioral patterns of working people, predict labor market changes, monitor and control satisfaction/dissatisfaction with respect to their workplaces or colleagues, and help people strive for positive change BIBREF9 . The language differences exposed in social media have been observed and analyzed in relation to location BIBREF14 , gender, age, regional origin, and political orientation BIBREF15 . However, it is probably due to the natural challenges of Twitter messages — conversational style of interactions, lack of traditional spelling rules, and 140-character limit of each message—we barely see similar public Twitter datasets investigating open-domain problems like job/employment in computational linguistic or social science field. Li et al. li2014major proposed a pipelined system to extract a wide variety of major life events, including job, from Twitter. Their key strategy was to build a relatively clean training dataset from large volume of Twitter data with minimum human efforts. Their real world testing demonstrates the capability of their system to identify major life events accurately. The most parallel work that we can leverage here is the method and corpus developed by Liu et al. liu2016understanding, which is an effective supervised learning system to detect job-related tweets from individual and business accounts. To fully utilize the existing resources, we build upon the corpus by Liu et al. liu2016understanding to construct and contribute our more fine-grained corpus of job-related discourse with improvements of the classification methods."
],
[
"Figure FIGREF4 shows the workflow of our humans-in-the-loop framework. It has multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts.",
"Compared to the framework introduced in BIBREF16 , our improvements include: introducing a new rule-based classifier ( INLINEFORM0 ), conducting an additional round of crowdsourcing annotations (R4) to enrich the human labeled data, and training a classification model with enhanced performances ( INLINEFORM1 ) which was ultimately used to label the unseen data."
],
[
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. This particular locality has geographical diversity, covering both urban and rural areas and providing mixed and balanced demographics. We could apply local knowledge into the construction of our final job-related corpus, which has been approved very helpful in the later experiments."
],
[
"In order to identify probable job-related tweets which are talking about paid positions of regular employment while excluding noises (such as students discussing homework or school-related activities, or people complimenting others), we defined a simple term-matching classifier with inclusion and exclusion terms in the first step (see Table TABREF9 ).",
"Classifier INLINEFORM0 consists of two rules: the matched tweet must contain at least one word in the Include lexicon and it cannot contain any word in the Exclude lexicon. Before applying filtering rules, we pre-processed each tweet by (1) converting all words to lower cases; (2) stripping out punctuation and special characters; and (3) normalizing the tweets by mapping out-of-vocabulary phrases (such as abbreviations and acronyms) to standard phrases using a dictionary of more than 5,400 slang terms in the Internet.",
"This filtering yielded over 40,000 matched tweets having at least five words, referred as job-likely."
],
[
"Our conjecture about crowdsourced annotations, based on the experiments and conclusions from BIBREF17 , is that non-expert contributors could produce comparable quality of annotations when evaluating against those gold standard annotations from experts. And it is similarly effective to use the labeled tweets with high inter-annotator agreement among multiple non-expert annotators from crowdsourcing platforms to build robust models as doing so on expert-labeled data.",
"We randomly chose around 2,000 job-likely tweets and split them equally into 50 subsets of 40 tweets each. In each subset, we additionally randomly duplicated five tweets in order to measure the intra-annotator agreement and consistency. We then constructed Amazon Mechanical Turk (AMT) Human Intelligence Tasks (HITs) to collect reference annotations from crowdsourcing workers. We assigned 5 crowdworkers to each HIT—this is an empirical scale for crowdsourced linguistic annotation tasks suggested by previous studies BIBREF18 , BIBREF19 . Crowdsourcing workers were required to live in the United States and had records of approval rating of 90% or better. They were instructed to read each tweet and answer following question “Is this tweet about job or employment?”: their answer Y represents job-related and N represents not job-related. Workers were allowed to work on as many distinct HITs as they liked.",
"We paid each worker $1.00 per HIT and gave extra bonuses to those who completed multiple HITs. We rejected workers who did not provide consistent answers to the duplicate tweets in each HIT. Before publishing the HITs to crowdsourcing workers, we consulted with Turker Nation to ensure that we treat and compensate workers fairly for their requested tasks.",
"Given the sensitive nature of this work, we anonymized all tweets to minimize any inadvertent disclosure of personal information ( INLINEFORM0 names) or cues about an individual’s online identity (URLs) before publishing tweets to crowdsourcing workers. We replaced INLINEFORM1 names with INLINEFORM2 , and recognizable URLs with INLINEFORM3 . No attempt was ever made to contact or interact with any user.",
"This labeling round yielded 1,297 tweets labeled with unanimous agreement among five workers, i.e. five workers gave the same label to one tweet—1,027 of these were labeled job-related, and the rest 270 were not job-related. They composed the first part of our human-annotated dataset, named as Part-1."
],
[
"We relied on the textual representations—a feature space of n-grams (unigrams, bigrams and trigrams)—for training. Due to the noisy nature of Twitter, where users frequently write short, informal spellings and grammars, we pre-processed input data as the following steps: (1) utilized a revised Twokenizer system which was specially trained on Twitter texts BIBREF20 to tokenize raw messages, (2) completed stemming and lemmatization using WordNet Lemmatizer BIBREF21 .",
"Considering the class imbalance situations in the training dataset, we selected the optimal learning parameters by grid-searching on a range of class weights for the positive (job-related) and negative (not job-related) classes, and then chose the estimator that optimized F1 score, using 10-fold cross validation.",
"In Part-1 set, there are 1,027 job-related and 270 not job-related tweets. To construct a balanced training set for INLINEFORM0 , we randomly chose 757 tweets outside the job-likely set (which were classified as negative by INLINEFORM1 ). Admittedly these additional samples do not necessarily represent the true negative tweets (not job-related) as they have not been manually checked. The noise introduced into the framework would be handled by the next round of crowdsourced annotations.",
"We trained our first SVM classification model INLINEFORM0 and then used it to label the remaining data in our data pool."
],
[
"We conducted the second round of labeling on a subset of INLINEFORM0 -predicted data to evaluate the effectiveness of the aforementioned helper INLINEFORM1 and collect more human labeled data to build a class-balanced set (for training more robust models).",
"After separating positive- and negative-labeled (job-related vs. not job-related) tweets, we sorted each class in descending order of their confidence scores. We then spot-checked the tweets to estimate the frequency of job-related tweets as the confidence score changes. We discovered that among the top-ranked tweets in the positive class about half, and near the separating hyperplane (i.e., where the confidence scores are near zero) almost none, are truly job-related.",
"We randomly selected 2,400 tweets from those in the top 80th percentile of confidence scores in positive class (Type-1). The Type-1 tweets are automatically classified as positive, but some of them may not be job-related in the ground truth. Such tweets are the ones which INLINEFORM0 fails though INLINEFORM1 is very confident about it. We also randomly selected about 800 tweets from those tweets having confidence scores closest to zero approaching from the positive side, and another 800 tweets from the negative side (Type-2). These 1,600 tweets have very low confidence scores, representing those INLINEFORM2 cannot clearly distinguish. Thus the automatic prediction results of the Type-2 tweets have a high chance being wrongly predicted. Hence, we considered both the clearer core and at the gray zone periphery of this meaningful phenomenon.",
"Crowdworkers again were asked to annotate this combination of Type-1 and Type-2 tweets in the same fashion as in R1. Table TABREF18 records annotation details.",
"Grouping Type-1 and Type-2 tweets with unanimous labels in R2 (bold columns in Table TABREF18 ), we had our second part of human-labeled dataset (Part-2)."
],
[
"Combining Part-1 and Part-2 data into one training set—4,586 annotated tweets with perfect inter-annotator agreement (1748 job-related tweets and 2838 not job-related), we trained the machine labeler INLINEFORM0 similarly as how we obtained INLINEFORM1 ."
],
[
"Having conducted two rounds of crowdsourced annotations, we noticed that crowdworkers could not reach consensuses on a number of tweets which were not unanimously labeled. This observation intuitively suggests that non-expert annotators inevitably have diverse types of understanding about the job topic because of its subjectivity and ambiguity. Table TABREF21 provides examples (selected from both R1 and R2) of tweets in six possible inter-annotator agreement combinations.",
"Two experts from the local community with prior experience in employment were actively introduced into this phase to review tweets on which crowdworkers disagreed and provided their labels. The tweets with unanimous labels in two rounds of crowdsourced annotations were not re-annotated by experts because unanimous votes are hypothesized to be reliable as experts' labels. Table TABREF22 records the numbers of tweets these two community annotators corrected.",
"We have our third part of human-annotated data (Part-3): tweets reviewed and corrected by the community annotators."
],
[
"Combining Part-3 with all unanimously labeled data from the previous rounds (Part-1 and Part-2) yielded 2,645 gold-standard-labeled job-related and 3,212 not job-related tweets. We trained INLINEFORM0 on this entire training set."
],
[
"These three learned labelers ( INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 ) are capable to annotate unseen tweets automatically. Their performances may vary due to the progressively increasing size of training data.",
"To evaluate the models in different stages uniformly—including the initial rule-based classifier INLINEFORM0 —we adopted a post-hoc evaluation procedure: We sampled 400 distinct tweets that have not been used before from the data pool labeled by INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 respectively (there is no intersection between any two sets of samples). We had these four classifiers to label this combination of 1600-samples test set. We then asked crowdsourcing workers to validate a total of 1,600 unique samples just like our settings in previous rounds of crowdsourced annotations (R1 and R2). We took the majority votes (where at least 3 out of 5 crowdsourcing workers agreed) as reference labels for these testing tweets.",
"Table TABREF25 displays the classification measures of the predicted labels as returned by each model against the reference labels provided by crowdsourcing workers, and shows that INLINEFORM0 outperforms INLINEFORM1 , INLINEFORM2 and INLINEFORM3 ."
],
[
"Even though INLINEFORM0 achieves the highest performance among four, it has scope for improvement. We manually checked the tweets in the test set that were incorrectly classified as not job-related and focused on the language features we ignored in preparation for the model training. After performing some pre-processing on the tweets in false negative and true positive groups from the above testing phase, we ranked and compared their distributions of word frequencies. These two rankings reveal the differences between the two categories (false negative vs. true positive) and help us discover some signal words that were prominent in false negative group but not in true positive—if our trained models are able to recognize these features when forming the separating boundaries, the prediction false negative rates would decrease and the overall performances would further improve.",
"Our fourth classifier INLINEFORM0 is rule-based again and to extract more potential job-related tweets, especially those would have been misclassified by our trained models. The lexicons in INLINEFORM1 include the following signal words: career, hustle, wrk, employed, training, payday, company, coworker and agent.",
"We ran INLINEFORM0 on our data pool and randomly selected about 2,000 tweets that were labeled as positive by INLINEFORM1 and never used previously (i.e., not annotated, trained or tested in INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 ). We published these tweets to crowdsouring workers using the same settings of R1 and R2. The tweets with unanimously agreed labels in R4 form the last part of our human-labeled dataset (Part-4).",
"Table TABREF27 summarizes the results from multiple crowdsourced annotation rounds (R1, R2 and R4)."
],
[
"Aggregating separate parts of human-labeled data (Part-1 to Part-4), we obtained an integrated training set with 2,983 job-related tweets and 3,736 not job-related tweets and trained INLINEFORM0 upon it. We tested INLINEFORM1 using the same data in crowdsourced validation phase (1,600 tested tweets) and discovered that INLINEFORM2 beats the performances of other models (Table TABREF29 ).",
"Table TABREF30 lists the top 15 features for both classes in INLINEFORM0 with their corresponding weights. Positive features (job-related) unearth expressions about personal job satisfaction (lovemyjob) and announcements of working schedules (day off, break) beyond our rules defined in INLINEFORM1 and INLINEFORM2 . Negative features (not job-related) identify phrases to comment on others' work (your work, amazing job, awesome job, nut job) though they contain “work” or “job,” and show that school- or game-themed messages (college career, play) are not classified into the job class which meets our original intention."
],
[
"The class distribution in the machine-labeled test data is roughly balanced, which is not the case in real-world scenarios, where not-job-related tweets are much more common than job-related ones.",
"We proposed an end-to-end evaluation: to what degree can our trained automatic classifiers ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 ) identify job-related tweets in the real world? We introduced the estimated effective recall under the assumption that for each model, the error rates in our test samples (1,600 tweets) are proportional to the actual error rates found in the entire one-year data set which resembles the real world. We labeled the entire data set using each classifier and defined the estimated effective recall INLINEFORM4 for each classifier as INLINEFORM5 ",
"where INLINEFORM0 is the total number of the classifier-labeled job-related tweets in the entire one-year data set, INLINEFORM1 is the total of not job-related tweets in the entire one-year data set, INLINEFORM2 is the number of classifier-labeled job-related tweets in our 1,600-sample test set, INLINEFORM3 , and INLINEFORM4 is the recall of the job class in our test set, as reported in Tables TABREF25 and TABREF29 .",
"Table TABREF32 shows that INLINEFORM0 can be used as a good classifier to automatically label the topic of unseen data as job-related or not."
],
[
"Through observation we noticed some patterns like:",
"“Panera Bread: Baker - Night (#Rochester, NY) HTTP://URL #Hospitality #VeteranJob #Job #Jobs #TweetMyJobs”",
"in the class of job-related tweets. Nearly every job-related tweet that contained at least one of the following hashtags: #veteranjob, #job, #jobs, #tweetmyjobs, #hiring, #retail, #realestate, #hr also had a URL embedded. We counted the tweets containing only the listed hashtags, and the tweets having both the queried hashtags and embedded URL, and summarized the statistics in Table TABREF34 . By spot checking we found such tweets always led to recruitment websites. This observation suggests that these tweets with similar “hashtags + URL” patterns originated from business agencies or companies instead of personal accounts, because individuals by common sense are unlikely to post recruitment advertising.",
"This motivated a simple heuristic that appeared surprisingly effective at determining which kind of accounts each job-related tweet was posted from: if an account had more job-related tweets matching the “hashtags + URL” patterns than tweets in other topics, we labeled it a business account; otherwise it is a personal account. We validated its effectiveness using the job-related tweets sampled by the models in crowdsourced evaluations phase. It is essential to note that when crowdsourcing annotators made judgment about the type of accounts as personal or business, they were shown only one target tweet—without any contexts or posts history which our heuristics rely on.",
"Table TABREF35 records the performance metrics and confirms that our heuristics to determine the sources of job-related tweets (personal vs. business accounts) are consistently accurate and effective.",
"We used INLINEFORM0 to detect (not) job-related tweets, and applied our linguistic heuristics to further separate accounts into personal and business groups automatically."
],
[
"To assess the labeling quality of multiple annotators in crowdsourced annotation rounds (R1, R2 and R4), we calculated Fleiss' kappa BIBREF22 and Krippendorff's alpha BIBREF23 measures using the online tool BIBREF24 to assess inter-annotator reliability among the five annotators of each HIT. And then we calculated the average and standard deviation of inter-annotator scores for multiple HITs per round. Table TABREF36 records the inter-annotator agreement scores in three rounds of crowdsourced annotations.",
"The inter-annotator agreement between the two expert annotators from local community was assessed using Cohen's kappa BIBREF26 as INLINEFORM0 which indicates empirically almost excellent. Their joint efforts corrected more than 90% of tweets which collected divergent labels from crowdsourcing workers in R1 and R2.",
"We observe in Table TABREF36 that annotators in R2 achieved the highest average inter-annotator agreements and the lowest standard deviations than the other two rounds, suggesting that tweets in R2 have the highest level of confidence being related to job/employment. As shown in Figure FIGREF4 , the annotated tweets in R1 are the outputs from INLINEFORM0 , the tweets in R2 are from INLINEFORM1 , and the tweets in R4 are from INLINEFORM2 . INLINEFORM3 is a supervised SVM classifier, while both INLINEFORM4 and INLINEFORM5 are rule-based classifiers. The higher agreement scores in R2 indicate that a trained SVM classifier can provide more reliable and less noisy predictions (i.e., labeled data). Further, higher agreement scores in R1 than R4 indicates that the rules in INLINEFORM6 are not intuitive as that in INLINEFORM7 and introduce ambiguities. For example, tweets “What a career from Vince young!” and “I hope Derrick Rose plays the best game of his career tonight” both use career but convey different information: the first tweet was talking about this professional athlete's accomplishments while the second tweet was actually commenting on the game the user was watching. Hence crowdsourcing workers working on INLINEFORM8 tasks read more ambiguous tweets and solved more difficult problems than those in INLINEFORM9 tasks did. Considering that, it is not surprising that the inter-annotator agreement scores of R4 are the worst."
],
[
"Our dataset is available as a plain text file in JSON format. Each line represents one unique tweet with five attributes identifying the tweet id (tweet_id, a unique identification number generated by Twitter for each tweet), topics job vs. notjob labeled by human (topic_human) and machine (topic_machine), and sources personal vs. business labeled by human (source_human) and machine (source_machine). NA represents “not applicable.” An example of tweet in our corpus is shown as follows:",
"{",
" \"topic_human\":\"NA\",",
" \"tweet_id\":\"409834886405832705\",",
" \"topic_machine\":\"job\",",
" \"source_machine\":\"personal\",",
" \"source_human\":\"NA\"",
"}",
"Table TABREF37 provides the main statistics of our dataset w.r.t the topic and source labels provided by human and machine."
],
[
"We presented the Twitter Job/Employment Corpus and our approach for extracting discourse on work from public social media. We developed and improved an effective, humans-in-the-loop active learning framework that uses human annotation and automatic predictions over multiple rounds to label automatically data as job-related or not job-related. We accurately determine whether or not Twitter accounts are personal or business-related, according to their linguistic characteristics and posts history. Our crowdsourced evaluations suggest that these labels are precise and reliable. Our classification framework could be extended to other open-domain problems that similarly lack high-quality labeled ground truth data."
]
],
"section_name": [
"Introduction",
"Background and Related Work",
"Data and Methods",
"Data Collection",
"Initial Classifier 𝐂 0 \\mathbf {C_0}",
"Crowdsourced Annotation R1",
"Training Helper Labeler 𝐂 1 \\mathbf {C_1}",
"Crowdsourced Annotation R2",
"Training Helper Labeler 𝐂 2 \\mathbf {C_2}",
"Community Annotation R3",
"Training Helper Labeler 𝐂 3 \\mathbf {C_3}",
"Crowdsourced Validation of 𝐂 0 \\mathbf {C_0}, 𝐂 1 \\mathbf {C_1}, 𝐂 2 \\mathbf {C_2} and 𝐂 3 \\mathbf {C_3}",
"Crowdsourced Annotation R4",
"Training Labeler 𝐂 5 \\mathbf {C_5}",
"End-to-End Evaluation",
"Determining Sources of Job-Related Tweets",
"Annotation Quality",
"Dataset Description",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"047b7c6c847008fe5b62f5d05c0f24f98b5fc5ab",
"557fbb079a190c557b4ac9c6204a39e02ab0457e",
"e155c3ef165ef9f0e19237194ca9cc879fa84979"
],
"answer": [
{
"evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. This particular locality has geographical diversity, covering both urban and rural areas and providing mixed and balanced demographics. We could apply local knowledge into the construction of our final job-related corpus, which has been approved very helpful in the later experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. This particular locality has geographical diversity, covering both urban and rural areas and providing mixed and balanced demographics. We could apply local knowledge into the construction of our final job-related corpus, which has been approved very helpful in the later experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. This particular locality has geographical diversity, covering both urban and rural areas and providing mixed and balanced demographics. We could apply local knowledge into the construction of our final job-related corpus, which has been approved very helpful in the later experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"027840b460de64e43d390dad76d1ca240abcbd21",
"34371a278b90debc3900996b975acf38fda8cb7c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"A number of computational social scientists have studied organizational behavior, professional attitudes, working mood and affect BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , but in each case: the data they investigated were collected from internal interactive platforms hosted by the workers' employers.",
"These studies are valuable in their own right, but one evident limitation is that each dataset is limited to depicting a particular company and excludes the populations who have no access to such restricted networks (e.g., people who are not employees of that company). Moreover, the workers may be unwilling to express, e.g., negative feelings about work (“I don't wanna go to work today”), unprofessional behavior (“Got drunk as hell last night and still made it to work”), or a desire to work elsewhere (“I want to go work at Disney World so bad”) on platforms controlled by their employers.",
"A major barrier to studying job-related discourse on general-purpose, public social media—one that the previous studies did not face—is the problem of determining which posts are job-related in the first place. There is no authoritative training data available to model this problem. Since the datasets used in previous work were collected in the workplace during worktime, the content is implicitly job-related. By contrast, the subject matter of public social media is much more diverse. People with various life experiences may have different criteria for what constitutes a “job” and describe their jobs differently.",
"Extracting job-related information from Twitter can be valuable to a range of stakeholders. For example, public health specialists, psychologists and psychiatrists could use such first-hand reportage of work experiences to monitor job-related stress at a community level and provide professional support if necessary. Employers might analyze these data and use it to improve how they manage their businesses. It could help employees to maintain better online reputations for potential job recruiters as well. It is also meaningful to compare job-related tweets against non-job-related discourse to observe and understand the linguistic and behavioral similarities and differences between on- and off-hours."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"A number of computational social scientists have studied organizational behavior, professional attitudes, working mood and affect BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , but in each case: the data they investigated were collected from internal interactive platforms hosted by the workers' employers.\n\n",
"These studies are valuable in their own right, but one evident limitation is that each dataset is limited to depicting a particular company and excludes the populations who have no access to such restricted networks (e.g., people who are not employees of that company). Moreover, the workers may be unwilling to express, e.g., negative feelings about work (“I don't wanna go to work today”), unprofessional behavior (“Got drunk as hell last night and still made it to work”), or a desire to work elsewhere (“I want to go work at Disney World so bad”) on platforms controlled by their employers.",
"A major barrier to studying job-related discourse on general-purpose, public social media—one that the previous studies did not face—is the problem of determining which posts are job-related in the first place. There is no authoritative training data available to model this problem. ",
"Extracting job-related information from Twitter can be valuable to a range of stakeholders. For example, public health specialists, psychologists and psychiatrists could use such first-hand reportage of work experiences to monitor job-related stress at a community level and provide professional support if necessary. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"2a467e87dd2b453c8b57cf453f8063108bd4e158",
"a798512fb577a76e64046dd9342f3f8a7f2ee3c2"
],
"answer": [
{
"evidence": [
"We presented the Twitter Job/Employment Corpus and our approach for extracting discourse on work from public social media. We developed and improved an effective, humans-in-the-loop active learning framework that uses human annotation and automatic predictions over multiple rounds to label automatically data as job-related or not job-related. We accurately determine whether or not Twitter accounts are personal or business-related, according to their linguistic characteristics and posts history. Our crowdsourced evaluations suggest that these labels are precise and reliable. Our classification framework could be extended to other open-domain problems that similarly lack high-quality labeled ground truth data."
],
"extractive_spans": [
"human annotation and automatic predictions over multiple rounds to label automatically data as job-related or not job-related"
],
"free_form_answer": "",
"highlighted_evidence": [
"We developed and improved an effective, humans-in-the-loop active learning framework that uses human annotation and automatic predictions over multiple rounds to label automatically data as job-related or not job-related. ",
"We developed and improved an effective, humans-in-the-loop active learning framework that uses human annotation and automatic predictions over multiple rounds to label automatically data as job-related or not job-related. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF4 shows the workflow of our humans-in-the-loop framework. It has multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts.",
"FLOAT SELECTED: Figure 1: Our humans-in-the-loop framework collects labeled data by alternating between human annotation and automatic prediction models over multiple rounds. Each diamond represents an automatic classifier (C), and each trapezoid represents human annotations (R). Each classifier filters and provides machine-predicted labels to tweets that are published to human annotators in the consecutive round. The human-labeled tweets are then used as training data by the succeeding automatic classifier. We use two types of classifiers: rule-based classifiers (C0 and C4) and support vector machines (C1, C2, C3 and C5). This framework serves to reduce the amount of human efforts needed to acquire large amounts of high-quality labeled data."
],
"extractive_spans": [
"multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure FIGREF4 shows the workflow of our humans-in-the-loop framework. It has multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts.",
"FLOAT SELECTED: Figure 1: Our humans-in-the-loop framework collects labeled data by alternating between human annotation and automatic prediction models over multiple rounds. Each diamond represents an automatic classifier (C), and each trapezoid represents human annotations (R). Each classifier filters and provides machine-predicted labels to tweets that are published to human annotators in the consecutive round. The human-labeled tweets are then used as training data by the succeeding automatic classifier. We use two types of classifiers: rule-based classifiers (C0 and C4) and support vector machines (C1, C2, C3 and C5). This framework serves to reduce the amount of human efforts needed to acquire large amounts of high-quality labeled data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"6d907f6685c6ccdd686ed641faeb368ced1fb08f",
"bf9a7cb8b74b67e21d8c04073784384ecc90ec3c"
],
"answer": [
{
"evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014. This one-year data set contains over 7 million geo-tagged tweets (approximately 90% written in English) from around 85,000 unique Twitter accounts. This particular locality has geographical diversity, covering both urban and rural areas and providing mixed and balanced demographics. We could apply local knowledge into the construction of our final job-related corpus, which has been approved very helpful in the later experiments.",
"Initial Classifier 𝐂 0 \\mathbf {C_0}",
"In order to identify probable job-related tweets which are talking about paid positions of regular employment while excluding noises (such as students discussing homework or school-related activities, or people complimenting others), we defined a simple term-matching classifier with inclusion and exclusion terms in the first step (see Table TABREF9 ).",
"Classifier INLINEFORM0 consists of two rules: the matched tweet must contain at least one word in the Include lexicon and it cannot contain any word in the Exclude lexicon. Before applying filtering rules, we pre-processed each tweet by (1) converting all words to lower cases; (2) stripping out punctuation and special characters; and (3) normalizing the tweets by mapping out-of-vocabulary phrases (such as abbreviations and acronyms) to standard phrases using a dictionary of more than 5,400 slang terms in the Internet."
],
"extractive_spans": [],
"free_form_answer": "They collected tweets from US and then applied some filtering rules based on Lexicons",
"highlighted_evidence": [
"Using the DataSift Firehose, we collected historical tweets from public accounts with geographical coordinates located in a 15-counties region surrounding a medium sized US city from July 2013 to June 2014.",
"Initial Classifier 𝐂 0 \\mathbf {C_0}\nIn order to identify probable job-related tweets which are talking about paid positions of regular employment while excluding noises (such as students discussing homework or school-related activities, or people complimenting others), we defined a simple term-matching classifier with inclusion and exclusion terms in the first step (see Table TABREF9 ).\n\nClassifier INLINEFORM0 consists of two rules: the matched tweet must contain at least one word in the Include lexicon and it cannot contain any word in the Exclude lexicon. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF4 shows the workflow of our humans-in-the-loop framework. It has multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts."
],
"extractive_spans": [
" multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure FIGREF4 shows the workflow of our humans-in-the-loop framework. It has multiple iterations of human annotations and automatic machine learning predictions, followed by some linguistic heuristics, to extract job-related tweets from personal and business accounts."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Is this an English language corpus?",
"The authors point out a relevant constraint on the previous corpora of workplace, do they authors mention any relevant constrains on this corpus?",
"What type of annotation is performed?",
"How are the tweets selected?"
],
"question_id": [
"e12166fa9d6f63c4e92252c95c6a7bc96977ebf4",
"d4cb704e93086a2246a8caa5c1035e8297b8f4c0",
"a11b5eb928a6db9a0e3bb290ace468ff1685d253",
"275b2c22b6a733d2840324d61b5b101f2bbc5653"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Our humans-in-the-loop framework collects labeled data by alternating between human annotation and automatic prediction models over multiple rounds. Each diamond represents an automatic classifier (C), and each trapezoid represents human annotations (R). Each classifier filters and provides machine-predicted labels to tweets that are published to human annotators in the consecutive round. The human-labeled tweets are then used as training data by the succeeding automatic classifier. We use two types of classifiers: rule-based classifiers (C0 and C4) and support vector machines (C1, C2, C3 and C5). This framework serves to reduce the amount of human efforts needed to acquire large amounts of high-quality labeled data.",
"Table 2: Summary of annotations in R2 (showing when 3 / 4 / 5 of 5 annotators agreed).",
"Table 3: Inter-annotator agreement combinations and sample tweets.",
"Table 4: Summary of R3 community-based reviewed-andcorrected annotations.",
"Table 6: Summary of crowdsourced annotations (R1, R2 and R4).",
"Table 5: Crowdsourced validations of samples identified by models C0, C1, C2 and C3.",
"Table 7: Performances of C5.",
"Table 9: Estimated effective recalls for different trained models (C1, C2, C3 and C5) to identify job-related tweets in real world setting.",
"Table 8: Top 15 features for both classes of C5.",
"Table 10: Counts of tweets containing the queried hashtags only, and their subsets of tweets with URL embedded.",
"Table 11: Evaluations of heuristics to determine the type of accounts (personal vs. business), job-related tweets sampled by different models in Table 5.",
"Table 13: Statistics of our dataset labeled by human and machine.",
"Table 12: Inter-annotator agreement performance for our three rounds of crowdsourced annotations. Average ± stdev agreements are Good, Very Good and Moderate (Altman 1991) respectively."
],
"file": [
"2-Figure1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table6-1.png",
"5-Table5-1.png",
"5-Table7-1.png",
"6-Table9-1.png",
"6-Table8-1.png",
"6-Table10-1.png",
"7-Table11-1.png",
"7-Table13-1.png",
"7-Table12-1.png"
]
} | [
"How are the tweets selected?"
] | [
[
"1901.10619-Initial Classifier 𝐂 0 \\mathbf {C_0}-1",
"1901.10619-Data Collection-0",
"1901.10619-Data and Methods-0",
"1901.10619-Initial Classifier 𝐂 0 \\mathbf {C_0}-0"
]
] | [
"They collected tweets from US and then applied some filtering rules based on Lexicons"
] | 195 |
1612.09535 | PAMPO: using pattern matching and pos-tagging for effective Named Entities recognition in Portuguese | This paper deals with the entity extraction task (named entity recognition) of a text mining process that aims at unveiling non-trivial semantic structures, such as relationships and interaction between entities or communities. In this paper we present a simple and efficient named entity extraction algorithm. The method, named PAMPO (PAttern Matching and POs tagging based algorithm for NER), relies on flexible pattern matching, part-of-speech tagging and lexical-based rules. It was developed to process texts written in Portuguese; however, it is potentially applicable to other languages as well. We compare our approach with current alternatives that support Named Entity Recognition (NER) for content written in Portuguese. These are Alchemy, Zemanta and Rembrandt. Evaluation of the efficacy of the entity extraction method on several texts written in Portuguese indicates a considerable improvement in $recall$ and $F_1$ measures. | {
"paragraphs": [
[
"Nowadays, a large amount of information is produced and shared in unstructured form, mostly unstructured text BIBREF0 , BIBREF1 . This information can be exploited in decision making processes but, to be useful, it should be transformed and presented in ways that make its intrinsic knowledge more readily intelligible. For that, we need efficient methods and tools that quickly extract useful information from unstructured text collections. Such demand can be observed, for instance, in Biology, where researchers, in order to be abreast of all developments, need to analyse new biomedical literature on a daily basis BIBREF2 . Another application is on fraud and corruption studies where the network information — the set of actors and their relationships — is implicitly stored in unstructured natural-language documents BIBREF3 . Hence, text mining and information extraction are required to pre-process the texts in order to extract the entities and the relations between them.",
"Information extraction is a challenging task mainly due to the ambiguous features of natural-language. Moreover, most tools need to be adapted to different human languages and to different domains BIBREF4 . In fact, the language of the processed texts is still the decisive factor when choosing among existing information extraction technologies. This is also true for the task of entity extraction (Named Entity Recognition - NER).",
"For several reasons, text mining tools are typically first developed for English and only afterwards extended to other languages. Thus, there are still relatively few text mining tools for Portuguese and even less that are freely accessible. In particular, for the named entities recognition task in Portuguese texts, we find three extractors available: Alchemy, Zemanta and Rembrandt BIBREF5 . We also find some studies where the measures ( INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ) for those extractors are computed and compared BIBREF6 , but their comparative effectiveness remains domain and final purpose dependent.",
"In this work, we present PAMPO (PAttern Matching and POs tagging based algorithm for NER), a new method to automatically extract named entities from unstructured texts, applicable to the Portuguese language but potentially adaptable to other languages as well. The method relies on flexible pattern matching, part-of-speech tagging and lexical-based rules. All steps are implemented using free software and taking advantage of various existing packages.",
"The process has been developed using as case-study a specific book written in Portuguese, but it has since been used in other applications and successfully tested in different text collections. In this paper, we describe the evaluation procedures on independent textual collections, and produce a comparative study of PAMPO with other existing tools for NER."
],
[
"In 1991, Lisa F. Rau presented a paper describing an algorithm, based on heuristics and handcrafted rules, to automatically extract company names from financial news BIBREF7 . This was one of the first research papers on the NER field BIBREF8 . NER was first introduced as an information extraction task but since then its use in natural language text has spread widely through several fields, namely Information Retrieval, Question Answering, Machine Translation, Text Translation, Text Clustering and Navigation Systems BIBREF9 . In an attempt to suit the needs of each application, nowadays, a NER extraction workflow comprises not only analysing some input content and detecting named entities, but also assigning them a type and a list of URIs for disambiguation BIBREF10 . New approaches have been developed with the application of Supervised machine Learning (SL) techniques BIBREF6 and NER evolved to NERC — Named Entity Recognition and Classification. The handicap of those techniques is the requirement of a training set, i.e., a data set manually labelled. Therefore, the NER task depends also on the data set used to train the NER extraction algorithm.",
"Currently, many existing approaches for NER/NERC are implemented and available as downloadable code, APIs or web applications, i.e., as tools or services available on the web. A thorough search produces the following list: AIDA, AlchemyAPI, Apache Stanbol, CiceroLite, DBpedia Spotlight, Evri, Extractiv, FOX, FRED, Lupedia, NERD, Open Calais, PoolParty Knowledge Discoverer, Rembrandt, ReVerb, Saplo, Semiosearch Wikifier, Wikimeta, Yahohh! Content Analysis (YCA), Zemanta. More detailed information may be found in BIBREF10 , BIBREF11 , BIBREF12 , BIBREF5 , where the authors compare the services' strengths and weaknesses and compute some measures for their performance.",
"Nadeau et al. in A survey of named entity recognition and classification BIBREF8 point out three factors that distinguish the NERC algorithms: the language, the textual genre or domain, and the entity type. Regarding the third one, based on the Grishman et al. definition BIBREF13 , named entity refers to the name of a person or an organization, a location, a brand, a product, a numeric expression (including time, date, money and percentage), found in a sentence, but generally, the most studied types consider the enamex designation — proper names of `persons', `locations' and `organizations' — the `miscellaneous' category for the proper names that fall outside the classic enamex). In recent research , the possible types to extract are open and include subcategories BIBREF8 .",
"The language is an important factor to be taken in consideration in the NER task. Most of the services are devoted to English and few support NER on Portuguese texts. The first reference to work developed in Portuguese texts was published in 1997 BIBREF14 ; the authors perform the NER task and compute some measures in a Portuguese corpus and other five corpora. Until now, we have only identified the Rembrandt tool as a service developed and devoted to extract named entities in Portuguese texts. Other tools (AlchemyAPI, NERD and Zemanta) have been adapted to work and accept Portuguese texts but were not specifically developed for that purpose. As recently pointed out by Taba and Caseli BIBREF15 , the Portuguese language still lacks high quality linguistic resources and tools.",
"NER is not only one task of the text mining process but also an initial step in the performance of other tasks, such as relation extraction, classification and/or topic modelling BIBREF0 . This makes the quality of the NER process particularly important. In the light of the related works and taking in consideration that most of the approaches optimize INLINEFORM0 but not INLINEFORM1 , we propose PAMPO to extract named entities in Portuguese texts. In this work we do not classify neither disambiguate the entity. Our major concern is to increase the INLINEFORM2 without decreasing the INLINEFORM3 of the named entity extractor."
],
[
"In this work, we consider the enamex definition of entities plus the miscellaneous named entities where we include events like, for instance, `Jogos Olímpicos' (`Olympic Games'). To identify those entities, an information extraction procedure was designed using regular expressions and other pattern matching strategies, along with part-of-speech tagging, i.e., employing a Part-of-Speech Tagger (POST) tool. The extraction of the named entities from Portuguese unstructured texts is composed of two phases: candidate generation, where we generate a superset of candidate entities, and entity selection, where only relevant candidates are kept. The two phases are described in Algorithms SECREF3 and SECREF3 , respectively.",
"PAMPO - Candidate Generation In this phase, we provide a customizable base of regular expressions that gathers common candidate entities. Typical expressions capture capitalized words, personal titles (president, deputy, etc.) and other common words (assembly). This patterns' base is extendable and the aim of the process in this phase is to identify all good candidates.",
" PAMPO - Candidate Generation",
"Input: INLINEFORM0 , INLINEFORM1 :Term Pattern Base INLINEFORM2 INLINEFORM3 is the set of candidate entities each sentence INLINEFORM4 in INLINEFORM5 each term pattern INLINEFORM6 in TPB INLINEFORM7 sub-sequences of INLINEFORM8 that match INLINEFORM9 Output: INLINEFORM10 ",
"PAMPO - Entity Selection Here, all candidate entities of the previous phase are part-of-speech tagged. The POST process tags tokens with their corresponding word type (lexical category). Based on the tagging of the terms in candidate entities, we can identify some that can be discarded. This is done by applying a second level of regular expressions. In the entity selection phase, the regular expressions are defined on the lexical categories instead of terms themselves. For example, if the first word type is a `pron-det' (POS tag meaning determiner pronoun) the word is removed. Another example is the removal of candidate entities that do not have at least one tag `prop' or `n' (POS tag meaning a proper noun and a noun).",
" PAMPO - Entity selection",
"Input: INLINEFORM0 : candidate entities, INLINEFORM1 : category clipping patterns, INLINEFORM2 : category pruning patterns, INLINEFORM3 : term pruning pattern base each candidate entity INLINEFORM4 in INLINEFORM5 INLINEFORM6 POST of the candidate entity INLINEFORM7 each clipping pattern INLINEFORM8 in INLINEFORM9 INLINEFORM10 matches prefix of INLINEFORM11 remove matching prefix from INLINEFORM12 remove corresponding prefix from INLINEFORM13 each pruning pattern INLINEFORM14 in INLINEFORM15 INLINEFORM16 matches INLINEFORM17 INLINEFORM18 each pruning pattern INLINEFORM19 in INLINEFORM20 INLINEFORM21 = INLINEFORM22 INLINEFORM23 Output: modified INLINEFORM24 "
],
[
"The program was developed in R BIBREF16 and makes use of some specific text mining packages. We have implemented our method using the following R packages: tm BIBREF17 , cwhmisc BIBREF18 , memoise BIBREF19 , openNLP BIBREF20 , Hmisc BIBREF21 . The OpenNLP POS Tagger uses a probability model to predict the correct POS tag and, for Portuguese language, it was trained on CoNLL_X bosque data."
],
[
"The INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 bases adopted, for Portuguese texts, and used in this application are described in this section. As a first approach, and to test the PAMPO algorithm, we selected a book about the Portuguese Freemasonry BIBREF22 . Despite being on a specific topic, it contains a rich variety of situations to test our extractor. As an example, the piece of text shown in Figure FIGREF25 was scanned from the book with current OCR software and will be used here to highlight the contribution of each phase to the final result. The five named entities manually identified in this piece of text are `Irmandade do Bairro Ut O', `Parlamento do G', `Jorge Silva', `Ian' and `ministro Miguel Relvas'.",
"Applying Algorithm 1 to the paragraph of Figure FIGREF25 , the set of `candidate entities' found are `Irmandade do Bairro Ut O', `Conhecemos', `Parlamento do G', `L', `K', `Jorge Silva', `Ian' and `ministro Miguel Relvas'. Although most of the words in the extracted `candidate entities' list start with capital letter, with this algorithm we were able to extract also other important words that are not capitalized like the first word in the last named entity (ministro). This is possible because the INLINEFORM0 base includes a set of patterns that captures not only words (or sequence of words) starting with capital letters but also words that are associated to some entity's name like the ones in list1 on Appendix A.",
"Having collected the `candidate entities' in the previous step, we now proceed by removing from that list the ones that do not correspond to named entities. For that purpose, we use list2 (see Appendix A) as INLINEFORM0 base, all the tags that are not a noun ( INLINEFORM1 ) or a proper noun ( INLINEFORM2 ) are included in the INLINEFORM3 base and, finally, some terms that are not named entities but that were not excluded by previous actions (see list3 on Appendix A), are used as INLINEFORM4 base. Applying Algorithm 2 with those lists to the set of `candidate entities', from Figure FIGREF25 , we obtain as named entities `Irmandade do Bairro Ut O', `Parlamento do G', `Jorge Silva', `Ian' and `ministro Miguel Relvas'. In fact, these five terms are the only named entities in the paragraph."
],
[
"Table TABREF27 shows the most frequent `candidate entities' from the whole book, as extracted by Algorithm 1 and which of those candidate entities were considered as actual `named entities' by Algorithm 2.",
"To give an idea of the improvement introduced by each phase, we represent the `candidate entities' set in a word cloud where words with higher frequency have larger font size. As it can be observed in Figure FIGREF28 , after phase 1 some words that do not refer to entities, such as `Idem'(`Idem'), `Entre' (`Between') and `Nas' (`At the'), are present in the cloud, but, as expected, they disappear in phase 2.",
"From this book, a total of 12120 named entities were extracted by PAMPO, corresponding to 5159 unique named entities. To assess the quality of this process, the first 125 pages of the book were manually labelled (1/3 of the text book). The values of the computed measures are shown in Table TABREF29 . This part of the book contains 3836 named entities. INLINEFORM0 and INLINEFORM1 are estimated for the two phases based on the results obtained on the 125 pages of the book. A total of 5089 terms were labelled `candidate entities' in the first phase and 3075 were identified as `named entities' in the second phase. The true positives were 3205 in the first phase and 2982 in the second phase (partial identifications count as 1/2). This means that the INLINEFORM2 , given by Equation ( EQREF30 ), decreases from 0.84 to 0.78, and the INLINEFORM3 , given by Equation ( EQREF31 ), increases from 0.63 to 0.97. DISPLAYFORM0 DISPLAYFORM1 ",
"Equation ( EQREF32 ) defines another measure commonly used to assess the quality of the process, INLINEFORM0 . This measure allows interpreting the global quality, taking into account the decrease of INLINEFORM1 and the increase of INLINEFORM2 . The second phase of the PAMPO process increases the value of INLINEFORM3 from 0.72 to 0.87. DISPLAYFORM0 ",
"After these illustrative results of the PAMPO algorithm, the following section presents the results of a comparison between PAMPO and other approaches to extract named entities from texts in Portuguese."
],
[
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'.",
"Each of the corpora used for evaluation has a considerable number of texts but with different characteristics. The `Sports news' corpus has text from only one domain, while the `News' presents a diversity of topics. This fact allows evaluating if the domain/topic factor can significantly affect the quality of the algorithm. Some features of the two corpora are present in Table TABREF33 . The minimum text length in words is 24 for the `News' corpus and 59 for `Sports news'. The maximum lengths are 770 and 445 respectively. The total named entities manually found for each type range between 798 and 7051 with an average of 16.4 entities (without type distinction) per text.",
"In this work we not only study the quality of the PAMPO NER extractor for Portuguese texts but we also compare the results with three other extractors. Two of them, AlchemyAPI and Zemanta, are easily accessed with the tool developed by Bartosz Malocha in EURECOM and available on the web. The other one, Rembrandt, has to be downloaded and locally installed, which is not a straightforward task."
],
[
"Considering the Portuguese text represented in Figure FIGREF37 (a) the PAMPO algorithm identifies the `named entities' listed in Figure FIGREF37 (b).",
" As can be observed by this example, the algorithm extracts all the manifestations of `named entities' and lists them in the order they appear in the text, including repetitions of the same `named entity'."
],
[
"To compare the results of PAMPO with the other NER extractors, we compute the INLINEFORM0 and INLINEFORM1 considering a unique occurrence per entity, instead of all named entities occurrences. Figure FIGREF39 presents the outputs of the four extractors, PAMPO, AlchemyAPI, Rembrandt and Zemanta, for the text in Figure FIGREF37 (a).",
"To compute the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 measures presented in Table TABREF40 , we used Equations EQREF30 , EQREF31 and EQREF32 with a difference in the weight given to the partial identifications. Based on the example in Figure FIGREF39 , we observed that not all partial correspondences to the named entity on the text have necessarily the same value, i.e., `Atlanta', `Atlanta 1996', `Jogos Olímpicos' or `Jogos Olímpicos de Atlanta' as partial identifications of `Jogos Olímpicos de Atlanta 1996' do not have the same information. Hence we adopted as weight criterion for the partial identifications, the fraction of the named entity that is identified. This means that the previous partial identifications have weights of INLINEFORM3 , INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. As a result, two extractors will have the same performance even if one identifies the complete named entity `Jogos Olímpicos de Atlanta 1996' and the other splits it into two named entities, `Atlanta 1996' and `Jogos Olímpicos'.",
"Analysing the mean values of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (standard deviation between parentheses) given in Table TABREF40 , it is easy to conclude that they are higher in the `Sports news' for all the extractors. Moreover, that difference is less noted in the PAMPO algorithm, which presents better results and a much higher mean INLINEFORM3 , and consequently higher mean INLINEFORM4 , than the other three extractors. The four extractors have similar mean INLINEFORM5 but none has better mean INLINEFORM6 than the PAMPO extractor. The mean INLINEFORM7 , mean INLINEFORM8 and mean INLINEFORM9 for the PAMPO algorithm are consistent with a good performance of the extractor. To further assess the quality of the extractors, the probability density function of the three measures for the two corpora, estimated using a kernel density estimation with 100 equally spaced points (MATLAB 7.10.0 (R2010a)), are plotted in Figure FIGREF41 . As expected, the probability density is higher around the value 1 for all the measures of PAMPO extractor on the two corpora.",
"Figure FIGREF42 presents scatter plots of INLINEFORM0 vs INLINEFORM1 for the four extractors, PAMPO, AlchemyAPI, Rembrandt and Zemanta for the `Sports news' and `News' corpora, first four panels and four bottom panels, respectively. It is noteworthy that almost all the 881 points of the `Sports news' for PAMPO extractor are in the upper right corner of the scatter plot, as well as almost all the 227 points of the `News'. The other tools present a more dispersed solution quality."
],
[
"To determine if the entity type contributes to output variability in the INLINEFORM0 , an analysis was conducted on the named entities for the classification types: `persons' (PER), `locations' (LOC), `organizations' (ORG) and `miscellaneous' (MISC).",
"The results (Figure FIGREF44 ) indicate that the INLINEFORM0 varies with the type of entity for the AlchemyAPI, Rembrandt and Zemanta but not for the PAMPO. The INLINEFORM1 of PAMPO extractor is the highest for all types of entities.",
"In summary, it is apparent from the analysis that PAMPO extracts a set of `named entities' that resembles the actual list of named entities on texts.",
"To complete the evaluation we also computed INLINEFORM0 , INLINEFORM1 and INLINEFORM2 of PAMPO extraction on the texts in Coleção Dourada-HAREM . This corpus has 129 documents. Using the evaluation criterion defined by curators of HAREM, we obtain a INLINEFORM3 of INLINEFORM4 , a INLINEFORM5 of INLINEFORM6 and a INLINEFORM7 of INLINEFORM8 considering all the categories. Considering that the PAMPO extractor was not designed to extract quantities or time expressions we computed the same measures excluding these two types of entities. While INLINEFORM9 practically keeps the same value ( INLINEFORM10 ), INLINEFORM11 and INLINEFORM12 increase to INLINEFORM13 and INLINEFORM14 , respectively."
],
[
"Now, we analyse the differences between measures obtained with PAMPO and with the three other extractors, for each one of the news on the two corpora. To perform a more informative comparison between PAMPO and the other extractors, we count the number of news items that had a positive, a null and a negative difference with respect to each measure and each concurrent extractor. These are summarized in Table TABREF47 for both corpora.",
"The mean and the standard deviation (between parentheses) for each extractor and each corpus are presented in Table TABREF48 . They will be used to test statistical hypotheses about the mean difference value of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 between PAMPO and the other three extractors.",
"Based on all the values of the differences between PAMPO and the other extractors, represented in Tables TABREF47 and TABREF48 , we may say that:",
"the INLINEFORM0 of the PAMPO extractor is the highest in almost all the news;",
" INLINEFORM0 does not differ much between PAMPO and the other extractors;",
"as a consequence the INLINEFORM0 of PAMPO is also the highest in almost all the news;",
"the mean difference of INLINEFORM0 between PAMPO and AlchemyAPI seams to be at least greater than 0.25;",
"the mean difference of INLINEFORM0 between PAMPO and Rembrandt seams to be at least greater than 0.35;",
"the mean difference of INLINEFORM0 between PAMPO and Zemanta seams to be at least greater than 0.40;",
"the mean difference of INLINEFORM0 is positive but near zero for all the three extractors;",
"the mean difference of INLINEFORM0 between PAMPO and AlchemyAPI seams to be at least greater than 0.15;",
"the mean difference of INLINEFORM0 between PAMPO and Rembrandt seams to be at least greater than 0.25;",
"the mean difference of INLINEFORM0 between PAMPO and Zemanta seams to be at least greater than 0.30.",
"To test the null hypothesis that the mean INLINEFORM0 differences between PAMPO and the other extractors are equal to 0.25, 0.35 and 0.40, for AlchemyAPI, Rembrandt and Zemanta, respectively, ztest was performed considering as alternative the mean INLINEFORM1 differences greater than those values. Based on the results of these two corpora the p-values are smaller than 9.5E-05. Hence, the results obtained so far provide statistical evidence that PAMPO increases NER INLINEFORM2 by at least 0.25."
],
[
"In this work we propose a novel effective method to extract named entities from unstructured text. The proposed PAMPO method is implemented using free software, namely R and available packages. Two manually annotated Portuguese news corpora were used to empirically evaluate the algorithm using the measures of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . These corpora did not influence the definition of the algorithm or the construction of its pattern bases. We have compared PAMPO with three other NER extractors: AlchemyAPI, Rembrandt and Zemanta. Experimental results clearly show that PAMPO obtains significantly higher INLINEFORM3 and INLINEFORM4 than existing tools. The values of INLINEFORM5 are identical. We may say also that PAMPO's performance in the HAREM corpus was at least as good as the best one of the systems reported over there when we consider all categories of entities. However, when we exclude dates and numeric expressions, it presents better results than the ones reported for other tools.",
"Despite its simplicity, PAMPO has a very good performance and is highly configurable. The PAMPO algorithm is potentially adaptable to be used for other languages by properly defining the pattern bases. Furthermore, it allows for straightforward improvement of the results by adding terms to the lists.",
"The results take us one step closer to the creation of a text intelligence system to be used in several applications, namely, in the study of the social context of possible economic and financial offenses. As future work the authors are planning to improve the text mining procedure, by including a classification and a disambiguation step, as well as by automatically characterizing the relations between entities."
],
[
"The authors would like to thank SAPO Labs (http://labs.sapo.pt) for providing the data set of news from Lusa agency. The authors would also like to thank grant #2014/08996-0 and grant #2013/14757-6, São Paulo Research Foundation (FAPESP). This work is partially funded by FCT/MEC through PIDDAC and ERDF/ON2 within project NORTE-07-0124-FEDER-000059 and through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-037281."
],
[
"list1 - {'grão INLINEFORM0 mestre', 'papa', 'duque', 'duquesa', 'conde', 'condessa', 'visconde', 'viscondessa', 'rei', 'raínha', 'príncipe', 'princesa', 'marquês', 'marquesa', 'barão', 'baronesa', 'bispo', 'presidente', 'secretário', 'secretária', 'ministro', 'ministra', 'primeiro', 'primeira', 'deputado', 'deputada', 'general', 'tenente', 'capitão', 'capitã', 'sargento', 'governador', 'governadora', 'diretor', 'director', 'diretora', 'directora', 'ex', 'filho', 'filha', irmão', 'irmã', 'pai', 'mãe', 'tio', 'tia', 'padrinho', 'madrinha', 'sobrinho', 'sobrinha', 'afilhado', 'afilhada', 'avó', 'avô', 'neto', 'neta', 'enteado', 'enteada', 'padrasto', 'madrasta'}",
"list2 - {'pron-det', 'adv adv ', 'adv prop', 'adv adj ', 'adv v-fi'}",
"list3 - {'Aproveitamento', 'Cuidado', 'Decerto', 'Desta', 'Desenvolvimento', 'Lançamento', 'Levantamento', 'Muitos', 'Muitas', 'Nessa', 'Nesse', 'Nessas', 'Nesses', 'Nestes', 'Neste', 'Nesta', 'Nestas', 'Noutro', 'Outros', 'Outro', 'Outra', 'Outras', 'Onde', 'Poucos', 'Poucas', 'Perante', 'Pela', 'Recém', 'Tal', 'Vários', 'Várias', 'Vós', 'Aceite', 'Comprometo', 'Cabe', 'Coloca', 'Conhecemos', 'Casado', 'Considerava', 'Desejo', 'Devíamos', 'Escolhiam, 'Executa', 'Faça', 'Fica', 'Interrompidas', 'Indicar', 'Incluído', 'Leva', 'Morrer', 'Ouvistes', 'Prestaste', 'Praticou', 'Pressiona', 'Pensa', 'Poder', 'Podes', 'Revolta', 'Sabe', 'Ser', 'Ter', 'Toque', 'Toma', 'Trata', 'Vens', 'Verificou', 'Viver', 'Vivemos', 'Venho', 'Reação', 'Sessão', 'Testamento', 'Tolerância', 'Término', 'Vitória', 'Visita', 'Harmonia', 'Iniciado', 'Instalação', 'Ibidem', 'Inventariação', 'Irregularidades', 'Internet', 'Lda', 'Manutenção', 'Nomeado', 'Obediência', 'Petição', 'Passaporte', 'Proposta', 'Programa', 'Proibição', 'Paz', 'Publicação', 'Questionário', 'Quadro', 'Relatório', 'Redução', 'Reorganização','Revolução', 'República', 'Reequilíbrio', 'Anexo', 'Abertura', 'Atestado', 'Ata', 'Adoção', 'Atualização', 'Às', 'Á', 'Capa', 'Convite', 'Compromisso', 'Condecoração', 'Convocatória', 'Cartão', 'Causa', 'Comunicação', 'Corrupção', 'Convergência', 'Decreto', 'Ditadura', 'Democracia', 'Democrata', 'Estrutura', 'Ficha', 'Fax', 'Fixação', 'Futuro', 'Gabinete', 'Glória', 'Janeiro', 'Fevereiro', 'Março', 'Abril', 'Maio', 'Junho', 'Julho', 'Agosto', 'Setembro', 'Outubro', 'Novembro', 'Dezembro', Diário', 'Semanal', 'Mensal', 'Minutos', 'Meses', 'Ano', 'Anos', 'Hoje'} INLINEFORM0 {Portuguese stopwords on R}"
]
],
"section_name": [
"Introduction",
"Related Work",
"The entity extraction algorithm",
"Implementation",
"An application",
"Analysis of results",
"Comparing PAMPO with other NER tools",
"PAMPO output",
"Evaluation",
"Evaluation by type of entity",
"PAMPO versus three other extractors",
"Remarks and Conclusions",
"Acknowledgements",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"626d54ee08464263a13f0294e528dabec5d8b6ed",
"f0ba5a8f4e81547e1abc0dd1f54daa4cb516e80e",
"fb4c0067dd8fb84ae99416388bbf6ec469917058"
],
"answer": [
{
"evidence": [
"The program was developed in R BIBREF16 and makes use of some specific text mining packages. We have implemented our method using the following R packages: tm BIBREF17 , cwhmisc BIBREF18 , memoise BIBREF19 , openNLP BIBREF20 , Hmisc BIBREF21 . The OpenNLP POS Tagger uses a probability model to predict the correct POS tag and, for Portuguese language, it was trained on CoNLL_X bosque data.",
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'."
],
"extractive_spans": [],
"free_form_answer": "CoNLL_X bosque data, News data by Lusa agency, Sports news data",
"highlighted_evidence": [
"We have implemented our method using the following R packages: tm BIBREF17 , cwhmisc BIBREF18 , memoise BIBREF19 , openNLP BIBREF20 , Hmisc BIBREF21 . The OpenNLP POS Tagger uses a probability model to predict the correct POS tag and, for Portuguese language, it was trained on CoNLL_X bosque data.",
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'."
],
"extractive_spans": [
"News",
"Sports news"
],
"free_form_answer": "",
"highlighted_evidence": [
"\n",
"One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'."
],
"extractive_spans": [
"News",
"Sports news"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we evaluate our NER approach using two news corpora. One corpus is a set of 227 texts published on December 31, 2010 by the Lusa agency (portuguese agency of news) and will be referred to as `News'. The other corpus (named here `Sports news') is a set of 881 sports news. The texts were manually annotated according to the enamex designation and the type `miscellaneous'."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2bd11516d7be9ee035a4f3642ad72b3c5a5de9a8",
"3ab65bfbc0e289dd3c408d85b922d6687d1e77ba"
],
"answer": [
{
"evidence": [
"To compute the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 measures presented in Table TABREF40 , we used Equations EQREF30 , EQREF31 and EQREF32 with a difference in the weight given to the partial identifications. Based on the example in Figure FIGREF39 , we observed that not all partial correspondences to the named entity on the text have necessarily the same value, i.e., `Atlanta', `Atlanta 1996', `Jogos Olímpicos' or `Jogos Olímpicos de Atlanta' as partial identifications of `Jogos Olímpicos de Atlanta 1996' do not have the same information. Hence we adopted as weight criterion for the partial identifications, the fraction of the named entity that is identified. This means that the previous partial identifications have weights of INLINEFORM3 , INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. As a result, two extractors will have the same performance even if one identifies the complete named entity `Jogos Olímpicos de Atlanta 1996' and the other splits it into two named entities, `Atlanta 1996' and `Jogos Olímpicos'.",
"FLOAT SELECTED: TABLE 4. Summary statistics of extractors’performance"
],
"extractive_spans": [],
"free_form_answer": "On average, it had better Recall by 0.481 in case of news dataset and by 0.372 in case of sports news dataset. \nOn average, it had better Precision by 0.086 in case of news dataset and by 0.37 in case of sports news dataset. \nOn average, it had better F1 by 0.381 in case of news dataset and by 0.616 in case of sports news dataset. ",
"highlighted_evidence": [
"To compute the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 measures presented in Table TABREF40 , we used Equations EQREF30 , EQREF31 and EQREF32 with a difference in the weight given to the partial identifications. ",
"FLOAT SELECTED: TABLE 4. Summary statistics of extractors’performance"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Analysing the mean values of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (standard deviation between parentheses) given in Table TABREF40 , it is easy to conclude that they are higher in the `Sports news' for all the extractors. Moreover, that difference is less noted in the PAMPO algorithm, which presents better results and a much higher mean INLINEFORM3 , and consequently higher mean INLINEFORM4 , than the other three extractors. The four extractors have similar mean INLINEFORM5 but none has better mean INLINEFORM6 than the PAMPO extractor. The mean INLINEFORM7 , mean INLINEFORM8 and mean INLINEFORM9 for the PAMPO algorithm are consistent with a good performance of the extractor. To further assess the quality of the extractors, the probability density function of the three measures for the two corpora, estimated using a kernel density estimation with 100 equally spaced points (MATLAB 7.10.0 (R2010a)), are plotted in Figure FIGREF41 . As expected, the probability density is higher around the value 1 for all the measures of PAMPO extractor on the two corpora.",
"FLOAT SELECTED: TABLE 4. Summary statistics of extractors’performance"
],
"extractive_spans": [],
"free_form_answer": "Pampo had F1 score of 0.932 and 0.971 compared to best alternative result of 0.608 and 0.794 on News and Sport news dataset respectively.",
"highlighted_evidence": [
"Analysing the mean values of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (standard deviation between parentheses) given in Table TABREF40 , it is easy to conclude that they are higher in the `Sports news' for all the extractors.",
"FLOAT SELECTED: TABLE 4. Summary statistics of extractors’performance"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"07efdd0103b1208c0f7d6734f88dedc1e5211411",
"3e932e97276ccadcb5f40a1d6e6ddee1443fff06",
"5b25a6964389e680422381142e1a5590eea5a9a9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Your full name:\n"
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"NER is not only one task of the text mining process but also an initial step in the performance of other tasks, such as relation extraction, classification and/or topic modelling BIBREF0 . This makes the quality of the NER process particularly important. In the light of the related works and taking in consideration that most of the approaches optimize INLINEFORM0 but not INLINEFORM1 , we propose PAMPO to extract named entities in Portuguese texts. In this work we do not classify neither disambiguate the entity. Our major concern is to increase the INLINEFORM2 without decreasing the INLINEFORM3 of the named entity extractor."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This makes the quality of the NER process particularly important. In the light of the related works and taking in consideration that most of the approaches optimize INLINEFORM0 but not INLINEFORM1 , we propose PAMPO to extract named entities in Portuguese texts. In this work we do not classify neither disambiguate the entity. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"baa8c0935f45955491713db1ade220d056da0756"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what dataset was used?",
"by how much did their model improve over current alternatives?",
"did they experiment with other languages besides portuguese?",
"how many rules did they use?"
],
"question_id": [
"f1f7a040545c9501215d3391e267c7874f9a6004",
"b6f4fd6bc76bfcbc15724a546445908afa6d922c",
"3614c1f1435b7c1fd1f7f0041219eebf5bcff473",
"c316d7d0c80b8f720ff90a8bb84a8b879a3ef7ea"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"FIGURE 2. Entity word clouds depicting the ‘candidate entities’ that appear 25 or more times in the text. Each subfigure represents the set of ‘candidate entities’ or‘named entities’ returned at the end of each phase of PAMPO process.",
"TABLE 2. Measures of the PAMPO process quality obtained for the 125 pages of the book.",
"TABLE 3. Corpus Information",
"TABLE 4. Summary statistics of extractors’performance",
"FIGURE 4. Lists of ‘named entities’ obtained by each of the four extractors, PAMPO, AlchemyAPI, Rembrandt and Zemanta, for the Portuguese text represented in Figure 3 (a).",
"FIGURE 5. Estimated probability density function of the three measures, recall, precision and F1, for the two corpora.",
"FIGURE 6. Scatter plots of precision and recall for four extractors, PAMPO, Alchemy, Rembrandt and Zemanta, for the ‘Sports news’ in the first four panels and for the news published by Lusa agency in the four bottom panels.",
"FIGURE 7. Bar representation for the recall of the four extractors, PAMPO, Alchemy, Rembrandt and Zemanta, for the four types of entities, ‘persons’ (PER), ‘locations’ (LOC), ‘organizations’ (ORG) and ‘miscellaneous’ (MISC), and for the two corpora.",
"TABLE 5. Number of positive and negative occurrences in the difference between the recall, precision, and F1 of PAMPO and the three other extractors, AlchemyAPI, Rembrandt and Zemanta, for the two corpora, ‘Sports news’ and ‘News’",
"TABLE 6. Summary statistics of the difference between the performance of the PAMPO extractor and the other three extractors, AlchemyAPI, Rembrandt and Zemanta, for the two corpora, ‘Sports news’ and ‘News’."
],
"file": [
"6-Figure2-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"10-Figure4-1.png",
"11-Figure5-1.png",
"12-Figure6-1.png",
"13-Figure7-1.png",
"13-Table5-1.png",
"14-Table6-1.png"
]
} | [
"what dataset was used?",
"by how much did their model improve over current alternatives?"
] | [
[
"1612.09535-Comparing PAMPO with other NER tools-0",
"1612.09535-Implementation-0"
],
[
"1612.09535-Evaluation-1",
"1612.09535-Evaluation-2",
"1612.09535-8-Table4-1.png"
]
] | [
"CoNLL_X bosque data, News data by Lusa agency, Sports news data",
"Pampo had F1 score of 0.932 and 0.971 compared to best alternative result of 0.608 and 0.794 on News and Sport news dataset respectively."
] | 196 |
1603.01547 | Text Understanding with the Attention Sum Reader Network | Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Ensemble of our models sets new state of the art on all evaluated datasets. | {
"paragraphs": [
[
"Most of the information humanity has gathered up to this point is stored in the form of plain text. Hence the task of teaching machines how to understand this data is of utmost importance in the field of Artificial Intelligence. One way of testing the level of text understanding is simply to ask the system questions for which the answer can be inferred from the text. A well-known example of a system that could make use of a huge collection of unstructured documents to answer questions is for instance IBM's Watson system used for the Jeopardy challenge BIBREF0 .",
"Cloze-style questions BIBREF2 , i.e. questions formed by removing a phrase from a sentence, are an appealing form of such questions (for example see Figure FIGREF1 ). While the task is easy to evaluate, one can vary the context, the question sentence or the specific phrase missing in the question to dramatically change the task structure and difficulty.",
"One way of altering the task difficulty is to vary the word type being replaced, as in BIBREF3 . The complexity of such variation comes from the fact that the level of context understanding needed in order to correctly predict different types of words varies greatly. While predicting prepositions can easily be done using relatively simple models with very little context knowledge, predicting named entities requires a deeper understanding of the context.",
"Also, as opposed to selecting a random sentence from a text as in BIBREF3 ), the question can be formed from a specific part of the document, such as a short summary or a list of tags. Since such sentences often paraphrase in a condensed form what was said in the text, they are particularly suitable for testing text comprehension BIBREF1 .",
"An important property of cloze-style questions is that a large amount of such questions can be automatically generated from real world documents. This opens the task to data-hungry techniques such as deep learning. This is an advantage compared to smaller machine understanding datasets like MCTest BIBREF4 that have only hundreds of training examples and therefore the best performing systems usually rely on hand-crafted features BIBREF5 , BIBREF6 .",
"In the first part of this article we introduce the task at hand and the main aspects of the relevant datasets. Then we present our own model to tackle the problem. Subsequently we compare the model to previously proposed architectures and finally describe the experimental results on the performance of our model."
],
[
"In this section we introduce the task that we are seeking to solve and relevant large-scale datasets that have recently been introduced for this task."
],
[
"The task consists of answering a cloze-style question, the answer to which depends on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows:",
"The training data consist of tuples INLINEFORM0 , where INLINEFORM1 is a question, INLINEFORM2 is a document that contains the answer to question INLINEFORM3 , INLINEFORM4 is a set of possible answers and INLINEFORM5 is the ground truth answer. Both INLINEFORM6 and INLINEFORM7 are sequences of words from vocabulary INLINEFORM8 . We also assume that all possible answers are words from the vocabulary, that is INLINEFORM9 , and that the ground truth answer INLINEFORM10 appears in the document, that is INLINEFORM11 ."
],
[
"We will now briefly summarize important features of the datasets.",
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"Furthermore the named entities in the whole dataset were replaced by anonymous tokens which were further shuffled for each example so that the model cannot build up any world knowledge about the entities and hence has to genuinely rely on the context document to search for an answer to the question.",
"Qualitative analysis of reasoning patterns needed to answer questions in the CNN dataset together with human performance on this task are provided in BIBREF7 .",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of summary, the cloze-style question is then constructed from the subsequent (21st) sentence.",
"One can also see how the task complexity varies with the type of the omitted word (named entity, common noun, verb, preposition). BIBREF3 have shown that while standard LSTM language models have human level performance on predicting verbs and prepositions, they lack behind on named entities and common nouns. In this article we therefore focus only on predicting the first two word types.",
"Basic statistics about the CNN, Daily Mail and CBT datasets are summarized in Table TABREF2 .",
""
],
[
"Our model called the psr is tailor-made to leverage the fact that the answer is a word from the context document. This is a double-edged sword. While it achieves state-of-the-art results on all of the mentioned datasets (where this assumption holds true), it cannot produce an answer which is not contained in the document. Intuitively, our model is structured as follows:"
],
[
"Our model uses one word embedding function and two encoder functions. The word embedding function INLINEFORM0 translates words into vector representations. The first encoder function is a document encoder INLINEFORM1 that encodes every word from the document INLINEFORM2 in the context of the whole document. We call this the contextual embedding. For convenience we will denote the contextual embedding of the INLINEFORM3 -th word in INLINEFORM4 as INLINEFORM5 . The second encoder INLINEFORM6 is used to translate the query INLINEFORM7 into a fixed length representation of the same dimensionality as each INLINEFORM8 . Both encoders use word embeddings computed by INLINEFORM9 as their input. Then we compute a weight for every word in the document as the dot product of its contextual embedding and the query embedding. This weight might be viewed as an attention over the document INLINEFORM10 .",
"To form a proper probability distribution over the words in the document, we normalize the weights using the softmax function. This way we model probability INLINEFORM0 that the answer to query INLINEFORM1 appears at position INLINEFORM2 in the document INLINEFORM3 . In a functional form this is: DISPLAYFORM0 ",
"Finally we compute the probability that word INLINEFORM0 is a correct answer as: DISPLAYFORM0 ",
"where INLINEFORM0 is a set of positions where INLINEFORM1 appears in the document INLINEFORM2 . We call this mechanism pointer sum attention since we use attention as a pointer over discrete tokens in the context document and then we directly sum the word's attention across all the occurrences. This differs from the usual use of attention in sequence-to-sequence models BIBREF8 where attention is used to blend representations of words into a new embedding vector. Our use of attention was inspired by ptrnet BIBREF9 .",
"A high level structure of our model is shown in Figure FIGREF10 ."
],
[
"In our model the document encoder INLINEFORM0 is implemented as a bidirectional Gated Recurrent Unit (GRU) network BIBREF10 , BIBREF11 whose hidden states form the contextual word embeddings, that is INLINEFORM1 , where INLINEFORM2 denotes vector concatenation and INLINEFORM3 and INLINEFORM4 denote forward and backward contextual embeddings from the respective recurrent networks. The query encoder INLINEFORM5 is implemented by another bidirectional GRU network. This time the last hidden state of the forward network is concatenated with the last hidden state of the backward network to form the query embedding, that is INLINEFORM6 . The word embedding function INLINEFORM7 is implemented in a usual way as a look-up table INLINEFORM8 . INLINEFORM9 is a matrix whose rows can be indexed by words from the vocabulary, that is INLINEFORM10 . Therefore, each row of INLINEFORM11 contains embedding of one word from the vocabulary. During training we jointly optimize parameters of INLINEFORM12 , INLINEFORM13 and INLINEFORM14 ."
],
[
"Several recent deep neural network architectures BIBREF1 , BIBREF3 , BIBREF7 , BIBREF12 were applied to the task of text comprehension. The last two architectures were developed independently at the same time as our work. All of these architectures use an attention mechanism that allows them to highlight places in the document that might be relevant to answering the question. We will now briefly describe these architectures and compare them to our approach."
],
[
"Attentive and Impatient Readers were proposed in BIBREF1 . The simpler Attentive Reader is very similar to our architecture. It also uses bidirectional document and query encoders to compute an attention in a similar way we do. The more complex Impatient Reader computes attention over the document after reading every word of the query. However, empirical evaluation has shown that both models perform almost identically on the CNN and Daily Mail datasets.",
"The key difference between the Attentive Reader and our model is that the Attentive Reader uses attention to compute a fixed length representation INLINEFORM0 of the document INLINEFORM1 that is equal to a weighted sum of contextual embeddings of words in INLINEFORM2 , that is INLINEFORM3 . A joint query and document embedding INLINEFORM4 is then a non-linear function of INLINEFORM5 and the query embedding INLINEFORM6 . This joint embedding INLINEFORM7 is in the end compared against all candidate answers INLINEFORM8 using the dot product INLINEFORM9 , in the end the scores are normalized by INLINEFORM10 . That is: INLINEFORM11 .",
"In contrast to the Attentive Reader, we select the answer from the context directly using the computed attention rather than using such attention for a weighted sum of the individual representations (see Eq. EQREF17 ). The motivation for such simplification is the following.",
"Consider a context “A UFO was observed above our city in January and again in March.” and question “An observer has spotted a UFO in ___ .”",
"Since both January and March are equally good candidates, the attention mechanism might put the same attention on both these candidates in the context. The blending mechanism described above would compute a vector between the representations of these two words and propose the closest word as the answer - this may well happen to be February (it is indeed the case for Word2Vec trained on Google News). By contrast, our model would correctly propose January or March."
],
[
"A model presented in BIBREF7 is inspired by the Attentive Reader. One difference is that the attention weights are computed with a bilinear term instead of simple dot-product, that is INLINEFORM0 . The document embedding INLINEFORM1 is computed using a weighted sum as in the Attentive Reader, INLINEFORM2 . In the end INLINEFORM3 , where INLINEFORM4 is a new embedding function.",
"Even though it is a simplification of the Attentive Reader this model performs significantly better than the original."
],
[
"MenNN BIBREF13 were applied to the task of text comprehension in BIBREF3 .",
"The best performing memory networks model setup - window memory - uses windows of fixed length (8) centered around the candidate words as memory cells. Due to this limited context window, the model is unable to capture dependencies out of scope of this window. Furthermore, the representation within such window is computed simply as the sum of embeddings of words in that window. By contrast, in our model the representation of each individual word is computed using a recurrent network, which not only allows it to capture context from the entire document but also the embedding computation is much more flexible than a simple sum.",
"To improve on the initial accuracy, a heuristic approach called self supervision is used in BIBREF3 to help the network to select the right supporting “memories” using an attention mechanism showing similarities to the ours. Plain MenNN without this heuristic are not competitive on these machine reading tasks. Our model does not need any similar heuristics."
],
[
"The Dynamic Entity Representation model BIBREF12 has a complex architecture also based on the weighted attention mechanism and max-pooling over contextual embeddings of vectors for each named entity."
],
[
"Our model architecture was inspired by ptrnet BIBREF9 in using an attention mechanism to select the answer in the context rather than to blend words from the context into an answer representation. While a ptrnet consists of an encoder as well as a decoder, which uses the attention to select the output at each step, our model outputs the answer in a single step. Furthermore, the pointer networks assume that no input in the sequence appears more than once, which is not the case in our settings."
],
[
"Our model combines the best features of the architectures mentioned above. We use recurrent networks to “read” the document and the query as done in BIBREF1 , BIBREF7 , BIBREF12 and we use attention in a way similar to ptrnet. We also use summation of attention weights in a way similar to MenNN BIBREF3 .",
"From a high level perspective we simplify all the discussed text comprehension models by removing all transformations past the attention step. Instead we use the attention directly to compute the answer probability."
],
[
"In this section we evaluate our model on the CNN, Daily Mail and CBT datasets. We show that despite the model's simplicity its ensembles achieve state-of-the-art performance on each of these datasets."
],
[
"To train the model we used stochastic gradient descent with the ADAM update rule BIBREF14 and learning rate of INLINEFORM0 or INLINEFORM1 . During training we minimized the following negative log-likelihood with respect to INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 is the correct answer for query INLINEFORM1 and document INLINEFORM2 , and INLINEFORM3 represents parameters of the encoder functions INLINEFORM4 and INLINEFORM5 and of the word embedding function INLINEFORM6 . The optimized probability distribution INLINEFORM7 is defined in Eq. EQREF17 .",
"The initial weights in the word embedding matrix were drawn randomly uniformly from the interval INLINEFORM0 . Weights in the GRU networks were initialized by random orthogonal matrices BIBREF15 and biases were initialized to zero. We also used a gradient clipping BIBREF16 threshold of 10 and batches of size 32.",
"During training we randomly shuffled all examples in each epoch. To speedup training, we always pre-fetched 10 batches worth of examples and sorted them according to document length. Hence each batch contained documents of roughly the same length.",
"For each batch of the CNN and Daily Mail datasets we randomly reshuffled the assignment of named entities to the corresponding word embedding vectors to match the procedure proposed in BIBREF1 . This guaranteed that word embeddings of named entities were used only as semantically meaningless labels not encoding any intrinsic features of the represented entities. This forced the model to truly deduce the answer from the single context document associated with the question. We also do not use pre-trained word embeddings to make our training procedure comparable to BIBREF1 .",
"We did not perform any text pre-processing since the original datasets were already tokenized.",
"We do not use any regularization since in our experience it leads to longer training times of single models, however, performance of a model ensemble is usually the same. This way we can train the whole ensemble faster when using multiple GPUs for parallel training.",
"For Additional details about the training procedure see Appendix SECREF8 .",
"During training we evaluated the model performance after each epoch and stopped the training when the error on the validation set started increasing. The models usually converged after two epochs of training. Time needed to complete a single epoch of training on each dataset on an Nvidia K40 GPU is shown in Table TABREF46 .",
"The hyperparameters, namely the recurrent hidden layer dimension and the source embedding dimension, were chosen by grid search. We started with a range of 128 to 384 for both parameters and subsequently kept increasing the upper bound by 128 until we started observing a consistent decrease in validation accuracy. The region of the parameter space that we explored together with the parameters of the model with best validation accuracy are summarized in Table TABREF47 .",
"Our model was implemented using Theano BIBREF18 and Blocks BIBREF19 ."
],
[
"We evaluated the proposed model both as a single model and using ensemble averaging. Although the model computes attention for every word in the document we restrict the model to select an answer from a list of candidate answers associated with each question-document pair.",
"For single models we are reporting results for the best model as well as the average of accuracies for the best 20% of models with best performance on validation data since single models display considerable variation of results due to random weight initialization even for identical hyperparameter values. Single model performance may consequently prove difficult to reproduce.",
"What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively. The ensemble models were chosen either as the top 70% of all trained models, we call this avg ensemble. Alternatively we use the following algorithm: We started with the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble."
],
[
"Performance of our models on the CNN and Daily Mail datasets is summarized in Table TABREF27 , Table TABREF28 shows results on the CBT dataset. The tables also list performance of other published models that were evaluated on these datasets. Ensembles of our models set new state-of-the-art results on all evaluated datasets.",
"Table TABREF45 then measures accuracy as the proportion of test cases where the ground truth was among the top INLINEFORM0 answers proposed by the greedy ensemble model for INLINEFORM1 .",
"CNN and Daily Mail. The CNN dataset is the most widely used dataset for evaluation of text comprehension systems published so far. Performance of our single model is a little bit worse than performance of simultaneously published models BIBREF7 , BIBREF12 . Compared to our work these models were trained with Dropout regularization BIBREF17 which might improve single model performance. However, ensemble of our models outperforms these models even though they use pre-trained word embeddings.",
"On the CNN dataset our single model with best validation accuracy achieves a test accuracy of 69.5%. The average performance of the top 20% models according to validation accuracy is 69.9% which is even 0.5% better than the single best-validation model. This shows that there were many models that performed better on test set than the best-validation model. Fusing multiple models then gives a significant further increase in accuracy on both CNN and Daily Mail datasets..",
"CBT. In named entity prediction our best single model with accuracy of 68.6% performs 2% absolute better than the MenNN with self supervision, the averaging ensemble performs 4% absolute better than the best previous result. In common noun prediction our single models is 0.4% absolute better than MenNN however the ensemble improves the performance to 69% which is 6% absolute better than MenNN."
],
[
"To further analyze the properties of our model, we examined the dependence of accuracy on the length of the context document (Figure FIGREF33 ), the number of candidate answers (Figure FIGREF38 ) and the frequency of the correct answer in the context (Figure FIGREF41 ).",
"On the CNN and Daily Mail datasets, the accuracy decreases with increasing document length (Figure FIGREF33 ). We hypothesize this may be due to multiple factors. Firstly long documents may make the task more complex. Secondly such cases are quite rare in the training data (Figure FIGREF33 ) which motivates the model to specialize on shorter contexts. Finally the context length is correlated with the number of named entities, i.e. the number of possible answers which is itself negatively correlated with accuracy (see Figure FIGREF38 ).",
"On the CBT dataset this negative trend seems to disappear (Fig. FIGREF33 ). This supports the later two explanations since the distribution of document lengths is somewhat more uniform (Figure FIGREF33 ) and the number of candidate answers is constant (10) for all examples in this dataset.",
"The effect of increasing number of candidate answers on the model's accuracy can be seen in Figure FIGREF38 . We can clearly see that as the number of candidate answers increases, the accuracy drops. On the other hand, the amount of examples with large number of candidate answers is quite small (Figure FIGREF38 ).",
"Finally, since the summation of attention in our model inherently favours frequently occurring tokens, we also visualize how the accuracy depends on the frequency of the correct answer in the document. Figure FIGREF41 shows that the accuracy significantly drops as the correct answer gets less and less frequent in the document compared to other candidate answers. On the other hand, the correct answer is likely to occur frequently (Fig. FIGREF41 )."
],
[
"In this article we presented a new neural network architecture for natural language text comprehension. While our model is simpler than previously published models, it gives a new state-of-the-art accuracy on all evaluated datasets.",
"An analysis by BIBREF7 suggests that on CNN and Daily Mail datasets a significant proportion of questions is ambiguous or too difficult to answer even for humans (partly due to entity anonymization) so the ensemble of our models may be very near to the maximal accuracy achievable on these datasets."
],
[
"We would like to thank Tim Klinger for providing us with masked softmax code that we used in our implementation."
],
[
"In Section SECREF6 we analysed how the test accuracy depends on how frequent the correct answer is compared to other answer candidates for the news datasets. The plots for the Children's Book Test looks very similar, however we are adding it here for completeness."
]
],
"section_name": [
"Introduction",
"Task and datasets",
"Formal Task Description",
"Datasets",
"Our Model — Attention Sum Reader",
"Formal Description",
"Model instance details",
"Related Work",
"Attentive and Impatient Readers",
"Chen et al. 2016",
"Memory Networks",
"Dynamic Entity Representation",
"Pointer Networks",
"Summary",
"Evaluation",
"Training Details",
"Evaluation Method",
"Results",
"Analysis",
"Conclusion",
"Acknowledgments",
"Dependence of accuracy on the frequency of the correct answer"
]
} | {
"answers": [
{
"annotation_id": [
"2ee9177118c141d2362df869cb0e583c04a6abbf",
"bbf364278fb9c19970687345a8ab420895a39736"
],
"answer": [
{
"evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of summary, the cloze-style question is then constructed from the subsequent (21st) sentence."
],
"extractive_spans": [
"CNN",
"Daily Mail",
"Children's Book Test"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites.",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of summary, the cloze-style question is then constructed from the subsequent (21st) sentence.",
"What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively. The ensemble models were chosen either as the top 70% of all trained models, we call this avg ensemble. Alternatively we use the following algorithm: We started with the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble."
],
"extractive_spans": [
"CNN ",
"Daily Mail",
"CBT CN and NE"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of summary, the cloze-style question is then constructed from the subsequent (21st) sentence.",
"What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"244353c653ddb1c9b020f722216ceec7852f63d9",
"5b32f4b82c57681e85d14b0506d1be1171baa707"
],
"answer": [
{
"evidence": [
"On the CNN dataset our single model with best validation accuracy achieves a test accuracy of 69.5%. The average performance of the top 20% models according to validation accuracy is 69.9% which is even 0.5% better than the single best-validation model. This shows that there were many models that performed better on test set than the best-validation model. Fusing multiple models then gives a significant further increase in accuracy on both CNN and Daily Mail datasets..",
"CBT. In named entity prediction our best single model with accuracy of 68.6% performs 2% absolute better than the MenNN with self supervision, the averaging ensemble performs 4% absolute better than the best previous result. In common noun prediction our single models is 0.4% absolute better than MenNN however the ensemble improves the performance to 69% which is 6% absolute better than MenNN."
],
"extractive_spans": [
"CNN dataset our single model with best validation accuracy achieves a test accuracy of 69.5%",
"In named entity prediction our best single model with accuracy of 68.6%"
],
"free_form_answer": "",
"highlighted_evidence": [
"On the CNN dataset our single model with best validation accuracy achieves a test accuracy of 69.5%.",
"CBT. In named entity prediction our best single model with accuracy of 68.6% performs 2% absolute better than the MenNN with self supervision, the averaging ensemble performs 4% absolute better than the best previous result."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Performance of our models on the CNN and Daily Mail datasets is summarized in Table TABREF27 , Table TABREF28 shows results on the CBT dataset. The tables also list performance of other published models that were evaluated on these datasets. Ensembles of our models set new state-of-the-art results on all evaluated datasets.",
"FLOAT SELECTED: Table 4: Results of our AS Reader on the CNN and Daily Mail datasets. Results for models marked with † are taken from (Hermann et al., 2015), results of models marked with ‡ are taken from (Hill et al., 2015). Performance of ‡models was evaluated only on CNN dataset.",
"FLOAT SELECTED: Table 5: Results of our AS Reader on the CBT datasets. Results marked with ‡ are taken from (Hill et al., 2015). (∗)Human results were collected on 10% of the test set."
],
"extractive_spans": [],
"free_form_answer": "The different AS Reader models had average test accuracy of 71,35% and AS Reader (avg ensemble) had the highest test accuracy between all tested models with 75.4%\n\nIn case of Daily Mail average was 75.55% and greedy assemble had the highest value with 77.7%\nCBT NE average was 69.65% and greedy ensemble had the highest value of 71% \n\nCBT CN had average of 65.5% and avg assemble had the highest value of 68.9%\n",
"highlighted_evidence": [
"Performance of our models on the CNN and Daily Mail datasets is summarized in Table TABREF27 , Table TABREF28 shows results on the CBT dataset. The tables also list performance of other published models that were evaluated on these datasets. Ensembles of our models set new state-of-the-art results on all evaluated datasets.",
"FLOAT SELECTED: Table 4: Results of our AS Reader on the CNN and Daily Mail datasets. Results for models marked with † are taken from (Hermann et al., 2015), results of models marked with ‡ are taken from (Hill et al., 2015). Performance of ‡models was evaluated only on CNN dataset.",
"FLOAT SELECTED: Table 5: Results of our AS Reader on the CBT datasets. Results marked with ‡ are taken from (Hill et al., 2015). (∗)Human results were collected on 10% of the test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e837c8bae07f8a534c9b53dc67d980f1f12d90a8",
"e9c42784a69e9b4a9e24b3f9457602a107b27969"
],
"answer": [
{
"evidence": [
"Several recent deep neural network architectures BIBREF1 , BIBREF3 , BIBREF7 , BIBREF12 were applied to the task of text comprehension. The last two architectures were developed independently at the same time as our work. All of these architectures use an attention mechanism that allows them to highlight places in the document that might be relevant to answering the question. We will now briefly describe these architectures and compare them to our approach.",
"Attentive and Impatient Readers were proposed in BIBREF1 . The simpler Attentive Reader is very similar to our architecture. It also uses bidirectional document and query encoders to compute an attention in a similar way we do. The more complex Impatient Reader computes attention over the document after reading every word of the query. However, empirical evaluation has shown that both models perform almost identically on the CNN and Daily Mail datasets.",
"Chen et al. 2016",
"A model presented in BIBREF7 is inspired by the Attentive Reader. One difference is that the attention weights are computed with a bilinear term instead of simple dot-product, that is INLINEFORM0 . The document embedding INLINEFORM1 is computed using a weighted sum as in the Attentive Reader, INLINEFORM2 . In the end INLINEFORM3 , where INLINEFORM4 is a new embedding function.",
"Memory Networks",
"MenNN BIBREF13 were applied to the task of text comprehension in BIBREF3 .",
"Dynamic Entity Representation",
"The Dynamic Entity Representation model BIBREF12 has a complex architecture also based on the weighted attention mechanism and max-pooling over contextual embeddings of vectors for each named entity.",
"One can also see how the task complexity varies with the type of the omitted word (named entity, common noun, verb, preposition). BIBREF3 have shown that while standard LSTM language models have human level performance on predicting verbs and prepositions, they lack behind on named entities and common nouns. In this article we therefore focus only on predicting the first two word types."
],
"extractive_spans": [
"Attentive and Impatient Readers ",
"Chen et al. 2016\n",
"MenNN",
"Dynamic Entity Representation ",
"LSTM "
],
"free_form_answer": "",
"highlighted_evidence": [
"All of these architectures use an attention mechanism that allows them to highlight places in the document that might be relevant to answering the question. We will now briefly describe these architectures and compare them to our approach.",
"Attentive and Impatient Readers were proposed in BIBREF1 . The simpler Attentive Reader is very similar to our architecture. ",
"Chen et al. 2016\nA model presented in BIBREF7 is inspired by the Attentive Reader. One difference is that the attention weights are computed with a bilinear term instead of simple dot-product, that is INLINEFORM0 ",
"Memory Networks\nMenNN BIBREF13 were applied to the task of text comprehension in BIBREF3 .",
"Dynamic Entity Representation\nThe Dynamic Entity Representation model BIBREF12 has a complex architecture also based on the weighted attention mechanism and max-pooling over contextual embeddings of vectors for each named entity.",
". BIBREF3 have shown that while standard LSTM language models have human level performance on predicting verbs and prepositions, they lack behind on named entities and common nouns. In this article we therefore focus only on predicting the first two word types."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2d08ecc2dec2313c77d7c2094776ca8956c4c5c5",
"6db3b0098a96ea86a27e367f4d0db6978abd0551",
"6f96029f9aa0e72ebeeb9383434395216ed2a342"
],
"answer": [
{
"evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively. The ensemble models were chosen either as the top 70% of all trained models, we call this avg ensemble. Alternatively we use the following algorithm: We started with the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble."
],
"extractive_spans": [
"CNN ",
"Daily Mail",
" CBT CN and NE"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section we evaluate our model on the CNN, Daily Mail and CBT datasets. We show that despite the model's simplicity its ensembles achieve state-of-the-art performance on each of these datasets."
],
"extractive_spans": [
"CNN, Daily Mail and CBT"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we evaluate our model on the CNN, Daily Mail and CBT datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze-style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”).",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of summary, the cloze-style question is then constructed from the subsequent (21st) sentence."
],
"extractive_spans": [
"CNN",
"Daily Mail",
"Children's Book Test"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first two datasets BIBREF1 were constructed from a large number of news articles from the CNN and Daily Mail websites.",
"The third dataset, the Children's Book Test (CBT) BIBREF3 , is built from books that are freely available thanks to Project Gutenberg."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Which datasets did they use to train the model?",
"What is the performance of their model?",
"What baseline do they compare against?",
"What datasets is the model evaluated on?"
],
"question_id": [
"2ca3ca39d59f448e30be6798514709be7e3c62d8",
"df7fb8e6e44c9c5af3f19dde762c75cbf2f8452f",
"20e2b517fddb0350f5099c39b16c2ca66186d09b",
"70512cc9dcd45157e40c8d1f85e82d21ade7645b"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics on the 4 data sets used to evaluate the model. CBT CN stands for CBT Common Nouns and CBT NE stands for CBT Named Entites. CBT had a fixed number of 10 options for answering each question. Statistics were taken from (Hermann et al., 2015) and the statistics provided with the CBT data set.",
"Figure 1: Structure of the model.",
"Table 6: Average duration of one epoch of training on the four datasets.",
"Table 7: Dimension of the recurrent hidden layer and of the source embedding for the best model and the range of values that we tested.",
"Table 4: Results of our AS Reader on the CNN and Daily Mail datasets. Results for models marked with † are taken from (Hermann et al., 2015), results of models marked with ‡ are taken from (Hill et al., 2015). Performance of ‡models was evaluated only on CNN dataset.",
"Table 5: Results of our AS Reader on the CBT datasets. Results marked with ‡ are taken from (Hill et al., 2015). (∗)Human results were collected on 10% of the test set.",
"Table 8: Proportion of test examples for which the top k answers proposed by the greedy ensemble included the correct answer.",
"Figure 2: Sub-figures (a) and (b) plot the test accuracy against the length of the context document (for CNN the count was multiplied by 10). The examples were split into ten buckets of equal size by their context length. Averages for each bucket are plotted on each axis. Sub-figures (c) and (d) show distributions of context lengths in the four datasets. The number of examples was multiplied by 10 for the CNN dataset.",
"Figure 3: Subfigure (a) illustrates how the model accuracy decreases with an increasing number of candidate named entities. Subfigure (b) shows the overall distribution of the number of candidate answers in the news datasets. The number of examples was multiplied by 10 for the CNN dataset.",
"Figure 4: Subfigure (a) shows the model accuracy when the correct answer is among n most frequent named entities for n ∈ [1, 10]. Subfigure (b) shows the number of test examples for which the correct answer was among n most frequent entities. The number of examples was multiplied by 10 for the CNN dataset."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"6-Table8-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png"
]
} | [
"What is the performance of their model?"
] | [
[
"1603.01547-Results-4",
"1603.01547-6-Table5-1.png",
"1603.01547-Results-3",
"1603.01547-6-Table4-1.png",
"1603.01547-Results-0"
]
] | [
"The different AS Reader models had average test accuracy of 71,35% and AS Reader (avg ensemble) had the highest test accuracy between all tested models with 75.4%\n\nIn case of Daily Mail average was 75.55% and greedy assemble had the highest value with 77.7%\nCBT NE average was 69.65% and greedy ensemble had the highest value of 71% \n\nCBT CN had average of 65.5% and avg assemble had the highest value of 68.9%\n"
] | 198 |
1912.00239 | Modeling German Verb Argument Structures: LSTMs vs. Humans | LSTMs have proven very successful at language modeling. However, it remains unclear to what extent they are able to capture complex morphosyntactic structures. In this paper, we examine whether LSTMs are sensitive to verb argument structures. We introduce a German grammaticality dataset in which ungrammatical sentences are constructed by manipulating case assignments (eg substituting nominative by accusative or dative). We find that LSTMs are better than chance in detecting incorrect argument structures and slightly worse than humans tested on the same dataset. Surprisingly, LSTMs are contaminated by heuristics not found in humans like a preference toward nominative noun phrases. In other respects they show human-similar results like biases for particular orders of case assignments. | {
"paragraphs": [
[
"Among neural networks, LSTMs BIBREF0 are commonly used for language modeling. Although new architectures BIBREF1, BIBREF2 challenge this standard, LSTMs remain competitive for language modeling BIBREF3. However, despite the success of LM LSTMs, it is not clear what makes them so effective. In particular, are representations derived through language modeling able to effectively encode syntactic structures and relations? Do they encode them in a reliable and systematic way?",
"The typical metric used to compare LMs, perplexity, is not adapted to address these questions. Perplexity measures the probability assigned to held-out data from the corpus the LM is trained on. Because the held-out and training data are typically randomly extracted from an initial corpus, they have similar statistics, which is good from a machine learning viewpoint, but bad from the viewpoint of linguistic analysis: perplexity is mostly sensitive to the most common sentence types in the initial corpus and therefore will not reflect well the behavior of the LM in the tail of the distribution. In addition, the sentences extracted from a natural corpus confound several factors: syntax, semantics, pragmatics, etc. further complicating the interpretation of a good perplexity score.",
"To circumvent this limitation, recent work has focused on using probing techniques inspired by linguistic and psycholinguistics (for instance, grammaticality or acceptability judgments, or forced choice). In addition, instead of using sentences from the training corpus, studies rely more and more on automatically constructed test sentences, which enable for a removal of the bias in the original corpus and focus on particular linguistic phenomena. Here, we will use acceptability judgments operationalized by the log probability of sentences according to the LM and sets of synthetic sentences generated from template sentences to probe for a challenging linguistic structure: verb argument structure.",
"Verb argument structure provides languages with a way to link syntactic position in a sentence (subject, direct object, etc) with semantic roles (agent, patient, etc), in other words, to determine who is doing what. It is currently unknown whether neural LMs purely trained from surface statistics are able to capture this kind of structure, or whether additional information from another modality would be needed to provide some semantic grounding.",
"Verb argument structure is typically correlated to sentence position in many languages like English. But in other languages with relatively free word order, it is indicated by morphological markers. Here, we study German, where the arguments of a verb can occur in any position (when occurring within a relative clause), and is indicated by the case of the noun phrase (nominative, accusative, etc).",
"We setup a test of argument structure representation by presenting a trained LM with carefully constructed sets of sentences that either have the right set of arguments, or abnormal sentences where one case is missing or duplicated. We use word order permutations to control for unigram and positional statistics. If the LM is able to track argument structure irrespective of word order, it should assign lower grammaticality scores (log probabilities) to the incorrect sentences as compared to the correct ones.",
"Since at the level of the sentence, we study a global rather than local syntactic phenomenon, we depart from earlier work BIBREF4, BIBREF5, BIBREF6, BIBREF7 and do not compare pairs of sentences. Rather, we compare a set of valid grammatical variations of the template to a corresponding set of grammatical violations of the template. Thus, for each template, we measure the model's ability to discriminate grammatical sentences from ungrammatical ones using receiver operating characteristic curves, or ROC curves. We also compute the area under the ROC curve, or AUC. In our results, we often report the average AUC over templates as our metric.",
"We evaluate three LMs on our dataset, the two-layer LSTM of BIBREF8 trained on German Wikipedia text, as well as n-gram baselines using the same corpus. We ask proficient German speakers to annotate our sentences for grammaticality, providing a human comparison. Since some of these sentences are rather implausible because of the permutations, we also collect human meaningfulness scores. We find that our dataset is challenging for both LMs and humans and that LMs lag behind human performance."
],
[
"Grammaticality judgments for recurrent networks have been investigated since BIBREF9, who use closely matched pairs of sentences to investigate grammatical correctness. This approach has been adopted recently to assess the abilities of RNNs, and LSTMs in particular, to capture syntactic structures. For instance, BIBREF4 and BIBREF5 use word probes in minimally different pairs of English sentences to study number agreement. To discriminate grammatical sentences from ungrammatical ones, they retrieve the probabilities of the possible morphological forms of a target word, given the probability of the previous words in the sentence. Practically, in the sentence “the boy is sleeping”, the network has detected number agreement if $\\mathbf {P}(w = is) > \\mathbf {P}(w = are)$. This methodology has also been adapted by BIBREF10 to models trained with a masked language-modeling objective. Those works find that in the absence of many detractors or complex sentence features, recent language models perform well at the number-agreement problem in English.",
"More closely related to our work, BIBREF11 use word probes to examine whether LSTMs understand Basque agreement. Like German, Basque is a morpho-syntactically rich language with relatively free word order, thus providing a challenging setting for the LM. In contrast to our work, the LM's ability to understand verb argument structure is tested on number-agreement and on suffix recovery tasks, which involve localized changes rather than whole sentence perturbations and re-orderings.",
"In parallel to work focusing on word probe probabilities, another closely related line of inquiry has investigated surprisal, the inverse log probability assigned to a specific prediction by a model. For instance, BIBREF12 and BIBREF13 examine many syntactic phenomena, including filler gap dependencies and garden path effects.",
"We depart from these approaches because our test set encompasses whole sentence variations, such as argument reordering. Word probes are therefore less apt to capture such changes. Instead, we choose to follow BIBREF6 and BIBREF7 in taking the more general approach of comparing whole sentence probabilities as our grammaticality probe. This method, which also corresponds to the sentence-level LogProb acceptability measure of BIBREF14, evaluates whether the model assigns a higher log probability to sentences which are grammatical than to sentences which are not.",
"In contrast with approaches that seek to probe language models directly, other approaches involve fine-tuning representations to a specific syntactic task using a task-specific supervision signal. For instance, BIBREF15 introduce CoLA, a binary acceptability dataset whose example sentences are taken from linguistic publications. They train a classifier on top of frozen ELMo BIBREF16 layers to assess performance at acceptability judgments. Later work BIBREF17, BIBREF18 has focused on fine-tuning an entire pre-trained model to the acceptability task, such as is done for BERT BIBREF17. Both of those paradigms do not directly evaluate syntactic ability but rather whether pre-trained representations can be effectively transferred to learn to solve specific syntax problems."
],
[
"Our test sentences were automatically generated from fifty grammatical sentences which we call templates. These templates are all constructed the same way: the main clause “wir wissen, dass...” (“we know that”), followed by a subordinate clause with a subject (nominative case), a verb in the past tense form, a direct object (accusative case) and an indirect object (dative case). For simplicity purposes, we did not use any adjective. In the Template of Figure FIGREF3, “the minister” is the subject, “that bill” the direct object, and “the Senate” the indirect object of “announced”.",
"We constructed a dataset designed to expose impossible verb argument structures by manipulating the arguments' case assignments. We introduced these changes within subordinate clauses rather than main clauses, because German subordinate clauses have a more flexible noun phrases order than main clauses. This specificity allows us to test whether models are able to capture syntactic dependencies when the arguments' positions vary.",
"In German, the syntactic role of noun phrases is indicated by the morphological form of its constituents: determiners and nouns take different suffixes, if not completely different forms, according to their case assignment. However, feminine, neutral and all plural noun phrases share common morphological forms. Thus, to avoid sentence duplicates within our dataset, all noun phrases are singular masculine."
],
[
"To control for all possible argument orders and words syntactic roles, for each template, we change (i) the positions of the three verb arguments in the subordinate clause and (ii) the case assignments of each noun group. There are three verb arguments, leading to six different position permutations. Similarly, they are three unique case assignments, leading to six possible case assignments. By generating all such permutations, we create $6 \\times 6 = 36$ grammatical sentences for each template, yielding 1800 grammatical sentences in total. In Figure FIGREF3, we show an example where only the positions of the subject and the indirect object are switched, which does not alter the meaning. We also show an example where only the case assignments of the subject and the indirect object are switched: “The Senate” becomes the subject and “the minister” the indirect object. The case permutations were done by retrieving the desired case markings (nominative, accusative or dative) from a dictionary mapping the vocabulary's nouns to their morphological forms. Case permutations change sentence meaning. In practice, some of our sentences will be implausible yet grammatical, in contrast with BIBREF6."
],
[
"We constructed ungrammatical sentences using the same templates. Briefly, we substituted one of the case assignments by another one already present in the sentence, which creates a grammatical violation: sentences contain three noun phrases and only two case assignments, one being duplicated. In Figure FIGREF3, we show how we apply this to a template sentence to create grammatical violations.",
"For each case violation, we generated 36 sentences containing a case violation from every template. Thus, from each of our 50 templates, we generated 36 valid grammatical variations and 108 ungrammatical variations. Note also that throughout the number of words in our dataset stays constant (11 words per sentence), so that log probabilities are more comparable. Overall, our dataset comprises 7,200 sentences, of which 1,800 are grammatical and 5,400 are ungrammatical."
],
[
"To generate human results for our dataset, we hire annotators proficient in German on Amazon Mechanical Turk."
],
[
"We asked Amazon Mechanical turkers to assess the sentence grammaticality on a scale from 1 to 10, where 1 means grammatically incorrect and 10 means grammatically correct. Before the task started, respondents were shown examples of grammatical sentences and ungrammatical sentences. Importantly, it was indicated that grammatical sentences were not necessarily meaningful. As an example, we translated to German Chomsky's famous quote: “Colorless green ideas sleep furiously” BIBREF19. Each respondent graded 50 sentences, with the following constraints: (i) each sentence comes from a different template, to avoid that past sentences impact future ratings; (ii) twenty-five percent of the sentences shown are grammatical, mirroring the construction of the dataset; (iii) sentences selected are randomly chosen among the 144 possibilities for each template, so that each user is exposed to a wide variety of case assignments, argument orders and grammatical violations; (iv) no sentence is annotated twice."
],
[
"For grammatical sentences only, we also conduct meaningfulness evaluations. Similarly to our grammaticality experiment, users are asked to grade 50 sentences from 1 to 10, where 1 is meaningless and 10 is meaningful. They were also shown examples of meaningful and meaningless grammaticality correct German sentences before starting the evaluations. Constraints are the same as above, except that all sentences are grammatical and that there are thus only 36 possibilities per template."
],
[
"To ensure that all annotators are proficient in German, we took the following steps: (i) we only accepted annotators from German-speaking countries; (ii) instructions are given in German only; (iii) annotators took a short German grammar test on conjugation and declination knowledge; (iv) filler sentences (easy sentences for which answers are known and obvious to proficient German speakers) are inserted throughout the annotation process to ensure annotators stay focused; (v) we remove annotators who took less than a third of the average time to complete the assignment after checking that they also underperform on our test questions."
],
[
"As noted, we do not ask humans to compare minimally differing sentences, but rather to grade individual sentences. This setup differs from earlier work such as BIBREF6 who show both sentences simultaneously and ask humans to pick the most grammatical one. This approach prevents humans from using the differences between the sentences to form a judgment on grammaticality; rather they must judge each sentence on its own. In doing so, the human setup is closer to that of language models: when we use log probability scores of LMs, we do not enable them to learn from the differences between the sentences to form a judgment.",
"In total, we collected 2,750 annotations from 55 annotators for sentence grammaticality (38% of the dataset) and 1,800 annotations from 36 annotators for sentence meaningfulness (100% of grammatical sentences). We do not have grammaticality annotations for all sentences due to a lack of proficient German annotators on Amazon Mechanical Turk. Our human results for grammaticality are computed on this subset of the dataset."
],
[
"We use the pre-trained word-level language model (German WordNLM) described and trained by BIBREF8. The model is a two-layer LSTM without attention, a hidden dimension of 1,204, and word embeddings of dimension 200 for the 50,000 most frequent words. It was trained on a corpus from German Wikipedia, totalling 819 million words. The 50,000 most-frequent words in this corpus are used as the vocabulary and embedded in 200-dimensional vector space. The model reaches a perplexity of 37.96 on this dataset. We use unigram and bigram language models that use the same corpus with Laplace smoothing as baselines. The probability of test sentences according to the language models is computed using the chain rule:",
"Each of these log probabilities can be read from the softmax outputs of the LSTM, or directly estimated in the case of the unigram and bigram models. We also tried normalizing for unigram frequency as proposed by BIBREF20 but found like BIBREF6 that it did not improve results for the LSTM."
],
[
"Figure FIGREF11 shows the distribution of the log probability scores predicted by the LSTM and the distribution of the grammaticality scores given by humans. Figure FIGREF16 presents the distributions and average of the AUC values computed per template (50 in total), both for the models' log probability scores and the human grammaticality scores. Performances are rather modest, with a mean AUC of 0.56 for the LTSM and of 0.58 for humans, compared to the chance score of 0.5 for the unigram and bigram models. As expected, the n-gram baselines perform exactly at chance, confirming that they do not represent verb argument structures and that LMs need a deeper encoding to be able capture syntax within sentences. We also notice that AUC varies relatively little across different templates for our models, indicating that the particular choice of template has little impact. For humans, the wider spread in results can be attributed partially to the fact that 55 random permutations out of the 144 permutations were annotated for each template. Therefore, it might have been easier to distinguish grammatical sentences from ungrammatical ones for some templates than others.",
"Surprisingly, humans performed only slightly better than the LSTM. We believe that this is due two factors. First, we presented the sentences in a scrambled order and asked for an absolute grammaticality judgment. It may be more difficult to put a sentence on a 1 to 10 scale than making pairwise judgments. Second, our sentences may be particularly challenging. The grammatical sentences contained both unusual argument orders and semantically odd situations, thus inciting participants to rate them low. While these factors could be expected to impact the LSTM, it is more surprising that they impact humans, despite precise instructions to rate on grammaticality rather than meaning or frequency. In addition, as can be seen in Figure FIGREF11b, some ungrammatical sentences were rated as highly grammatical by humans. We suspect that these are cases of inattention, as in our test set the distinction between grammatical and ungrammatical rest on a single word, and even a single character (the distinction between 'der' and 'den', for instance)."
],
[
"In Table TABREF18, we further investigate our grammaticality results by segregating them by case violation type (duplicate nominative, accusative or dative). While humans tend to give similar scores for each violation type, models tend to assign higher log probability scores to sentences with doubled nominatives than to grammatical sentences, leading to worse than chance performance on Nominative violations. Conversely, models tend to assign lower log probability scores to sentences with doubled datives, likely because these sentences lack either a nominative or an accusative, both of which are more frequent than dative. This leads to better than human performance on this case violation. Such behavior is probably due to the fact that German being a non pro-drop language, every verb must have a nominative case, making nominative more frequent than accusative, and that dative even rarer. This frequency bias is worse for models that are directly based on frequency, such as our unigram and bigram models. However, our LSTM is not exempt from it, confirming that RNNs rely in part on frequency cues."
],
[
"In Figure FIGREF20, we explore the effect of argument order. Despite the fact that all argument orderings should be equally valid from a grammatical perspective, we find that humans tend to favour more 'canonical' orders, with nominative-accusative-dative being the preferred order. Models also assign higher log probability scores to the canonical order compared to others. It is likely that some orders occur more frequently than others in German, thus leading to a frequency bias for both models and humans. Although sentences with shuffled argument order have the same meaning as those without shuffled order, we find a similar bias for the meaningfulness scores.",
"Interestingly, even though the case orders preferred by the LSTM correlate with those of humans, there are also subtle differences: we also find that models tend to prefer argument orders that start with dative to those that start with accusative, when the opposite is true for human grammaticality scores. The origin of such differences is unclear. Understanding it more fully would require to obtain distributional statistics on the order of such phrases in the original corpus."
],
[
"As mentioned in Section SECREF3, some of our grammatical sentences are semantically implausible though syntactically valid. This is because we create highly unlikely associations of case assignments and thematic roles when we permute the case assignments from the original sentence template. For instance, one permutation has a bill announcing a minister to the senate. Such unlikely sentences may be rejected by participants as ungrammatical even though they were specifically requested to ignore semantic plausibility. Similarly, they may affect neural models through the distributional correlates of meaningfulness: in any language corpus, a bill being an inanimate object is more likely to be an object (accusative case) than a subject (nominative case).",
"To check for the existence of such effect, we categorized the nouns in all of our sentences as animate and inanimate, and computed the human and machine scores of our grammatical sentences as a function of the association between case and animacy. Table TABREF22 shows that indeed, both humans and machines are biased by animacy-case associations: all share a preference for animate for nominative (subject) and dative (indirect object). By contrast, negative AUC values for accusative indicate that direct objects are preferred as inanimate."
],
[
"To see the impact of such biases, we re-analysed the human and machine scores by restricting the AUCs to the non-permuted sentences, i.e, the sentences whose case assignments correspond to that of the original templates. These templates were constructed to be plausible, and indeed the average human plausibility scores for these non-permuted orders of 5.33 is higher than for the permuted ones 3.61. In this analysis, we therefore include the 6 valid grammatical argument order permutations and all 108 grammatical violations for each template sentence.",
"The results are shown in Table TABREF24. As expected, the human AUC scores are higher in this restricted analysis than in the full dataset shown in Table TABREF18. Note that the model scores are also higher, which suggests that the algorithms are also sensitive to meaningfulness, probably through its effects on the distribution of case for the different nouns in the training corpus."
],
[
"In Table TABREF26, we show correlations between human judgments of grammaticality, meaningfulness and LSTM log probabilities. Unsurprisingly, all variables are positively correlated, which supports our earlier findings. More surprising is that the LSTM is more correlated with both grammaticality and meaningfulness than meaningfulness is with grammaticality. Note that meaningfulness and grammaticality have been annotated by different annotators, which might help explain this finding."
],
[
"We set up a well controlled grammaticality test for the processing of argument structure in neural language models and in humans. The results show that LSTMs are better than chance in detecting an abnormal argument structure, despite the fact that the arguments could occur in any position, due to the generally free word order of phrases in German relative clauses. The average performance of models, though, is far from 100% correct and lower than humans, and the error patterns differ markedly. Contrary to humans, neural language models are overly sensitive to frequency distribution of phrase types. For instance, they assign a higher probability to sentences containing multiple nominative phrases than a correct sentence with only one nominative phrase. This frequency bias directly reflects the frequency of nominative, accusative and dative in the language, as the same bias is found in unigram and bigram models. Similar to the conclusion reached by BIBREF21 in their investigation of the error patterns made by RNNs and humans on syntactic agreement, we find that the syntactic representations of humans and LSTMs differ in some respects.",
"Despite this difference, neural models are able to mimic the pattern of human responses for grammatical sentences. As has been noted previously, humans are not uniformly considering all grammatical sentences as grammatical, i.e, grammaticality judgments are graded BIBREF22. Humans tend to reject sentences with unusual word orders. For instance, they prefer the canonical Nominative-Accusative-Dative order over all of the others orders. A similar pattern is found in neural models, although the details differ somewhat.",
"Another point of convergence is found with regards to the association between case and semantic features: humans prefer that nominative phrases are animate, and accusative inanimate, a pattern also found in neural networks. This shows that humans have difficulties in judging grammaticality as separate from other factors like frequency and meaningfulness, especially when sentences are presented independently instead of in minimal pairs. In this respect, humans are quite comparable to neural models.",
"Overall, the difficulty of neural networks to detect incorrect argument structure as such (especially spectacular in the case of duplicate nominatives), provides us a clue that these models may not be fully able to represent such structures, above and beyond their probability distributions."
],
[
"The team's project is funded by the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL* ), Almerys (industrial chair Data Science and Security), and grants from Facebook AI Research (Research Grant), Google (Faculty Research Award), Microsoft Research (Azure Credits and Grant), and Amazon Web Service (AWS Research Credits)."
]
],
"section_name": [
"Introduction",
"Related work",
"Verb Argument Structure Dataset Construction ::: Templates",
"Verb Argument Structure Dataset Construction ::: Grammatical Sets",
"Verb Argument Structure Dataset Construction ::: Case Violation Sets",
"Methods ::: Human Evaluations",
"Methods ::: Human Evaluations ::: Sentence Grammaticality",
"Methods ::: Human Evaluations ::: Sentence Meaningfulness",
"Methods ::: Human Evaluations ::: Ensuring German Proficiency",
"Methods ::: Human Evaluations ::: Pairwise Ranking and Individual Grading",
"Methods ::: Language Models",
"Results ::: Main Classification Task",
"Results ::: Case Frequency Bias",
"Results ::: Argument Order Preferences",
"Results ::: Animacy Preferences",
"Results ::: Restricting the Analysis to Plausible Sentences",
"Results ::: Correlation between model and human ratings",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"faf09b518209f93e53ab88c6e1645457a74f53a4",
"fc0713b2869affb36672ea0d77cc1700da2e38f9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"43c9348ad04f7a60e2d66832ce5eccf753f52794",
"6410e5f42d87c3fd469127ca2c004293138e9c48"
],
"answer": [
{
"evidence": [
"In Table TABREF18, we further investigate our grammaticality results by segregating them by case violation type (duplicate nominative, accusative or dative). While humans tend to give similar scores for each violation type, models tend to assign higher log probability scores to sentences with doubled nominatives than to grammatical sentences, leading to worse than chance performance on Nominative violations. Conversely, models tend to assign lower log probability scores to sentences with doubled datives, likely because these sentences lack either a nominative or an accusative, both of which are more frequent than dative. This leads to better than human performance on this case violation. Such behavior is probably due to the fact that German being a non pro-drop language, every verb must have a nominative case, making nominative more frequent than accusative, and that dative even rarer. This frequency bias is worse for models that are directly based on frequency, such as our unigram and bigram models. However, our LSTM is not exempt from it, confirming that RNNs rely in part on frequency cues."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Conversely, models tend to assign lower log probability scores to sentences with doubled datives, likely because these sentences lack either a nominative or an accusative, both of which are more frequent than dative."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In Table TABREF18, we further investigate our grammaticality results by segregating them by case violation type (duplicate nominative, accusative or dative). While humans tend to give similar scores for each violation type, models tend to assign higher log probability scores to sentences with doubled nominatives than to grammatical sentences, leading to worse than chance performance on Nominative violations. Conversely, models tend to assign lower log probability scores to sentences with doubled datives, likely because these sentences lack either a nominative or an accusative, both of which are more frequent than dative. This leads to better than human performance on this case violation. Such behavior is probably due to the fact that German being a non pro-drop language, every verb must have a nominative case, making nominative more frequent than accusative, and that dative even rarer. This frequency bias is worse for models that are directly based on frequency, such as our unigram and bigram models. However, our LSTM is not exempt from it, confirming that RNNs rely in part on frequency cues."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Such behavior is probably due to the fact that German being a non pro-drop language, every verb must have a nominative case, making nominative more frequent than accusative, and that dative even rarer. This frequency bias is worse for models that are directly based on frequency, such as our unigram and bigram models. However, our LSTM is not exempt from it, confirming that RNNs rely in part on frequency cues."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"52a2780473d7e412a291b701a97694028bf052c5",
"87dd9eb0b55a160563bfc7edc5da79d82be44659"
],
"answer": [
{
"evidence": [
"Interestingly, even though the case orders preferred by the LSTM correlate with those of humans, there are also subtle differences: we also find that models tend to prefer argument orders that start with dative to those that start with accusative, when the opposite is true for human grammaticality scores. The origin of such differences is unclear. Understanding it more fully would require to obtain distributional statistics on the order of such phrases in the original corpus.",
"To check for the existence of such effect, we categorized the nouns in all of our sentences as animate and inanimate, and computed the human and machine scores of our grammatical sentences as a function of the association between case and animacy. Table TABREF22 shows that indeed, both humans and machines are biased by animacy-case associations: all share a preference for animate for nominative (subject) and dative (indirect object). By contrast, negative AUC values for accusative indicate that direct objects are preferred as inanimate."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Interestingly, even though the case orders preferred by the LSTM correlate with those of humans, there are also subtle differences: we also find that models tend to prefer argument orders that start with dative to those that start with accusative, when the opposite is true for human grammaticality scores.",
"Table TABREF22 shows that indeed, both humans and machines are biased by animacy-case associations: all share a preference for animate for nominative (subject) and dative (indirect object). By contrast, negative AUC values for accusative indicate that direct objects are preferred as inanimate."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To check for the existence of such effect, we categorized the nouns in all of our sentences as animate and inanimate, and computed the human and machine scores of our grammatical sentences as a function of the association between case and animacy. Table TABREF22 shows that indeed, both humans and machines are biased by animacy-case associations: all share a preference for animate for nominative (subject) and dative (indirect object). By contrast, negative AUC values for accusative indicate that direct objects are preferred as inanimate."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF22 shows that indeed, both humans and machines are biased by animacy-case associations: all share a preference for animate for nominative (subject) and dative (indirect object)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6d9cd44a21fe003a1c101c7d796cb25c8fc3bdaa",
"91ca1bc25e3e0ee073be672e508048c0a5a26b8a",
"95ebef944c07bdb64e698c3e2328a3bce2ee27de"
],
"answer": [
{
"evidence": [
"Figure FIGREF11 shows the distribution of the log probability scores predicted by the LSTM and the distribution of the grammaticality scores given by humans. Figure FIGREF16 presents the distributions and average of the AUC values computed per template (50 in total), both for the models' log probability scores and the human grammaticality scores. Performances are rather modest, with a mean AUC of 0.56 for the LTSM and of 0.58 for humans, compared to the chance score of 0.5 for the unigram and bigram models. As expected, the n-gram baselines perform exactly at chance, confirming that they do not represent verb argument structures and that LMs need a deeper encoding to be able capture syntax within sentences. We also notice that AUC varies relatively little across different templates for our models, indicating that the particular choice of template has little impact. For humans, the wider spread in results can be attributed partially to the fact that 55 random permutations out of the 144 permutations were annotated for each template. Therefore, it might have been easier to distinguish grammatical sentences from ungrammatical ones for some templates than others."
],
"extractive_spans": [
"mean AUC of 0.56 for the LTSM and of 0.58 for humans"
],
"free_form_answer": "",
"highlighted_evidence": [
"Performances are rather modest, with a mean AUC of 0.56 for the LTSM and of 0.58 for humans, compared to the chance score of 0.5 for the unigram and bigram models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF11 shows the distribution of the log probability scores predicted by the LSTM and the distribution of the grammaticality scores given by humans. Figure FIGREF16 presents the distributions and average of the AUC values computed per template (50 in total), both for the models' log probability scores and the human grammaticality scores. Performances are rather modest, with a mean AUC of 0.56 for the LTSM and of 0.58 for humans, compared to the chance score of 0.5 for the unigram and bigram models. As expected, the n-gram baselines perform exactly at chance, confirming that they do not represent verb argument structures and that LMs need a deeper encoding to be able capture syntax within sentences. We also notice that AUC varies relatively little across different templates for our models, indicating that the particular choice of template has little impact. For humans, the wider spread in results can be attributed partially to the fact that 55 random permutations out of the 144 permutations were annotated for each template. Therefore, it might have been easier to distinguish grammatical sentences from ungrammatical ones for some templates than others."
],
"extractive_spans": [],
"free_form_answer": "LTSM 0.56 AUC, humans 0.58 AUC",
"highlighted_evidence": [
"Figure FIGREF11 shows the distribution of the log probability scores predicted by the LSTM and the distribution of the grammaticality scores given by humans.",
"Performances are rather modest, with a mean AUC of 0.56 for the LTSM and of 0.58 for humans, compared to the chance score of 0.5 for the unigram and bigram models. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: The grammatical vs non grammatical AUC scores based on log probability (models) and grammaticality scores (humans), for each type of case violation (e.g: Nominative compares grammatical vs double nominative sentences). Chance level corresponds to 0.5."
],
"extractive_spans": [],
"free_form_answer": "LSTM obtains an overall score of 0.56 while humans' score is 0.58",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The grammatical vs non grammatical AUC scores based on log probability (models) and grammaticality scores (humans), for each type of case violation (e.g: Nominative compares grammatical vs double nominative sentences). Chance level corresponds to 0.5."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the percentage of human judgment agreement on the set?",
"Are the orders of case assignment biases motivated by frequency considerations?",
"Does the paper list other heuristic biases in the LSTMs?",
"What are the performances of LSTMs and humans on the task?"
],
"question_id": [
"fd556a038c36abc88a800d9d4f2cfa0aef6f5aba",
"9119fbfba84d298014d1b74e0e3d30330320002c",
"058b6e3fdbb607fa7dbfc688628b3e13e130c35a",
"5b95665d44666a1dc9e568d2471e5edf8614859f"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German",
"German",
"German"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Construction of grammatical examples by permuting the case assignment and the argument order in template sentences. For the construction of the ungrammatical examples, we doubled one of the cases, creating a declension error.",
"Figure 3: Distributions of the AUC values per template and average over templates (in bold) for models and for humans.",
"Figure 2: Distribution of log probability scores and grammaticality scores for grammatical sentences and ungrammatical sentences (a) for the LSTM and (b) for humans.",
"Table 1: The grammatical vs non grammatical AUC scores based on log probability (models) and grammaticality scores (humans), for each type of case violation (e.g: Nominative compares grammatical vs double nominative sentences). Chance level corresponds to 0.5.",
"Figure 4: Average log probability scores of LSTM against grammaticality or meaningfulness scores of humans for specific case orders on grammatical sentences.",
"Table 2: Preference for animacy on grammatical sentences computed as the ROC-AUC for the scores as function of the association between case and the animate versus inanimate status of the noun. Less than .5: preference for inanimate. More than .5: preference for animate.",
"Table 3: The grammatical vs non grammatical AUC scores based on log probability (models) and gramaticality scores (humans), restricted to the original (plausible) template sentences plus their argument order permutations, for each type of case violation.",
"Table 4: Spearman’s Correlation Coefficient between log probabilities (models) and grammaticality and meaningfulness scores (humans). gr. only: restricted to the grammatical sentences; all: all sentences."
],
"file": [
"4-Figure1-1.png",
"5-Figure3-1.png",
"6-Figure2-1.png",
"6-Table1-1.png",
"7-Figure4-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} | [
"What are the performances of LSTMs and humans on the task?"
] | [
[
"1912.00239-Results ::: Main Classification Task-0",
"1912.00239-6-Table1-1.png"
]
] | [
"LSTM obtains an overall score of 0.56 while humans' score is 0.58"
] | 199 |
2004.00809 | Mapping Languages and Demographics with Georeferenced Corpora | This paper evaluates large georeferenced corpora, taken from both web-crawled and social media sources, against ground-truth population and language-census datasets. The goal is to determine (i) which dataset best represents population demographics; (ii) in what parts of the world the datasets are most representative of actual populations; and (iii) how to weight the datasets to provide more accurate representations of underlying populations. The paper finds that the two datasets represent very different populations and that they correlate with actual populations with values of r=0.60 (social media) and r=0.49 (web-crawled). Further, Twitter data makes better predictions about the inventory of languages used in each country. | {
"paragraphs": [
[
"In recent years there has been an increasing amount of research investigating the use of unstructured geographic information, such as user-generated social media content and textual web data, for geographical analysis. Such unstructured data sources can provide information about places that is difficult to measure through traditional sensor observations. Aggregated representations of places that are extracted from textual corpora, e.g., give us insight into what people feel and think about places, potentially providing a much richer understanding of human geography BIBREF0, BIBREF1. It can also give insight into the relationships between places and human behavior BIBREF2, BIBREF3. However, a recurring issue with this kind of big data, and user-generated content in general, is the question of how representative these data sets are compared to the populations that we wish to study BIBREF4. There exists little previous empirical work to establish how representative web corpora are with respect to different geographic regions of the world (c.f. BIBREF5, BIBREF6). In this paper we describe such a computational experiment using language identification models on two global-scale corpora.",
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census.",
"When labeled with a language identification model, this data provides a representation of both (i) how much language a particular country produces, a proxy for population density and (ii) the mix of languages used in a country, a proxy for population demographics. These corpus-based representations are compared against four ground-truth baselines. First, the UN country-level population estimates BIBREF7. Second, because not all populations have equal access to internet technologies, we use per capita GDP BIBREF8 and internet-usage statistics BIBREF9 to adjust raw populations. Third, the UN country-level census aggregations are used to represent what languages are used in each country BIBREF10 and, where these are not available, the World Factbook BIBREF11 estimations are used. The goal is to measure how well corpus-based representations correspond with each of these ground-truth, survey-based representations. Thus, we are not concerned at this point if the corpus-based representations are skewed or inaccurate in particular locations. Rather, the purpose is to measure how and where these datasets are skewed as a method for evaluating and improving future data collection methods.",
"We can view this problem from two perspectives: 1) from a human geography perspective, is it possible to use global-scale corpora to understand characteristics of regional populations?, and 2) from the perspective of computational linguistics, is it possible to normalize corpora to proportionally represent diverse populations? For example, some countries (like the United States) and some languages (like English) dominate many datasets. Is it possible to systematically correct this imbalance?",
"We begin by describing the corpora and how they were collected (Section 2) and the language identification model that is used to label them with language codes (Section 3). After looking at the frequency distribution of languages across the entire dataset (Section 4), we undertake a country-level evaluation of the datasets, first against population-density baselines (Section 5) and then against language-use baselines (Section 6)."
],
[
"Data comes from two sources of digital texts: web pages from the Common Crawl and social media from Twitter. Starting with the web-crawled data, we can compare this dataset to previous georeferenced web corpora BIBREF12, BIBREF13. The basic pipeline is to process all text within $<p>$ tags, removing boilerplate content, navigation content, and noisy text. We view each web page as a document containing the remaining material. Documents are then deduplicated by site, by time, and by location.",
"Language samples are geo-located using country-specific top-level domains: the assumption is that a language sample from a web-site under the .ca domain originated from Canada. This approach to regionalization does not assume that whoever produced that language sample was born in Canada or represents a traditional Canadian dialect group. Rather, the assumption is only that the sample represents someone in Canada who is producing language data. Previous work has shown that there is a significant relationship between domain-level georeferencing and traditionally-collected linguistic data BIBREF14.",
"Some countries are not available because their top-level domains are used for other purposes (i.e., .ai, .fm, .io, .ly, .ag, .tv). Domains that do not contain geographic information are also removed from consideration (e.g., .com sites). The Common Crawl dataset covers 2014 through the end of 2017, totalling 81.5 billion web pages. As shown in Table 1, after processing this produces a corpus of 16.65 billion words. Table 1 also shows the number of countries represented in the web corpus against the number of countries in the ground-truth UN dataset and in the collected Twitter corpus. Countries may be missing from the web dataset (i) because their domains are used for a different purpose or (ii) their domains are not widely used or the country does not produce a significant amount of data on the open internet.",
"In isolation, web-crawled data provides one observation of global language use. Another common source of data used for this purpose is Twitter BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21). A spatial search is used to collect tweets from within a 50km radius of 10k cities. This search avoids biasing the selection by using language-specific keywords or hashtags. The Twitter data covers the period from May of 2017 until early 2019. This creates a corpus containing 1,066,038,000 tweets, all connected with the city from which they were collected. Because the language identification component only provides reliable predictions for samples containing at least 50 characters, the corpus is pruned to that length threshold (this removes approximately 250 million tweets). As shown in Table 1, this produces a corpus containing 4.14 billion words.",
"Each of these datasets rests on different assumptions and, as a result, is subject to different confounds. For example, using top-level domains to determine the country of origin for web pages is likely to over-represent countries like Estonia that use their TLD extensively and to under-represent countries like the United States that do not traditionally use their TLD. The Twitter dataset relies on georeferenced tweets, but not all users have GPS-enabled devices. For example, we might expect that countries with a lower per capita GDP have a lower percent of georeferenced tweets, in addition to having fewer tweets overall. The goal here is to establish a baseline of how well these corpora represent actual populations."
],
[
"Language identification (LID) is important because we need to identify as many languages as possible while using the small samples provided by social media. We use a multi-layer perceptron with character trigrams, trained using samples of 50 characters from a range of datasets. The focus here is to evaluate the performance of the LID component within the context of language mapping. Previous work on LID either required large, homogenous samples BIBREF22 or covered only about one hundred languages BIBREF23, BIBREF24. The goal here is to evaluate the LID component on as many samples as possible from as many domains as possible. The model covers 464 languages drawn from the datasets represented in Table 2; this can be seen as a compromise between the sample size required and the number of languages covered. Table 2 shows the performance of the model on a held-out test set; the number of languages for each section of the test set is shown (listed as N. Langs) as well as the number of unique test samples for that section (listed as Test Samples). The model covers 464 languages because, after enforcing a threshold of the number of samples required for the development/training/testing datsets, this is how many languages remain. It is a purely practical number that is limited by the datasets that are drawn from to create the LID model. On the other hand, only 205 (web-crawled) and 97 (Twitter) languages were present with at least 100k words; thus additional languages are unlikely to be informative.",
"The dataset used for training and evaluating the LID component contains several independent sources of data: The first, more formal, set of domains comes from a traditional LID source: religious texts. Bibles are taken from BIBREF25 and from BIBREF22 (this is listed as LTI in Table 2); translations of the Quran are taken from the Tanzil corpus BIBREF26. The second set of domains contains official government and legislative texts: the European parliament, the JRC-Acquis corpus of European Union texts, and the United Nations BIBREF26. The third set contains non-official formal texts: the EU Bookshop corpus BIBREF27, newspapers and commentary from GlobalVoices, NewsCommentary, and Setimes BIBREF26, and Wikipedia BIBREF28. The fourth set contains documentation from open source software packages: Ubuntu and Gnome BIBREF26. The fifth set mimics informal speech: OpenSubtitles covering movies and television, TED Talks BIBREF26, and Tatoeba for language-learning sentences (from tatoeba.org). The sixth set contains language-focused corpora collected to represent specific languages: the Emille corpus of Indian languages BIBREF29, the Indian Parallel Corpus BIBREF30, and the IARPA Babel project language packs, for example the Cantonese corpus BIBREF31.",
"The official Twitter LID data BIBREF32 is also used for the evaluation (note that not all samples from the original dataset are still available). Given the length constraints of the model, this considers only samples containing at least 50 characters after cleaning has been performed. These results, with an F1 of 0.96, show that the LID model can also be used on tweets containing at least 50 characters. It is important to note that the LID model is trained using samples of 50 characters and that no Twitter data was included in the training set. Thus, this result represents the case of Twitter being an out-of-sample domain. It may be the case that future work could produce a LID model capable of accurate predictions on tweets with less than 50 characters. The present model, however, has been trained and evaluated using samples of 50 characters.",
"Table 2 shows the F1 score of a single LID model that is evaluated on held-out test samples of 50 characters from each domain. This reflects the expected accuracy of the language labels applied to the types of data found in the web-crawled and social media datasets. These datasets are dominated by more widely used languages: only 205 languages are present with at least 100k words in the web-crawled dataset and only 97 in the social media dataset. This means that small minority languages are less likely to be represented here. This fixed threshold of 100k per language is a somewhat arbitrary limit; future work will consider the relative usage of a language by place (i.e., a threshold such as 5% of the language produced by a country) to avoid a geographic bias against non-Western languages."
],
[
"To what degree do these datasets represent majority languages? This is an important question because, with only language labels available, the prevalence of only a few languages will obscure important demographic information. Table 3 shows the top twenty languages (chosen from the web corpus) by their relative proportion of each dataset and, at the bottom, by their combined percent of the overall dataset. The two datasets do not agree in top languages given only the total number of words; however, these twenty languages make up a similar percent of each dataset.",
"We see that 87.9% and 80.4% of the data belongs to these twenty languages. The implication is that all the other languages make up less than 20% of both datasets. This is potentially problematic because majority languages such as English and Spanish (both very common) are used across widely different demographics. In other words, knowing that a population uses English or Spanish gives us relatively little information about that population. A different view of this is shown in Figure 1, with the distribution by percentage of the data for the top 100 languages in each dataset (not necessarily the same languages). There is a long-tail of minority languages with a relatively small representation. This trend is more extreme in the social media dataset, but it is found with the same order of magnitude in both datasets. The figure is cut off above 2.0% in order to visualize the long-tail of very infrequent languages. The biggest driver of this trend is English, accounting for 37.46% of social media and 29.96% of web data. This is the case even though both datasets have large numbers of observations from locations which are not traditionally identified as English-speaking countries, suggesting that in digital contexts these countries default to global languages which they do not use natively.",
"Does this mean that digital data cannot be used to represent non-digital language use? The purpose of this paper is to find where and when and how we can map populations using digital data in order to establish a baseline for evaluating collection methods. The relative amount of data can be related to ground-truth population numbers (without language labels), and the language labels most common to particular countries can be related to ground-truth language-use statistics."
],
[
"How well does the amount of data correspond with the population density of each country? In this section we compare the number of words in each corpus with the UN population estimates. These datasets cover 199 countries in total, although the web-crawled data only represents 166 countries and the Twitter data only represents 169 countries.",
"The Pearson correlations between different measures per country are shown in Table 4. There are five measures per country: the size of the web-crawled and social media corpora in words, the UN population estimates, population adjusted by per capita GDP, and population adjusted by access to the internet. The purpose of these different weightings is to take into consideration the fact that some countries may produce more digital text per person than others.",
"First, we notice that there is very little relationship between the two corpora ($r=0.05$). This is interesting in and of itself because, given the systematic attempt to find georeferenced texts, we would expect a fairly high correlation here. But this shows that the data comes from different places (c.f., Figures 2 and 3). Because the collection methods were designed with this purpose in mind, it is unlikely that this low correlation is caused by the methods themselves rather than reflecting the fact that these datasets represent different populations. In other words, this is a strong indication that web data and Twitter data represent two different populations regardless of variations in the collection methods employed in this study.",
"Second, GDP-weighting and internet-usage-weighting have different effects on the two datasets. When adjusted for GDP, the correlation between Twitter data and population raises from $r=0.39$ to $r=0.60$, a fairly strong relationship and higher than when adjusted by internet usage. But when population is adjusted for GDP, the correlation between web-crawled data and population lowers from $r=0.39$ to $r=0.28$. In other words, economic information does not help to weight the web data towards actual ground-truth population density.",
"For web-crawled data, internet access provides a much better population weighting ($r=0.49$). This is perhaps not surprising because the internet usage statistics are directly relevant to the production of websites. But it is surprising that general internet access is not a good predictor of Twitter usage. Overall, we see that there is a definite relationship between populations and the amount of digital text produced per country, but there are clear regional biases.",
"What countries are specifically over-represented and under-represented in the two datasets? We first find the relative share of each dataset for each country. For example, what percentage of the web-corpus is from India? This assumes the total world population and the total corpus size as constants and finds the share of that total from each country. We then subtract the population estimates from the corpus-based estimates. In other words, we first normalize each representation (corpus size and population) and then find the difference between the normalized measures. This allows us to take into account the very different counts (words vs. persons).",
"If the result is negative, then a particular country is under-represented. For example, the share of the corpus from India has values of -0.1730 (CC) and -0.1421 (TW). This means that language from India is under-represented given what we would expect its population to produce. On the other hand, if the result is positive, then a particular country is over-represented. For example, Estonia is over-represented in the web-crawled data (0.0290) as is Australia in the Twitter data (0.0226) These numbers mean that there is 2.9% more language data from Estonia on the web than expected given the population of Estonia; and there is 17.3% less language data from India on the web than expected given the population of India.",
"Countries are shown by their representation in Twitter (Figure 2) and the web corpus (Figure 3), with red indicating over-representation: there is more corpus data than population size would predict. The imbalance between Twitter data and population is caused by a clear over-representation of the US, Canada, western Europe, Russia, and South America. But the imbalance between web-crawled data and population has a very different geographic pattern: there is less extreme over-representation but more under-representation. Specifically, under-representation is apparent in Africa and Southeast Asia.",
"The distribution of language data in Figures 2 and 3 raises an important distinction between types of users: locals vs. non-locals. For example, from internet usage statistics we know that many countries in Africa have less access to web-sites and thus produce much less web-crawled data. This is reflected in Figure 3. But many African countries are over-represented in Twitter data. Are these Twitter users members of the local populations or do they represent visitors? Note that Figure 2 does not reflect the popularity of Twitter as a platform because we start by normalizing the Twitter output for each country against the total Twitter usage. The over-represented countries in Figure 2, then, represent places where Twitter data is produced at a higher rate than expected. It has nothing to do with the relative popularity of the platform (e.g., Twitter vs. web pages)."
],
[
"Can we use georeferenced corpora to determine what languages are used in a particular country? We use as ground-truth the UN aggregated census data for each country and, in countries where this is not available, fall back on the CIA World Factbook data. Instead of trying to match up exactly how much of the population uses a specific language, we instead say that a language is used in a country if at least 5% of the observation is in that language. For example, if over 5% of the population in India uses Hindi, then we expect to see Hindi make up at least 5% of the corpora from India. This threshold allows us to evaluate the corpora without expecting that they will predict precisely the estimated figures of speakers per language. If there are ten languages in India that are used by over 5% of the population, then we expect all ten languages to be present in the corpora from India.",
"Figures 4 and 5 show the true positive rate: what percent of the actual census-based languages for each country are found using text data? Figures 6 and 7, on the other hand, show the false positive rate: how many languages do the text datasets say are used in a country but are not found in census data? These are two simple methods for comparing the relationship between the corpora and the underlying populations. If we predicted that any language that makes up at least 5% of the corpus from a country is, in fact, used by the population of that country, how often would be correct? There are many countries for which these two ground-truth datasets have no language information. For example, the UN language dataset has no information for Brazil. The ground-truth for language-use is much more sparse than that for population because many countries have no recent and reliable ground-truth data for the languages used by their populations. This lack of ground-truth data is not a problem. Rather, it is the motivation: if we can correctly predict language use in countries where we do have ground-truth, then we can use these methods to study countries that are currently unrepresented.",
"In both Figures 4 and 5 a darker red indicates a higher number of languages from a census being found in the respective corpora. In many cases, the two corpora agree in which languages they predict to be used in each country: Europe, those parts of Africa for which there is ground-truth data, and South America. But Twitter provides a better representation of North America and Oceania. One factor that is disguised in these figures is that many countries have only a few languages, so that a high true positive rate for a country could reflect only one or two languages. For example, both English and Spanish are very common on Twitter (c.f. Table 3), so that any country which predominantly uses these two languages will have a good representation by default.",
"We can also think about the false positive rate: what languages do the corpora find that are not contained in the census-based ground-truth? For example, if English and Spanish are used in a country on Twitter but not reflected on the census, this is a false positive. As shown in Figure 6, the web-crawled corpus has very few false positive outside of Russia and eastern Europe. The situation on Twitter is similar: most false positives are in Russia and Eastern Europe, but in Twitter there are also over-predicted languages in the US and Canada, South Africa, France, India, and Australia. This is important because it shows that relying on Twitter alone would indicate that there are more diverse language speaking populations in these countries. As shown in Table 1, Eastern Europe accounts for 2.4% of the world's population but 27.4% of the web corpus; this explains the region's general false positive rate. For Russia, on the other hand, which is not included in Eastern Europe, the false positive rate cannot be explained in reference to general over-representation. In this case, the false positives are other European languages: French, German, Spanish, Italian. More research is needed to distinguish between immigration, tourism, and business as alternate sources of false positive languages appearing in digital data sets."
],
[
"Analyses and models based on digital texts, especially from Twitter, often come with uncertainty about the underlying populations that those texts represent. This paper has systematically collected Twitter and web-data from locations around the world without language-specific searches that would bias the collection. The purpose is to understand how well these data sets correspond with what we know about global populations from ground-truth sources, providing a method for evaluating different data collection techniques.",
"The first important finding is that patterns from Twitter and web-crawled data diverge significantly in their representation of the world's population. This simply reflects the fact that data drawn from Twitter and web pages will likely represent people from different places. Why? We have also seen that Twitter data matches populations better when population numbers are weighted by GDP and worse when weighted by internet-usage statistics. This implies that Twitter as a platform represents more wealthy populations than general web-crawled data. An alternate interpretation is that the Twitter collection here is based on urban areas, which tend to have more wealthy populations. Would the same bias be found with a rural-centered collection procedure? That is a secondary problem in this context because the goal is to develop ground-truth population-centered baselines that could be used to evaluate different Twitter collection methods.",
"The second important finding is that, given what ground-truth language-use data is available, there are in general very few false positives: cases where the corpora suggest a language is frequently used in a country but census-based data does not. While uncommon, there are more false positives in Twitter data. This is significant because it means that, in general, these corpora do not predict language use that is not actually present.",
"But the third important finding is that, given what ground-truth language-use data is available, there remain a number of countries where these corpora do not represent all the language produced by the local populations: not all languages from censuses are found in digital texts. In this case Twitter has fewer missing languages."
]
],
"section_name": [
"Introduction",
"Collecting Global Corpora",
"Language Identification",
"Language Distribution",
"Population Density",
"Population Demographics",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"9b5c52ac181c7393d7c19483bda0fd89e5053c46",
"c7634c4f17ff6b02d7295b828d65a6e0878acdf3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The first important finding is that patterns from Twitter and web-crawled data diverge significantly in their representation of the world's population. This simply reflects the fact that data drawn from Twitter and web pages will likely represent people from different places. Why? We have also seen that Twitter data matches populations better when population numbers are weighted by GDP and worse when weighted by internet-usage statistics. This implies that Twitter as a platform represents more wealthy populations than general web-crawled data. An alternate interpretation is that the Twitter collection here is based on urban areas, which tend to have more wealthy populations. Would the same bias be found with a rural-centered collection procedure? That is a secondary problem in this context because the goal is to develop ground-truth population-centered baselines that could be used to evaluate different Twitter collection methods.",
"The second important finding is that, given what ground-truth language-use data is available, there are in general very few false positives: cases where the corpora suggest a language is frequently used in a country but census-based data does not. While uncommon, there are more false positives in Twitter data. This is significant because it means that, in general, these corpora do not predict language use that is not actually present.",
"But the third important finding is that, given what ground-truth language-use data is available, there remain a number of countries where these corpora do not represent all the language produced by the local populations: not all languages from censuses are found in digital texts. In this case Twitter has fewer missing languages."
],
"extractive_spans": [],
"free_form_answer": "Twitter data has fewer missing languages than what census-based data contains because it matches populations better when they are weighting by GDP",
"highlighted_evidence": [
"The first important finding is that patterns from Twitter and web-crawled data diverge significantly in their representation of the world's population. This simply reflects the fact that data drawn from Twitter and web pages will likely represent people from different places. Why? We have also seen that Twitter data matches populations better when population numbers are weighted by GDP and worse when weighted by internet-usage statistics. This implies that Twitter as a platform represents more wealthy populations than general web-crawled data. An alternate interpretation is that the Twitter collection here is based on urban areas, which tend to have more wealthy populations. Would the same bias be found with a rural-centered collection procedure? That is a secondary problem in this context because the goal is to develop ground-truth population-centered baselines that could be used to evaluate different Twitter collection methods.\n\nThe second important finding is that, given what ground-truth language-use data is available, there are in general very few false positives: cases where the corpora suggest a language is frequently used in a country but census-based data does not. While uncommon, there are more false positives in Twitter data. This is significant because it means that, in general, these corpora do not predict language use that is not actually present.\n\nBut the third important finding is that, given what ground-truth language-use data is available, there remain a number of countries where these corpora do not represent all the language produced by the local populations: not all languages from censuses are found in digital texts. In this case Twitter has fewer missing languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"615f232faa73de7c5722a5d7028e431d66ab4f88",
"b7798ced92e20c6cb5755c2499490dc4049a3aa9",
"df3b6ac16c7b67b64f62c508f1a62669e9f56602"
],
"answer": [
{
"evidence": [
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census."
],
"extractive_spans": [
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census."
],
"extractive_spans": [
"Twitter "
],
"free_form_answer": "",
"highlighted_evidence": [
"To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census."
],
"extractive_spans": [
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"6e4454ff78bbbc399252db3e9a67a9027b981a66",
"8a36900152191580561ddbf6d3da596a321cb796"
],
"answer": [
{
"evidence": [
"Data comes from two sources of digital texts: web pages from the Common Crawl and social media from Twitter. Starting with the web-crawled data, we can compare this dataset to previous georeferenced web corpora BIBREF12, BIBREF13. The basic pipeline is to process all text within $<p>$ tags, removing boilerplate content, navigation content, and noisy text. We view each web page as a document containing the remaining material. Documents are then deduplicated by site, by time, and by location.",
"Some countries are not available because their top-level domains are used for other purposes (i.e., .ai, .fm, .io, .ly, .ag, .tv). Domains that do not contain geographic information are also removed from consideration (e.g., .com sites). The Common Crawl dataset covers 2014 through the end of 2017, totalling 81.5 billion web pages. As shown in Table 1, after processing this produces a corpus of 16.65 billion words. Table 1 also shows the number of countries represented in the web corpus against the number of countries in the ground-truth UN dataset and in the collected Twitter corpus. Countries may be missing from the web dataset (i) because their domains are used for a different purpose or (ii) their domains are not widely used or the country does not produce a significant amount of data on the open internet."
],
"extractive_spans": [],
"free_form_answer": "81.5 billion web pages covered in Common Crawl dataset",
"highlighted_evidence": [
"Data comes from two sources of digital texts: web pages from the Common Crawl and social media from Twitter.",
"The Common Crawl dataset covers 2014 through the end of 2017, totalling 81.5 billion web pages. As shown in Table 1, after processing this produces a corpus of 16.65 billion words."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"How well does language data represent both regional population densities and the social characteristics of regional populations? To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type information that is derived from these datasets, information which traditionally has been collected using survey-instruments as part of a census."
],
"extractive_spans": [
"web-crawled data from the Common Crawl"
],
"free_form_answer": "",
"highlighted_evidence": [
"To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d11f00885172e2b25d38cfc7279b8edc02b5108f",
"e58ce8aed4541fdcf15b501c9bee52e3cc368596"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Most Common Languages by Frequency Rank and Percent of Corpora"
],
"extractive_spans": [],
"free_form_answer": "English, Spanish, Russian, Serbo-Croatian, Mandarin, German, French, Slovenian, Portuguese, Finnish, Bulgarian, Arabic, Indonesian, Latvian, Estonian, Slovak, Azerbaijani, Romanina, Icelandic, Italian, among others.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Most Common Languages by Frequency Rank and Percent of Corpora"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they authors offer a hypothesis for why Twitter data makes better predictions about the inventory of languages used in each country?",
"What social media platforms are represented?",
"Which websites were used in the web crawl?",
"What countries and languages are represented in the datasets?"
],
"question_id": [
"b9686a168366aafbab1737df426e031ad74a6284",
"740cc392c0c8bfadfe6b3a60c0be635c03e17f2a",
"845bdcd900c0f96b2ae091d086fb1ab8bb1063f0",
"8d1b6c88f06ee195d75af32ede85dbd6477c8497"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Relative Size of Georeferenced Corpora by Region with Population Baseline",
"Table 2: Language Identification Performance Across Domains for Samples of 50 characters",
"Table 3: Most Common Languages by Frequency Rank and Percent of Corpora",
"Figure 1: Distribution of Top 100 Languages By Percentage",
"Table 4: Correlation Between Corpus Size and Population",
"Figure 2: Representation of Twitter Data (Red = over-represented i.r.t population)",
"Figure 3: Representation of Web-Crawled Data (Red = over-represented i.r.t population)",
"Figure 4: Web-Crawled Language Inventory in Relation to Ground-Truth",
"Figure 5: Twitter Language Inventory in Relation to Ground-Truth",
"Figure 6: False Positive Languages By Country for Web-Crawled Corpus",
"Figure 7: False Positive Languages By Country for Twitter Corpus"
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Figure1-1.png",
"7-Table4-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png",
"10-Figure4-1.png",
"10-Figure5-1.png",
"12-Figure6-1.png",
"12-Figure7-1.png"
]
} | [
"Do they authors offer a hypothesis for why Twitter data makes better predictions about the inventory of languages used in each country?",
"Which websites were used in the web crawl?",
"What countries and languages are represented in the datasets?"
] | [
[
"2004.00809-Discussion-3",
"2004.00809-Discussion-2",
"2004.00809-Discussion-1"
],
[
"2004.00809-Collecting Global Corpora-0",
"2004.00809-Collecting Global Corpora-2",
"2004.00809-Introduction-1"
],
[
"2004.00809-6-Table3-1.png"
]
] | [
"Twitter data has fewer missing languages than what census-based data contains because it matches populations better when they are weighting by GDP",
"81.5 billion web pages covered in Common Crawl dataset",
"English, Spanish, Russian, Serbo-Croatian, Mandarin, German, French, Slovenian, Portuguese, Finnish, Bulgarian, Arabic, Indonesian, Latvian, Estonian, Slovak, Azerbaijani, Romanina, Icelandic, Italian, among others."
] | 200 |
1903.10318 | Fine-tune BERT for Extractive Summarization | BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performed system by 1.65 on ROUGE-L. The codes to reproduce our results are available at https://github.com/nlpyang/BertSum | {
"paragraphs": [
[
"Single-document summarization is the task of automatically generating a shorter version of a document while retaining its most important information. The task has received much attention in the natural language processing community due to its potential for various information access applications. Examples include tools which digest textual content (e.g., news, social media, reviews), answer questions, or provide recommendations.",
"The task is often divided into two paradigms, abstractive summarization and extractive summarization. In abstractive summarization, target summaries contains words or phrases that were not in the original text and usually require various text rewriting operations to generate, while extractive approaches form summaries by copying and concatenating the most important spans (usually sentences) in a document. In this paper, we focus on extractive summarization.",
"Although many neural models have been proposed for extractive summarization recently BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , the improvement on automatic metrics like ROUGE has reached a bottleneck due to the complexity of the task. In this paper, we argue that, BERT BIBREF0 , with its pre-training on a huge dataset and the powerful architecture for learning complex features, can further boost the performance of extractive summarization .",
"In this paper, we focus on designing different variants of using BERT on the extractive summarization task and showing their results on CNN/Dailymail and NYT datasets. We found that a flat architecture with inter-sentence Transformer layers performs the best, achieving the state-of-the-art results on this task."
],
[
"Let $d$ denote a document containing several sentences $[sent_1, sent_2, \\cdots , sent_m]$ , where $sent_i$ is the $i$ -th sentence in the document. Extractive summarization can be defined as the task of assigning a label $y_i \\in \\lbrace 0, 1\\rbrace $ to each $sent_i$ , indicating whether the sentence should be included in the summary. It is assumed that summary sentences represent the most important content of the document."
],
[
"To use BERT for extractive summarization, we require it to output the representation for each sentence. However, since BERT is trained as a masked-language model, the output vectors are grounded to tokens instead of sentences. Meanwhile, although BERT has segmentation embeddings for indicating different sentences, it only has two labels (sentence A or sentence B), instead of multiple sentences as in extractive summarization. Therefore, we modify the input sequence and embeddings of BERT to make it possible for extracting summaries.",
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.",
"We use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we will assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for $[sent_1, sent_2, sent_3, sent_4, sent_5]$ we will assign $[E_A, E_B, E_A,E_B, E_A]$ .",
"The vector $T_i$ which is the vector of the $i$ -th [CLS] symbol from the top BERT layer will be used as the representation for $sent_i$ ."
],
[
"After obtaining the sentence vectors from BERT, we build several summarization-specific layers stacked on top of the BERT outputs, to capture document-level features for extracting summaries. For each sentence $sent_i$ , we will calculate the final predicted score $\\hat{Y}_i$ . The loss of the whole model is the Binary Classification Entropy of $\\hat{Y}_i$ against gold label $Y_i$ . These summarization layers are jointly fine-tuned with BERT.",
"Like in the original BERT paper, the Simple Classifier only adds a linear layer on the BERT outputs and use a sigmoid function to get the predicted score: ",
"$$\\hat{Y}_i = \\sigma (W_oT_i+b_o)$$ (Eq. 7) ",
"where $\\sigma $ is the Sigmoid function.",
"Instead of a simple sigmoid classifier, Inter-sentence Transformer applies more Transformer layers only on sentence representations, extracting document-level features focusing on summarization tasks from the BERT outputs: ",
"$$\\tilde{h}^l=\\mathrm {LN}(h^{l-1}+\\mathrm {MHAtt}(h^{l-1}))\\\\\nh^l=\\mathrm {LN}(\\tilde{h}^l+\\mathrm {FFN}(\\tilde{h}^l))$$ (Eq. 9) ",
"where $h^0=\\mathrm {PosEmb}(T)$ and $T$ are the sentence vectors output by BERT, $\\mathrm {PosEmb}$ is the function of adding positional embeddings (indicating the position of each sentence) to $T$ ; $\\mathrm {LN}$ is the layer normalization operation BIBREF8 ; $\\mathrm {MHAtt}$ is the multi-head attention operation BIBREF1 ; the superscript $l$ indicates the depth of the stacked layer.",
"The final output layer is still a sigmoid classifier: ",
"$$\\hat{Y}_i = \\sigma (W_oh_i^L+b_o)$$ (Eq. 10) ",
"where $h^L$ is the vector for $sent_i$ from the top layer (the $L$ -th layer ) of the Transformer. In experiments, we implemented Transformers with $L=1, 2, 3$ and found Transformer with 2 layers performs the best.",
"Although the Transformer model achieved great results on several tasks, there are evidence that Recurrent Neural Networks still have their advantages, especially when combining with techniques in Transformer BIBREF9 . Therefore, we apply an LSTM layer over the BERT outputs to learn summarization-specific features.",
"To stabilize the training, pergate layer normalization BIBREF8 is applied within each LSTM cell. At time step $i$ , the input to the LSTM layer is the BERT output $T_i$ , and the output is calculated as: ",
"$$\\left(\n\\begin{tabular}{c}\nF_i \\\\\nI_i\\\\\nO_i\\\\\nG_i\n\\end{tabular}\n\\right)=\\mathrm {LN}_h(W_hh_{i-1})+\\mathrm {LN}_x(W_xT_i)\\\\\n{\\begin{@align}{1}{-1}\n\\nonumber C_i =&~\\sigma (F_i)\\odot C_{i-1}\\\\\n&+\\sigma (I_i)\\odot \\mathrm {tanh}(G_{i-1})\\\\\nh_i = &\\sigma (O_t)\\odot \\mathrm {tanh}(\\mathrm {LN}_c(C_t))\\end{@align}}$$ (Eq. 12) ",
"where $F_i, I_i, O_i$ are forget gates, input gates, output gates; $G_i$ is the hidden vector and $C_i$ is the memory vector; $h_i$ is the output vector; $\\mathrm {LN}_h, \\mathrm {LN}_x, \\mathrm {LN}_c$ are there difference layer normalization operations; Bias terms are not shown.",
"The final output layer is also a sigmoid classifier: ",
"$$\\hat{Y}_i = \\sigma (W_oh_i+b_o)$$ (Eq. 13) "
],
[
"In this section we present our implementation, describe the summarization datasets and our evaluation protocol, and analyze our results."
],
[
"We use PyTorch, OpenNMT BIBREF10 and the `bert-base-uncased' version of BERT to implement the model. BERT and summarization layers are jointly fine-tuned. Adam with $\\beta _1=0.9$ , $\\beta _2=0.999$ is used for fine-tuning. Learning rate schedule is following BIBREF1 with warming-up on first 10,000 steps: ",
"$$\\nonumber lr = 2e^{-3}\\cdot min(step^{-0.5}, step \\cdot warmup^{-1.5})$$ (Eq. 17) ",
"All models are trained for 50,000 steps on 3 GPUs (GTX 1080 Ti) with gradient accumulation per two steps, which makes the batch size approximately equal to 36. Model checkpoints are saved and evaluated on the validation set every 1,000 steps. We select the top-3 checkpoints based on their evaluation losses on the validations set, and report the averaged results on the test set.",
"When predicting summaries for a new document, we first use the models to obtain the score for each sentence. We then rank these sentences by the scores from higher to lower, and select the top-3 sentences as the summary.",
"During the predicting process, Trigram Blocking is used to reduce redundancy. Given selected summary $S$ and a candidate sentence $c$ , we will skip $c$ is there exists a trigram overlapping between $c$ and $S$ . This is similar to the Maximal Marginal Relevance (MMR) BIBREF11 but much simpler."
],
[
"We evaluated on two benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF12 and the New York Times Annotated Corpus (NYT; BIBREF13 ). The CNN/DailyMail dataset contains news articles and associated highlights, i.e., a few bullet points giving a brief overview of the article. We used the standard splits of BIBREF12 for training, validation, and testing (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents). We did not anonymize entities. We first split sentences by CoreNLP and pre-process the dataset following methods in BIBREF14 .",
"The NYT dataset contains 110,540 articles with abstractive summaries. Following BIBREF15 , we split these into 100,834 training and 9,706 test examples, based on date of publication (test is all articles published on January 1, 2007 or later). We took 4,000 examples from the training set as the validation set. We also followed their filtering procedure, documents with summaries that are shorter than 50 words were removed from the raw dataset. The filtered test set (NYT50) includes 3,452 test examples. We first split sentences by CoreNLP and pre-process the dataset following methods in BIBREF15 .",
"Both datasets contain abstractive gold summaries, which are not readily suited to training extractive summarization models. A greedy algorithm was used to generate an oracle summary for each document. The algorithm greedily select sentences which can maximize the ROUGE scores as the oracle sentences. We assigned label 1 to sentences selected in the oracle summary and 0 otherwise."
],
[
"The experimental results on CNN/Dailymail datasets are shown in Table 1. For comparison, we implement a non-pretrained Transformer baseline which uses the same architecture as BERT, but with smaller parameters. It is randomly initialized and only trained on the summarization task. The Transformer baseline has 6 layers, the hidden size is 512 and the feed-forward filter size is 2048. The model is trained with same settings following BIBREF1 . We also compare our model with several previously proposed systems.",
"As illustrated in the table, all BERT-based models outperformed previous state-of-the-art models by a large margin. Bertsum with Transformer achieved the best performance on all three metrics. The Bertsum with LSTM model does not have an obvious influence on the summarization performance compared to the Classifier model.",
"Ablation studies are conducted to show the contribution of different components of Bertsum. The results are shown in in Table 2. Interval segments increase the performance of base model. Trigram blocking is able to greatly improve the summarization results. This is consistent to previous conclusions that a sequential extractive decoder is helpful to generate more informative summaries. However, here we use the trigram blocking as a simple but robust alternative.",
"The experimental results on NYT datasets are shown in Table 3. Different from CNN/Dailymail, we use the limited-length recall evaluation, following BIBREF15 . We truncate the predicted summaries to the lengths of the gold summaries and evaluate summarization quality with ROUGE Recall. Compared baselines are (1) First- $k$ words, which is a simple baseline by extracting first $k$ words of the input article; (2) Full is the best-performed extractive model in BIBREF15 ; (3) Deep Reinforced BIBREF18 is an abstractive model, using reinforce learning and encoder-decoder structure. The Bertsum+Classifier can achieve the state-of-the-art results on this dataset."
],
[
"In this paper, we explored how to use BERT for extractive summarization. We proposed the Bertsum model and tried several summarization layers can be applied with BERT. We did experiments on two large-scale datasets and found the Bertsum with inter-sentence Transformer layers can achieve the best performance."
]
],
"section_name": [
"Introduction",
"Methodology",
"Extractive Summarization with BERT",
"Fine-tuning with Summarization Layers",
"Experiments",
"Implementation Details",
"Summarization Datasets",
"Experimental Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"45daa172fe7291f7f93992e3462365da49e283e0",
"55fa7fa6fdbda328b4d9aa8b3b3301a0f90fa5aa",
"aeafed360c09e012c7c9899b486d36d9c1d56ea6",
"d8483b70443aa7b7b972e90c082c1a2e5ea8cde6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "they also use ROUGE-1 and ROUGE-2",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers.",
"FLOAT SELECTED: Table 3: Test set results on the NYT50 dataset using ROUGE Recall. The predicted summary are truncated to the length of the gold-standard summary. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "Rouge-1, Rouge-2, Rouge Recall, Rouge F1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers.",
"FLOAT SELECTED: Table 3: Test set results on the NYT50 dataset using ROUGE Recall. The predicted summary are truncated to the length of the gold-standard summary. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "ROUGE-1 and ROUGE-2",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "ROUGE-1 and ROUGE-2",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"eca216170c00be9528a4f86abcb3ffe7115a9be2",
"c7d4a630661cd719ea504dba56393f78278b296b",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"14c1abfa111c2d4c00974c130e386ed8c7e7dee3",
"95f1a9d2f2e54daa358c1a7ef9ad5b103e7e38f4"
],
"answer": [
{
"evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.",
"FLOAT SELECTED: Figure 1: The overview architecture of the BERTSUM model."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.",
"FLOAT SELECTED: Figure 1: The overview architecture of the BERTSUM model."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol."
],
"extractive_spans": [],
"free_form_answer": "Together",
"highlighted_evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"eca216170c00be9528a4f86abcb3ffe7115a9be2",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"01d5dbcb4ff925897fab9c38736ec210cd3a09b6",
"068c03662fe67d31e423e152d3b409fb76d1525c"
],
"answer": [
{
"evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.",
"We use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we will assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for $[sent_1, sent_2, sent_3, sent_4, sent_5]$ we will assign $[E_A, E_B, E_A,E_B, E_A]$ ."
],
"extractive_spans": [
"insert a [CLS] token before each sentence and a [SEP] token after each sentence",
"use interval segment embeddings to distinguish multiple sentences within a document"
],
"free_form_answer": "",
"highlighted_evidence": [
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. ",
"We use interval segment embeddings to distinguish multiple sentences within a document. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 1: The overview architecture of the BERTSUM model.",
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.",
"We use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we will assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for $[sent_1, sent_2, sent_3, sent_4, sent_5]$ we will assign $[E_A, E_B, E_A,E_B, E_A]$ .",
"The vector $T_i$ which is the vector of the $i$ -th [CLS] symbol from the top BERT layer will be used as the representation for $sent_i$ ."
],
"extractive_spans": [
"interval segment embeddings to distinguish multiple sentences within a document"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: The overview architecture of the BERTSUM model.",
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.\r\n\r\nWe use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we will assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for $[sent_1, sent_2, sent_3, sent_4, sent_5]$ we will assign $[E_A, E_B, E_A,E_B, E_A]$ .\r\n\r\nThe vector $T_i$ which is the vector of the $i$ -th [CLS] symbol from the top BERT layer will be used as the representation for $sent_i$ .",
"As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence. In vanilla BERT, The [CLS] is used as a symbol to aggregate features from one sentence or a pair of sentences. We modify the model by using multiple [CLS] symbols to get features for sentences ascending the symbol.\r\n\r\nWe use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we will assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for $[sent_1, sent_2, sent_3, sent_4, sent_5]$ we will assign $[E_A, E_B, E_A,E_B, E_A]$ .\r\n\r\nThe vector $T_i$ which is the vector of the $i$ -th [CLS] symbol from the top BERT layer will be used as the representation for $sent_i$ ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
},
{
"annotation_id": [
"16b43354de220f20cbbb359be59ce1988ba7451f",
"2b28dbef04f7a9c3721ed155776841accd2df5af"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "37.17 for the baseline model using a non-pretrained Transformer",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "37.17",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
},
{
"annotation_id": [
"50dad8dd221dbb488013accca7623a4a4b09b500"
],
"answer": [
{
"evidence": [
"The experimental results on CNN/Dailymail datasets are shown in Table 1. For comparison, we implement a non-pretrained Transformer baseline which uses the same architecture as BERT, but with smaller parameters. It is randomly initialized and only trained on the summarization task. The Transformer baseline has 6 layers, the hidden size is 512 and the feed-forward filter size is 2048. The model is trained with same settings following BIBREF1 . We also compare our model with several previously proposed systems."
],
"extractive_spans": [
"non-pretrained Transformer baseline "
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, we implement a non-pretrained Transformer baseline which uses the same architecture as BERT, but with smaller parameters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"two",
"two"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"yes",
"yes"
],
"question": [
"What other evaluation metrics did they use other than ROUGE-L??",
"Do they encode sentences separately or together?",
"How do they use BERT to encode the whole text?",
"What is the ROUGE-L score of baseline method?",
"Which is the baseline method?"
],
"question_id": [
"bc05503eef25c732f1785e29d59b6022f12ba094",
"a6603305f4fd3dd0010ac31243c40999a116537e",
"2ba4477d597b1fd123d14be07a7780ccb5c4819b",
"027814f3a879a6c7852e033f9d99519b8729e444",
"00df1ff914956d4d23299d02fd44e4c985bb61fa"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"BERT summarization",
"BERT summarization",
"BERT summarization",
"bert",
"bert"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The overview architecture of the BERTSUM model.",
"Table 1: Test set results on the CNN/DailyMail dataset using ROUGE F1. Results with ∗ mark are taken from the corresponding papers.",
"Table 2: Results of ablation studies of BERTSUM on CNN/Dailymail test set using ROUGE F1 (R-1 and R2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence).",
"Table 3: Test set results on the NYT50 dataset using ROUGE Recall. The predicted summary are truncated to the length of the gold-standard summary. Results with ∗ mark are taken from the corresponding papers."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"What other evaluation metrics did they use other than ROUGE-L??",
"Do they encode sentences separately or together?",
"What is the ROUGE-L score of baseline method?"
] | [
[
"1903.10318-5-Table3-1.png",
"1903.10318-4-Table1-1.png"
],
[
"1903.10318-Extractive Summarization with BERT-1",
"1903.10318-2-Figure1-1.png"
],
[
"1903.10318-4-Table1-1.png"
]
] | [
"ROUGE-1 and ROUGE-2",
"Together",
"37.17"
] | 201 |
1706.02427 | Content-Based Table Retrieval for Web Queries | Understanding the connections between unstructured text and semi-structured table is an important yet neglected problem in natural language processing. In this work, we focus on content-based table retrieval. Given a query, the task is to find the most relevant table from a collection of tables. Further progress towards improving this area requires powerful models of semantic matching and richer training and evaluation resources. To remedy this, we present a ranking based approach, and implement both carefully designed features and neural network architectures to measure the relevance between a query and the content of a table. Furthermore, we release an open-domain dataset that includes 21,113 web queries for 273,816 tables. We conduct comprehensive experiments on both real world and synthetic datasets. Results verify the effectiveness of our approach and present the challenges for this task. | {
"paragraphs": [
[
"Table is a special and valuable information that could be found almost everywhere from the Internet. We target at the task of content-based table retrieval in this work. Given a query, the task is to find the most relevant table from a collection of tables. Table retrieval is of great importance for both natural language processing and information retrieval. On one hand, it could improve existing information retrieval systems. The well-organized information from table, such as product comparison from different aspects and flights between two specific cities, could be used to directly respond to web queries. On the other hand, the retrieved table could be used as the input for question answering BIBREF0 .",
"Unlike existing studies in database community BIBREF1 , BIBREF2 that utilize surrounding text of a table or pagerank score of a web page, we focus on making a thorough exploration of table content in this work. We believe that content-based table retrieval has the following challenges. The first challenge is how to effectively represent a table, which is semi-structured and includes many aspects such as headers, cells and caption. The second challenge is how to build a robust model that measures the relevance between an unstructured natural language query and a semi-structured table. Table retrieval could be viewed as a multi-modal task because the query and the table are of different forms. Moreover, to the best of our knowledge, there is no publicly available dataset for table retrieval. Further progress towards improving this area requires richer training and evaluation resources.",
"To address the aforementioned challenges, we develop a ranking based approach. We separate the approach into two cascaded steps to trade-off between accuracy and efficiency. In the first step, it finds a small set (e.g. 50 or 100) of candidate tables using a basic similarity measurement. In the second step, more sophisticated features are used to measure the relevance between the query and each candidate table. We implement two types of features, including manually designed features inspired by expert knowledge and neural network models jointly learned from data. Both strategies take into account the relevance between query and table at different levels of granularity. We also introduce a new dataset WebQueryTable for table retrieval. It includes 21,113 web queries from search log, and 273,816 web tables from Wikipedia.",
"We conduct comprehensive experiments on two datasets, a real world dataset introduced by us, and a synthetic dataset WikiTableQuestions BIBREF0 which has been widely used for table-based question answering. Results in various conditions show that neural network models perform comparably with carefully designed features, and combining them both could obtain further improvement. We study the influence of each aspect of table for table retrieval, and show what depth of table understanding is required to do well on this task. Results show the difference between question and web query, and present future challenges for this task.",
"This paper has the following contributions. We develop both feature-based and neural network based approaches, and conduct thorough experiments on real world and synthetic datasets. We release an open-domain dataset for table retrieval."
],
[
"We formulate the task of table retrieval in this section. Given a query $q$ and a collection of tables $T=\\lbrace t_1, ..., t_N\\rbrace $ , the goal of table search is to find a table $t_i$ that is most relevant to $q$ .",
"Typically, a query $q$ is a natural language expression that consists of a list of words, such as “major cities of netherlands”. A table $t$ is a set of data elements arranged by vertical columns and horizontal rows. Formally, we define a table as a triple $t=\\lbrace headers,\\ cells,\\ caption\\rbrace $ that consists of three aspects. A table could have multiple $headers$ , each of which indicates the property of a column and could be used to identify a column. A table could have multiple $cells$ , each of which is a unit where a row and a column intersects. A table could have a $caption$ , which is typically an explanatory text about the table. Figure 1 gives an example to illustrate different aspects of a table.",
"It is helpful to note that tables from the web are not always “regular”. We regard a table as a “regular” table if it contains header, cell and caption, and the number of cells in each row is equal to the number of header cells. In this work, we make a comprehensive study of table retrieval on regular tables, and would like to release benchmark datasets of good quality. It is trivial to implement heuristic rules so as to convert the irregular tables to regular one, so we leave it to the future work."
],
[
"In this section, we give an overview of the proposed approach. To build a system with high efficiency, we separate the task into two cascaded modules, including candidate table retrieval and table ranking. Candidate table retrieval aims to find a small set of tables, such as 50 or 100. These candidate tables will be further used in the table ranking step, which uses more sophisticated features to measure the relevance between a query and a table. In the following subsections, we will give the work-flow of candidate table retrieval and table ranking. The detailed feature representation will be described in the next section."
],
[
"Candidate table retrieval aims to get a small candidate table set from the whole table set of large scale, which is hundreds of thousands in our experiment. In order to guarantee the efficiency of the searching process, we calculate the similarity between table and query with Okapi BM25 BIBREF3 , which is computationally efficient and has been successfully used in information retrieval. Specifically, we represent a query as bag-of-words, and represent table with plain text composed by the words from caption and headers. Given a query $q = {x_1, x_2, ..., x_n}$ , a table $t$ and the whole table set $T$ , the BM25 score of query $q$ and table $t$ is calculated as follows. ",
"$$BM25(q, t) \\\\\n= \\sum _{i=1}^{n} idf(x_{i}) \\frac{tf(x_{i}, t) \\cdot (k_1+1)}{tf(x_{i}, T) + k_1 (1-b+b \\frac{|t|}{avg_{tl}})} \\nonumber $$ (Eq. 4) ",
" where $tf(x_{i}, t)$ is the term frequency of word $x_i$ in $t$ , $idf(x_i)$ is its inverse document frequency, $avg_{tl}$ is the average sequence length in the whole table set $T$ , and $k_1$ and $b$ are hyper-parameters."
],
[
"The goal of table ranking is to rank a short list of candidate tables by measuring the relevance between a query and a table. We develop a feature-based approach and a neural network approach, both of them effectively take into account the structure of table. The details about the features will be described in next section. We use each feature to calculate a relevance score, representing the similarity between a query and a table from some perspective. Afterwards, we use LambdaMART BIBREF4 , a successful algorithm for solving real world ranking problem, to get the final ranking score of each table. The basic idea of LambdaMART is that it constructs a forest of decision trees, and its output is a linear combination of the results of decision trees. Each binary branch in a decision tree specifies a threshold to apply to a single feature, and each leaf node is real value. Specifically, for a forest of $N$ trees, the relevance score of a query-table pair is calculated as follow, ",
"$$s(q,t)\n= \\sum _{i=1}^{N} w_i tr_i(q,t) \\nonumber $$ (Eq. 7) ",
"where $w_i$ is the weight associated with the $i$ -th regression tree, and $tr_i( \\cdot )$ is the value of a leaf node obtained by evaluating $i$ -th tree with features $\\left[ f_1(q,t), ... ,f_K(q,t) \\right]$ . The values of $w_i$ and the parameters in $tr_i(\\cdot )$ are learned with gradient descent during training."
],
[
"Measuring the relevance between a query and a table is of great importance for table retrieval. In this section, we present carefully designed features and neural network architectures for matching between a query and a table."
],
[
"We carefully design a set of features to match query and table from word-level, phrase-level and sentence-level, respectively. The input of a feature function are two strings, one query string $q$ and one aspect string $t_a$ . We separately apply each of the following features to each aspect of a table, resulting in a list of feature scores. As described in Section 2, a table has three aspects, including headers, cells and caption. We represent each aspect as word sequence in this part.",
"(1) Word Level. We design two word matching features $f_{wmt}$ and $f_{mwq}$ . The intuition is that a query is similar to an aspect of table if they have a large amount of word overlap. $f_{wmt}$ and $f_{wmq}$ are calculated based on number of words shared by $q$ and $t_a$ . They are also normalized with the length of $q$ and $t_a$ , calculated as follows, ",
"$$f_{wmt}(t_{a}, q)&=\\frac{\\sum _{w \\in t_{a}} \\delta (w, q) \\cdot idf(w)}{\\sum _{w^{\\prime } \\in t_{a}} idf(w^{\\prime })} \\nonumber \\\\\nf_{wmq}(t_{a}, q)&=\\frac{\\sum _{w \\in t_{a}} \\delta (w, q) \\cdot idf(w)}{\\sum _{w^{\\prime } \\in q} idf(w^{\\prime })} \\nonumber $$ (Eq. 9) ",
"where $idf(w)$ denotes the inverse document frequency of word $w$ in $t_{a}$ . $\\delta (y_j, q)$ is an indicator function which is equal to 1 if $y_j$ occurs in $q$ , and 0 otherwise. Larger values of $f_{wmt}(\\cdot )$ and $f_{wmq}(\\cdot )$ correspond to larger amount of word overlap between $t_a$ and $q$ .",
"(2) Phrase Level. We design a paraphrase-based feature $f_{pp}$ to deal with the case that a query and a table use different expressions to describe the same meaning. In order to learn a strong and domain-independent paraphrase model, we leverage existing statistical machine translation (SMT) phrase tables. A phrase table is defined as a quadruple, namely $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $ , where $src_i$ (or $trg_i$ ) denotes a phrase, in source (or target) language, $p(trg_i|src_i)$ (or $p(src_i|trg_i)$ ) denotes the translation probability from $srg_i$ (or $trg_i$ ) to $trg_i$ (or $src_i$ ). We use an existing SMT approach BIBREF5 to extract a phrase table $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $0 from a bilingual corpus. Afterwards, we use $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $1 to calculate the relevance between a query and a table in paraphrase level. The intuition is that, two source phrases that are aligned to the same target phrase tend to be paraphrased. The phrase level score is calculated as follows, where $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $2 is the maximum n-gram order, which is set as 3, and $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $3 and $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $4 are the phrase in $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $5 and $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $6 starts from the $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $7 -th and $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $8 -th word with the length of $PT = \\lbrace \\langle src_i,trg_i, p(trg_i|src_i), p(src_i|trg_i) \\rangle \\rbrace $9 , and $src_i$0 and $src_i$1 . ",
"$$f_{pp}(t_{a},q)= \\frac{1}{N}\\sum _{n=1}^N \\frac{\\sum _{i,j} score(src_{i,n}^{t_q}, src_{j,n}^{q})}{|t_a|-N+1} \\nonumber \\\\\nscore(src_x;src_y)=\\sum _{PT}p(tgt_k|src_x) \\cdot p(src_y|tgt_k) \\nonumber $$ (Eq. 10) ",
"(3) Sentence Level. We design features to match a query with a table at the sentence level. We use CDSSM BIBREF6 , which has been successfully applied in text retrieval. The basic computational component of CDSSM is sub-word, which makes it very suitable for dealing the misspelling queries in web search. The model composes sentence vector from sub-word embedding via convolutional neural network. We use the same model architecture to get query vector and table aspect vector, and calculate their relevance with cosine function. ",
"$$f_{s1}(t_a, q)=cosine(cdssm(t_a), cdssm(q)) \\nonumber $$ (Eq. 11) ",
"We train model parameters on WikiAnswers dataset BIBREF7 , which contains almost 12M question-similar question pairs. In addition, since vector average is an intuitive way to compute sentence vector and does not induce additional parameters, we calculate another relevance score by representing a query and a table aspect with element-wise vector average. We use a publicly available word embedding which is released by mikolov2013w2v. ",
"$$f_{s2}(t_a, q)=cosine(vec\\_avg(t_a), vec\\_avg(q)) \\nonumber $$ (Eq. 12) "
],
[
"We present neural network models for matching a query with a table. As a table includes different aspects such as headers, cells and caption, we develop different strategies to measure the relevance between a query and a table from different perspectives. In this subsection, we first describe the model to compute query representation, and then present the method that measures the relevance between a query and each aspect.",
"A desirable query representation should be sensitive to word order as reversing or shuffling the words in a query might result in totally different intention. For example, “list of flights london to berlin\" and “list of flights berlin to london\" have different intentions. We use recurrent neural network (RNN) to map a query of variable length to a fixed-length vector. To avoid the problem of gradient vanishing, we use gated recurrent unit (GRU) BIBREF8 as the basic computation unit, which adaptively forgets the history and remembers the input, and has proven to be effective in sequence modeling BIBREF9 . It recursively transforming current word vector $e^q_t$ with the output vector of the previous step $h_{t-1}$ . ",
"$$&z_i &= \\sigma (W_{z}e^q_{i} + U_{z}{h}_{i-1}) \\nonumber \\\\\n&r_i &= \\sigma (W_{r}e^q_{i} + U_{r}{h}_{i-1}) \\nonumber \\\\\n&\\widetilde{h}_i &= \\tanh (W_{h}e^q_{i} + U_{h}(r_i \\odot {h}_{i-1})) \\nonumber \\\\\n&{h}_{i} &= z_i \\odot \\widetilde{h}_i + (1-z_i) \\odot {h}_{i-1} \\nonumber $$ (Eq. 14) ",
"where $z_i$ and $r_i$ are update and reset gates of GRU. We use a bi-directional RNN to get the meaning of a query from both directions, and use the concatenation of two last hidden states as the final query representation $v_q=[ \\overrightarrow{h}_n , \\overleftarrow{h}_n ]$ .",
"A table has different types of information, including headers, cells and caption. We develop different mechanisms to match the relevance between a query and each aspect of a table. An important property of a table is that randomly exchanging two rows or tow columns will not change the meaning of a table BIBREF10 . Therefore, a matching model should ensure that exchanging rows or columns will result in the same output. We first describe the method to deal with headers. To satisfy these conditions, we represent each header as an embedding vector, and regard a set of header embeddings as external memory $M_h \\in \\mathbb {R}^{k \\times d}$ , where $d$ is the dimension of word embedding, and $k$ is the number of header cells. Given a query vector $v_q$ , the model first assigns a probability $\\alpha _i$ to each memory cell $m_i$ , which is a header embedding in this case. Afterwards, a query-specific header vector is obtained through weighted average BIBREF11 , BIBREF12 , namely $v_{header} = \\sum _{i=1}^{k}\\alpha _i m_i$ , where $\\alpha _i \\in [0,1]$ is the weight of $m_i$ calculated as below and $\\sum _{i} \\alpha _i = 1$ . ",
"$$\\alpha _i = \\frac{exp(tanh(W [m_i; v_q] + b))}{\\sum _{j=1}^k exp(tanh(W [m_j; v_q] + b))}\\nonumber $$ (Eq. 15) ",
"Similar techniques have been successfully applied in table-based question answering BIBREF13 , BIBREF14 . Afterwards, we feed the concatenation of $v_q$ and $v_{header}$ to a linear layer followed by a $softmax$ function whose output length is 2. We regard the output of the first category as the relevance between query and header. We use $NN_1()$ to denote this model. ",
"$$f_{nn}(header, q)=NN_{1}(M_{h}, v_{q}) \\nonumber $$ (Eq. 16) ",
"Since headers and cells have similar characteristics, we use a similar way to measure the relevance between a query and table cells. Specifically, we derive three memories $M_{cel}$ , $M_{row}$ and $M_{col}$ from table cells in order to match from cell level, row level and column level. Each memory cell in $M_{cel}$ represents the embedding of a table cell. Each cell in $M_{row}$ represent the vector a row, which is computed with weighted average over the embeddings of cells in the same row. We derive the column memory $M_{col}$ in an analogous way. We use the same module $NN_1()$ to calculate the relevance scores for these three memories. ",
"$$f_{nn}(cell, q)&=&NN_{1}(M_{cel}, v_{q}) \\nonumber \\\\\nf_{nn}(column, q)&=&NN_{1}(M_{col}, v_{q}) \\nonumber \\\\\nf_{nn}(row, q)&=&NN_{1}(M_{row}, v_{q}) \\nonumber $$ (Eq. 17) ",
"Since a table caption is typically a descriptive word sequence. We model it with bi-directional GRU-RNN, the same strategy we have used for modeling the query. We concatenate the caption vector $v_{cap}$ with $v_{q}$ , and feed the results to a linear layer followed by $softmax$ . ",
"$$f_{nn}(caption, q)=NN_{2}(v_{cap}, v_{q}) \\nonumber $$ (Eq. 18) ",
"We separately train the parameters for each aspect with back-propagation. We use negative log-likelihood as the loss function. ",
"$$loss = -\\frac{1}{|D|}\\sum _{(t_a, q) \\in D} \\log (f_{nn}(t_a,q)) \\nonumber $$ (Eq. 20) "
],
[
"We describe the experimental setting and analyze the results in this section."
],
[
"To the best of our knowledge, there is no publicly available dataset for table retrieval. We introduce WebQueryTable, an open-domain dataset consisting of query-table pairs. We use search logs from a commercial search engine to get a list of queries that could be potentially answered by web tables. Each query in query logs is paired with a list of web pages, ordered by the number of user clicks for the query. We select the tables occurred in the top ranked web page, and ask annotators to label whether a table is relevant to a query or not. In this way, we get 21,113 query-table pairs. In the real scenario of table retrieval, a system is required to find a table from a huge collection of tables. Therefore, in order to enlarge the search space of our dataset, we extract 252,703 web tables from Wikipedia and regard them as searchable tables as well. Data statistics are given in Table 1 .",
"We sampled 200 examples to analyze the distribution of the query types in our dataset. We observe that 69.5% queries are asking about “a list of XXX”, such as “list of countries and capitals” and “major cities in netherlands\", and about 24.5% queries are asking about an attribute of an object, such as “density of liquid water temperature”. We randomly separate the dataset as training, validation, test with a 70:10:20 split.",
"We also conduct a synthetic experiment for table retrieval on WikiTableQuestions BIBREF0 , which is a widely used dataset for table-based question answering. It contains 2,108 HTML tables extracted from Wikipedia. Workers from Amazon Mechanical Turk are asked to write several relevant questions for each table. Since each query is written for a specific table, we believe that each pair of query-table can also be used as an instance for table retrieval. The difference between WikiTableQuestions and WebQueryTable is that the questions in WikiTableQuestions mainly focus on the local regions, such as cells or columns, of a table while the queries in WebQueryTable mainly focus on the global content of a table. The number of table index in WikiTableQuestions is 2,108, which is smaller than the number of table index in WebQueryTable. We randomly split the 22,033 question-table pairs into training (70%), development (10%) and test (20%).",
"In the candidate table retrieval phase, we encode a table as bag-of-words to guarantee the efficiency of the approach. Specifically, on WebQueryTable dataset we represent a table with caption and headers. On WikiTableQuestions dataset we represent a table with caption, headers and cells. The recalls of the candidate table retrieval step on WikiTableQuestions and WebQueryTable datasets are 56.91% and 69.57%, respectively. The performance of table ranking is evaluated with Mean Average Precision (MAP) and Precision@1 (P@1) BIBREF15 . When evaluating the performance on table ranking, we filter out the following special cases that only one candidate table is returned or the correct answer is not contained in the retrieved tables in the first step. Hyper parameters are tuned on the validation set."
],
[
"Table 2 shows the performance of different approaches on the WebQueryTable dataset.",
"We compare between different features for table ranking. An intuitive baseline is to represent a table as bag-of-words, represent a query with bag-of-words, and calculate their similarity with cosine similarity. Therefore, we use the BM25 score which is calculated in the candidate table retrieval step. This baseline is abbreviated as BM25. We also report the results of using designed features (Feature) described in Section \"Matching with Designed Features\" and neural networks (NeuralNet) described in Section \"Matching with Neural Networks\" . Results from Table 2 show that the neural networks perform comparably with the designed features, and obtain better performance than the BM25 baseline. This results reflect the necessary of taking into account the table structure for table retrieval. Furthermore, we can find that combining designed features and neural networks could achieve further improvement, which indicates the complementation between them.",
"We further investigate the effects of headers, cells and caption for table retrieval on WebQueryTable. We first use each aspect separately and then increasingly combine different aspects. Results are given in Table 3 . We can find that in general the performance of an aspect in designed features is consistent with its performance in neural networks. Caption is the most effective aspect on WebQueryTable. This is reasonable as we find that majority of the queries are asking about a list of objects, such as “polish rivers\", “world top 5 mountains\" and “list of american cruise lines\". These intentions are more likely to be matched in the caption of a table. Combining more aspects could get better results. Using cells, headers and caption simultaneously gets the best results.",
"Moreover, we investigate whether using a higher threshold could obtain a better precision. Therefore, we increasingly use a set of thresholds, and calculate the corresponding precision and recall in different conditions. An instance is considered to be correct if the top ranked table is correct and its ranking score is greater than the threshold. Results of our NeuralNet approach on WebQueryTable are given in 2 . We can see that using larger threshold results in lower recall and higher precision. The results are consistent with our intuition.",
"We conduct case study on our NeuralNet approach and find that the performance is sensitive to the length of queries. Therefore, we split the test set to several groups according to the length of queries. Results are given in Figure 4 . We can find that the performance of the approach decreases with the increase of query length. When the query length changes from 6 to 7, the performance of P@1 decreases rapidly from 58.12% to 50.23%. Through doing case study, we find that long queries contain more word dependencies. Therefore, having a good understanding about the intention of a query requires deep query understanding. Leveraging external knowledge to connect query and table is a potential solution to deal with long queries.",
"We illustrate two examples generated by our NeuralNet approach in Figure 3 . The example in Figure 3 (a) is a satisfied case that the top ranked result is the correct answer. We can find that the model uses evidences from different aspects to match between a query and a table. In this example, the supporting evidences come from caption (“ramadan\" and “malaysia\"), headers (“dates\") and cells (“2016\"). The example in Figure 3 (b) is a dissatisfied case. We can find that the top ranked result contains “life expectancy\" in both caption and header, however, it is talking about the people in U.S. rather than “german shepherd\". Despite the correct table contains a cell whose content is “german shepherd\", it still does not obtain a higher rank than the left table. The reason might be that the weight for header is larger than the weight for cells."
],
[
"Table 4 shows the results of table ranking on the WikiTableQuestions dataset.",
"We implement two baselines. The first baseline is BM25, which is the same baseline we have used for comparison on the WebQueryTable dataset. The second baseline is header grounding, which is partly inspired by VLDB2011GG who show the effectiveness of the semantic relationship between query and table header. We implement a CDSSM BIBREF6 approach to match between a table header and a query. We train the model by minimizing the cross-entropy error, where the ground truth is the header of the answer. Results are given in Table 4 . We can find that designed features perform comparably with neural networks, and both of them perform better than BM25 and column grounding baselines. Combining designed features and neural networks obtains further improvement.",
"We also study the effects of different aspects on the WikiTableQuestions dataset. Results are given in Table 5 .",
"We can find that the effects of different aspect in designed features and neural networks are consistent. Using more aspects could achieve better performance. Using all aspects obtains the best performance. We also find that the most effective aspect for WikiTableQuestions is header. This is different from the phenomenon in WebQueryTable that the most effective aspect is caption. We believe that this is because the questions in WikiTableQuestions typically include content constrains from cells or headers. Two randomly sampled questions are “which country won the 1994 europeans men's handball championship's preliminary round?\" and “what party had 7,115 inactive voters as of october 25, 2005?\". On the contrary, queries from WebTableQuery usually do not use information from specific headers or cells. Examples include “polish rivers\", “world top 5 mountains\" and “list of american cruise lines\". From Table 1 , we can also find that the question in WikiTableQuestions are longer than the queries in WebQueryTable. In addition, we observe that not all the questions from WikiTableQuestions are suitable for table retrieval. An example is “what was the first player to be drafted in this table?\"."
],
[
"Our work connects to the fields of database and natural language processing.",
"There exists several works in database community that aims at finding related tables from keyword queries. A representative work is given by VLDB2008GG, which considers table search as a special case of document search task and represent a table with its surrounding text and page title. VLDB2010india use YAGO ontology to annotate tables with column and relationship labels. VLDB2011GG go one step further and use labels and relationships extracted from the web. VLDB2012IBM focus on the queries that describe table columns, and retrieve tables based on column mapping. There also exists table-related studies such as searching related tables from a table BIBREF16 , assembling a table from list in web page BIBREF17 and extracting tables using tabular structure from web page BIBREF18 . Our work differs from this line of research in that we focus on exploring the content of table to find relevant tables from web queries.",
"Our work relates to a line of research works that learn continuous representation of structured knowledge with neural network for natural language processing tasks. For example, neelakantan2015neural,pengcheng2015 develop neural operator on the basis of table representation and apply the model to question answering. yin2015NGQA introduce a KB-enhanced sequence-to-sequence approach that generates natural language answers to simple factoid questions based on facts from KB. mei-bansal-walter:2016:N16-1 develop a LSTM based recurrent neural network to generate natural language weather forecast and sportscasting commentary from database records. serban-EtAl:2016:P16-1 introduce a recurrent neural network approach, which takes fact representation as input and generates factoid question from a fact from Freebase. table2textEMNLP2016 presented an neural language model that generates biographical sentences from Wikipedia infobox.",
"Our neural network approach relates to the recent advances of attention mechanism and reasoning over external memory in artificial intelligence BIBREF11 , BIBREF12 , BIBREF19 . Researchers typically represent a memory as a continuous vector or matrix, and develop neural network based controller, reader and writer to reason over the memory. The memory could be addressed by a “soft” attention mechanism trainable by standard back-propagation methods or a “hard” attention mechanism trainable by REINFORCE BIBREF20 . In this work, we use the soft attention mechanism, which could be easily optimized and has been successfully applied in nlp tasks BIBREF11 , BIBREF12 ."
],
[
"In this paper, we give an empirical study of content-based table retrieval for web queries. We implement a feature-based approach and a neural network based approach, and release a new dataset consisting of web queries and web tables. We conduct comprehensive experiments on two datasets. Results not only verify the effectiveness of our approach, but also present future challenges for content-based table retrieval."
]
],
"section_name": [
"Introduction",
"Task Definition",
"Approach Overview",
"Candidate Table Retrieval",
"Table Ranking",
"Matching between Query and Table",
"Matching with Designed Features",
"Matching with Neural Networks",
"Experiment",
"Dataset and Setting",
"Results on WebQueryTable",
"Results on WikiTableQuestions",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"b27ed395f86e872f3866fca9fd8770fc354dc20d",
"99ca9aaed29dc9f91d7acfd688e076cdf7161b9a",
"c1eabd589a40a391732cf2907d4503d2287c29a6"
],
"answer": [
{
"evidence": [
"We separately train the parameters for each aspect with back-propagation. We use negative log-likelihood as the loss function."
],
"extractive_spans": [
"negative log-likelihood"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use negative log-likelihood as the loss function."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We separately train the parameters for each aspect with back-propagation. We use negative log-likelihood as the loss function."
],
"extractive_spans": [
"negative log-likelihood"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use negative log-likelihood as the loss function."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We separately train the parameters for each aspect with back-propagation. We use negative log-likelihood as the loss function."
],
"extractive_spans": [
"negative log-likelihood"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use negative log-likelihood as the loss function."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"6edf8b2bd1b6e03a535504401e6969c850269632",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"03444016686d403ea4874921fdc7c548503e7ece",
"046175a72d3f1fe14f594531a845b25f5771fb29"
],
"answer": [
{
"evidence": [
"It is helpful to note that tables from the web are not always “regular”. We regard a table as a “regular” table if it contains header, cell and caption, and the number of cells in each row is equal to the number of header cells. In this work, we make a comprehensive study of table retrieval on regular tables, and would like to release benchmark datasets of good quality. It is trivial to implement heuristic rules so as to convert the irregular tables to regular one, so we leave it to the future work.",
"Candidate table retrieval aims to get a small candidate table set from the whole table set of large scale, which is hundreds of thousands in our experiment. In order to guarantee the efficiency of the searching process, we calculate the similarity between table and query with Okapi BM25 BIBREF3 , which is computationally efficient and has been successfully used in information retrieval. Specifically, we represent a query as bag-of-words, and represent table with plain text composed by the words from caption and headers. Given a query $q = {x_1, x_2, ..., x_n}$ , a table $t$ and the whole table set $T$ , the BM25 score of query $q$ and table $t$ is calculated as follows."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We regard a table as a “regular” table if it contains header, cell and caption, and the number of cells in each row is equal to the number of header cells. In this work, we make a comprehensive study of table retrieval on regular tables, and would like to release benchmark datasets of good quality.",
"Specifically, we represent a query as bag-of-words, and represent table with plain text composed by the words from caption and headers."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"To the best of our knowledge, there is no publicly available dataset for table retrieval. We introduce WebQueryTable, an open-domain dataset consisting of query-table pairs. We use search logs from a commercial search engine to get a list of queries that could be potentially answered by web tables. Each query in query logs is paired with a list of web pages, ordered by the number of user clicks for the query. We select the tables occurred in the top ranked web page, and ask annotators to label whether a table is relevant to a query or not. In this way, we get 21,113 query-table pairs. In the real scenario of table retrieval, a system is required to find a table from a huge collection of tables. Therefore, in order to enlarge the search space of our dataset, we extract 252,703 web tables from Wikipedia and regard them as searchable tables as well. Data statistics are given in Table 1 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We select the tables occurred in the top ranked web page, and ask annotators to label whether a table is relevant to a query or not. ",
"Therefore, in order to enlarge the search space of our dataset, we extract 252,703 web tables from Wikipedia and regard them as searchable tables as well."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"7bb6df6009f7019c3adc9a4c2e56b740cace4580",
"c40de8e636eb080e7641d482ac57f5481fcf7aee"
],
"answer": [
{
"evidence": [
"Typically, a query $q$ is a natural language expression that consists of a list of words, such as “major cities of netherlands”. A table $t$ is a set of data elements arranged by vertical columns and horizontal rows. Formally, we define a table as a triple $t=\\lbrace headers,\\ cells,\\ caption\\rbrace $ that consists of three aspects. A table could have multiple $headers$ , each of which indicates the property of a column and could be used to identify a column. A table could have multiple $cells$ , each of which is a unit where a row and a column intersects. A table could have a $caption$ , which is typically an explanatory text about the table. Figure 1 gives an example to illustrate different aspects of a table.",
"A table has different types of information, including headers, cells and caption. We develop different mechanisms to match the relevance between a query and each aspect of a table. An important property of a table is that randomly exchanging two rows or tow columns will not change the meaning of a table BIBREF10 . Therefore, a matching model should ensure that exchanging rows or columns will result in the same output. We first describe the method to deal with headers. To satisfy these conditions, we represent each header as an embedding vector, and regard a set of header embeddings as external memory $M_h \\in \\mathbb {R}^{k \\times d}$ , where $d$ is the dimension of word embedding, and $k$ is the number of header cells. Given a query vector $v_q$ , the model first assigns a probability $\\alpha _i$ to each memory cell $m_i$ , which is a header embedding in this case. Afterwards, a query-specific header vector is obtained through weighted average BIBREF11 , BIBREF12 , namely $v_{header} = \\sum _{i=1}^{k}\\alpha _i m_i$ , where $\\alpha _i \\in [0,1]$ is the weight of $m_i$ calculated as below and $\\sum _{i} \\alpha _i = 1$ ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Formally, we define a table as a triple $t=\\lbrace headers,\\ cells,\\ caption\\rbrace $ that consists of three aspects. A table could have multiple $headers$ , each of which indicates the property of a column and could be used to identify a column.",
"We first describe the method to deal with headers. To satisfy these conditions, we represent each header as an embedding vector, and regard a set of header embeddings as external memory $M_h \\in \\mathbb {R}^{k \\times d}$ , where $d$ is the dimension of word embedding, and $k$ is the number of header cells."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Typically, a query $q$ is a natural language expression that consists of a list of words, such as “major cities of netherlands”. A table $t$ is a set of data elements arranged by vertical columns and horizontal rows. Formally, we define a table as a triple $t=\\lbrace headers,\\ cells,\\ caption\\rbrace $ that consists of three aspects. A table could have multiple $headers$ , each of which indicates the property of a column and could be used to identify a column. A table could have multiple $cells$ , each of which is a unit where a row and a column intersects. A table could have a $caption$ , which is typically an explanatory text about the table. Figure 1 gives an example to illustrate different aspects of a table."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Formally, we define a table as a triple $t=\\lbrace headers,\\ cells,\\ caption\\rbrace $ that consists of three aspects. A table could have multiple $headers$ , each of which indicates the property of a column and could be used to identify a column. A table could have multiple $cells$ , each of which is a unit where a row and a column intersects. A table could have a $caption$ , which is typically an explanatory text about the table."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"1b0bfb15d442ce6e2235eb84667443a56f95ce01",
"3aab67b8051ab82181806946a3dd9419f624b94a"
],
"answer": [
{
"evidence": [
"To the best of our knowledge, there is no publicly available dataset for table retrieval. We introduce WebQueryTable, an open-domain dataset consisting of query-table pairs. We use search logs from a commercial search engine to get a list of queries that could be potentially answered by web tables. Each query in query logs is paired with a list of web pages, ordered by the number of user clicks for the query. We select the tables occurred in the top ranked web page, and ask annotators to label whether a table is relevant to a query or not. In this way, we get 21,113 query-table pairs. In the real scenario of table retrieval, a system is required to find a table from a huge collection of tables. Therefore, in order to enlarge the search space of our dataset, we extract 252,703 web tables from Wikipedia and regard them as searchable tables as well. Data statistics are given in Table 1 .",
"We also conduct a synthetic experiment for table retrieval on WikiTableQuestions BIBREF0 , which is a widely used dataset for table-based question answering. It contains 2,108 HTML tables extracted from Wikipedia. Workers from Amazon Mechanical Turk are asked to write several relevant questions for each table. Since each query is written for a specific table, we believe that each pair of query-table can also be used as an instance for table retrieval. The difference between WikiTableQuestions and WebQueryTable is that the questions in WikiTableQuestions mainly focus on the local regions, such as cells or columns, of a table while the queries in WebQueryTable mainly focus on the global content of a table. The number of table index in WikiTableQuestions is 2,108, which is smaller than the number of table index in WebQueryTable. We randomly split the 22,033 question-table pairs into training (70%), development (10%) and test (20%)."
],
"extractive_spans": [],
"free_form_answer": "No, they come from the top ranked web pages relevant to a query and from Wikipedia ",
"highlighted_evidence": [
"We use search logs from a commercial search engine to get a list of queries that could be potentially answered by web tables. Each query in query logs is paired with a list of web pages, ordered by the number of user clicks for the query. We select the tables occurred in the top ranked web page, and ask annotators to label whether a table is relevant to a query or not.",
"Therefore, in order to enlarge the search space of our dataset, we extract 252,703 web tables from Wikipedia and regard them as searchable tables as well. ",
"We also conduct a synthetic experiment for table retrieval on WikiTableQuestions BIBREF0 , which is a widely used dataset for table-based question answering. It contains 2,108 HTML tables extracted from Wikipedia."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To address the aforementioned challenges, we develop a ranking based approach. We separate the approach into two cascaded steps to trade-off between accuracy and efficiency. In the first step, it finds a small set (e.g. 50 or 100) of candidate tables using a basic similarity measurement. In the second step, more sophisticated features are used to measure the relevance between the query and each candidate table. We implement two types of features, including manually designed features inspired by expert knowledge and neural network models jointly learned from data. Both strategies take into account the relevance between query and table at different levels of granularity. We also introduce a new dataset WebQueryTable for table retrieval. It includes 21,113 web queries from search log, and 273,816 web tables from Wikipedia."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"It includes 21,113 web queries from search log, and 273,816 web tables from Wikipedia."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6307b4bbe8a0528c39ec0924bda087dfca154fb1",
"c48d9e05ded9792c37aa2bf37199568fe0c52c9d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What loss function is used?",
"Do they use the unstructured text on the webpage that was the source of the table?",
"Does their method rely on the column headings of the table?",
"Are all the tables in the dataset from the same website?",
"How are the tables extracted from the HTML?"
],
"question_id": [
"b57ad10468e1ba2a7a34396688dbb10a575d89f5",
"9d6d17120c42a834b2b5d96f2120d646218ed4bb",
"965e0ce975a0b8612a30cfc31bbfd4b8a57aa138",
"8dfdd1ed805bb23c774fbb032ef1d97c6802e07c",
"c21675d8a90bda624d27e5535d1c10f08fcbc16b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: A example of query-table pair.",
"Table 1: Statistics of WebQueryTable (WQT) dataset and WikiTableQuestions (WTQ) dataset.",
"Table 2: Results on the WebQueryTable dataset.",
"Table 3: Performance on WebQueryTable dataset with different aspects.",
"Figure 2: PR Curve on WebQueryTable.",
"Figure 3: Results generated by NeuralNet on WebQueryTable.",
"Table 4: Results on the WikiTableQuestions dataset with different features.",
"Figure 4: P@1 with different query length on WebQueryTable dataset.",
"Table 5: Results on the WikiTableQuestions dataset with different aspects."
],
"file": [
"2-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"7-Table4-1.png",
"7-Figure4-1.png",
"8-Table5-1.png"
]
} | [
"Are all the tables in the dataset from the same website?"
] | [
[
"1706.02427-Dataset and Setting-2",
"1706.02427-Introduction-2",
"1706.02427-Dataset and Setting-0"
]
] | [
"No, they come from the top ranked web pages relevant to a query and from Wikipedia "
] | 202 |
1911.02747 | Query-bag Matching with Mutual Coverage for Information-seeking Conversations in E-commerce | Information-seeking conversation system aims at satisfying the information needs of users through conversations. Text matching between a user query and a pre-collected question is an important part of the information-seeking conversation in E-commerce. In the practical scenario, a sort of questions always correspond to a same answer. Naturally, these questions can form a bag. Learning the matching between user query and bag directly may improve the conversation performance, denoted as query-bag matching. Inspired by such opinion, we propose a query-bag matching model which mainly utilizes the mutual coverage between query and bag and measures the degree of the content in the query mentioned by the bag, and vice verse. In addition, the learned bag representation in word level helps find the main points of a bag in a fine grade and promotes the query-bag matching performance. Experiments on two datasets show the effectiveness of our model. | {
"paragraphs": [
[
"AliMe Bot is a kind of retrieval-based online service of E-commerce which collects a lot of predefined question-answering pairs. Through data analysis, we find that many variants of a question exist which means a sort of questions can correspond to a same answer. Based on the observation, naturally, we can view these questions with the same answer as a bag. Obviously, the bag contains diverse expressions of a question, which may provide more matching evidence than only one question due to the rich information contained in the bag. Motivated by the fact, different from existing query-question (Q-Q) matching method, we propose a new query-bag matching approach for retrieval-based chatbots. Concretely, when a user raises a query, the query-bag matching model provides the most suitable bag and returns the corresponding answer of the bag. To our knowledge, there is no query-bag matching study exists, and we focus on the new approach in this paper.",
"Recalling the text matching task BIBREF0, recently, researchers have adopted the deep neural network to model the matching relationship. ESIM BIBREF1 judges the inference relationship between two sentences by enhanced LSTM and interaction space. SMN BIBREF2 performs the context-response matching for the open-domain dialog system. BIBREF3 BIBREF3 explores the usefulness of noisy pre-training in the paraphrase identification task. BIBREF4 BIBREF4 surveys the methods in query-document matching in web search which focuses on the topic model, the dependency model, etc. However, none of them pays attention to the query-bag matching which concentrates on the matching for a query and a bag containing multiple questions.",
"When a user poses a query to the bot, the bot searches the most similar bag and uses the corresponding answer to reply to the user. The more information in the query covered by the bag, the more likely the bag's corresponding answer answers the query. What's more, the bag should not have too much information exceeding the query. Thus modelling the bag-to-query and query-to-bag coverage is essential in this task.",
"In this paper, we propose a simple but effective mutual coverage component to model the above-mentioned problem. The coverage is based on the cross-attention matrix of the query-bag pair which indicates the matching degree of elements between the query and bag. The mutual coverage is performed by stacking the cross-attention matrix along two directions, i.e., query and bag, in the word level respectively. In addition to the mutual coverage, a bag representation in word level is issued to help discover the main points of a bag. The bag representation then provides new matching evidence to the query-bag matching model.",
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\\text{R}_{10}@1$ gains comparing to the strongest baseline in the two datasets. The ablation study shows the usefulness of the components. The contributions in this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in the information-seeking conversation. 2) We propose the mutual coverage model to measure the information coverage in the query-bag matching. 3) We release the composite Quora dataset to facilitate the research in this area."
],
[
"This task aims at predicting whether a query $q$ matches a bag $b$, where the bag is composed of some questions $b=\\lbrace b_1, \\dots , b_n \\rbrace $ and $n$ is the number of questions in a bag. For the $q$ and $b_i$, an embedding layer is first applied to transform words to word embeddings via looking up word embedding table which is initialized by pre-trained word embeddings as in Section SECREF12.",
"In the following subsections, we will introduce our proposed Query-bag Matching (QBM) model which output is the matching probability indicating whether the query and bag are asking the same questions. The basic Q-Q (query-question) matching model hybrid CNN (hCNN) BIBREF5 is presented as the background. Then we will show the base model and its two components designed to promote the performance: Mutual Coverage and Bag Representation. For better understanding, the whole model is shown in Figure FIGREF2."
],
[
"We adopt the hCNN model, which measures the relationship between query-question pairs, to obtain the Q-Q matching representation. The model can be easily adapted to other query-question matching models. hCNN is a CNN based matching model which is fast enough to work on the industry application. The input of hCNN is a query $q$ and the $i$-th question $b_i$ in the bag. $q$ and $b_i$ are fed into a CNN respectively. A cross-attention matrix $M^i$ is fed into another CNN to get the interaction representation between them. Each element of $M^i$ is defined as $M^i_{a,b}=q_a^\\top \\cdot b_{i,b}$ where $q_a$ is the word embedding of the $a$-th word in query $q$ and $b_{i,b}$ is the embedding of the $b$-th word in $b_i$. Finally, the outputs of CNNs are combined via Equation SECREF3 to get the representation $r_i$, which indicates the matching representation of the query $q$ and the $i$-th question $b_i$ in the bag. For the Q-Q matching task, the $r_i$ is fed into an MLP (Multi-Layer Perceptron) to predict the matching score. In our query-bag matching setting, we will aggregate the $\\lbrace r_1, \\dots , r_n\\rbrace $ to predict the query-bag matching score. Due to the page limitation, please refer to BIBREF5 BIBREF5 for more details on hCNN. h1 = CNN1(q) h2i = CNN1(bi) hmi = CNN2(qbi)",
"ri = [h1; h2i; h1-h2i; h1 h2i;hmi]"
],
[
"After getting the Q-Q matching representation $r_i$, we combine the $\\lbrace r_1, \\dots , r_n\\rbrace $ by element-wise max and mean pooling in order to get $r_p$ to represent the query-bag matching representation: rp = [ max_pooling { r1, ..., rn }; mean_pooling { r1, ..., rn } ] where [;] denotes concatenation. After that, an MLP with softmax is applied to predict whether the query and the bag is asking the same question. Finally, the loss function minimizes the cross entropy of the training data. Due to the out-of-order of the bag, we do not model the bag representation by CNN or LSTM, and experiments show the pooling-based method works well."
],
[
"“How many parts of a query are covered by the bag?” and “Is all the information in the bag mentioned by the query?” are two important problems in the query-bag matching task. We propose a novel mutual coverage module to model the above-mentioned inter-projection problems.",
"Bag-to-query Considering the $i$-th question $b_i$ in the bag, the element-wise max pooling is performed on $\\lbrace M^i_0, \\cdots M^i_n \\rbrace $ to get the $b_i$ to $q$ coverage $c_i=\\text{max\\_pooling}\\lbrace M^i_0, \\cdots M^i_n \\rbrace $ where $M^i$ is the cross-attention matrix between $b_i$ and $q$ as in the background section, and $M^i_j$ is its $j$-th row. Each element $c_{i,j}$ represents how many information of the $j$-th word in $q$ is mentioned by the $i$-th question in the bag. To get the coverage from a bag instead of the $i$-th question in a bag, a new element-wise max pooling is applied on the generated $\\lbrace c_1, \\dots , c_n \\rbrace $ to get bag-to-query coverage $c_q$. The process of the bag-to-query coverage is shown in Figure FIGREF2.",
"Query-to-bag The query-to-bag coverage is performed in a similar way. After getting the coverage $c_i$ from query $q$ to $b_i$. The concatenation of $\\lbrace c_1, \\dots , c_n \\rbrace $ across all questions in a bag forms the query-to-bag coverage vector $c_b$.",
"In addition, not all words in a question should be treated equally. The word “the” contributes little to the matching degree in most cases. However, “package” is very important in the E-commerce scenario. We adopt the attention mechanism BIBREF6 to weight the coverage vector $c_q$ and $c_b$. The attention is calculated as follows (we take the bag-to-query coverage as an example): ej = MLP(qj) ej { ej } cq = e cq where $q_j$ is the embedding of $j$-th word in query. And the weighting of query-to-bag coverage performs in the same way. We call the mechanism coverage weighting.",
"The query-to-bag coverage, and bag-to-query coverage representation, and their summation are concatenated to the matching representation $r_p$ to predict the final score: [ rp ; cq ; cb ; sum(cq) ; sum(cb)]"
],
[
"All the questions in a bag follow the same question points because they are different variants of the same question. We model the question points by collecting the important words in the bag, forming the word-level bag representation. We collect the top-10 important words through TF-IDF algorithm, except stop words, in a bag to form a new “question” $b_r$, and an hCNN is used to model the relationship of the user query and the new “question” $b_r$ in order to obtain the matching representation $r_r$. The $r_r$ is then concatenated to the matching representation $r_p$ as a new feature to predict the query-bag matching degree. We also adopt the coverage mechanism discussed above over the cross-attention matrix between the query and the new “question”. The new coverage representation is also concatenated to the $r_p$."
],
[
"We conduct experiments on two datasets: AliMe and Quora. The AliMe dataset is collected from the AliMe intelligent assistant system and the Quora dataset is composed of a public dataset.",
"AliMe For the AliMe service in E-commerce, we collect 8,004 query-bag pairs to form our dataset. Negative sampling is also an important part of the matching model. For each query, we use the Lucene to retrieval the top-20 most similar questions from the whole question candidates. Then we filter the questions which are in the corresponding right bag. After that, we randomly sample one in the retrieved candidate and use the bag that the retrieved candidate belongs to as the negative case. In the bag construction stage, the annotators have already merged all the questions of the same meaning, so we can ensure that the after filtering retrieved cases are negative in our setting. We also restrict the number of questions in a bag not more than 5 and discard the redundant questions. Finally, we get 12,008 training cases, 2,000 valid cases, and 10,000 test cases. Please notice, for the testing, we sampled 9 negative bags instead of 1, and thus formed 10 candidates for ranking.",
"Quora The Quora dataset is originally released for the duplicated question detection task. The dataset contains 400,000 question pairs and each pair is marked whether they are asking the same question. Due to the huge amount of duplicated question pairs, we group the questions as question bag via the union-find algorithm from the duplicated questions. We get 60,400 bags, and all the questions in a bag are asking the same question. We filter the bags that contain questions less than 3 to make the bag not too small. The new bag dataset will help similar questions recommendation on the Quora website. We then extract one question in the bag as query and the other questions make up the bag in our task. Considering the negative samples, we follow the same strategy as AliMe dataset. Finally, we get 20,354 training set, 2,000 validation set, and 10,000 test set. To facilitate the research in this area, the composed Quora dataset are released."
],
[
"We use the Adam optimizer with learning rate 0.0001 to optimize the parameters. The batch size is 32. The dropout rate is 0.5. The max length of the query and questions is 20 to cover most of the words in a sentence. We use padding to handle the various lengths of the text. The model checkpoint is chosen according to the best F-score on the validation set. The word embedding dimension is 300, and the pre-trained word embedding is from Sina and Glove for AliMe and Quora dataset respectively. Besides, the embedding is tuned while the model training to get better performance."
],
[
"To prove the effectiveness of our models, we propose two baselines from different aspects: the Q-Q matching based baseline and the query-bag matching based baseline.",
"Q-Q Matching One starting point behind our work is that the query-bag matching may work better than the Q-Q matching for the information-seeking conversation. To verify such opinion, we propose the Q-Q matching based baseline and compare our model with two instances of the baseline. We extract the query-question pairs form the query-bag pair. The label of the query-bag pair is assigned to the new query-question pairs. An hCNN model is applied to train the new dataset. In the testing stage, each query-question pair is assigned with a probability indicating the matching degree. To compare with our model, we rank the bags based on the query-bag matching scores and the scores are defined as the max or mean matching probability of the query-question pairs in the query-bag pair. We name the two instances Q-Q Max and Q-Q Mean respectively.",
"Query-bag Matching To verify the effectiveness of our proposed models, We design a new query-bag matching based baseline. We concatenate the questions in the bag to form a new long “question”, then the hCNN model is applied to measure the matching degree of the original query and the new “question”, namely Bag-Con (Bag Concatenation)."
],
[
"Following BIBREF7, we evaluate the model performance on five automatic evaluation metrics: MRR, $\\text{R}_{10}@1$, $\\text{R}_{10}@2$, $\\text{R}_{10}@5$, and $\\text{R}_{2}@1$. $\\text{R}_n@k$ calculates the recall of the true positive pre-defined questions among the $k$ selected candidates from $n$ available candidates. And Mean Reciprocal Rank (MRR) is another popular measurement for ranking problems."
],
[
"Results and Ablation Study The results are shown in Table TABREF6. Our model (QBM) performs best compared to baselines (Q-Q Mean, Q-Q Max, Bag-con). Comparing Bag-Con and Base model, we find that modelling the query-question relationship following aggregation works better. We assume that the pooling-based aggregation can reduce the redundant information cross sentences in a bag. Considering the Q-Q matching based methods and query-bag based methods. In AliMe dataset, the query-bag matching outperforms the Q-Q matching based methods which shows the necessity to perform query-bag matching. The ablation study shows that the mutual coverage component and bag representation component achieve better performance than the base model, especially in the Quora dataset. The two components work independently and their combination gets the best performance.",
"UTF8gbsn",
"Effectiveness of the Mutual Coverage To intuitively learn the coverage weight, we sample some words with their weights in Table TABREF17. It shows that the words like “The” have low weight, which confirms that they contribute little to the matching. “Refund” in E-commerce is a very important element in a user query sentence. And “America” is important in Quora, because question like “what is the capital in <location>?” is highly related to location “<location>”.",
"Analysis of the Bag Representation Coverage is also applied in the bag representation layer. The results of the bag representation without coverage component (Base+(BR w/o Cov)) is shown in Table TABREF6. Compared with the Base+BR and BR without coverage, it shows that the coverage component contributes a lot on both the two datasets. The bag representation with coverage (Base+BR) gains improvement over Base model, especially in Quora dataset."
],
[
"In this paper, we propose the QBM model which performs the query-bag matching in information-seeking conversation. Experiments show that the proposed mutual coverage component improves the model performance. And the model can automatically discover important words in the query or bag from both the coverage weighting component and the word-level bag representation. This work also shows that learning the query-bag matching directly in some scenarios may outperform the query-question matching in ranking bags. One advantage of our model is that it is extensible in replacing the query-question matching component."
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Background: hCNN for Q-Q Matching",
"Methodology ::: Base Model",
"Methodology ::: Mutual Coverage",
"Methodology ::: Bag Representation",
"Experiments ::: Dataset",
"Experiments ::: Setup",
"Experiments ::: Baselines",
"Experiments ::: Evaluation",
"Results and Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"17d60e066d357225c14088b993db1ede72400bf7",
"525b376937d45462a49e71740128ce2984b1d9f7",
"be7339e77daddeed32c5f08c93cddc26a64e7272"
],
"answer": [
{
"evidence": [
"In the following subsections, we will introduce our proposed Query-bag Matching (QBM) model which output is the matching probability indicating whether the query and bag are asking the same questions. The basic Q-Q (query-question) matching model hybrid CNN (hCNN) BIBREF5 is presented as the background. Then we will show the base model and its two components designed to promote the performance: Mutual Coverage and Bag Representation. For better understanding, the whole model is shown in Figure FIGREF2.",
"After getting the Q-Q matching representation $r_i$, we combine the $\\lbrace r_1, \\dots , r_n\\rbrace $ by element-wise max and mean pooling in order to get $r_p$ to represent the query-bag matching representation: rp = [ max_pooling { r1, ..., rn }; mean_pooling { r1, ..., rn } ] where [;] denotes concatenation. After that, an MLP with softmax is applied to predict whether the query and the bag is asking the same question. Finally, the loss function minimizes the cross entropy of the training data. Due to the out-of-order of the bag, we do not model the bag representation by CNN or LSTM, and experiments show the pooling-based method works well."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the following subsections, we will introduce our proposed Query-bag Matching (QBM) model which output is the matching probability indicating whether the query and bag are asking the same questions. The basic Q-Q (query-question) matching model hybrid CNN (hCNN) BIBREF5 is presented as the background. Then we will show the base model and its two components designed to promote the performance: Mutual Coverage and Bag Representation. For better understanding, the whole model is shown in Figure FIGREF2.",
"After that, an MLP with softmax is applied to predict whether the query and the bag is asking the same question. Finally, the loss function minimizes the cross entropy of the training data. Due to the out-of-order of the bag, we do not model the bag representation by CNN or LSTM, and experiments show the pooling-based method works well."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We adopt the hCNN model, which measures the relationship between query-question pairs, to obtain the Q-Q matching representation. The model can be easily adapted to other query-question matching models. hCNN is a CNN based matching model which is fast enough to work on the industry application. The input of hCNN is a query $q$ and the $i$-th question $b_i$ in the bag. $q$ and $b_i$ are fed into a CNN respectively. A cross-attention matrix $M^i$ is fed into another CNN to get the interaction representation between them. Each element of $M^i$ is defined as $M^i_{a,b}=q_a^\\top \\cdot b_{i,b}$ where $q_a$ is the word embedding of the $a$-th word in query $q$ and $b_{i,b}$ is the embedding of the $b$-th word in $b_i$. Finally, the outputs of CNNs are combined via Equation SECREF3 to get the representation $r_i$, which indicates the matching representation of the query $q$ and the $i$-th question $b_i$ in the bag. For the Q-Q matching task, the $r_i$ is fed into an MLP (Multi-Layer Perceptron) to predict the matching score. In our query-bag matching setting, we will aggregate the $\\lbrace r_1, \\dots , r_n\\rbrace $ to predict the query-bag matching score. Due to the page limitation, please refer to BIBREF5 BIBREF5 for more details on hCNN. h1 = CNN1(q) h2i = CNN1(bi) hmi = CNN2(qbi)"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We adopt the hCNN model, which measures the relationship between query-question pairs, to obtain the Q-Q matching representation. ",
"hCNN is a CNN based matching model which is fast enough to work on the industry application. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In the following subsections, we will introduce our proposed Query-bag Matching (QBM) model which output is the matching probability indicating whether the query and bag are asking the same questions. The basic Q-Q (query-question) matching model hybrid CNN (hCNN) BIBREF5 is presented as the background. Then we will show the base model and its two components designed to promote the performance: Mutual Coverage and Bag Representation. For better understanding, the whole model is shown in Figure FIGREF2.",
"We adopt the hCNN model, which measures the relationship between query-question pairs, to obtain the Q-Q matching representation. The model can be easily adapted to other query-question matching models. hCNN is a CNN based matching model which is fast enough to work on the industry application. The input of hCNN is a query $q$ and the $i$-th question $b_i$ in the bag. $q$ and $b_i$ are fed into a CNN respectively. A cross-attention matrix $M^i$ is fed into another CNN to get the interaction representation between them. Each element of $M^i$ is defined as $M^i_{a,b}=q_a^\\top \\cdot b_{i,b}$ where $q_a$ is the word embedding of the $a$-th word in query $q$ and $b_{i,b}$ is the embedding of the $b$-th word in $b_i$. Finally, the outputs of CNNs are combined via Equation SECREF3 to get the representation $r_i$, which indicates the matching representation of the query $q$ and the $i$-th question $b_i$ in the bag. For the Q-Q matching task, the $r_i$ is fed into an MLP (Multi-Layer Perceptron) to predict the matching score. In our query-bag matching setting, we will aggregate the $\\lbrace r_1, \\dots , r_n\\rbrace $ to predict the query-bag matching score. Due to the page limitation, please refer to BIBREF5 BIBREF5 for more details on hCNN. h1 = CNN1(q) h2i = CNN1(bi) hmi = CNN2(qbi)",
"After getting the Q-Q matching representation $r_i$, we combine the $\\lbrace r_1, \\dots , r_n\\rbrace $ by element-wise max and mean pooling in order to get $r_p$ to represent the query-bag matching representation: rp = [ max_pooling { r1, ..., rn }; mean_pooling { r1, ..., rn } ] where [;] denotes concatenation. After that, an MLP with softmax is applied to predict whether the query and the bag is asking the same question. Finally, the loss function minimizes the cross entropy of the training data. Due to the out-of-order of the bag, we do not model the bag representation by CNN or LSTM, and experiments show the pooling-based method works well."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" The basic Q-Q (query-question) matching model hybrid CNN (hCNN) BIBREF5 is presented as the background. ",
"hCNN is a CNN based matching model which is fast enough to work on the industry application.",
"After that, an MLP with softmax is applied to predict whether the query and the bag is asking the same question. Finally, the loss function minimizes the cross entropy of the training data. Due to the out-of-order of the bag, we do not model the bag representation by CNN or LSTM, and experiments show the pooling-based method works well."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"3531770e5b6c6a3091299899d1aa2a5e20c98846",
"6a8677c0f33a9b5fc8ce0faa41511cb5ba43d456",
"c82a8e26483efbefb785ef257624a9f4e4f31e70"
],
"answer": [
{
"evidence": [
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\\text{R}_{10}@1$ gains comparing to the strongest baseline in the two datasets. The ablation study shows the usefulness of the components. The contributions in this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in the information-seeking conversation. 2) We propose the mutual coverage model to measure the information coverage in the query-bag matching. 3) We release the composite Quora dataset to facilitate the research in this area."
],
"extractive_spans": [
"the AliMe and Quora dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on two datasets: AliMe and Quora. The AliMe dataset is collected from the AliMe intelligent assistant system and the Quora dataset is composed of a public dataset."
],
"extractive_spans": [
"AliMe and Quora"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on two datasets: AliMe and Quora. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on two datasets: AliMe and Quora. The AliMe dataset is collected from the AliMe intelligent assistant system and the Quora dataset is composed of a public dataset.",
"AliMe For the AliMe service in E-commerce, we collect 8,004 query-bag pairs to form our dataset. Negative sampling is also an important part of the matching model. For each query, we use the Lucene to retrieval the top-20 most similar questions from the whole question candidates. Then we filter the questions which are in the corresponding right bag. After that, we randomly sample one in the retrieved candidate and use the bag that the retrieved candidate belongs to as the negative case. In the bag construction stage, the annotators have already merged all the questions of the same meaning, so we can ensure that the after filtering retrieved cases are negative in our setting. We also restrict the number of questions in a bag not more than 5 and discard the redundant questions. Finally, we get 12,008 training cases, 2,000 valid cases, and 10,000 test cases. Please notice, for the testing, we sampled 9 negative bags instead of 1, and thus formed 10 candidates for ranking.",
"Quora The Quora dataset is originally released for the duplicated question detection task. The dataset contains 400,000 question pairs and each pair is marked whether they are asking the same question. Due to the huge amount of duplicated question pairs, we group the questions as question bag via the union-find algorithm from the duplicated questions. We get 60,400 bags, and all the questions in a bag are asking the same question. We filter the bags that contain questions less than 3 to make the bag not too small. The new bag dataset will help similar questions recommendation on the Quora website. We then extract one question in the bag as query and the other questions make up the bag in our task. Considering the negative samples, we follow the same strategy as AliMe dataset. Finally, we get 20,354 training set, 2,000 validation set, and 10,000 test set. To facilitate the research in this area, the composed Quora dataset are released."
],
"extractive_spans": [
"AliMe ",
"Quora"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on two datasets: AliMe and Quora. The AliMe dataset is collected from the AliMe intelligent assistant system and the Quora dataset is composed of a public dataset.",
"AliMe For the AliMe service in E-commerce, we collect 8,004 query-bag pairs to form our dataset. Negative sampling is also an important part of the matching model.",
"Quora The Quora dataset is originally released for the duplicated question detection task. The dataset contains 400,000 question pairs and each pair is marked whether they are asking the same question. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"03db8084375b5a24c03476ff501ffc07fbff9d4e",
"74ff36ba455a63509bb82e05509e7c514ccd6d8b",
"b6583be50630a8aaec64d2c34c2caa69febff954"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Some words and their corresponding weights (e in Equation 4) in mutual coveragemodule. The average weight across the whole vocabulary is also presented here."
],
"extractive_spans": [],
"free_form_answer": "Chinese and English",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Some words and their corresponding weights (e in Equation 4) in mutual coveragemodule. The average weight across the whole vocabulary is also presented here."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"daa1a00d58ca03a44f29e3c07957b068754fe407"
],
"answer": [
{
"evidence": [
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\\text{R}_{10}@1$ gains comparing to the strongest baseline in the two datasets. The ablation study shows the usefulness of the components. The contributions in this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in the information-seeking conversation. 2) We propose the mutual coverage model to measure the information coverage in the query-bag matching. 3) We release the composite Quora dataset to facilitate the research in this area.",
"To prove the effectiveness of our models, we propose two baselines from different aspects: the Q-Q matching based baseline and the query-bag matching based baseline."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\\text{R}_{10}@1$ gains comparing to the strongest baseline in the two datasets. ",
"To prove the effectiveness of our models, we propose two baselines from different aspects: the Q-Q matching based baseline and the query-bag matching based baseline."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2784f6cf0a3e9fa74a3c8c7af30c7fc59946d51a"
],
"answer": [
{
"evidence": [
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\\text{R}_{10}@1$ gains comparing to the strongest baseline in the two datasets. The ablation study shows the usefulness of the components. The contributions in this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in the information-seeking conversation. 2) We propose the mutual coverage model to measure the information coverage in the query-bag matching. 3) We release the composite Quora dataset to facilitate the research in this area."
],
"extractive_spans": [
" the AliMe and Quora dataset "
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on the AliMe and Quora dataset for the query-bag matching based information-seeking conversation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Does the query-bag matching model use a neural network?",
"What datasets are used for experiments?",
"Which natural language(s) is/are studied?",
"Is model compared to some baseline?",
"What datasets are used in experiments?"
],
"question_id": [
"da077b385d619305033785af5b204696d6145bd8",
"6d8a51e2790043497ed2637a1abc36bdffb39b71",
"de4cc9e7fa5d700f5046d60789770f47911b3dd7",
"8ad5ebca2f69023b60ccfa3aac0ed426234437ac",
"4afd4cfcb30433714b135b977baff346323af1e3"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The proposed query-bag matching (QBM) model. The upper right is the query-bag coverage component (we only show the bag-to-query coverage in the Figure for demonstration, and the query-to-bag coverage is similar with bag-to-query coverage). q is the query, and bi is the ith question in the bag. Mi is the cross-attention matrix between q and bi . The bottom lines indicate the TF-IDF based bag representation construction. br is a new “question” for bag representation.",
"Table 1: Results of models and baselines with ablation study. MC and BR denote Mutual Coverage and Bag Representation respectively. “BR w/o Cov” denotes Bag Representation component without coverage module. ‡ and § means the results are significant with p-value < 0.05 measured by the Student’s paired t-test over the best baseline and the base model respectively.",
"Table 2: Some words and their corresponding weights (e in Equation 4) in mutual coveragemodule. The average weight across the whole vocabulary is also presented here."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"Which natural language(s) is/are studied?"
] | [
[
"1911.02747-4-Table2-1.png"
]
] | [
"Chinese and English"
] | 203 |
1908.10461 | A survey of cross-lingual features for zero-shot cross-lingual semantic parsing | The availability of corpora to train semantic parsers in English has led to significant advances in the field. Unfortunately, for languages other than English, annotation is scarce and so are developed parsers. We then ask: could a parser trained in English be applied to a language that it hasn't been trained on? To answer this question we explore zero-shot cross-lingual semantic parsing where we train an available coarse-to-fine semantic parser (Liu et al., 2018) using cross-lingual word embeddings and universal dependencies in English and test it on Italian, German and Dutch. Results on the Parallel Meaning Bank, a multilingual semantic graphbank, show that Universal Dependency features significantly boost performance when used in conjunction with other lexical features but modelling the UD structure directly when encoding the input does not. | {
"paragraphs": [
[
"Semantic parsing is a task of transducing natural language to meaning representations, which in turn can be expressed through many different semantic formalisms including lambda calculus BIBREF1, DCS BIBREF2, Discourse Representation Theory (DRT) BIBREF3, AMR BIBREF4 and so on. This availability of annotated data in English has translated into the development of a plethora of models, including encoder-decoders BIBREF5, BIBREF6 as well as tree or graph-structured decoders BIBREF5, BIBREF7, BIBREF0, BIBREF8.",
"Whereas the majority of semantic banks focus on English, recent effort has focussed on building multilingual representations, e.g. PMB BIBREF9, MRS BIBREF10 and FrameNetBIBREF11. However, manually annotating meaning representations in a new language is a painstaking process which explains why there are only a few datasets available for different formalisms in languages other than English. As a consequence, whereas the field has made great advances for English, little work has been done in other languages.",
"We ask: can we learn a semantic parser for English and test it where in another where annotations are not available? What would that require?",
"To answer this question, previous work have leveraged machine translation techniques to map the semantics from a language to another BIBREF12. However, these methods require parallel corpora to extract automatic alignments which are often noisy or not available at all.",
"In this paper we explore parameter-shared models instead, where a model is trained on English using language independent features and tested in a target language.",
"To show how this approach performs, we focus on the Parallel Meaning Bank BIBREF13 – a multilingual semantic bank, where sentences in English, German, Italian and Dutch have been annotated with their meaning representations. The annotations in the PMB are based on Discourse Representation Theory BIBREF3, a popular theory of meaning representation designed to account for intra and inter-sentential phenomena, like temporal expressions and anaphora. Figure 1 shows an example DRT for the sentence `I sat down and opened my laptop' in its canonical `box' representation. A DRS is a nested structure with the top part containing the discourse references and the bottom with unary and binary predicates, as well as semantic constants (e.g. `speaker'). DRS can be linked to each other via logic operator (e.g. $\\lnot $, $\\rightarrow $, $\\diamond $) or, as in this case, discourse relations (e.g. CONTINUATION, RESULT, ELABORATION, etc.).",
"To test our approach we leverage the DRT parser of liu2018discourse, an encoder-decoder architecture where the meaning representation is reconstructed in three stages, coarse-to-fine, by first building the DRS skeleton (i.e. the `box' structures) and then fill each DRS with predicates and variables. Whereas the original parser utilizes a sequential Bi-LSTM encoder with monolingual lexical features, we experiment with language-independent features in the form of cross-lingual word-embeddings, universal PoS tags and universal dependencies. In particular, we also make use of tree encoders to assess whether modelling syntax can be beneficial in cross-lingual settings, as shown for other semantic tasks (e.g. negation scope detection BIBREF14).",
"Results show that language-independent features are a valid alternative to projection methods for cross-lingual semantic parsing. We show that adding dependency relation as features is beneficial, even when they are the only feature used during encoding. However, we also show that modeling the dependency structure directly via tree encoders does not outperform a sequential BiLSTM architecture for the three languages we have experimented with."
],
[
"In this section, we describe the modifications to the coarse-to-fine encoder-decoder architecture of BIBREF0; for more detail, we refer the reader to the original paper."
],
[
"BiLSTM. We use BIBREF0's Bi-LSTM as baseline. However, whereas the original model represents each token in the input sentence as the concatenation of word ($e_{w_i}$) and lemma embeddings, we discard the latter and add a POS tag embedding ($e_{p_i}$) and dependency relation embedding ($e_{d_i}$) feature. These embeddings are concatenated to represent the input token. The final encoder representation is obtained by concatenating both final forward and backward hidden states.",
"TreeLSTM. To model the dependency structure directly, we use a child-sum tree-LSTM BIBREF15, where each word in the input sentence corresponds to a node in the dependency tree. In particular, summing across children is advantageous for cross-lingual tasks since languages might display different word orders. Computation follows Equation (1).",
"Po/treeLSTM. Completely discarding word order might hurt performance for related languages, where a soft notion of positioning can help. To this end, we add a positional embeddings $P_i$ BIBREF16 that helps the child-sum tree-LSTM discriminating between the left and right child of a parent node. This is computed following Equation (2) where $i$ is the position of the word, $j$ is the $jth$ dimension in total $d$ dimensions.",
"Bi/treeLSTM. Finally, similarly to chen2017improved, we combine tree-LSTM and Bi-LSTM, where a tree-LSTM come is initialized using the last layer of a Bi-LSTM, which encodes order information. Computation is shown in Equation (3)."
],
[
"The decoder of liu2018discourse reconstructs the DRS in three steps, by first predicting the overall structure (the `boxes'), then the predicates and finally the referents, with each subsequent step being conditioned on the output of the previous. During predicate prediction, the decoder uses a copying mechanism to predict those unary predicates that are also lemmas in the input sentence (e.g. `eat'). For the those that are not, soft attention is used instead. No modifications were done to the decoder; for more detail, we refer the reader to the original paper."
],
[
"We use the PMB v.2.1.0 for the experiments. The dataset consists of 4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences. We divide the English sentences into 3072 training sentences, 663 development and 670 testing sentences. We consider all the sentences in other languages as test set.",
"In order to be used as input to the parser, liu2018discourse first convert the DRS into tree-based representations, which are subsequently linearized into PTB-style bracketed sequences. This transformation is lossless in that re-entrancies are duplicated to fit in the tree structure. We use the same conversion in this work; for further detail we refer the reader to the original paper.",
"Finally, it is worth noting that lexical predicates in PMB are in English, even for non-English languages. Since this is not compatible with our copy mechanism, we revert predicates to their original language by substituting them with the lemmas of the tokens they are aligned to (since gold alignment information is included in the PMB)."
],
[
"In order to make the model directly transferable to the German, Italian and Dutch test data, we use the following language-independent features.",
"Multilingual word embeddings. We use the MUSE BIBREF17 pre-trained multilingual word embeddings and keep them fixed during training.",
"UD relations and structure. We use UDPipe BIBREF18 to obtain parses for English, German, Italian and Dutch. UD relation embeddings are randomly initialized and updated.",
"Universal POS tags. We use the Universal POS tags BIBREF19 obtained with UDPipe parser. Universal POS tag embeddings are randomly initialized and updated during training."
],
[
"We use the BiLSTM model as baseline (Bi) and compare it to the child-sum tree-LSTM (tree) with positional information added (Po/tree), as well as to a treeLSTM initialized with the hidden states of the BiLSTM(Bi/tree). We also conduct an ablation study on the features used, where WE, PE and DE are the word-embedding, PoS embedding and dependency relation embedding respectively. For completeness, along with the results for the cross-lingual task, we also report results for monolingual English semantic parsing, where word embedding features are randomly initialized."
],
[
"We use Counter BIBREF20 to evaluate the performance of our models. Counter looks for the best alignment between the predicted and gold DRS and computes precision, recall and F1. For further details about Counter, the reader is referred to van2018evaluating. It is worth reminding that unlike other work on the PMB BIBREF21, BIBREF0 does not deal with presupposition. In the PMB, presupposed variables are extracted from a main box and included in a separate one. In our work, we revert this process so to ignore presupposed boxes. Similarly, we also do not deal with sense tags which we aim to include in future work."
],
[
"Table TABREF12 shows the performance of our cross-lingual models in German, Italian and Dutch. We summarize the results as follows:",
"Dependency features are crucial for zero-shot cross-lingual semantic parsing. Adding dependency features dramatically improves the performance in all three languages, when compared to using multilingual word-embedding and universal PoS embeddings alone. We hypothesize that the quality of the multilingual word-embeddings is poor, given that models using embeddings for the dependency relations alone outperform those using the other two features.",
"TreeLSTMs slightly improve performance only for German. TreeLSTMs do not outperform a baseline BiLSTM for Italian and Dutch and they show little improvement in performance for German. This might be due to different factors that deserve more analysis including the performance of the parsers and syntactic similarity between these languages. When only dependency features are available, we found treeLSTM to boost performance only for Dutch.",
"BiLSTM are still state-of-the-art for monolingual semantic parsing for English. Table TABREF14 shows the result for the models trained and tested in English. Dependency features in conjunction with word and PoS embeddings lead to the best performance; however, in all settings explored treeLSTMs do not outperform a BiLSTM."
],
[
"We perform an error analysis to assess the quality of the prediction for operators (i.e. logic operators like “Not” as well as discourse relations “Contrast”), non-lexical predicates, such as binary predicates (e.g. Agent(e,x)) as well as unary predicates (e.g. time(t), entity(x), etc.), as well as for lexical predicates (e.g. open(e)). Results in Table TABREF15 show that predicting operators and binary predicates across language is hard, compared to the other two categories. Prediction of lexical predicates is relatively good even though most tokens in the test set where never seen during training; this can be attributable to the copy mechanism that is able to transfer tokens from the input directly during predication."
],
[
"Previous work have explored two main methods for cross-lingual semantic parsing. One method requires parallel corpora to extract alignments between source and target languages using machine translation BIBREF11, BIBREF22, BIBREF23 The other method is to use parameter-shared models in the target language and the source language by leveraging language-independent features such as multilingual word embeddings, Universal POS tags and UD BIBREF24, BIBREF25, BIBREF26, BIBREF27. For semantic parsing, encoder-decoder models have achieved great success. Amongst these, tree or graph-structured decoders have recently shown to be state-of-the-art BIBREF5, BIBREF7, BIBREF0, BIBREF28, BIBREF8."
],
[
"We go back to the questions in the introduction:",
"Can we train a semantic parser in a language where annotation is available?. In this paper we show that this is indeed possible and we propose a zero-shot cross-lingual semantic parsing method based on language-independent features, where a parser trained in English – where labelled data is available, is used to parse sentences in three languages, Italian, German and Dutch.",
"What would that require? We show that universal dependency features can dramatically improve the performance of a cross-lingual semantic parser but modelling the tree structure directly does not outperform sequential BiLSTM architectures, not even when the two are combined together.",
"We are planning to extend this initial survey to other DRS parsers that does not exclude presupposition and sense as well as to other semantic formalisms (e.g. AMR, MRS) where data sets annotated in languages other than English are available. Finally, we want to understand whether adding a bidirectionality to the treeLSTM will help improving the performance on modelling the dependency structure directly."
],
[
"This work was done while Federico Fancellu was a post-doctoral researcher at the University of Edinburgh. The views expressed are his own and do not necessarily represent the views of Samsung Research."
]
],
"section_name": [
"Introduction",
"Methods ::: Model",
"Methods ::: Model ::: Encoder",
"Methods ::: Model ::: Decoder",
"Methods ::: Data",
"Methods ::: Cross-lingual features",
"Methods ::: Model comparison",
"Methods ::: Evaluation",
"Results and Analysis",
"Results and Analysis ::: Error Analysis",
"Related work",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"6292cf2a300915426d600e539c96bef3641a24ed",
"714c9be4922032d74921289accd299cb0b4beaf4",
"76e7a31fb369c2474303ab6dac8dc5bdafb7a8f2"
],
"answer": [
{
"evidence": [
"Universal POS tags. We use the Universal POS tags BIBREF19 obtained with UDPipe parser. Universal POS tag embeddings are randomly initialized and updated during training."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Universal POS tags. We use the Universal POS tags BIBREF19 obtained with UDPipe parser. Universal POS tag embeddings are randomly initialized and updated during training."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"BiLSTM. We use BIBREF0's Bi-LSTM as baseline. However, whereas the original model represents each token in the input sentence as the concatenation of word ($e_{w_i}$) and lemma embeddings, we discard the latter and add a POS tag embedding ($e_{p_i}$) and dependency relation embedding ($e_{d_i}$) feature. These embeddings are concatenated to represent the input token. The final encoder representation is obtained by concatenating both final forward and backward hidden states."
],
"extractive_spans": [],
"free_form_answer": "3: In addition to word embedding, there is a POS tag embedding and a dependcy relation embedding. ",
"highlighted_evidence": [
"However, whereas the original model represents each token in the input sentence as the concatenation of word ($e_{w_i}$) and lemma embeddings, we discard the latter and add a POS tag embedding ($e_{p_i}$) and dependency relation embedding ($e_{d_i}$) feature. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"814621e6354e7d6e7387f418460a3f8646ec5674",
"9d55bc060cc024dc5d86ca46b304835253cacce0"
],
"answer": [
{
"evidence": [
"Table TABREF12 shows the performance of our cross-lingual models in German, Italian and Dutch. We summarize the results as follows:",
"FLOAT SELECTED: Table 1: Results of zero-shot cross-lingual semantic parsing for models trained in English and tested in German, Italian and Dutch.2"
],
"extractive_spans": [],
"free_form_answer": "Best authors achieved (different models) in terms of F1 score is:\nGerman - 0.6446\nItalian - 0.6999\nDutch - 0.6057",
"highlighted_evidence": [
"Table TABREF12 shows the performance of our cross-lingual models in German, Italian and Dutch.",
"FLOAT SELECTED: Table 1: Results of zero-shot cross-lingual semantic parsing for models trained in English and tested in German, Italian and Dutch.2"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Results of zero-shot cross-lingual semantic parsing for models trained in English and tested in German, Italian and Dutch.2",
"FLOAT SELECTED: Table 2: Results for monolingual semantic parsing (i.e. trained and tested in English)"
],
"extractive_spans": [],
"free_form_answer": "Max-F Scores for German .6446, Italian .6999. Dutch .6057 compared to 0.8748 for English",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results of zero-shot cross-lingual semantic parsing for models trained in English and tested in German, Italian and Dutch.2",
"FLOAT SELECTED: Table 2: Results for monolingual semantic parsing (i.e. trained and tested in English)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"9aaf4ef49fa9abcb4c57d9d4243500541ee97cf9",
"c32644a49afbf8a0415c514fdb4df124dbe61f57"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"d5f72426cbe5e24313f4b171834c528b20b669f4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"81e3f4cb315a4621705313d1c03d3886f0881454",
"a8018c349521dc3cd2720bf041480fdd6d000524"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We use the BiLSTM model as baseline (Bi) and compare it to the child-sum tree-LSTM (tree) with positional information added (Po/tree), as well as to a treeLSTM initialized with the hidden states of the BiLSTM(Bi/tree). We also conduct an ablation study on the features used, where WE, PE and DE are the word-embedding, PoS embedding and dependency relation embedding respectively. For completeness, along with the results for the cross-lingual task, we also report results for monolingual English semantic parsing, where word embedding features are randomly initialized."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We use the BiLSTM model as baseline (Bi) and compare it to the child-sum tree-LSTM (tree) with positional information added (Po/tree), as well as to a treeLSTM initialized with the hidden states of the BiLSTM(Bi/tree). We also conduct an ablation study on the features used, where WE, PE and DE are the word-embedding, PoS embedding and dependency relation embedding respectively."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2a9289f4954a4a79505de4ae535fcde2524f406b",
"ddbe7eb8b1ffac4cff381702e5b000a71ad9c211"
],
"answer": [
{
"evidence": [
"We use the PMB v.2.1.0 for the experiments. The dataset consists of 4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences. We divide the English sentences into 3072 training sentences, 663 development and 670 testing sentences. We consider all the sentences in other languages as test set."
],
"extractive_spans": [
"4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the PMB v.2.1.0 for the experiments. The dataset consists of 4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences. We divide the English sentences into 3072 training sentences, 663 development and 670 testing sentences. We consider all the sentences in other languages as test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the PMB v.2.1.0 for the experiments. The dataset consists of 4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences. We divide the English sentences into 3072 training sentences, 663 development and 670 testing sentences. We consider all the sentences in other languages as test set."
],
"extractive_spans": [],
"free_form_answer": "6794 sentences",
"highlighted_evidence": [
"We use the PMB v.2.1.0 for the experiments. The dataset consists of 4405 English sentences, 1173 German sentences, 633 Italian sentences and 583 Dutch sentences. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"35657d9b3b1e72114b0115e506b56edaad6c5bc7",
"6e46d23f1e152066a4657f7cffda340c0029cbe7"
],
"answer": [
{
"evidence": [
"Multilingual word embeddings. We use the MUSE BIBREF17 pre-trained multilingual word embeddings and keep them fixed during training."
],
"extractive_spans": [
"MUSE BIBREF17"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multilingual word embeddings. We use the MUSE BIBREF17 pre-trained multilingual word embeddings and keep them fixed during training."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Multilingual word embeddings. We use the MUSE BIBREF17 pre-trained multilingual word embeddings and keep them fixed during training."
],
"extractive_spans": [
"MUSE BIBREF17"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the MUSE BIBREF17 pre-trained multilingual word embeddings and keep them fixed during training."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How many lexical features are considered?",
"What is the performance for the three languages tested?",
"How many Universal Dependency features are considered?",
"Do they evaluate any non-zero-shot parsers on the three languages?",
"How big is the Parallel Meaning Bank?",
"What is the source of the crosslingual word embeddings?"
],
"question_id": [
"b2dc0c813da92cf13d86528bd32c12286ec9b9cd",
"c4c06f36454fbfdc5d218fb84ce74eaf7f78fc98",
"347dc2fd6427b39cf2358d43864750044437dff8",
"6911e8724dfdb178fa81bf58019947b71ef8fbe7",
"b012df09fa2a3d6b581032d68991768cf4bc9d7b",
"62edffd051d056cf60e17deafcc55a8c9af398cb"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"search_query": [
"italian",
"italian",
"italian",
"semantic parsing",
"semantic parsing",
"semantic parsing"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The Discourse Representation Structure (DRS) for “I sat down and opened my laptop”. For simplicity, we have omitted any time reference.",
"Table 1: Results of zero-shot cross-lingual semantic parsing for models trained in English and tested in German, Italian and Dutch.2",
"Table 2: Results for monolingual semantic parsing (i.e. trained and tested in English)",
"Table 3: Error analysis."
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"How many lexical features are considered?",
"What is the performance for the three languages tested?",
"How big is the Parallel Meaning Bank?"
] | [
[
"1908.10461-Methods ::: Cross-lingual features-3",
"1908.10461-Methods ::: Model ::: Encoder-0"
],
[
"1908.10461-4-Table2-1.png",
"1908.10461-Results and Analysis-0",
"1908.10461-4-Table1-1.png"
],
[
"1908.10461-Methods ::: Data-0"
]
] | [
"3: In addition to word embedding, there is a POS tag embedding and a dependcy relation embedding. ",
"Max-F Scores for German .6446, Italian .6999. Dutch .6057 compared to 0.8748 for English",
"6794 sentences"
] | 204 |
1612.05202 | Building a robust sentiment lexicon with (almost) no resource | Creating sentiment polarity lexicons is labor intensive. Automatically translating them from resourceful languages requires in-domain machine translation systems, which rely on large quantities of bi-texts. In this paper, we propose to replace machine translation by transferring words from the lexicon through word embeddings aligned across languages with a simple linear transform. The approach leads to no degradation, compared to machine translation, when tested on sentiment polarity classification on tweets from four languages. | {
"paragraphs": [
[
"Sentiment analysis is a task that aims at recognizing in text the opinion of the writer. It is often modeled as a classification problem which relies on features extracted from the text in order to feed a classifier. Relevant features proposed in the literature span from microblogging artifacts including hashtags, emoticons BIBREF0 , BIBREF1 , intensifiers like all-caps words and character repetitions BIBREF2 , sentiment-topic features BIBREF3 , to the inclusion of polarity lexicons.",
"The objective of the work presented in this paper is the creation of sentiment polarity lexicons. They are word lists or phrase lists with positive and negative sentiment labels. Sentiment lexicons allow to increase the feature space with more relevant and generalizing characteristics of the input. Unfortunately, creating sentiment lexicons requires human expertise, is time consuming, and often results in limited coverage when dealing with new domains.",
"In the literature, it has been proposed to extend existing lexicons without supervision BIBREF4 , BIBREF5 , or to automatically translate existing lexicons from resourceful languages with statistical machine translation (SMT) systems BIBREF6 . While the former requires seed lexicons, the later are very interesting because they can automate the process of generating sentiment lexicons without any human expertise. But automatically translating sentiment lexicons leads to two problems: (1) out-of-vocabulary words, such as mis-spellings, morphological variants and slang, cannot be translated, and (2) machine translation performance strongly depends on available training resources such as bi-texts.",
"In this paper, we propose to apply the method proposed in BIBREF7 for automatically mapping word embeddings across languages and use them to translate sentiment lexicons only given a small, general bilingual dictionary. After creating monolingual word embeddings in the source and target language, we train a linear transform on the bilingual dictionary and apply that transform to words for which we don't have a translation.",
"We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given gs trained on the respective Wikipedia of each language. Then, a SVM-based classifier is fed with lexicon features, comparing machine translation with embedding transfer.",
"After presenting related work in Section SECREF2 , the extraction of word gs and their mapping across languages are detailed in Section SECREF3 . The corpus on which experiments are carried out and the results of our experiments are presented in Section SECREF4 . Finally, we conclude with a discussion of possible directions in Section SECREF5 ."
],
[
"Many methods have been proposed for extending polarity lexicons: propagate polarity along thesaurus relations BIBREF8 , BIBREF9 , BIBREF10 or use cooccurrence statistics to identify similar words BIBREF11 , BIBREF12 .",
"Porting lexicons to other languages has also been studied: use aligned thesauri and propagate at the sense level BIBREF13 , BIBREF14 , translate the lexicon directly BIBREF15 , BIBREF16 , take advantage of off-the-shelf translation and include sample word context to get better translations BIBREF17 or use crowd sourcing to quickly bootstrap lexicons in non-english languages BIBREF18 ."
],
[
"Our approach consists in creating distributional word representations in the source and target languages, and map them to each other with a linear transform trained given a small bilingual dictionary of frequent words. Then, source language words from the polarity lexicon can be projected in the target language embedding. The closest words to the projecting are used as translation.",
"In our experiments, word embeddings are estimated on the source and target language Wikipedia corpora using the word2vec toolkit BIBREF19 . The embeddings are trained using skip-gram approach with a window of size 7 and 5 iterations. The dimension of the embeddings is fixed to 200.",
" BIBREF20 have shown that the skip-gram word embedding model is in fact a linear decomposition of the cooccurrence matrix. This decomposition is unique up to a linear transformation. Therefore, given two word representations created from the same cooccurrence matrix, a linear transform can be devised to map words from the first to the second. Assuming that cooccurrence matrices for the source and target languages are sampled from the same language-independent cooccurrent matrix, one can find a linear transform for mapping source words to target words, up to an error component which represents sampling error. This assumption is realistic for comparable corpora, such as embeddings trained on wikipedia in various languages. In our experiments, we preferred to estimate word embeddings on Wikipedia rather than Twitter corpora because across languages, Tweets can cover different event from different countries, reducing the overlap.",
"However, word embeddings represent a mixture from the senses of each word, making the cross-language mapping non bijective (a word can have multiple translations), which will probably contribute to the residual. Therefore, it should be reasonable to train a linear transform to map words between the source and target languages. Note that a linear transform would conserve the translations associated to linguistic regularities observed in the vector spaces.",
"The idea is to translate words in another language in the goal to generate sentiment lexicon. In BIBREF7 , the authors propose to estimate a transformation matrix INLINEFORM0 such that INLINEFORM1 , where INLINEFORM2 is the embedding of a word in the source language and INLINEFORM3 is the embedding of its translation in the target language. In order to estimate the INLINEFORM4 matrix, suppose we are given a set of word pairs and their associated vector representations INLINEFORM5 where INLINEFORM6 is the embeddings of word INLINEFORM7 in the source language and INLINEFORM8 is the embedding of its translation. The matrix INLINEFORM9 can be learned by the following optimization problem: DISPLAYFORM0 ",
"which we solve with the least square method.",
"At prediction time, for any given new word INLINEFORM0 , we can map it to the other language space by computing INLINEFORM1 . Then we find the words whose representations are closest to INLINEFORM2 in the target language space using the cosine similarity as distance metric. In our experiments, we select all representations which cosine similarity is superior to INLINEFORM3 (with INLINEFORM4 set empirically).",
"In practice, we only have manual translations for a small subset of words, not necessarily polarity infused, on which we train INLINEFORM0 . We use that INLINEFORM1 to find translations for all words of the sentiment lexicon."
],
[
"The sentiment polarity classification task is set as a three-class problem: positive, negative and neutral. The metrics used to measure performance is macro-fmeasure. We developed our system on French and apply the same components on Italian, Spanish and German. A concise description of the training data follows.",
"The French (FR) corpus comes from the DEFT'15 evaluation campaign . It consists of 7,836 tweets for training and 3,381 tweets for testing. The Italian (IT) corpus was released as part of the SentiPOLC'14 evaluation campaign BIBREF24 . It consists of 4,513 tweets for training and 1,930 tweets for testing. For Spanish (ES), the TASS'15 corpus is used BIBREF25 . Since the evaluation campaign was still ongoing at the time of writing, we use 3-fold validation on the training corpus composed of 7,219 tweets. German (DE) tweets come from the Multilingual Sentiment Dataset BIBREF26 . It consists of 844 tweets for training and 844 tweets for testing.",
"In order to extract features on those corpora, polarity lexicons are translated from English using the method described in Section SECREF3 . The following lexicons are translated:",
"MPQA: The MPQA (Multi-Perspective Question Answering) lexicon is composed of 4913 negatives words and 2718 positives words BIBREF27 .",
"BingLiu: This lexicon contains 2006 positive words and 4783 negative words. This lexicon includes mis-spellings, morphological variants and slang BIBREF28 .",
"HGI: The Harvard General Inquirer (HGI) lexicons contains several dictionaries, we only used positive and negative lexicons that contains respectively 1915 and 2291 words BIBREF29 .",
"NRC: NRC Emotion Lexicon is a large word list constructed by Amazon Mechanical Turk BIBREF30 ."
],
[
"In order to test the value of the create lexicons, we use them in a typical sentiment polarity classification system BIBREF31 . We first tokenize the tweets with a tokenizer based on macaon BIBREF32 . Then, hashtags and usertags are mapped to generic tokens. Each tweet is represented with the following features and an SVM classifier with a linear kernel is trained to perform the task.",
"Words n-grams",
"All-caps: the number of words with all characters in upper case",
"Hashtags: the number of hashtags",
"Lexicons: number of words present in each lexicon",
"Punctuation: the number of contiguous sequences of exclamation marks, question marks, and both exclamation and question marks",
"Last punctuation: whether the last token contains an exclamation or question mark",
"Emoticons: presence or absence of positive and negative emoticons at any position in the tweet",
"Last emoticon: whether the last token is a positive or negative emoticon",
"Elongated words: the number of words with one character repeated more than three times, for example : “loooool\"",
"We did not implement part-of-speech and cluster features as they cannot be assumed to be available in the target languages. This system was part of the system combination that obtained the best results at the TASS 2015 BIBREF25 , BIBREF33 and DEFT 2015 BIBREF34 , BIBREF35 evaluation campaigns."
],
[
"Table TABREF2 reports the results of the system and different baselines. The No Sentiment Lexicon system does not have any lexicon feature. It obtains a macro-fmeasure of 60.65 on the four corpora.",
"Systems denoted BIBREF21 , BIBREF22 , BIBREF23 are baselines that correspond respectively to unsupervised, supervised and semi-supervised approaches for generating the lexicon. We observe that adding sentiment lexicons improves performance.",
"The Moses system consists in translating the different sentiment lexicons with the Moses SMT toolkit. It is trained on the Europarl bi-texts. The approach based on translation obtains better results than the Baseline systems. In our experiments, we observe that some words have not been correctly translated (for example: slang words). The main drawback on this approach is that for correctly translating sentiment lexica, the SMT system must be trained on in-domain bi-texts..",
"The BWE (Bilingual Word Embeddings) system consists in translating the sentiment lexicons with our method. This approach obtains results comparable to the SMT approach. The main advantage of this approach is to be able to generalize on words unknown to the SMT system.",
"Moses and BWE can be combined by creating a lexicon from the union of the lexicons obtained by those systems. This combination yields even better results than translation or mapping alone.",
"Our second experiment consists in varying the size of the bilingual dictionary used to train INLINEFORM0 . Figure FIGREF20 shows the evolution of average macro f-measure (over the four languages) when the INLINEFORM1 most frequent words from Wikipedia are part of the bilingual dictionary. It can be observed that using the 50k most frequent words leads to the best performance (an average macro-fmeasure of 61.72) while only 1,000 words already brings nice improvements.",
"In a last experiment, we look into the gains that can be obtained by manually translating a small part of the lexicon and use it as bilingual dictionary when training the transformation matrix. Figure FIGREF21 shows average macro-fmeasure on the four languages when translating up to 2,000 words from the MPQA lexicon (out of 8k). It can be observed that from 600 words on, performance is better than that of the statistical translation system."
],
[
"This paper is focused on translating sentiment polarity lexicons from a resourceful language through word embeddings mapped from the source to the target language. Experiments on four languages with mappings from English show that the approach performs as well as full-fledged SMT. While the approach was successful for languages close to English where word-to-word translations are possible, it may not be as effective for languages where this assumption does not hold. We will explore this aspect for future work."
],
[
"The research leading to these results has received funding from the European Union - Seventh Framework Programme (FP7/2007-2013) under grant agreement no 610916 SENSEI."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Corpus and Metrics",
"System",
"Results",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"002fe6fa9e9086ba470b1362c19ddc9d410a20ae",
"f0ea6927e0585e2511e9352305faaa970a03faf3"
],
"answer": [
{
"evidence": [
"In a last experiment, we look into the gains that can be obtained by manually translating a small part of the lexicon and use it as bilingual dictionary when training the transformation matrix. Figure FIGREF21 shows average macro-fmeasure on the four languages when translating up to 2,000 words from the MPQA lexicon (out of 8k). It can be observed that from 600 words on, performance is better than that of the statistical translation system."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In a last experiment, we look into the gains that can be obtained by manually translating a small part of the lexicon and use it as bilingual dictionary when training the transformation matrix. Figure FIGREF21 shows average macro-fmeasure on the four languages when translating up to 2,000 words from the MPQA lexicon (out of 8k). It can be observed that from 600 words on, performance is better than that of the statistical translation system."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In a last experiment, we look into the gains that can be obtained by manually translating a small part of the lexicon and use it as bilingual dictionary when training the transformation matrix. Figure FIGREF21 shows average macro-fmeasure on the four languages when translating up to 2,000 words from the MPQA lexicon (out of 8k). It can be observed that from 600 words on, performance is better than that of the statistical translation system."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In a last experiment, we look into the gains that can be obtained by manually translating a small part of the lexicon and use it as bilingual dictionary when training the transformation matrix. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3c1080aaf7826e891e7d354f740c064c09e643e0",
"fd75bb40a42064fb7c090df41b7408187b8fdca8"
],
"answer": [
{
"evidence": [
"Table TABREF2 reports the results of the system and different baselines. The No Sentiment Lexicon system does not have any lexicon feature. It obtains a macro-fmeasure of 60.65 on the four corpora."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The No Sentiment Lexicon system does not have any lexicon feature. It obtains a macro-fmeasure of 60.65 on the four corpora."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Table TABREF2 reports the results of the system and different baselines. The No Sentiment Lexicon system does not have any lexicon feature. It obtains a macro-fmeasure of 60.65 on the four corpora."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The No Sentiment Lexicon system does not have any lexicon feature. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3923253f3ef9944e5e731308d2c8aea2ceb78561",
"4942f2ef5a67e4063509f37fc6e390859756d4b8"
],
"answer": [
{
"evidence": [
"We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given gs trained on the respective Wikipedia of each language. Then, a SVM-based classifier is fed with lexicon features, comparing machine translation with embedding transfer."
],
"extractive_spans": [],
"free_form_answer": "English-French, English-Italian, English-Spanish, English-German.",
"highlighted_evidence": [
"We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given gs trained on the respective Wikipedia of each language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given gs trained on the respective Wikipedia of each language. Then, a SVM-based classifier is fed with lexicon features, comparing machine translation with embedding transfer."
],
"extractive_spans": [
"French, Italian, Spanish and German",
"Existing English sentiment lexicons are translated to the target languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given gs trained on the respective Wikipedia of each language."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they compare against manually-created lexicons?",
"Do they compare to non-lexicon methods?",
"What language pairs are considered?"
],
"question_id": [
"d5c393df758dec6ea6827ae5b887eb6c303a4f4d",
"11a3af3f056e0fb5559fe5cbff1640e022732735",
"07a214748a69b31400585aef7aba6af3e3d9cce2"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Results in macro-fmeasure obtained on the different languages (French, Italian, Spanish and German) using different sentiment lexicon (MPQA, BingLiu, HGI and NRC).",
"Fig. 1. Average macro-fmeasure over the four languages when training the linear transform with a bilingual dictionary of n most frequent words (MPQA sentiment lexicon).",
"Fig. 2. Average macro-fmeasure over the four language when training the linear transform with a small part of the lexicon words (MPQA sentiment lexicon)."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png"
]
} | [
"What language pairs are considered?"
] | [
[
"1612.05202-Introduction-4"
]
] | [
"English-French, English-Italian, English-Spanish, English-German."
] | 205 |