
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

LUKE (Language Understanding with Knowledge-based Embeddings) is a pre-trained contextualized representation of words and entities based on the transformer. LUKE treats words and entities in a given text as independent tokens and outputs contextualized representations of both. It adopts an entity-aware self-attention mechanism, an extension of the transformer's self-attention that takes the type of each token (word or entity) into account when computing attention scores.
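
As an illustration of this word-plus-entity interface, here is a minimal sketch of obtaining contextualized representations, assuming the Hugging Face transformers LUKE integration (LukeTokenizer and LukeModel) and the studio-ousia/luke-large checkpoint name:

```python
from transformers import LukeTokenizer, LukeModel

# Assumption: this checkpoint is available through the transformers LUKE integration.
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large")
model = LukeModel.from_pretrained("studio-ousia/luke-large")

text = "Beyoncé lives in Los Angeles."
# Character-level spans of the entity mentions "Beyoncé" and "Los Angeles".
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

word_states = outputs.last_hidden_state           # contextualized word tokens
entity_states = outputs.entity_last_hidden_state  # contextualized entity tokens
print(word_states.shape, entity_states.shape)
```

Because words and entities are separate token sequences, the entity representations come out of a dedicated output field rather than being sliced out of the word sequence.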

LUKE achieves state-of-the-art results on five popular NLP benchmarks: SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing).

Please check the official repository for more details and updates.

This is the LUKE large model with 24 hidden layers and a hidden size of 1024. The model has 483M parameters in total and was trained on the December 2018 version of Wikipedia.
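
A quick sanity check of these dimensions, again a sketch assuming the transformers integration and the studio-ousia/luke-large checkpoint name:

```python
from transformers import LukeModel

model = LukeModel.from_pretrained("studio-ousia/luke-large")

# Expect 24 hidden layers and a hidden size of 1024 for the large model.
print(model.config.num_hidden_layers, model.config.hidden_size)

# Total parameter count (the card states roughly 483M, including entity embeddings).
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")
```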

Experimental results

The experimental results on these benchmarks are as follows:

| Task | Dataset | Metric | LUKE-large | LUKE-base | Previous SOTA |
|------|---------|--------|------------|-----------|---------------|
| Extractive Question Answering | SQuAD v1.1 | EM/F1 | 90.2/95.4 | 86.1/92.3 | 89.9/95.1 (Yang et al., 2019) |
| Named Entity Recognition | CoNLL-2003 | F1 | 94.3 | 93.3 | 93.5 (Baevski et al., 2019) |
| Cloze-style Question Answering | ReCoRD | EM/F1 | 90.6/91.2 | - | 83.1/83.7 (Li et al., 2019) |
| Relation Classification | TACRED | F1 | 72.7 | - | 72.0 (Wang et al., 2020) |
| Fine-grained Entity Typing | Open Entity | F1 | 78.2 | - | 77.6 (Wang et al., 2020) |

Citation

If you find LUKE useful for your work, please cite the following paper:

```bibtex
@inproceedings{yamada2020luke,
  title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
  author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
  booktitle={EMNLP},
  year={2020}
}
```