---
tags:
- flair
- entity-mention-linker
---

## sapbert-ncbi-taxonomy-no-ab3p

Biomedical Entity Mention Linking for UMLS.
We use this model for species, since the NCBI Taxonomy is contained in UMLS:

- Model: [cambridgeltl/SapBERT-from-PubMedBERT-fulltext](https://huggingface.co/cambridgeltl/SapBERT-from-PubMedBERT-fulltext)
- Dictionary: [NCBI Taxonomy](https://www.ncbi.nlm.nih.gov/taxonomy) (see the [FTP dump](https://ftp.ncbi.nih.gov/pub/taxonomy/new_taxdump/))

This model variant does not perform abbreviation resolution via [Ab3P](https://github.com/ncbi-nlp/Ab3P).

### Demo: How to use in Flair

Requires:

- **[Flair](https://github.com/flairNLP/flair/) >= 0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)

```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer

sentence = Sentence(
    "The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
    "a neurodegenerative disease, which is exacerbated by exposure to high "
    "levels of mercury in dolphin populations.",
    use_tokenizer=SciSpacyTokenizer(),
)

# load HunFlair to detect the entity mentions we want to link
tagger = Classifier.load("hunflair-species")
tagger.predict(sentence)

# load the linker and its dictionary
linker = EntityMentionLinker.load("species-linker")
linker.predict(sentence)

# print the results for each entity mention
for span in sentence.get_spans(tagger.label_type):
    for link in span.get_labels(linker.label_type):
        print(f"{span.text} -> {link.value}")
```
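
If you also want to see how confident the linker is about each prediction, every Flair label carries a score alongside its value. The snippet below is a small illustrative extension of the demo above (it reuses the `sentence`, `tagger`, and `linker` objects already created there):

```python
# reuses `sentence`, `tagger`, and `linker` from the demo above;
# `link.score` is the label's confidence / similarity score
for span in sentence.get_spans(tagger.label_type):
    for link in span.get_labels(linker.label_type):
        print(f"{span.text} -> {link.value} (score: {link.score:.4f})")
```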

As an alternative to downloading the already precomputed model (which requires considerable storage), you can also build the model
and compute the embeddings for the dictionary yourself:

```python
linker = EntityMentionLinker.build(
    "cambridgeltl/SapBERT-from-PubMedBERT-fulltext",
    dictionary_name_or_path="ncbi-taxonomy",
    entity_type="species",
    hybrid_search=False,
)
```

This reduces the download requirements at the cost of computation time. Note `hybrid_search=False`: unlike BioSyn, SapBERT is trained for dense retrieval only.
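
A linker built this way is then used exactly like the precomputed one; as a minimal sketch (assuming a sentence that has already been tagged with the `hunflair-species` tagger, as in the demo above):

```python
# `sentence` and `tagger` are assumed to come from the demo above
linker.predict(sentence)

for span in sentence.get_spans(tagger.label_type):
    for link in span.get_labels(linker.label_type):
        print(f"{span.text} -> {link.value}")
```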