---
license: other
license_name: ihtsdo-and-nlm-licences
license_link: https://www.nlm.nih.gov/databases/umls.html
language:
- nl
- en
library_name: sentence-transformers
tags:
- medical
- biology
pipeline_tag: sentence-similarity
widget:
- source_sentence: bartonellosis
  sentences:
  - kattenkrabziekte
  - wond, kattenkrab
  - door teken overgedragen orbiviruskoorts
  - kattenbont
---

# In-Context Dutch Clinical Embeddings with BioLORD & MedMentions

Do mentions sharing the same text need to have the same embedding? No!

This model supports embedding biomedical entities in both English and Dutch, but also supports in-context embedding of concepts, using the following template:

```
mention text [SEP] (context: ... a textual example containing mention text and some more text on both sides ...)
```

It also supports embedding mentions without context, particularly in English.

**NOTE:** Unlike other models in the series, this model uses the [CLS] token to embed the mention.
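For instance, a mention can be embedded together with the sentence it appears in. Below is a minimal sketch using sentence-transformers; the Dutch example sentence is invented for illustration:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')

# The same mention, embedded without and with its clinical context
mention = "kattenkrabziekte"
context = "De patiënt ontwikkelde kattenkrabziekte enkele weken na een krab van haar kat."
in_context = f"{mention} [SEP] (context: {context})"

embeddings = model.encode([mention, in_context])
print(embeddings.shape)  # two 768-dimensional vectors
```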
## References

### 📖 BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights
Journal of the American Medical Informatics Association, 2024
François Remy, Kris Demuynck, Thomas Demeester
[view online](https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocae029/7614965)

### 📖 Annotation-preserving machine translation of English corpora to validate Dutch clinical concept extraction tools
Under review, with a preprint available on medRxiv.org, 2024
Tom Seinen, Jan Kors, Erik van Mulligen, Peter Rijnbeek
[view online](https://www.medrxiv.org/content/medrxiv/early/2024/03/15/2024.03.14.24304289.full.pdf)

## Citation
This model accompanies the [BioLORD-2023: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2311.16075) paper. When you use this model, please cite the original paper as follows:

```latex
@article{remy-etal-2023-biolord,
    author = {Remy, François and Demuynck, Kris and Demeester, Thomas},
    title = "{BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights}",
    journal = {Journal of the American Medical Informatics Association},
    pages = {ocae029},
    year = {2024},
    month = {02},
    issn = {1527-974X},
    doi = {10.1093/jamia/ocae029},
    url = {https://doi.org/10.1093/jamia/ocae029},
    eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae029/56772025/ocae029.pdf},
}
```

## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been fine-tuned for the biomedical domain. While it preserves a good ability to produce embeddings for general-purpose text, it will be more useful to you if you are trying to process medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space.

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["wond door kattenkrab", "kattenkrabziekte", "bartonellosis"]

model = SentenceTransformer('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings. As noted above, this model uses the [CLS] token to embed mentions.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# CLS Pooling - take the embedding of the [CLS] token
# (this model embeds mentions with the [CLS] token; see NOTE above)
def cls_pooling(model_output):
    token_embeddings = model_output[0] # First element of model_output contains all token embeddings
    return token_embeddings[:, 0]

# Sentences we want sentence embeddings for
sentences = ["wond door kattenkrab", "kattenkrabziekte", "bartonellosis"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')
model = AutoModel.from_pretrained('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = cls_pooling(model_output)

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```
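Whichever route you take, the resulting embeddings can be compared with cosine similarity, for instance to rank candidate concept names for a mention. Below is a minimal sketch using the sentence-transformers utilities; the terms are taken from the widget examples above, and the expected ranking is the intended behavior, not a reported result:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('FremyCompany/BioLORD-2023-M-Dutch-InContext-v1')

# Dutch and English names for cat-scratch disease, plus an unrelated term
terms = ["kattenkrabziekte", "bartonellosis", "kattenbont"]
embeddings = model.encode(terms)

# Pairwise cosine similarities; the two disease names should score
# higher with each other than with the unrelated term
print(util.cos_sim(embeddings, embeddings))
```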
## License
My own contributions to this model are covered by the MIT license. However, given that the data used to train this model originates from UMLS and SNOMED CT, you will need to ensure you have proper licensing for UMLS and SNOMED CT before using this model. Both UMLS and SNOMED CT are free of charge in most countries, but you might have to create an account and report your usage of the data yearly to keep a valid license.