Upload 3 files
- medlexsp/README.md +217 -0
- medlexsp/process_dataset.py +32 -0
- medlexsp/using_dataset_hugginface.py +203 -0

medlexsp/README.md
ADDED
@@ -0,0 +1,217 @@
## LICENSE

[Application form](https://digital.csic.es/bitstream/10261/270429/2/MedLexSp_License_2022.pdf)

Licenser grants Licensee a non-exclusive license to use the MedLexSp lexicon. Licensee agrees:
- to use the lexicon only for non-commercial, non-profit research purposes;
- to make no changes to the lexicon;
- and to acknowledge the use of the lexicon in all publications reporting on results produced with the help of MedLexSp.

This license is granted by Licenser to Licensee free of charge. You agree that the owner has the right to publicly identify you as a User/Licensee of MedLexSp. This Agreement and the appendix hereto embody the entire understanding between the parties relating to the subject matter hereof, and there are no terms or conditions hereof, express or implied, written or oral. This Agreement supersedes all prior oral or written representations, agreements, promises or other communications concerning or relating to the subject matter of this Agreement. Use of the lexicon, or use of data derived from the lexicon, for any commercial purposes requires the explicit written agreement of Licenser.

[Reference on GitHub](https://github.com/lcampillos/MedLexSp)

GENERAL INFORMATION

1. Title of Dataset: Medical Lexicon for Spanish (MedLexSp)

2. Authors: Campillos-Llanos, Leonardo

3. Date of data collection: 2022-05-07

4. Date of data publication on repository: 2022-05-25

5. Geographic location of data collection:
Spain, Latin America and the United States of America (data from MedlinePlus Spanish and the Spanish version of the National Cancer Institute Dictionary of Cancer Terms)

6. Information about funding sources that supported the collection of the data (including research project reference/acronym):

This work was carried out under the NLPMedTerm project, funded by the European Union's Horizon 2020 research programme under Marie Skłodowska-Curie grant agreement no. 713366 (InterTalentum UAM), and the CLARA-MeD project (PID2020-116001RA-C33), funded by MCIN/AEI/10.13039/501100011033/, under the call "Proyectos I+D+i Retos Investigación".

7. Recommended citation for this dataset:

Campillos-Llanos, Leonardo; 2022; Medical Lexicon for Spanish (MedLexSp) [Dataset]; DIGITAL.CSIC; https://doi.org/10.20350/digitalCSIC/14656

SHARING/ACCESS/CONTEXT INFORMATION

1. Usage licenses/restrictions placed on the data (please indicate if different data files have different usage licenses):

The Medical Lexicon for Spanish gathers terms from several sources:
- Terminologies in the Spanish language in the UMLS® Metathesaurus: the Medical Subject Headings (MeSH), the World Health Organization (WHO) Adverse Drug Reactions terminology, the International Classification of Primary Care (ICPC), the Medical Dictionary for Regulatory Activities (MedDRA) and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT®). MedLexSp also includes terms mapped to Spanish through the UMLS from the English versions of the International Classification of Diseases, 10th revision (ICD-10), the Diagnostic and Statistical Manual of Mental Disorders (DSM-5®) and the Online Mendelian Inheritance in Man (OMIM®) catalog of genes and genetic disorders.
- A subset of terms from the Diccionario de términos médicos (DTM) developed by the Unidad de Terminología Médica (Real Academia Nacional de Medicina de España), in collaboration with the Asociación Latinoamericana de Academias Nacionales de Medicina, España y Portugal (ALANAM).
- Supplementary terminological sources such as the Anatomical Therapeutical Classification from the WHO, the National Cancer Institute (NCI) Dictionary of Cancer Terms, OrphaData from INSERM, or the Nomenclátor de Prescripción from AEMPS.
- Corpus-based terms derived from the Spanish version of MedlinePlus (supported by the National Library of Medicine), Summaries of Product Characteristics in Spanish from the Spanish Drug Effect database, and the Chilean Waiting List Corpus.
- Terms from the following named entity recognition challenges and shared tasks: CANTEMIST, CODIESP and PharmaCoNER (see point 6).

Terms from the Diccionario panhispánico de términos médicos (Real Academia Nacional de Medicina de España) and the Medical Subject Headings (MeSH, by BIREME) were obtained through a distribution and usage agreement with the corresponding institutions that develop them.

The freely available version of MedLexSp does not include terms or codes from copyrighted terminological sources (MedDRA and SNOMED-CT); only the subset of MedLexSp data without usage restrictions is accessible.

2. Links to publications/other research outputs that cite the data:

Báez, P., Bravo-Marquez, F., Dunstan, J., Rojas, M., & Villena, F. (2022). Automatic extraction of nested entities in clinical referrals in Spanish. ACM Transactions on Computing for Healthcare (HEALTH), 3(3), 1-22.

Campillos-Llanos, L., Valverde-Mateos, A., Capllonch-Carrión, A., & Moreno-Sandoval, A. (2021). A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine. BMC Medical Informatics and Decision Making, 21(1), 1-19. https://doi.org/10.1186/s12911-021-01395-z

Dunstan, J., Villena, F., Pérez, J., & Lagos, R. (2021). Supporting the classification of patients in public hospitals in Chile by designing, deploying and validating a system based on natural language processing. BMC Medical Informatics and Decision Making, 21(1), 1-11.

Tamine, L., & Goeuriot, L. (2021). Semantic information retrieval on medical texts: Research challenges, survey, and open issues. ACM Computing Surveys (CSUR), 54(7), 1-38.

3. Links to publications/other research outputs that use the data:

Campillos-Llanos, L., Valverde-Mateos, A., Capllonch-Carrión, A., & Moreno-Sandoval, A. (2021). A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine. BMC Medical Informatics and Decision Making, 21(1), 1-19. https://doi.org/10.1186/s12911-021-01395-z

4. Links to other publicly accessible locations of the data:
https://github.com/lcampillos/MedLexSp

5. Links/relationships to ancillary data sets:
N/A

6. Was data derived from another source? If so, please add a link to where such work is located:

- Cancer Text Mining Shared Task (CANTEMIST) (Biomedical Text Mining Unit): https://temu.bsc.es/cantemist/
- Chilean Waiting List Corpus (Universidad de Chile): https://zenodo.org/record/3926705
- Clinical Case Coding in Spanish (CODIESP) shared task (Biomedical Text Mining Unit): https://temu.bsc.es/codiesp/
- Clinical Trials for Evidence-Based Medicine in Spanish (CT-EBM-SP) corpus: https://zenodo.org/record/6059737
- DELAS electronic dictionaries for Spanish: Blanco, X. (2000). Les dictionnaires électroniques de l'espagnol (DELASs et DELACs). Lingvisticæ Investigationes, 23(2), 201-218.
- Diccionario de términos médicos (Real Academia Nacional de Medicina de España): https://dtme.ranm.es
- MedlinePlus Spanish (National Library of Medicine, NLM): https://medlineplus.gov/spanish/
- National Cancer Institute (NCI) Dictionary of Cancer Terms, Spanish version: https://www.cancer.gov/publications/dictionaries/cancer-terms
- Nomenclátor de Prescripción (AEMPS): https://listadomedicamentos.aemps.gob.es/prescripcion.zip
- Orphadata: free access data from Orphanet: http://www.orphadata.org
- Pharmacological Substances, Compounds and Proteins Named Entity Recognition (PharmaCoNER) challenge (Biomedical Text Mining Unit): https://temu.bsc.es/pharmaconer/
- SPACCC POS-TAGGER: https://doi.org/10.5281/zenodo.2621286
- Spanish Drug Effect database: https://github.com/isegura/ADR
- Unified Medical Language System: https://www.nlm.nih.gov/research/umls/index.html

DATA & FILE OVERVIEW

1. File List:
- MedLexSp.dsv: a delimiter-separated value file with the following data fields: field 1 is the UMLS CUI of the entity; field 2, the lemma; field 3, the variant forms; field 4, the part-of-speech; field 5, the semantic type(s); and field 6, the semantic group.
- MedLexSp.xml: an XML-encoded version using the Lexical Markup Framework (LMF), which includes the morphological data (number, gender, verb tense and person, and information about affixes/abbreviations). The Document Type Definition file is also provided (lmf.dtd).
- Lexical Record files, in subfolder "LR/":
· LR_abr.dsv: list of equivalences between acronyms/abbreviations and full forms.
· LR_affix.dsv: provides the equivalences between affixes/roots and their meanings.
· LR_n_v.dsv: list of deverbal nouns.
· LR_adj_n.dsv: list of adjectives derived from nouns.
- spaCy lemmatizer (in subfolder "spacy_lemmatizer/"): lemmatizer.py
- Stanza lemmatizer (in subfolder "stanza_lemmatizer/"): ancora-medlexsp.pt

Companion code and files can be found in the GitHub repository: https://github.com/lcampillos/MedLexSp
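
The six-field layout of MedLexSp.dsv described above can be read with Python's standard `csv` module. A minimal sketch follows; the tab delimiter, the field names, and the inline sample row are assumptions for illustration and should be checked against the distributed file:

```python
import csv
import io

# Illustrative row following the six-field layout described above:
# CUI, lemma, variant forms, part-of-speech, semantic type(s), semantic group.
# The tab delimiter and this sample row are assumptions, not taken from the dataset.
SAMPLE_DSV = "C0004057\taspirina\taspirinas\tNOUN\tPharmacologic_Substance\tCHEM\n"

FIELDS = ["cui", "lemma", "variants", "pos", "semantic_types", "semantic_group"]

rows = []
reader = csv.reader(io.StringIO(SAMPLE_DSV), delimiter="\t")
for record in reader:
    rows.append(dict(zip(FIELDS, record)))

print(rows[0]["lemma"])  # aspirina
```

For the real file, replace the `io.StringIO` wrapper with `open("MedLexSp.dsv", encoding="utf-8")`.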

2. Relationship between files, if important:
N/A

3. Additional related data collected that was not included in the current data package:
Codes from the source terminologies and ontologies are not distributed.

4. Are there multiple versions of the dataset? If so, please indicate where they are located:
This is version 1 (which updates the preliminary version 0 from 2019).

METHODOLOGICAL INFORMATION

1. Description of methods used for collection/generation of data:
Methods are explained in the companion scientific article.

2. Instrument- or software-specific information needed to interpret/reproduce the data, please indicate their location:
- For the XML file, an XML editor.
- For the spaCy and Stanza lemmatizers, the following Python libraries are required:
· spaCy: https://spacy.io/
· Stanza: https://stanfordnlp.github.io/stanza/

3. Standards and calibration information, if appropriate:
See the companion article, where the evaluation of MedLexSp is explained.

4. Environmental/experimental conditions:
N/A

5. Describe any quality-assurance procedures performed on the data:
See the companion article, where the evaluation of MedLexSp is explained.

6. People involved with sample collection, processing, analysis and/or submission, please specify using CRediT roles https://casrai.org/credit/:
Leonardo Campillos-Llanos

7. Author contact information:
Leonardo Campillos-Llanos
Instituto de Lengua, Literatura y Antropología (ILLA)
Spanish National Research Council (CSIC)
c/Albasanz 26-28, 28037, Madrid, Spain

DATA-SPECIFIC INFORMATION:

1. Number of variables: N/A

2. Number of cases/rows:
- 100 887 lemmas (term entries)
- 302 543 inflected forms (conjugated verbs, and number/gender variants)
- 42 958 Unified Medical Language System (UMLS) Concept Unique Identifiers (CUIs)

3. Variable List: N/A

4. Missing data codes:
8520 out of 100 887 entries

5. Specialized formats or other abbreviations used:
- DSV: Delimiter-Separated Values file
- LMF: Lexical Markup Framework (ISO 24613:2008): https://en.wikipedia.org/wiki/Lexical_Markup_Framework
- XML: Extensible Markup Language
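
The LMF-encoded XML can be traversed with Python's standard library. A minimal sketch follows, using the namespace and element names (Lexicon, LexicalEntry, Lemma, SemanticType) that the companion scripts in this upload rely on; the inline two-entry sample is illustrative, not data from MedLexSp.xml:

```python
import xml.etree.ElementTree as ET

# Namespace used by the LMF encoding (as referenced in process_dataset.py)
NS = "{http://www.lexicalmarkupframework.org/}"

# Illustrative sample mirroring the structure the companion scripts traverse
SAMPLE = """<LexicalResource xmlns="http://www.lexicalmarkupframework.org/">
  <Lexicon>
    <LexicalEntry>
      <Lemma writtenForm="aspirina"/>
      <SemanticType val="Pharmacologic_Substance"/>
    </LexicalEntry>
    <LexicalEntry>
      <Lemma writtenForm="fiebre"/>
      <SemanticType val="Sign_or_Symptom"/>
    </LexicalEntry>
  </Lexicon>
</LexicalResource>"""

def iter_entries(root):
    """Yield (lemma, [semantic types]) pairs from an LMF tree."""
    for lexicon in root.iter(NS + "Lexicon"):
        for entry in lexicon.iter(NS + "LexicalEntry"):
            lemma = entry.find(NS + "Lemma").attrib["writtenForm"]
            types = [st.attrib["val"] for st in entry.iter(NS + "SemanticType")]
            yield lemma, types

root = ET.fromstring(SAMPLE)
entries = list(iter_entries(root))
print(entries)
```

For the distributed file, replace `ET.fromstring(SAMPLE)` with `ET.parse("MedLexSp.xml").getroot()`.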

6. Dictionaries/codebooks used:
- A subset of terms from the Diccionario panhispánico de términos médicos (DPTM) developed by the Unidad de Terminología Médica (Real Academia Nacional de Medicina de España), in collaboration with the Asociación Latinoamericana de Academias Nacionales de Medicina, España y Portugal (ALANAM).
- National Cancer Institute (NCI) Dictionary of Cancer Terms.
- OrphaData from INSERM.
- Nomenclátor de Prescripción from AEMPS.

7. Controlled vocabularies/ontologies used:
- Anatomical Therapeutical Classification from the WHO
- International Classification of Diseases, 10th revision (ICD-10)
- International Classification of Primary Care (ICPC)
- Medical Dictionary for Regulatory Activities (MedDRA)
- Medical Subject Headings (MeSH)
- Online Mendelian Inheritance in Man (OMIM®) catalog of genes and genetic disorders
- Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT®)
- World Health Organization (WHO) Adverse Drug Reactions terminology

medlexsp/process_dataset.py
ADDED
@@ -0,0 +1,32 @@
import os
import xml.etree.ElementTree as ET
from pathlib import Path

# Namespace prefix used by the LMF-encoded MedLexSp XML
NS = "{http://www.lexicalmarkupframework.org/}"

FILE_PATH = os.path.join("MedLexSp_v2", "MedLexSp_v2", "MedLexSp_v2.xml")

path = Path(__file__).parent.absolute()
tree = ET.parse(str(path) + os.sep + FILE_PATH)
root = tree.getroot()

semantic_types = []
counterSeveralType = 0   # total number of semantic-type annotations
counterDocument = 0      # total number of lemma entries

for group in root.findall(NS + "Lexicon"):
    for igroup in group.findall(NS + "LexicalEntry"):
        for item in igroup.findall(NS + "Lemma"):
            print(str(item.attrib['writtenForm']).capitalize())
            counterDocument += 1
        for doc in igroup.findall(NS + "SemanticType"):
            setID = doc.attrib['val']
            print("\t Type ==> " + str(setID).capitalize())
            semantic_types.append(setID)
            counterSeveralType += 1

print(f"Size of Document is {counterDocument}")
print(f"Size of Types on Document is {counterSeveralType}")
#print(semantic_types)
medlexsp/using_dataset_hugginface.py
ADDED
@@ -0,0 +1,203 @@
# -*- coding: utf-8 -*-
"""using_dataset_hugginface.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1soGxkZu4antYbYG23GioJ6zoSt_GhSNT
"""

"""**Hugging Face login for pushing to the Hub**"""
###
#
# Bibliography used:
# https://huggingface.co/learn/nlp-course/chapter5/5
#
###

import os
from pathlib import Path

import pandas as pd
import xml.etree.ElementTree as ET

from huggingface_hub import login
from datasets import load_dataset, concatenate_datasets

# Tokenizer used below to count tokens in the corpus
from transformers import AutoTokenizer
# Spanish translations of the UMLS semantic-type labels used in MedLexSp
typeMedicalDictionary = {
    'Nucleotide_Sequence': 'Secuencia de Nucleótidos',
    'Gene_or_Genome': 'Gen o Genoma',
    'Professional_Society': 'Sociedad Profesional',
    'Molecular_Biology_Research_Technique': 'Técnica de Investigación de Biología Molecular',
    'Occupation_or_Discipline': 'Ocupación o Disciplina',
    'Natural_Phenomenon_or_Process': 'Proceso o Fenómeno Natural',
    'Bird': 'Pájaro', 'Drug_Delivery_Device': 'Dispositivo de administración de medicamentos',
    'Animal': 'Animal', 'Temporal_Concept': 'Concepto Temporal', 'Physiologic_Function': 'Función Fisiológica',
    'Regulation_or_Law': 'Ley o Regulación', 'Mental_or_Behavioral_Dysfunction': 'Disfunción mental o de comportamiento',
    'Event': 'Evento', 'Antibiotic': 'Antibiótico', 'Family_Group': 'Grupo Familiar', 'Chemical': 'Químico',
    'Educational_Activity': 'Actividad Educacional', 'Organism_Attribute': 'Atributo de organismo', 'Functional_Concept': 'Concepto Funcional',
    'Age_Group': 'Grupo Etario', 'Organic_Compound': 'Compuesto orgánico', 'Human': 'Humano', 'Health_Care_Activity': 'Actividad de cuidado de salud',
    'Mental_Process': 'Proceso mental', 'Hormone': 'Hormona', 'Experimental_Model_of_Disease': 'Modelo experimental de una enfermedad',
    'Fully_Formed_Anatomical_Structure': 'Estructura anatómica completamente formada', 'Classification': 'Clasificación', 'Food': 'Comida', 'Amino_Acid_Peptide_or_Protein': 'Aminoácido, péptido o proteína',
    'Injury_or_Poisoning': 'Lesión o envenenamiento', 'Substance': 'Sustancia', 'Organization': 'Organización', 'Intellectual_Product': 'Producto Intelectual', 'Behavior': 'Comportamiento',
    'Body_Part_Organ_or_Organ_Component': 'Parte del cuerpo, órgano o componente de órgano', 'Cell_or_Molecular_Dysfunction': 'Disfunción celular o molecular', 'Fish': 'Pez', 'Vertebrate': 'Vertebrado',
    'Congenital_Abnormality': 'Anormalidad congénita', 'Governmental_or_Regulatory_Activity': 'Actividad gubernamental o regulatoria',
    'Daily_or_Recreational_Activity': 'Actividad diaria o recreacional', 'Hazardous_or_Poisonous_Substance': 'Sustancia peligrosa o venenosa', 'Group_Attribute': 'Atributo de grupo', 'Immunologic_Factor': 'Factor inmunológico', 'Laboratory_or_Test_Result': 'Resultado de laboratorio o de prueba',
    'Neoplastic_Process': 'Proceso neoplásico', 'Phenomenon_or_Process': 'Fenómeno o proceso', 'Cell_Component': 'Componente celular', 'Health_Care_Related_Organization': 'Organización relacionada con el cuidado de la salud', 'Anatomical_Structure': 'Estructura anatómica', 'Chemical_Viewed_Structurally': 'Químico visto estructuralmente',
    'Population_Group': 'Grupo poblacional', 'Biologic_Function': 'Función biológica', 'Biologically_Active_Substance': 'Sustancia biológicamente activa', 'Clinical_Attribute': 'Atributo clínico', 'Laboratory_Procedure': 'Procedimiento de laboratorio', 'Fungus': 'Hongo', 'Body_Space_or_Junction': 'Espacio o unión del cuerpo', 'Finding': 'Hallazgo', 'Spatial_Concept': 'Concepto espacial',
    'Quantitative_Concept': 'Concepto cuantitativo', 'Archaeon': 'Arqueón', 'Biomedical_Occupation_or_Discipline': 'Ocupación o disciplina biomédica', 'Therapeutic_or_Preventive_Procedure': 'Procedimiento terapéutico o preventivo', 'Organ_or_Tissue_Function': 'Función de órgano o tejido', 'Cell': 'Célula', 'Organic_Chemical': 'Químico orgánico',
    'Human-caused_Phenomenon_or_Process': 'Fenómeno o proceso causado por el humano', 'Body_System': 'Sistema corporal', 'Sign_or_Symptom': 'Signo o síntoma', 'Plant': 'Planta', 'Virus': 'Virus', 'Activity': 'Actividad', 'Organism_Function': 'Función de organismo', 'Molecular_Sequence': 'Secuencia molecular', 'Steroid': 'Esteroide', 'Reptile': 'Reptil',
    'Molecular_Function': 'Función molecular', 'Professional_or_Occupational_Group': 'Grupo profesional u ocupacional', 'Embryonic_Structure': 'Estructura embrionaria', 'Organism': 'Organismo', 'Anatomical_Abnormality': 'Anormalidad anatómica', 'Patient_or_Disabled_Group': 'Grupo de pacientes o discapacitados', 'Qualitative_Concept': 'Concepto cualitativo',
    'Bacterium': 'Bacteria', 'Idea_or_Concept': 'Idea o concepto', 'Enzyme': 'Enzima', 'Research_Device': 'Dispositivo de investigación', 'Geographic_Area': 'Área geográfica', 'Entity': 'Entidad', 'Body_Location_or_Region': 'Ubicación o región del cuerpo', 'Social_Behavior': 'Comportamiento social', 'Self-help_or_Relief_Organization': 'Organización de autoayuda o alivio',
    'Inorganic_Chemical': 'Químico inorgánico', 'Body_Substance': 'Sustancia corporal', 'Conceptual_Entity': 'Entidad conceptual', 'Physical_Object': 'Objeto físico',
    'Mammal': 'Mamífero', 'Manufactured_Object': 'Objeto fabricado', 'Eukaryote': 'Eucariota', 'Pathologic_Function': 'Función patológica', 'Machine_Activity': 'Actividad mecánica', 'Occupational_Activity': 'Actividad ocupacional', 'Vitamin': 'Vitamina', 'Research_Activity': 'Actividad de investigación',
    'Biomedical_or_Dental_Material': 'Material biomédico o dental', 'Environmental_Effect_of_Humans': 'Efecto ambiental de los humanos', 'Amino_Acid_Sequence': 'Secuencia de aminoácidos', 'Clinical_Drug': 'Fármaco clínico', 'Receptor': 'Receptor', 'Diagnostic_Procedure': 'Procedimiento diagnóstico',
    'Pharmacologic_Substance': 'Sustancia farmacológica', 'Medical_Device': 'Dispositivo médico', 'Cell_Function': 'Función celular', 'Nucleic_Acid_Nucleoside_or_Nucleotide': 'Ácido nucleico, nucleósido o nucleótido', 'Language': 'Idioma', 'Chemical_Viewed_Functionally': 'Químico visto funcionalmente',
    'Group': 'Grupo', 'Tissue': 'Tejido', 'Element_Ion_or_Isotope': 'Elemento, ion o isótopo', 'Individual_Behavior': 'Comportamiento individual', 'Indicator_Reagent_or_Diagnostic_Aid': 'Indicador, reactivo o ayuda de diagnóstico', 'Genetic_Function': 'Función genética', 'Acquired_Abnormality': 'Anormalidad adquirida', 'Disease_or_Syndrome': 'Enfermedad o síndrome'
}

HF_TOKEN = ''
DATASET_TO_LOAD = 'bigbio/distemist'
DATASET_TO_UPDATE = 'somosnlp/spanish_medica_llm'
DATASET_SOURCE_ID = '13'
FILE_PATH = os.path.join("MedLexSp_v2", "MedLexSp_v2", "MedLexSp_v2.xml")

# Log in to Hugging Face
login(token=HF_TOKEN)

dataset_CODING = load_dataset(DATASET_TO_LOAD)
issues_path = 'dataset'
tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish-medium")

# Resolve the directory containing this script
path = Path(__file__).parent.absolute()

# Target schema:
# raw_text: text associated with the document, question, clinical case, or other information.
# topic: may be healthcare_treatment, healthcare_diagnosis, a topic, an answer to a question, or empty (e.g. for open text).
# speciality: medical specialty related to raw_text, e.g. cardiology, surgery, others.
# raw_text_type: may be clinical case, open_text, question.
# topic_type: may be medical_topic, medical_diagnostic, answer, natural_medicine_topic, other, or empty.
# source: identifier of the source associated with the document, as listed in the README and dataset description.
# country: identifier of the country of origin of the source (e.g. ch, es), using the ISO 3166-1 alpha-2 standard (two-letter country codes).
cantemistDstDict = {
    'raw_text': '',
    'topic': '',
    'speciallity': '',
    'raw_text_type': 'question',
    'topic_type': 'answer',
    'source': DATASET_SOURCE_ID,
    'country': 'es',
    'document_id': ''
}

totalOfTokens = 0
corpusToLoad = []
countCopySeveralDocument = 0
counteOriginalDocument = 0
setOfTopic = set()

tree = ET.parse(str(path) + os.sep + FILE_PATH)
root = tree.getroot()
counterSeveralType = 0
for group in root.findall("{http://www.lexicalmarkupframework.org/}Lexicon"):
    for igroup in group.findall("{http://www.lexicalmarkupframework.org/}LexicalEntry"):
        for item in igroup.findall("{http://www.lexicalmarkupframework.org/}Lemma"):
            text = str(item.attrib['writtenForm']).capitalize()
            counteOriginalDocument += 1

            listOfTokens = tokenizer.tokenize(text)
            totalOfTokens += len(listOfTokens)
            newCorpusRow = cantemistDstDict.copy()
            newCorpusRow['raw_text'] = text
            newCorpusRow['document_id'] = str(counteOriginalDocument)

        counterType = 0
        for doc in igroup.findall("{http://www.lexicalmarkupframework.org/}SemanticType"):
            # After the first semantic type, start a new row for the same lemma
            if counterType > 0:
                newCorpusRow = cantemistDstDict.copy()
                newCorpusRow['raw_text'] = text
                newCorpusRow['document_id'] = str(counteOriginalDocument)

            topic = doc.attrib['val']
            newCorpusRow['topic'] = typeMedicalDictionary[topic]
            setOfTopic.add(topic)

            counterSeveralType += 1
            counterType += 1
            corpusToLoad.append(newCorpusRow)

df = pd.DataFrame.from_records(corpusToLoad)

if os.path.exists(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl"):
    os.remove(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")

df.to_json(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", orient="records", lines=True)
print(
    f"Downloaded all the issues for {DATASET_TO_LOAD}! Dataset stored at {issues_path}/spanish_medical_llms.jsonl"
)

print('The dataset contains', counteOriginalDocument, 'documents')
print('The dataset contains', countCopySeveralDocument, 'copied documents')
print('The dataset contains', totalOfTokens, 'tokens')
file = Path(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")
size = file.stat().st_size
print('File size in kilobytes (KB)', size >> 10)
print('File size in megabytes (MB)', size >> 20)
print('File size in gigabytes (GB)', size >> 30)

# Once the records are written, we can load them locally as a Hugging Face dataset
local_spanish_dataset = load_dataset("json", data_files=f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", split="train")

## Merge the local dataset with the dataset already on the Hub
try:
    spanish_dataset = load_dataset(DATASET_TO_UPDATE, split="train")
    print("=== Before ====")
    print(spanish_dataset)
    spanish_dataset = concatenate_datasets([spanish_dataset, local_spanish_dataset])
except Exception:
    spanish_dataset = local_spanish_dataset

spanish_dataset.push_to_hub(DATASET_TO_UPDATE)

print("=== After ====")
print(spanish_dataset)

#print('List of term topics')
#print(setOfTopic)

# Augmenting the dataset

# Important: if elements already exist in DATASET_TO_UPDATE, they must be updated
# in the list, checking for repeated elements.