Upload 5 files

- spanish_biomedical_craw_corpus/README.md +0 -0
- spanish_biomedical_craw_corpus/example.txt +61 -0
- spanish_biomedical_craw_corpus/process_dataset.py +29 -0
- spanish_biomedical_craw_corpus/spanish_biomedical_craw_corpus_process.ipynb +267 -0
- spanish_biomedical_craw_corpus/using_dataset_hugginface.py +150 -0
spanish_biomedical_craw_corpus/README.md
ADDED
File without changes
spanish_biomedical_craw_corpus/example.txt
ADDED
@@ -0,0 +1,61 @@
+Existen diversos factores que influyen en el paso de sustancias a la leche materna como, por ejemplo, la unión a proteínas plasmáticas, ionización, grado de liposolubilidad, peso molecular, etc. Tales parámetros varían según los fármacos.
+Además, no hay suficientes estudios para un número elevado de medicamentos, sobre todo para los de reciente comercialización.
+De hecho, la recomendación «contraindicado durante la lactancia», hace referencia sobre todo a la falta de estudios farmacocinéticos sobre la excreción en leche materna, y no a la existencia de observaciones clínicas.
+En este artículo se exponen los datos disponibles sobre fármacos de uso común y algunos principios sencillos para facilitar la labor de los profesionales de la salud a la hora de prescribir medicación a una mujer durante la lactancia.
+
+El asma es una enfermedad inflamatoria crónica de las vías respiratorias que provoca una obstrucción bronquial reversible.
+En los países industrializados, la prevalencia y la gravedad se encuentran en aumento desde 1970, pero la mortalidad permanece estable.
+El diagnóstico, sobre todo clínico, se basa en el interrogatorio.
+Luego ha de confirmarse mediante la espirometría, que pone de manifiesto el trastorno ventilatorio obstructivo y su reversibilidad.
+Para tratar la enfermedad, es fundamental identificar todos los elementos desencadenantes y/o agravantes de la misma (neumoalérgenos, rinitis y sinusitis, irritantes, etc.).
+La intensidad de los síntomas y el flujo espiratorio máximo constituyen los parámetros de la gravedad del asma.
+El tratamiento se prescribe según esta escala de gravedad, y se pretende limitar los síntomas, las exacerbaciones y la necesidad de fármacos así como mantener una función ventilatoria normal.
+Por lo general, el tratamiento farmacológico consiste en una asociación de corticoides (inhalados en la mayoría de los casos) y agonistas β 2 adrenérgicos.
+Asimismo, resulta imprescindible brindar explicaciones que faciliten la adhesión del paciente al programa terapéutico.
+
+Las dilataciones bronquiales o bronquiectasias son frecuentes y los mecanismos que las causan bien conocidos.
+Los adelantos en las técnicas de diagnóstico por imagen han grandemente contribuido al enfoque diagnóstico, completando una historia y una presentación clínicas a menudo muy características.
+Casi en el 50% de los casos son idiopáticas, aunque las afecciones que las originan o que están asociadas se pueden detectar hoy en día con mayor facilidad.
+El tratamiento actual está bien definido y se basa en el control de los elementos del círculo vicioso descrito por Cole.
+
+La tuberculosis es una enfermedad infecciosa que se transmite de persona a persona y que se debe a Mycobacterium tuberculosis; el 33% de la población mundial está infectada por este bacilo, con una mortalidad que alcanza los 3 millones de personas al año.
+Un porcentaje minoritario de los pacientes infectados desarrollan la enfermedad tuberculosa.
+El tratamiento está bien establecido, pero depende de un número limitado de antibióticos activos, lo que obliga a un cumplimiento riguroso, para evitar la aparición de cepas resistentes del bacilo.
+
+El tratamiento de las disfunciones eréctiles ha evolucionado en los últimos años.
+Aunque el tratamiento etiológico continúa siendo de actualidad, el tratamiento sintomático se ha convertido en el objetivo primordial cuando no puede identificarse la etiología.
+Los tratamientos orales (inhibidores de las fosfodiesterasas V) que facilitan la erección, cuyas características farmacológicas permiten, en algunos casos, evitar la programación del acto con escasos efectos secundarios, han pasado a ser, desde hace algunos años, la opción terapéutica principal.
+Sin embargo, las inyecciones intracavernosas de prostaglandina E1 siguen ocupando un importante lugar como tratamiento de elección o como recurso cuando fracasan los tratamientos orales.
+Los erectores de vacío también son una alternativa útil para los pacientes que no pueden recibir inyecciones.
+Pese a los progresos de la farmacología, los implantes peneanos se siguen utilizando cuando fracasan los tratamientos menos agresivos.
+En los pacientes bien informados tratados por urólogos con experiencia en este tipo de cirugía, los porcentajes de buenos resultados son altos.
+En pocos años el arsenal terapéutico ha aumentado de manera considerable, y hoy en día es posible tratar prácticamente a todos los pacientes impotentes que lo solicitan.
+
+La orina normal es estéril y, cuando se recoge por micción aséptica, se acepta la presencia de 10 5 colibacilos y 10 4 leucocitos/ml.
+La mayoría de los gérmenes urinarios son enterobacterias, fundamentalmente Escherichia coli.
+No obstante, las infecciones nosocomiales pueden deberse a otros Gram negativos o positivos.
+Una infección urinaria puede ser primaria, aparecer en un tracto urinario sano y deberse normalmente a un germen uropatógeno que contiene adhesinas.
+La infección urinaria secundaria es consecuencia de una uropatía o de una intervención urológica.
+En la mujer, estas infecciones se manifiestan a través de cistitis, aisladas o recidivantes, o por pielonefritis aguda que requiere algunas técnicas de diagnóstico por imagen y que se trata fácilmente con una antibioticoterapia adaptada.
+En cambio, una pielonefritis aguda con obstrucción es una urgencia urológica.
+En varios contextos, las pielonefritis son peligrosas: fundamentalmente en las mujeres embarazadas, en el diabético (en los que son frecuentes las necrosis papilares) y en los pacientes con trasplante renal.
+En el hombre, las prostatitis agudas requieren un tratamiento prolongado para evitar la evolución hacia una prostatitis crónica.
+
+La insuficiencia renal crónica es una enfermedad general.
+Las dos principales causas son la diabetes mellitus y las nefropatías vasculares crónicas.
+La edad de los pacientes aumenta progresivamente y, en la primera diálisis, tienen en promedio más de 60 años.
+La detección precoz, la cuantificación y el seguimiento del déficit funcional renal se basan, en la práctica, en una correcta interpretación de la creatininemia.
+Cuando la tasa de filtración glomerular disminuye más del 50%, es necesario tratar activamente al paciente: tomar todas las medidas necesarias para intentar reducir la velocidad de degradación de la insuficiencia renal, asegurar un buen control de la homeostasia y proteger los dos principales órganos amenazados, los sistemas cardiovascular y osteoarticular.
+Las técnicas de tratamiento de la insuficiencia renal terminal son la hemodiálisis, la diálisis peritoneal y el trasplante renal.
+Si es necesario, estos tratamientos pueden combinarse, garantizándose supervivencias muy prolongadas, de más de 30 años en la actualidad.
+Existen factores extrarrenales que determinan la supervivencia: la edad y, sobre todo, las enfermedades sistémicas asociadas, como la diabetes y las enfermedades cardiovasculares.
+Todos los esfuerzos deben converger en una mejor definición de los grupos de riesgo de insuficiencia renal, una detección más precoz de la enfermedad en esos grupos y una mejor prevención.
+
+Este artículo considera las diferentes formas de presentación de las nefropatías glomerulares, precisa las diferentes pruebas complementarias útiles para orientar el diagnóstico y la importancia de la biopsia renal por punción.
+Se estudian también las principales causas de las nefropatías glomerulares agudas y crónicas.
+
+La incidencia acumulada de litiasis urinaria es del 10% en varones y del 5% en mujeres.
+Una litiasis no diagnosticada o que no ha sido adecuadamente tratada puede producir el deterioro de la función renal.
+La litiasis sigue siendo una causa de insuficiencia renal terminal que puede requerir la realización de diálisis.
+Por ello, este trastorno requiere un diagnóstico etiológico completo cuyo primer paso consiste en el análisis del cálculo.
+No se debe concluir el tratamiento del cólico nefrítico hasta comprobar la desaparición del cálculo.
spanish_biomedical_craw_corpus/process_dataset.py
ADDED
@@ -0,0 +1,29 @@
+from datasets import load_dataset
+import os
+import re
+
+from pathlib import Path
+
+
+
+
+path = Path(__file__).parent.absolute()
+
+with open(str(path) + os.sep + 'example.txt', encoding='utf8') as file:
+    """
+    # Build a dictionary with ICD-O-3 associated with
+    # healthcare problems
+    """
+    linesInFile = file.readlines()
+
+    for index, iLine in enumerate(linesInFile):
+        print([linesInFile[index]]) if len(linesInFile[index]) > 1 else print('**************') if linesInFile[index] == '\n' else print('******* ERROR ********')
+
+
+        # if re.match('^Las dilataciones bronquiales', iLine):
+        #     break
+
+
+        # code = listOfData[0]
+        # description = reduce(lambda a, b: a + " " + b, listOfData[1:2], "")
+        # royalListOfCode[code.strip()] = description.strip()
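The validation loop in process_dataset.py folds three outcomes into one hard-to-read conditional expression: a line longer than a bare newline is corpus content, a lone newline is a paragraph separator, and anything else is an error. As a hedged sketch (the helper name is illustrative, not from the repo), the same three-way check can be isolated into a testable function:

```python
def classify_line(line: str) -> str:
    """Classify a raw corpus line the way the print loop in
    process_dataset.py does: content, separator, or error."""
    if len(line) > 1:
        return "content"      # a sentence from the corpus
    if line == "\n":
        return "separator"    # blank line between paragraphs
    return "error"            # empty or unexpected line

labels = [classify_line(l) for l in ["Existen diversos factores...\n", "\n", ""]]
```

Pulling the check out of the `print` chain makes the blank-line convention of example.txt explicit and lets it be unit-tested before the full CoWeSe.txt file is processed.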
spanish_biomedical_craw_corpus/spanish_biomedical_craw_corpus_process.ipynb
ADDED
@@ -0,0 +1,267 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "gpuType": "T4"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "source": [
+        "!git clone https://github.com/dionis/SpanishMedicaLLM.git"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "tFtxsPeDsZTE",
+        "outputId": "1b9547d0-62b4-4ab9-94f2-3ff767bb1728"
+      },
+      "execution_count": 1,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Cloning into 'SpanishMedicaLLM'...\n",
+            "remote: Enumerating objects: 1410, done.\u001b[K\n",
+            "remote: Counting objects: 100% (1375/1375), done.\u001b[K\n",
+            "remote: Compressing objects: 100% (908/908), done.\u001b[K\n",
+            "remote: Total 1410 (delta 502), reused 1259 (delta 417), pack-reused 35\u001b[K\n",
+            "Receiving objects: 100% (1410/1410), 48.85 MiB | 9.93 MiB/s, done.\n",
+            "Resolving deltas: 100% (506/506), done.\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "cd SpanishMedicaLLM/"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "R7q87k4lvKN0",
+        "outputId": "e2e32e14-87e0-47d9-c77d-c83f4fe61040"
+      },
+      "execution_count": 2,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "/content/SpanishMedicaLLM\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "cd finetuning/hugginface_dataset/spanish_biomedical_craw_corpus/"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "MNleOUfBvOWA",
+        "outputId": "f45dbd6b-583e-4c51-d49d-741ec721b738"
+      },
+      "execution_count": 3,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "/content/SpanishMedicaLLM/finetuning/hugginface_dataset/spanish_biomedical_craw_corpus\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!git checkout 3-develop_a_finetunning_test_on_training_process_with_QLora-Epfl"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "kGjsbyxavXSS",
+        "outputId": "3c6496d6-2dc7-4176-cefb-78872a3a4d6e"
+      },
+      "execution_count": 4,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Branch '3-develop_a_finetunning_test_on_training_process_with_QLora-Epfl' set up to track remote branch '3-develop_a_finetunning_test_on_training_process_with_QLora-Epfl' from 'origin'.\n",
+            "Switched to a new branch '3-develop_a_finetunning_test_on_training_process_with_QLora-Epfl'\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "from urllib import request\n",
+        "URL = 'https://zenodo.org/records/5513237/files/CoWeSe.txt?download=1'\n",
+        "response = request.urlretrieve(URL, \"CoWeSe.txt\")"
+      ],
+      "metadata": {
+        "id": "XRsE_iw1JUQX"
+      },
+      "execution_count": 5,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!pip install datasets"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "0WGb685FxA1u",
+        "outputId": "5e72e20e-727a-4e49-f77a-022e94c3e968"
+      },
+      "execution_count": 6,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Collecting datasets\n",
+            "  Downloading datasets-2.18.0-py3-none-any.whl (510 kB)\n",
+            "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m510.5/510.5 kB\u001b[0m \u001b[31m9.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+            "\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from datasets) (3.13.3)\n",
+            "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from datasets) (1.25.2)\n",
+            "Requirement already satisfied: pyarrow>=12.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (14.0.2)\n",
+            "Requirement already satisfied: pyarrow-hotfix in /usr/local/lib/python3.10/dist-packages (from datasets) (0.6)\n",
+            "Collecting dill<0.3.9,>=0.3.0 (from datasets)\n",
+            "  Downloading dill-0.3.8-py3-none-any.whl (116 kB)\n",
+            "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m116.3/116.3 kB\u001b[0m \u001b[31m14.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+            "\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets) (1.5.3)\n",
+            "Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2.31.0)\n",
+            "Requirement already satisfied: tqdm>=4.62.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (4.66.2)\n",
+            "Collecting xxhash (from datasets)\n",
+            "  Downloading xxhash-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (194 kB)\n",
+            "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m19.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+            "\u001b[?25hCollecting multiprocess (from datasets)\n",
+            "  Downloading multiprocess-0.70.16-py310-none-any.whl (134 kB)\n",
+            "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m16.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+            "\u001b[?25hRequirement already satisfied: fsspec[http]<=2024.2.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2023.6.0)\n",
+            "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets) (3.9.3)\n",
+            "Requirement already satisfied: huggingface-hub>=0.19.4 in /usr/local/lib/python3.10/dist-packages (from datasets) (0.20.3)\n",
+            "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets) (24.0)\n",
+            "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (6.0.1)\n",
+            "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.3.1)\n",
+            "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (23.2.0)\n",
+            "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.4.1)\n",
+            "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (6.0.5)\n",
+            "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.9.4)\n",
+            "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (4.0.3)\n",
+            "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub>=0.19.4->datasets) (4.10.0)\n",
+            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.3.2)\n",
+            "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.6)\n",
+            "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (2.0.7)\n",
+            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (2024.2.2)\n",
+            "Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2.8.2)\n",
+            "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2023.4)\n",
+            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->datasets) (1.16.0)\n",
+            "Installing collected packages: xxhash, dill, multiprocess, datasets\n",
+            "Successfully installed datasets-2.18.0 dill-0.3.8 multiprocess-0.70.16 xxhash-3.4.1\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": 10,
+      "metadata": {
+        "id": "yH5nDblMIjoS",
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "outputId": "c650bc1d-bdd6-40c9-a586-7d9fd197207b"
+      },
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Token will not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.\n",
+            "Token is valid (permission: write).\n",
+            "Your token has been saved to /root/.cache/huggingface/token\n",
+            "Login successful\n",
+            "Downloaded all the issues for CoWeSe.txt! Dataset stored at dataset/spanish_medical_llms.jsonl\n",
+            " On dataset there are as document 1973048\n",
+            " On dataset there are as copy document 0\n",
+            " On dataset there are as size of Tokens 1973048\n",
+            "File size on Kilobytes (kB) 270594\n",
+            "File size on Megabytes (MB) 264\n",
+            "File size on Gigabytes (GB) 0\n",
+            "Generating train split: 1973048 examples [00:05, 368953.10 examples/s]\n",
+            "Downloading readme: 100% 8.04k/8.04k [00:00<00:00, 21.2MB/s]\n",
+            "Downloading data: 100% 23.7M/23.7M [00:00<00:00, 52.0MB/s]\n",
+            "Generating train split: 100% 33941/33941 [00:00<00:00, 94956.62 examples/s] \n",
+            "Uploading the dataset shards: 0% 0/1 [00:00<?, ?it/s]\n",
+            "Creating parquet from Arrow format: 0% 0/2007 [00:00<?, ?ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 0% 6/2007 [00:00<00:40, 49.88ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 1% 20/2007 [00:00<00:20, 98.10ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 2% 50/2007 [00:00<00:10, 185.59ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 9% 176/2007 [00:00<00:03, 596.42ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 15% 297/2007 [00:00<00:02, 812.60ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 21% 424/2007 [00:00<00:01, 964.03ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 28% 554/2007 [00:00<00:01, 1070.27ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 34% 673/2007 [00:00<00:01, 1105.61ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 40% 793/2007 [00:00<00:01, 1132.45ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 45% 909/2007 [00:01<00:00, 1138.54ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 51% 1024/2007 [00:01<00:00, 1141.13ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 57% 1139/2007 [00:01<00:00, 1115.44ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 63% 1262/2007 [00:01<00:00, 1148.52ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 69% 1378/2007 [00:01<00:00, 1091.05ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 75% 1503/2007 [00:01<00:00, 1134.94ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 81% 1633/2007 [00:01<00:00, 1180.42ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 88% 1759/2007 [00:01<00:00, 1201.19ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 94% 1880/2007 [00:01<00:00, 1193.59ba/s]\u001b[A\n",
+            "Creating parquet from Arrow format: 100% 2007/2007 [00:01<00:00, 1024.03ba/s]\n",
+            "Uploading the dataset shards: 100% 1/1 [00:03<00:00, 3.62s/it]\n",
+            "README.md: 100% 8.04k/8.04k [00:00<00:00, 26.8MB/s]\n",
+            "Dataset({\n",
+            "    features: ['raw_text', 'topic', 'speciallity', 'raw_text_type', 'topic_type', 'source', 'country', 'document_id'],\n",
+            "    num_rows: 1973048\n",
+            "})\n"
+          ]
+        }
+      ],
+      "source": [
+        "!python using_dataset_hugginface.py"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "tgtTlLB1EfBj"
+      },
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}
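The notebook's final cell prints the pushed dataset's schema: eight features and 1,973,048 rows. As a quick offline sanity sketch (plain Python, no Hub access; the helper name is illustrative), a row in this corpus carries exactly those features, mirroring the cantemistDstDict defaults from using_dataset_hugginface.py:

```python
# Feature names reported by the notebook's final Dataset(...) summary.
FEATURES = ["raw_text", "topic", "speciallity", "raw_text_type",
            "topic_type", "source", "country", "document_id"]

def make_row(raw_text: str, document_id: str) -> dict:
    """Build an empty-defaulted corpus row, mirroring cantemistDstDict:
    open_text type, source id '4', country 'es', other fields blank."""
    row = {name: "" for name in FEATURES}
    row.update(raw_text=raw_text, raw_text_type="open_text",
               source="4", country="es", document_id=document_id)
    return row

row = make_row("Texto de ejemplo.", "1")
```

Keeping the defaults in one constructor makes it easy to assert that every record written to spanish_medical_llms.jsonl matches the schema the Hub reports back.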
spanish_biomedical_craw_corpus/using_dataset_hugginface.py
ADDED
@@ -0,0 +1,150 @@
+# -*- coding: utf-8 -*-
+"""using_dataset_hugginface.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+    https://colab.research.google.com/drive/1soGxkZu4antYbYG23GioJ6zoSt_GhSNT
+"""
+
+"""**Hugging Face login for pushing to the Hub**"""
+###
+#
+# Bibliography used:
+#   https://huggingface.co/learn/nlp-course/chapter5/5
+#
+###
+
+import os
+import time
+import math
+from huggingface_hub import login
+from datasets import load_dataset, concatenate_datasets
+from functools import reduce
+from pathlib import Path
+import pandas as pd
+
+
+# Load model directly
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+HF_TOKEN = ''
+DATASET_TO_LOAD = 'CoWeSe.txt'
+EXAMPLE_DATASET_TO_LOAD = 'example.txt'
+DATASET_TO_UPDATE = 'somosnlp/spanish_medica_llm'
+
+# Log in to Hugging Face
+login(token=HF_TOKEN)
+
+royalListOfCode = {}
+issues_path = 'dataset'
+tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish-medium")
+DATASET_SOURCE_ID = '4'
+# Read current path
+path = Path(__file__).parent.absolute()
+
+'''
+Bibliography:
+https://www.w3schools.com/python/python_mysql_getstarted.asp
+https://www.w3schools.com/python/python_mysql_select.as
+
+'''
+
+# raw_text: text associated with the document, question, clinical case or other kind of information.
+
+# topic: (can be healthcare_treatment, healthcare_diagnosis, a topic, an answer to a question, or empty, e.g. for open text)
+
+# speciality: (medical specialty the raw_text relates to, e.g. cardiology, surgery, others)
+
+# raw_text_type: (can be clinical case, open_text, question)
+
+# topic_type: (can be medical_topic, medical_diagnostic, answer, natural_medicine_topic, other, or empty)
+
+# source: identifier of the source associated with the document, as listed in the README and the dataset description.
+
+# country: identifier of the source's country of origin (e.g. ch, es) using the ISO 3166-1 alpha-2 standard (two-letter country codes).
+cantemistDstDict = {
+    'raw_text': '',
+    'topic': '',
+    'speciallity': '',
+    'raw_text_type': 'open_text',
+    'topic_type': '',
+    'source': DATASET_SOURCE_ID,
+    'country': 'es',
+    'document_id': ''
+}
+
+totalOfTokens = 0
+corpusToLoad = []
+countCopySeveralDocument = 0
+counteOriginalDocument = 0
+
+FILE_TO_PROCESS = DATASET_TO_LOAD
+
+if not os.path.exists(str(path) + os.sep + FILE_TO_PROCESS):
+    FILE_TO_PROCESS = EXAMPLE_DATASET_TO_LOAD
+
+with open(str(path) + os.sep + FILE_TO_PROCESS, encoding='utf8') as file:
+    #linesInFile = file.readlines()
+    paragraph = ''
+    while True:
+        linesInFile = file.readlines(8192)
+        if not linesInFile:
+            break
+        for index, iLine in enumerate(linesInFile):
+            text = linesInFile[index] if len(linesInFile[index]) > 1 else ''
+            paragraph += text + ' '
+
+            if text == '':
+                counteOriginalDocument += 1
+                idFile = str(counteOriginalDocument)
+                newCorpusRow = cantemistDstDict.copy()
+                listOfTokens = tokenizer.tokenize(paragraph)
+                currentSizeOfTokens = len(listOfTokens)
+                totalOfTokens += currentSizeOfTokens
+
+                newCorpusRow['raw_text'] = paragraph
+                newCorpusRow['document_id'] = idFile
+                corpusToLoad.append(newCorpusRow)
+                paragraph = ''
+        paragraph = ''
+
+
+
+df = pd.DataFrame.from_records(corpusToLoad)
+
+if os.path.exists(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl"):
+    os.remove(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")
+
+
+df.to_json(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", orient="records", lines=True)
+print(
+    f"Downloaded all the issues for {DATASET_TO_LOAD}! Dataset stored at {issues_path}/spanish_medical_llms.jsonl"
+)
+
+print(' On dataset there are as document ', counteOriginalDocument)
+print(' On dataset there are as copy document ', countCopySeveralDocument)
+print(' On dataset there are as size of Tokens ', totalOfTokens)
+file = Path(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")  # or Path('./doc.txt')
+size = file.stat().st_size
+print('File size on Kilobytes (kB)', size >> 10)
+print('File size on Megabytes (MB)', size >> 20)
+print('File size on Gigabytes (GB)', size >> 30)
+
+# Once the issues are downloaded we can load them locally
+local_spanish_dataset = load_dataset("json", data_files=f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", split="train")
+
+try:
+    spanish_dataset = load_dataset(DATASET_TO_UPDATE, split="train")
+    spanish_dataset = concatenate_datasets([spanish_dataset, local_spanish_dataset])
+except Exception:
+    print('<=== Error ===>')
+    spanish_dataset = local_spanish_dataset
+
+spanish_dataset.push_to_hub(DATASET_TO_UPDATE)
+
+print(local_spanish_dataset)