From this repository you can download the **BioBIT_QA** (Biomedical Bert for ITalian for Question Answering) checkpoint.

**BioBIT_QA** is built on top of [BioBIT](https://huggingface.co/IVN-RIN/bioBIT), fine-tuned on an Italian neuropsychological dataset.

More details will follow!

## Install libraries:

```
pip install farm-haystack[inference]
```

## Download model locally:

```
git clone https://huggingface.co/IVN-RIN/bioBIT_QA
```
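
If you prefer not to use `git`, the checkpoint can also be fetched programmatically. The snippet below is a minimal sketch, not part of the original instructions, and assumes the `huggingface_hub` package is available (it is installed as a dependency of `transformers`/`farm-haystack`):

```
# Optional sketch (assumption, not the documented workflow): download the
# checkpoint with huggingface_hub instead of git.
from huggingface_hub import snapshot_download

# Download every file of the repository and return the local path
local_path = snapshot_download(repo_id="IVN-RIN/bioBIT_QA")
print(local_path)  # can be passed as model_name_or_path to FARMReader
```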

## Run the code

```
# Import libraries
from haystack.nodes import FARMReader
from haystack.schema import Document

# Define the reader
reader = FARMReader(
    model_name_or_path="bioBIT_QA",  # path to the local clone (see above)
    return_no_answer=True
)

# Define context and question
context = '''
This is an example of context
'''
question = 'This is a question example, ok?'

# Wrap the context in a Document
doc = Document(content=context)

# Predict the answer
prediction = reader.predict(
    query=question,
    documents=[doc],
    top_k=5
)

# Print the first 5 predicted answers
for i, ans in enumerate(prediction['answers']):
    print(f'Answer num {i+1}, with score {ans.score*100:.2f}%: "{ans.answer}"')

# Inferencing Samples: 100%|██████████| 1/1 [00:01<00:00, 1.14s/Batches]
# Answer num 1, with score 97.91%: "Example answer 01"
# Answer num 2, with score 53.69%: "Example answer 02"
# Answer num 3, with score 0.03%: "Example answer 03"
# ...
```
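
Since the checkpoint is a standard BERT model for extractive QA, it should also be usable without Haystack through the plain `transformers` question-answering pipeline. The sketch below is an assumption, not part of the repository's documented workflow; the Haystack route above remains the intended one, especially when answers have to be searched across many documents in a retrieval pipeline.

```
# Alternative sketch (assumption, not the documented workflow): run the same
# extractive QA with the transformers pipeline instead of Haystack.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bioBIT_QA",      # local clone, or the Hub id "IVN-RIN/bioBIT_QA"
    tokenizer="bioBIT_QA"
)

result = qa(
    question='This is a question example, ok?',
    context='This is an example of context'
)
print(f'Score {result["score"]*100:.2f}%: "{result["answer"]}"')
```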