Update README.md
README.md
@@ -27,6 +27,19 @@ match_result = em_match(reference_answer, candidate_answer)
 print("Exact Match: ", match_result)
 ```
 
+#### Transformer Match
+Our fine-tuned BERT model is on 🤗 [Huggingface](https://huggingface.co/Zongxia/answer_equivalence_bert?text=The+goal+of+life+is+%5BMASK%5D.). Our package also supports downloading and matching directly. More matching transformer models will be available 🔥🔥🔥
+
+```python
+from qa_metrics.transformerMatcher import TransformerMatcher
+
+question = "who will take the throne after the queen dies"
+tm = TransformerMatcher("bert")
+scores = tm.get_scores(reference_answer, candidate_answer, question)
+match_result = tm.transformer_match(reference_answer, candidate_answer, question)
+print("Score: %s; Transformer Match: %s" % (scores, match_result))
+```
+
 #### F1 Score
 ```python
 from qa_metrics.f1 import f1_match,f1_score_with_precision_recall
@@ -49,18 +62,6 @@ match_result = cfm.cf_match(reference_answer, candidate_answer, question)
 print("Score: %s; CF Match: %s" % (scores, match_result))
 ```
 
-#### Transformer Match
-Our fine-tuned BERT model is on 🤗 [Huggingface](https://huggingface.co/Zongxia/answer_equivalence_bert?text=The+goal+of+life+is+%5BMASK%5D.). Our Package also supports downloading and matching directly. More Matching transformer models will be available 🔥🔥🔥
-
-```python
-from qa_metrics.transformerMatcher import TransformerMatcher
-
-question = "who will take the throne after the queen dies"
-tm = TransformerMatcher("bert")
-scores = tm.get_scores(reference_answer, candidate_answer, question)
-match_result = tm.transformer_match(reference_answer, candidate_answer, question)
-print("Score: %s; CF Match: %s" % (scores, match_result))
-```
 
+
 ## Datasets
 Our Training Dataset is adapted and augmented from [Bulian et al](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and QA evaluation testing sets discussed in our paper.
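To evaluate one of these test sets end to end, the matchers shown above can be looped over each record. Below is a minimal sketch, assuming the file is a JSON list of records with `reference_answer` and `candidate_answer` fields; the file name, the `qa_metrics.em` import path, and the field names are assumptions for illustration, not taken from the dataset repo:

```python
import json

# Sketch only: the two-argument em_match and f1_match calls mirror the
# README snippets above, but the dataset file name and its record schema
# ("reference_answer", "candidate_answer") are assumptions -- check the
# dataset repo for the real layout.
from qa_metrics.em import em_match  # assumed module path
from qa_metrics.f1 import f1_match

with open("ae_test_set.json") as f:  # hypothetical file name
    records = json.load(f)

em_hits = f1_hits = 0
for r in records:
    if em_match(r["reference_answer"], r["candidate_answer"]):
        em_hits += 1
    if f1_match(r["reference_answer"], r["candidate_answer"]):
        f1_hits += 1

n = len(records)
print("Exact Match rate: %.3f; F1 Match rate: %.3f" % (em_hits / n, f1_hits / n))
```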