Zongxia committed
Commit c1c416b
1 parent: 84f0014

Update README.md

Files changed (1): README.md (+20 -7)

README.md CHANGED
@@ -33,10 +33,13 @@ The python package currently provides four QA evaluation metrics.
 ```python
 from qa_metrics.em import em_match
 
-reference_answer = ["Charles , Prince of Wales"]
-candidate_answer = "Prince Charles"
+reference_answer = ["The Frog Prince", "The Princess and the Frog"]
+candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
 match_result = em_match(reference_answer, candidate_answer)
 print("Exact Match: ", match_result)
+'''
+Exact Match: False
+'''
 ```
 
 #### Transformer Match
@@ -45,11 +48,14 @@ Our fine-tuned BERT model is in this repository. Our package also supports download
 ```python
 from qa_metrics.transformerMatcher import TransformerMatcher
 
-question = "who will take the throne after the queen dies"
-tm = TransformerMatcher("distilbert")
+question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
+tm = TransformerMatcher("distilroberta")
 scores = tm.get_scores(reference_answer, candidate_answer, question)
 match_result = tm.transformer_match(reference_answer, candidate_answer, question)
-print("Score: %s; CF Match: %s" % (scores, match_result))
+print("Score: %s; TM Match: %s" % (scores, match_result))
+'''
+Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.88954514}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.9381995}}; TM Match: True
+'''
 ```
 
 #### F1 Score
@@ -61,17 +67,24 @@ print("F1 stats: ", f1_stats)
 
 match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
 print("F1 Match: ", match_result)
+'''
+F1 stats: {'f1': 0.25, 'precision': 0.6666666666666666, 'recall': 0.15384615384615385}
+F1 Match: False
+'''
 ```
 
 #### PANDA
 ```python
 from qa_metrics.cfm import CFMatcher
 
-question = "who will take the throne after the queen dies"
+question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
 cfm = CFMatcher()
 scores = cfm.get_scores(reference_answer, candidate_answer, question)
 match_result = cfm.cf_match(reference_answer, candidate_answer, question)
 print("Score: %s; bert Match: %s" % (scores, match_result))
+'''
+Score: {'the frog prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7131625951317375}, 'the princess and the frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.854451712151719}}; CF Match: True
+'''
 ```
 
 If you find this repo useful, please cite our paper:
@@ -88,7 +101,7 @@ If you find this repo useful, please cite our paper:
 
 
 ## Updates
-- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2401.13170). The dataset is expanded and the leaderboard is updated.
+- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2402.11161). The dataset is expanded and the leaderboard is updated.
 - Our Training Dataset is adapted and augmented from [Bulian et al](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and QA evaluation testing sets discussed in our paper.
 - Now our model supports [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta) and [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), smaller and more robust matching models than BERT!
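For context on the exact-match example, EM is usually computed over normalized strings. The sketch below illustrates that idea with a simple lowercase/strip-punctuation normalization; `normalize` and `simple_em_match` are illustrative names, not the package's internals, and `qa_metrics.em.em_match` may normalize differently.

```python
import re
import string

def normalize(text):
    # Lowercase, drop punctuation, collapse whitespace.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def simple_em_match(reference_answers, candidate_answer):
    # True only if the normalized candidate equals some normalized reference.
    candidate = normalize(candidate_answer)
    return any(normalize(ref) == candidate for ref in reference_answers)

references = ["The Frog Prince", "The Princess and the Frog"]
candidate = 'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"'
print(simple_em_match(references, candidate))          # -> False: a full sentence is not an exact answer
print(simple_em_match(references, "the frog prince."))  # -> True once punctuation and case are normalized
```

This is why the commit's EM example prints `Exact Match: False`: the candidate contains a correct answer but does not equal one.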
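The F1 example reports token-level precision, recall, and F1. A common convention (SQuAD-style) treats reference and candidate as bags of tokens; the sketch below follows that convention and is only an approximation — `qa_metrics` may tokenize, normalize, and aggregate over multiple references differently, so its exact numbers can differ.

```python
from collections import Counter

def token_f1(reference, candidate):
    # Bag-of-tokens overlap between one reference and one candidate.
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    if overlap == 0:
        return {"f1": 0.0, "precision": 0.0, "recall": 0.0}
    precision = overlap / len(cand_tokens)  # fraction of candidate tokens that match
    recall = overlap / len(ref_tokens)      # fraction of reference tokens covered
    return {"f1": 2 * precision * recall / (precision + recall),
            "precision": precision, "recall": recall}

stats = token_f1("The Frog Prince", "The movie the princess and the frog")
print(stats)  # overlapping tokens are {'the', 'frog'} -> f1 = 0.4
```

A long verbose candidate drives precision (and hence F1) down even when it contains the right answer, which is why the example above falls below the `threshold=0.5` and prints `F1 Match: False`.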
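In both the Transformer Match and PANDA examples, `get_scores` returns a nested dict mapping each reference answer to `{candidate: score}`. A small helper like the hypothetical `best_match` below can pull out the highest-scoring reference; the helper name and the shortened data here are ours, mimicking only the shape of the printed output.

```python
def best_match(scores):
    # scores: {reference: {candidate: score, ...}, ...}
    best_ref, best_score = None, float("-inf")
    for ref, cand_scores in scores.items():
        for score in cand_scores.values():
            if score > best_score:
                best_ref, best_score = ref, score
    return best_ref, best_score

scores = {
    "The Frog Prince": {"candidate sentence": 0.8895},
    "The Princess and the Frog": {"candidate sentence": 0.9382},
}
print(best_match(scores))  # -> ('The Princess and the Frog', 0.9382)
```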