DeDeckerThomas committed
Commit 85a6da2
1 Parent(s): 93edba3

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
  tags:
  - keyphrase-extraction
  metric:
- - f1
+ - seqeval
 
  ---
  ** Work in progress **
@@ -18,7 +18,7 @@ Now with the recent innovations in deep learning methods (such as recurrent neur
 
 
  ## 📓 Model Description
- KBIR pre-trained model fine-tuned on the Inspec dataset. KBIR
+ This model is a KBIR pre-trained model fine-tuned on the Inspec dataset. KBIR
  Keyphrase Boundary Infilling with Replacement (KBIR) which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC).
  Paper: https://arxiv.org/abs/2112.08547
 
@@ -165,6 +165,7 @@ def preprocess_fuction(all_samples_per_split):
  ```
  ## 📝Evaluation results
 
+ One of the traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases.
  The model achieves the following results on the Inspec test set:
 
  | Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
@@ -172,6 +173,7 @@ The model achieves the following results on the Inspec test set:
  | Inspec Test Set | 0.53 | 0.47 | 0.46 | 0.36 | 0.58 | 0.41 | 0.58 | 0.60 | 0.56 |
 
  For more information on the evaluation process, you can take a look at the keyphrase extraction evaluation notebook.
+
  ### Bibliography
  Debanjan Mahata, Navneet Agarwal, Dibya Gautam, Amardeep Kumar, Sagar Dhiman, Anish Acharya, & Rajiv Ratn Shah. (2021). LDkp Dataset [Data set]. Zenodo. https://doi.org/10.5281/zenodo.5501744
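
The line added in the evaluation hunk describes the @k/@M scheme used in the results table: precision, recall and F1 are computed over the first k ranked predicted keyphrases, and the @M variants use all predicted keyphrases. As a rough, self-contained illustration of that scheme (not code from the model card; the helper name and example phrases are made up), a per-document computation could look like this:

```python
# Illustrative sketch of P@k / R@k / F1@k for a single document.
# Not from the model card: the function name and example phrases are hypothetical.

def scores_at_k(predicted, gold, k=None):
    """P, R, F1 over the first k ranked predictions (all of them when k is None, i.e. @M)."""
    top_k = predicted if k is None else predicted[:k]
    matches = len(set(top_k) & set(gold))
    precision = matches / len(top_k) if top_k else 0.0
    recall = matches / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

predicted = ["deep learning", "keyphrase extraction", "neural networks",
             "language models", "text mining", "optimization"]
gold = ["keyphrase extraction", "neural networks", "transformers"]

print(scores_at_k(predicted, gold, k=5))  # P@5, R@5, F1@5
print(scores_at_k(predicted, gold))       # P@M, R@M, F1@M
```

The figures reported in the table would then presumably be aggregates of such per-document scores over the Inspec test set.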
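
The first hunk also swaps the declared metric from f1 to seqeval, which scores B/I/O tag sequences at the span (keyphrase) level rather than per token. Assuming the fine-tuned model is evaluated as a token-classification tagger (an assumption; the model card's own evaluation script is not shown here), a minimal seqeval call looks like the sketch below, with invented label names and tag sequences:

```python
# Minimal seqeval sketch (pip install seqeval). Label names and tag sequences are
# invented; this is not the evaluation script behind the reported results.
from seqeval.metrics import classification_report, f1_score

# Gold and predicted B/I/O tags for two hypothetical sentences,
# where B-KEY / I-KEY mark tokens inside a keyphrase.
y_true = [["O", "B-KEY", "I-KEY", "O", "B-KEY"],
          ["B-KEY", "I-KEY", "I-KEY", "O", "O"]]
y_pred = [["O", "B-KEY", "I-KEY", "O", "O"],
          ["B-KEY", "I-KEY", "O", "O", "O"]]

print(f1_score(y_true, y_pred))             # entity-level F1 over exact keyphrase spans
print(classification_report(y_true, y_pred))
```

seqeval counts a keyphrase as correct only when the predicted span matches the gold span exactly, which is stricter than per-token accuracy.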