Ikala-allen committed
Commit 54553f6
1 Parent(s): cb05f51

Update README.md

Files changed (1):
  README.md +38 -38
README.md CHANGED
@@ -25,30 +25,30 @@ This metric can be used in relation extraction evaluation.
 ## How to Use
 This metric takes two inputs, predictions and references (ground truth). Both are a list of lists of dictionaries, where each dictionary gives a relation's head and tail entity names and types:
 ```python
- >>> import evaluate
- >>> metric_path = "Ikala-allen/relation_extraction"
- >>> module = evaluate.load(metric_path)
- >>> references = [
- ... [
- ... {"head": "phip igments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... ]
- ... ]
- >>> predictions = [
- ... [
- ... {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... ]
- ... ]
- >>> evaluation_scores = module.compute(predictions=predictions, references=references)
- >>> print(evaluation_scores)
- {'sell': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0}, 'ALL': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}}
+ import evaluate
+ metric_path = "Ikala-allen/relation_extraction"
+ module = evaluate.load(metric_path)
+ references = [
+ [
+ {"head": "phip igments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ ]
+ ]
+ predictions = [
+ [
+ {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ ]
+ ]
+ evaluation_scores = module.compute(predictions=predictions, references=references)
 ```
 
-
 ### Inputs
 - **predictions** (`list` of `list` of `dictionary`): predicted relations and their types
- - **references** (`list` of `list` of `dictionary`): references for each relation and its type.
+ - **references** (`list` of `list` of `dictionary`): references (ground truth) for each relation and its type
+ - **mode** (`str`): evaluation mode, either strict or boundaries
+ - **only_all** (`bool`): whether to output only the ["ALL"] relation_type score or the score for every relation type; default True
+ - **relation_types** (`list`): relation types to evaluate; if not given, they are constructed from the ground truth; default []
 
  ### Output Values
 
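Editor's note between hunks: the Inputs list added above documents three new optional parameters (`mode`, `only_all`, `relation_types`). The sketch below shows how a call passing them might look; the parameter names come from the README text above, while the specific values (`"strict"`, `only_all=False`, `["sell"]`) are illustrative assumptions, not part of the commit.

```python
import evaluate

# Minimal sketch using the newly documented parameters; value choices are
# illustrative assumptions, not taken from the commit itself.
module = evaluate.load("Ikala-allen/relation_extraction")

references = [[{"head": "tinadaviespigments", "head_type": "brand",
                "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"}]]
predictions = [[{"head": "tinadaviespigments", "head_type": "brand",
                 "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"}]]

scores = module.compute(
    predictions=predictions,
    references=references,
    mode="strict",            # README: "strict or boundaries" mode (assumed string values)
    only_all=False,           # also return per-relation-type scores, not just "ALL"
    relation_types=["sell"],  # limit evaluation to the "sell" relation type
)
print(scores)
```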
@@ -73,7 +73,7 @@ This metric takes two inputs, predictions and references (ground truth). Both
 
 Output Example:
 ```python
- {'sell': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0}, 'ALL': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}}
+ {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}
 ```
 
 Note: Macro_f1, Macro_p, Macro_r, p, r, and f1 are percentages between 0 and 100, while tp, fp, and fn are counts that depend on the number of inputs.
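As a cross-check on the example output: assuming the usual precision/recall/F1 definitions (the commit only shows the resulting numbers), the 50.0 values follow directly from tp=1, fp=1, fn=1.

```python
# Worked check of the example output, assuming standard definitions:
#   precision = tp / (tp + fp), recall = tp / (tp + fn), F1 = harmonic mean.
tp, fp, fn = 1, 1, 1

p = 100 * tp / (tp + fp)   # 50.0, matches 'p' in the output
r = 100 * tp / (tp + fn)   # 50.0, matches 'r' in the output
f1 = 2 * p * r / (p + r)   # 50.0, matches 'f1' in the output
print(p, r, f1)
```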
@@ -81,23 +81,23 @@ Note: Macro_f1, Macro_p, Macro_r, p, r, and f1 are percentages between 0
 
 ### Examples
 Example of only one prediction and reference:
 ```python
- >>> metric_path = "Ikala-allen/relation_extraction"
- >>> module = evaluate.load(metric_path)
- >>> references = [
- ... [
- ... {"head": "phip igments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... ]
- ... ]
- >>> predictions = [
- ... [
- ... {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
- ... ]
- ... ]
- >>> evaluation_scores = module.compute(predictions=predictions, references=references)
- >>> print(evaluation_scores)
- {'sell': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0}, 'ALL': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}}
+ metric_path = "Ikala-allen/relation_extraction"
+ module = evaluate.load(metric_path)
+ references = [
+ [
+ {"head": "phip igments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ ]
+ ]
+ predictions = [
+ [
+ {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
+ ]
+ ]
+ evaluation_scores = module.compute(predictions=predictions, references=references)
+ print(evaluation_scores)
+ >>> {'sell': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0}, 'ALL': {'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}}
 ```
 
 Example with two or more predictions and references:
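The multi-example code that follows this line in the full README is outside this hunk. As a rough sketch of what a two-document call could look like under the documented input format (one inner list per example), see below; the entity strings are invented placeholders, and no scores are shown because they are not part of this commit.

```python
import evaluate

# Hypothetical two-document input following the list-of-lists format above.
# Entity names here are invented placeholders, not taken from the README.
module = evaluate.load("Ikala-allen/relation_extraction")

references = [
    [  # example 1
        {"head": "brand_a", "head_type": "brand", "type": "sell",
         "tail": "product_x", "tail_type": "product"},
    ],
    [  # example 2
        {"head": "brand_b", "head_type": "brand", "type": "sell",
         "tail": "product_y", "tail_type": "product"},
    ],
]
predictions = [
    [
        {"head": "brand_a", "head_type": "brand", "type": "sell",
         "tail": "product_x", "tail_type": "product"},
    ],
    [
        {"head": "brand_b", "head_type": "product", "type": "sell",
         "tail": "product_y", "tail_type": "product"},
    ],
]

scores = module.compute(predictions=predictions, references=references)
print(scores)  # same output structure as the single-example case above
```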
 