Ikala-allen committed
Commit
88b0888
1 Parent(s): 046a29a
README.md CHANGED
@@ -1,13 +1,50 @@
 ---
-title: Relation Extraction
-emoji: 🐢
-colorFrom: red
-colorTo: red
+title: relation_extraction
+datasets:
+- none
+tags:
+- evaluate
+- metric
+description: "TODO: add a description here"
 sdk: gradio
-sdk_version: 3.45.1
+sdk_version: 3.19.1
 app_file: app.py
 pinned: false
-license: apache-2.0
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Metric Card for relation_extraction
+
+***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
+
+## Metric Description
+*Give a brief overview of this metric, including what task(s) it is usually used for, if any.*
+
+## How to Use
+*Give a general statement of how to use the metric.*
+
+*Provide the simplest possible example of using the metric.*
+
+### Inputs
+*List all input arguments in the format below.*
+- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
+
+### Output Values
+
+*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu": 6.02}.*
+
+*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
+
+#### Values from Popular Papers
+*Give examples, preferably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
+
+### Examples
+*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
+
+## Limitations and Bias
+*Note any known limitations or biases that the metric has, with links and references if possible.*
+
+## Citation
+*Cite the source where this metric was introduced.*
+
+## Further References
+*Add any useful further references.*
app.py CHANGED
@@ -2,9 +2,11 @@ import evaluate
 from evaluate.utils import launch_gradio_widget
 
 # Define the path to your custom metric directory
-metric_path = "./custom_metric"
+metric_path = "Ikala-allen/relation_extraction"
 
 
 module = evaluate.load(metric_path)
 launch_gradio_widget(module)
 
+
+
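
For readers unfamiliar with `evaluate` Space modules, the sketch below shows how the metric that the updated `app.py` now points at could be loaded and queried directly. The `mode` and `relation_types` keyword arguments come from `_compute` in `relation_extraction.py`; the shape of the prediction/reference records is an assumption made for illustration, since this commit does not document the expected input format.

```python
# Usage sketch only: the relation-record schema below is assumed, not taken from this commit.
import evaluate

# Load the metric from the Hub, exactly as the updated app.py does.
metric = evaluate.load("Ikala-allen/relation_extraction")

# Assumed format: one list of predicted/gold relations per example.
predictions = [[{"head": "Acme", "tail": "Bob", "type": "employer"}]]
references = [[{"head": "Acme", "tail": "Bob", "type": "employer"}]]

scores = metric.compute(
    predictions=predictions,
    references=references,
    mode="strict",       # _compute asserts mode is "strict" or "boundaries"
    relation_types=[],   # empty list -> constructed from the ground truth
)
print(scores)
```
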
custom_metric/custom_metric.py → relation_extraction.py RENAMED
@@ -2,6 +2,7 @@ import evaluate
 import datasets
 import numpy as np
 
+# TODO: Add BibTeX citation
 _CITATION = """\
 @InProceedings{huggingface:module,
     title = {A great new module},
@@ -119,10 +120,10 @@ class relation_extraction(evaluate.Metric):
     def _compute(self, predictions, references, mode="strict", relation_types=[]):
         """Returns the scores"""
         # TODO: Compute the different scores of the module
-        print(predictions)
+
         predictions = convert_format(predictions)
         references = convert_format(references)
-        print(predictions)
+
         assert mode in ["strict", "boundaries"]
 
         # construct relation_types from ground truth if not given
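
As a side note on the second hunk: it drops two debugging `print(predictions)` calls around `convert_format`. If similar inspection is wanted later without committing print statements, the standard-library `logging` module is one option; the helper below is only a suggestion and is not part of the commit.

```python
import logging

logger = logging.getLogger(__name__)

def debug_dump(label, value):
    """Hypothetical helper: inside _compute it could stand in for the removed
    print(predictions) calls, e.g. debug_dump("converted predictions", predictions)."""
    logger.debug("%s: %r", label, value)
```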