pminervini committed update (commit d871986, parent: 62ea587)
cli/shroom-data/Baseline_LLMs_SHROOM_SemEval_2024_Task_6.ipynb
ADDED
The diff for this file is too large to render.
See raw diff
cli/shroom-data/README-v2.txt
ADDED
@@ -0,0 +1,159 @@
# SHROOM validation set

This archive corresponds to the validation data for the SHROOM task (task 6 at SemEval 2024, the Shared-task on Hallucinations and Related Observable Overgeneration Mistakes).

**NB:** This entire README is adapted from the trial README; most of the information it contains should not be new.

## What is SHROOM?

The task is one of binary classification: participants are asked to determine whether a given production from an NLP model constitutes a hallucination.

Participants will be ranked along two metrics: (i) accuracy and (ii) how well their predicted probabilities correlate with the empirical probabilities observed among our annotators.
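
The official scoring code is not shipped with this archive. As a rough sketch of how these two metrics could be computed locally, assuming that the correlation metric is Spearman's rank correlation between predicted probabilities and the gold `p(Hallucination)` values (that choice is our assumption, not a task specification):

```python
# Hedged sketch of local scoring; the official scorer may differ.
import json
import random

from scipy.stats import spearmanr

with open("val.model-agnostic.json") as f:
    gold = json.load(f)

# Placeholder system: random probabilities thresholded at 0.5.
probs = [random.random() for _ in gold]
preds = ["Hallucination" if p >= 0.5 else "Not Hallucination" for p in probs]

accuracy = sum(p == g["label"] for p, g in zip(preds, gold)) / len(gold)
rho, _ = spearmanr(probs, [g["p(Hallucination)"] for g in gold])
print(f"accuracy={accuracy:.3f}, spearman_rho={rho:.3f}")
```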

## File format

The files are formatted as a JSON list. Each element in this list corresponds to a datapoint.

Each datapoint corresponds to a different model production, and contains the following information:
- a task (`task`), indicating what objective the model was optimized for;
- a source (`src`), the input passed to the models for generation;
- a target (`tgt`), the intended reference "gold" text that the model ought to generate;
- a hypothesis (`hyp`), the actual model production;
- a set of per-annotator labels (`labels`), indicating whether each individual annotator thought this datapoint constituted a hallucination or not;
- a majority-based gold label (`label`), derived from the previous per-annotator labels;
- a probability assigned to this datapoint being a hallucination (`p(Hallucination)`), corresponding to the proportion of annotators who considered this specific datapoint to be a hallucination.

We also include an indicator of whether the target or the source should serve as a semantic reference (`ref`): in some NLP tasks, such as Definition Modeling, the source may not contain the information necessary to establish whether the model production is factually wrong, whereas in other cases, such as Text Simplification, the same holds for the target. The `ref` key therefore indicates whether the target, the source, or both of these fields contain the semantic information necessary to establish whether a datapoint is a hallucination.

Lastly, the model-aware file also identifies the model used to produce each datapoint, as a huggingface identifier (`model`).
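
As a minimal loading sketch, using only Python's standard library and the `val.model-aware.v2.json` file shipped in this archive (the `references` helper is ours, purely for illustration):

```python
# Load one split and pick the semantic reference(s) according to `ref`.
import json

with open("val.model-aware.v2.json") as f:
    datapoints = json.load(f)

def references(dp):
    """Return the field(s) against which the hypothesis should be checked."""
    if dp["ref"] == "src":
        return [dp["src"]]
    if dp["ref"] == "tgt":
        return [dp["tgt"]]
    return [dp["src"], dp["tgt"]]  # "either": both fields are valid references

dp = datapoints[0]
print(dp["task"], dp["model"], repr(dp["hyp"]), references(dp))
```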

#### Example: interpreting a Definition Modeling (DM) datapoint

The definition modeling task was introduced in [Noraset et al (2017)](https://dl.acm.org/doi/10.5555/3298023.3298042). In this task, models are trained to generate the definition of a word, given an example of its usage in context.

**For model-agnostic datapoints,** we are specifically using the scheme of [Bevilacqua et al (2020)](https://aclanthology.org/2020.emnlp-main.585/). The source (`"src"`) corresponds to the context; the word to define is indicated using two special tokens `<define>` ... `</define>`. The target (`"tgt"`) is the intended definition for this context (as found in Wiktionary); the hypothesis (`"hyp"`) is the actual model production.

To take a concrete example, consider the following datapoint from the trial set:

```json
{
    "hyp": "(uncountable) The study of trees.",
    "ref": "tgt",
    "src": "It is now generally supposed that the forbidden fruit was a kind of citrus , but certain facts connected with <define> arborolatry </define> seem to me to disprove this opinion .",
    "tgt": "The worship of trees.",
    "model": "",
    "task": "DM",
    "labels": [
        "Hallucination",
        "Hallucination",
        "Hallucination"
    ],
    "label": "Hallucination",
    "p(Hallucination)": 1.0
}
```

This corresponds to defining the word "arborolatry" (delineated by the `<define>` and `</define>` control tokens) in the following context (corresponding to the `src` key):
+ _It is now generally supposed that the forbidden fruit was a kind of citrus , but certain facts connected with arborolatry seem to me to disprove this opinion._

The model produced the following hypothesis (`hyp` key):
+ `(uncountable) The study of trees.`

whereas the gold definition from Wiktionary (`tgt` key) is as follows:
+ _The worship of trees._

Annotators then marked whether this production is considered a hallucination or not. To do so, we asked them to study whether the hypothesis (`hyp` key) contains information that is not supported by the reference. Here, the `ref` key indicates that this reference corresponds to the target (given by its value, `"tgt"`). All three annotators considered the production to be a hallucination (cf. the `labels` key).
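
As a consistency check (a sketch, not part of any official tooling), both aggregate fields can be re-derived from the per-annotator labels:

```python
# Recompute `label` and `p(Hallucination)` from `labels`.
def aggregate(labels):
    p = labels.count("Hallucination") / len(labels)
    # Note: how exact ties (p == 0.5) are resolved is not documented here;
    # this sketch breaks them toward "Not Hallucination".
    label = "Hallucination" if p > 0.5 else "Not Hallucination"
    return label, p

print(aggregate(["Hallucination", "Hallucination", "Hallucination"]))
# -> ('Hallucination', 1.0)
```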

**For model-aware datapoints,** we rely on the work of [Giulianelli et al (2023)](https://aclanthology.org/2023.acl-long.176). The only field that differs is the source; all other fields have the same interpretation as for model-agnostic DM datapoints, with an added `model` field indicating the huggingface identifier of the model. In the model-aware case, the source (`"src"`) corresponds to the context followed by a query for the meaning of the headword.

To take a concrete example, consider the following validation datapoint:
```json
{
    "hyp": "To react too much .",
    "ref": "tgt",
    "src": "Please try not to overreact if she drives badly when she is first learning . What is the meaning of overreact ?",
    "tgt": "To react too much or too intensely .",
    "model": "ltg/flan-t5-definition-en-base",
    "task": "DM",
    "labels": [
        "Not Hallucination",
        "Not Hallucination",
        "Not Hallucination",
        "Not Hallucination",
        "Not Hallucination"
    ],
    "label": "Not Hallucination",
    "p(Hallucination)": 0.0
}
```

Here, the source (`src`) indicates that the word to be defined is "overreact", as in the context "Please try not to overreact if she drives badly when she is first learning."
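
Assuming the query always follows the template seen in this example ("... What is the meaning of <headword> ?"; we have not verified this template against the full split), the context and headword can be recovered as follows:

```python
# Hypothetical parsing of a model-aware DM source; the template is an assumption.
src = ("Please try not to overreact if she drives badly "
       "when she is first learning . What is the meaning of overreact ?")
context, _, tail = src.rpartition(" What is the meaning of ")
headword = tail.rstrip(" ?")
print(headword)  # overreact
print(context)   # the usage example in context
```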

#### Example: interpreting a Paraphrase Generation (PG) datapoint

The same structure holds for the paraphrase generation (PG) task. As an example, consider the following trial datapoint:

```json
{
    "hyp": "When did you see him?",
    "ref": "either",
    "src": "When\u2019d you last see him?",
    "tgt": "When was the last time you saw him?",
    "model": "tuner007/pegasus_paraphrase",
    "task": "PG",
    "labels": [
        "Not Hallucination",
        "Not Hallucination",
        "Not Hallucination"
    ],
    "label": "Not Hallucination",
    "p(Hallucination)": 0.0
}
```

Using the following input (`src` key):
+ _When’d you last see him?_

the model production (listed under the `hyp` key) was as follows:
+ `When did you see him?`

whereas the intended gold target (`tgt` key) was:
+ _When was the last time you saw him?_

None of the three annotators considered this production a hallucination (cf. the `labels` key). To decide, they were instructed to check whether all the information stated in the hypothesis was supported by the source, the target, or both (as specified by the `"either"` value of the `ref` key).

For PG datapoints, we also indicate the huggingface model that was used to generate the hypothesis; see the `model` key.

#### Example: interpreting a Machine Translation (MT) datapoint

The structure of MT datapoints is consistent with that of PG and DM datapoints. For instance:

```json
{
    "hyp": "I have nothing to do with it.",
    "ref": "either",
    "src": "J'en ai rien \u00e0 secouer.",
    "tgt": "I don't give a shit about it.",
    "model": "",
    "task": "MT",
    "labels": [
        "Hallucination",
        "Not Hallucination",
        "Hallucination"
    ],
    "label": "Hallucination",
    "p(Hallucination)": 0.6666666666666666
}
```

In the above datapoint, the model was tasked with translating the source (`src`) "_J'en ai rien à secouer._"; the expected gold target translation (`tgt`) was "_I don't give a shit about it._"

Instead, the model produced the following (`hyp`):
+ `I have nothing to do with it.`

Two out of three annotators considered this production a hallucination (`labels` key), based on the source, the target, or both (as specified by the `"either"` value of the `ref` key). The majority label (`label` key) is therefore `"Hallucination"`.
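
Plugging these labels into the `aggregate` sketch from the DM section above reproduces both aggregate fields:

```python
# Two "Hallucination" votes out of three annotators.
print(aggregate(["Hallucination", "Not Hallucination", "Hallucination"]))
# -> ('Hallucination', 0.6666666666666666)
```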

## How does this validation dataset differ from the trial, train and test sets?

All dataset splits cover datapoints from definition modeling (DM), machine translation (MT) and paraphrase generation (PG).

Furthermore, the train set will not contain manual annotations.
cli/shroom-data/val.model-agnostic.json
ADDED
The diff for this file is too large to render.
See raw diff
cli/shroom-data/val.model-aware.v2.json
ADDED
The diff for this file is too large to render.
See raw diff