Update README.md

README.md (CHANGED)

@@ -28,7 +28,7 @@ p{color:Black !important;}

Wikipedia contradict benchmark is a dataset consisting of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts. The dataset was created intentionally with that task in mind.

- Note that, in the dataset viewer, there are 130 valid-tag instances, but each instance can contain more than one question with its respective two answers; the total number of questions (each with its two answers) is therefore 253.

<!-- This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). -->

@@ -100,34 +100,20 @@ Wikipedia contradict benchmark is given in CSV format to store the corresponding

The description of each field (when the instance contains two questions) is as follows:

- **url:** URL of the article.
- **paragraph_A_clean:** Paragraph automatically retrieved (with the tag removed).
- **tag:** Type of tag of the article (Inconsistent/Self-contradictory/Contradict-other).
- **tagDate:** Date of the tag.
- **tagReason:** Reason for the tag.
- **wikitag_label_valid:** Whether the tag is valid (Valid/Invalid).
- **valid_comment:** Comment on the tag.
- **paragraphA_article:** Title of the article containing passage 1.
- **paragraphA_information:** Relevant information of passage 1.
- **paragraphA_information_standalone:** Decontextualized relevant information of passage 1.
- **paragraphB_article:** Title of the article containing passage 2.
- **paragraphB_information:** Relevant information of passage 2.
- **paragraphB_information_standalone:** Decontextualized relevant information of passage 2.
- **wikitag_label_samepassage:** Whether passage 1 and passage 2 are the same (Same/Different).
- **relevantInfo_comment_A:** Comment on the information of passage 1.
- **relevantInfo_comment_B:** Comment on the information of passage 2.
- **Contradict type I:** Focuses on the fine-grained semantics of the contradiction, e.g., date/time, location, language, etc.
- **Contradict type II:** Focuses on the modality of the contradiction: whether the information in passage 1 and passage 2 comes from a piece of text, or from a row of an infobox or a table.
- **Contradict type III:** Focuses on the source of the contradiction: whether passage 1 and passage 2 are from the same article or not.
- **Contradict type IV:** Focuses on the reasoning aspect: whether the contradiction is explicit or implicit (Explicit/Implicit). An implicit contradiction requires some reasoning to understand why passage 1 and passage 2 contradict each other.
- **question1:** Question 1 inferred from the contradiction.
- **question1_answer1:** Gold answer to question 1 according to passage 1.
- **question1_answer2:** Gold answer to question 1 according to passage 2.
- **question2:** Question 2 inferred from the contradiction.
- **question2_answer1:** Gold answer to question 2 according to passage 1.
- **question2_answer2:** Gold answer to question 2 according to passage 2.
+ <!-- Note that, in the dataset viewer, there are 130 valid-tag instances, but each instance can contain more than one question with its respective two answers; the total number of questions (each with its two answers) is therefore 253. -->

The description of each field (when the instance contains two questions) is as follows:

- **question_ID:** ID of the question.
- **question:** Question inferred from the contradiction.
- **context1:** Decontextualized relevant information of context 1.
- **context2:** Decontextualized relevant information of context 2.
- **answer1:** Gold answer to the question according to context 1.
- **answer2:** Gold answer to the question according to context 2.
- **contradictType:** Focuses on the reasoning aspect: whether the contradiction is explicit or implicit (Explicit/Implicit). An implicit contradiction requires some reasoning to understand why context 1 and context 2 contradict each other.
- **samepassage:** Focuses on the source of the contradiction: whether context 1 and context 2 are the same passage or not.
- **merged_context:** context1 and context2 merged into a single paragraph ("context1. context2").
- **ref_answer:** answer1 and answer2 merged into a single string ("answer1|answer2").
- **WikipediaArticleTitle:** Title of the article.
- **url:** URL of the article.
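The merged fields follow simple string conventions that are easy to reproduce and invert. A minimal sketch in Python; the row below is invented for illustration and does not come from the actual benchmark:

```python
# Hypothetical instance following the field layout above; all values are
# invented for illustration, not taken from the dataset.
row = {
    "question_ID": "Q1",
    "question": "When was the bridge completed?",
    "context1": "The bridge was completed in 1932.",
    "context2": "Construction of the bridge finished in 1935.",
    "answer1": "1932",
    "answer2": "1935",
    "contradictType": "Explicit",
    "samepassage": "Different",
}

# merged_context concatenates the two contexts into one paragraph
# ("context1. context2"); these sample contexts already end with a period,
# so a space is enough to join them.
row["merged_context"] = row["context1"] + " " + row["context2"]

# ref_answer joins the two gold answers with a pipe ("answer1|answer2"),
# which makes it trivial to recover both references later.
row["ref_answer"] = row["answer1"] + "|" + row["answer2"]

# Recover the individual gold answers from the merged reference string.
gold_answers = row["ref_answer"].split("|")
print(gold_answers)  # ['1932', '1935']
```

Splitting `ref_answer` on the pipe gives both gold answers back, which is convenient when scoring a model against either contradicting reference.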
## Usage of the Dataset
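Since the benchmark is distributed as CSV, the fields can be read with the Python standard library alone. A minimal sketch, assuming the column names described above; the header subset and the single data row are invented for illustration:

```python
import csv
import io

# Stand-in for opening the benchmark CSV file: the header uses a subset of
# the column names from the field description above, and the row is invented.
sample_csv = io.StringIO(
    "question_ID,question,context1,context2,answer1,answer2,contradictType\n"
    "Q1,When was the tower built?,Built in 1901.,Built in 1903.,1901,1903,Explicit\n"
)

instances = list(csv.DictReader(sample_csv))
for inst in instances:
    # Each instance pairs one question with two contradicting contexts and
    # the gold answer supported by each context.
    print(inst["question"], "->", inst["answer1"], "/", inst["answer2"])
```

`csv.DictReader` maps each row to a dict keyed by the header, so field access mirrors the field names documented above.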