Update README.md
README.md
@@ -14,7 +14,7 @@ H2{color:DarkOrange !important;}
 p{color:Black !important;}
 </style>

-#
+# Wikipedia Contradict Benchmark

 <!-- Provide a quick summary of the dataset. -->

@@ -25,7 +25,7 @@ p{color:Black !important;}



-
+Wikipedia Contradict Benchmark is a dataset of 253 high-quality, human-annotated instances designed to assess how LLMs perform when augmented with retrieved passages that contain real-world knowledge conflicts.

 This dataset card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

@@ -35,7 +35,7 @@ This dataset card has been generated using [this raw template](https://github.co

 <!-- Provide a longer summary of what this dataset is. -->

-
+Wikipedia Contradict Benchmark is a QA-based benchmark consisting of 253 human-annotated instances that cover different types of real-world knowledge conflicts.

 Each instance consists of a question, a pair of contradictory passages extracted from Wikipedia, and two distinct answers, each derived from one of the passages. Each pair is annotated by a human annotator, who identifies where the conflicting information is and what type of conflict is observed. The annotator then produces a set of questions related to the passages, with different answers reflecting the conflicting sources of knowledge.

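To make this composition concrete, a single instance can be pictured roughly as in the sketch below. Apart from `title`, which is documented in the dataset structure section, none of these key names appear in this card, so all of them are hypothetical placeholders.

```python
# Rough sketch of one instance as described above. Every key name except
# "title" is a hypothetical placeholder, not taken from the dataset card.
example_instance = {
    "title": "Example Wikipedia article",                           # documented key
    "passage_1": "First passage, extracted from the article.",      # hypothetical
    "passage_2": "Second passage, contradicting the first.",        # hypothetical
    "question": "A question the two passages answer differently.",  # hypothetical
    "answer_1": "Answer supported by passage 1.",                   # hypothetical
    "answer_2": "Answer supported by passage 2.",                   # hypothetical
}
```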
@@ -73,7 +73,7 @@ N/A.

 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

-
+Wikipedia Contradict Benchmark is distributed in JSON format so that researchers can easily use the data; there are 253 instances in total.
 The description of each key (when an instance contains two questions) is as follows:

 - **title:** Title of the article.

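Since the release is plain JSON, no special tooling is needed to read it. The following is a minimal Python sketch: the file name and the assumption that the top-level value is a list of instances are illustrative, and `title` is the only key shown in this excerpt of the card.

```python
import json

# The file name is an assumption for illustration; use the JSON file that
# ships with the dataset release.
with open("wikipedia_contradict.json", encoding="utf-8") as f:
    instances = json.load(f)  # assumed: a list of annotated instances

print(len(instances))  # the card states there are 253 instances in total

# "title" is the only key documented in this excerpt of the card.
for instance in instances[:3]:
    print(instance["title"])
```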
@@ -113,7 +113,7 @@ The description of each key (when the instance contains two questions) is as fol

 <!-- Motivation for the creation of this dataset. -->

-Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this regard, the motivation of
+Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts between retrieved passages, especially when the passages originate from the same source and are equally trustworthy. In this regard, the motivation of Wikipedia Contradict Benchmark is to comprehensively evaluate LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a corpus widely regarded as a high-quality pre-training resource for most LLMs.

 ### Source Data
