Datasets:
ibm/

Modalities: Text
Formats: csv
Languages: English
Size: < 1K
Libraries: Datasets, pandas
javiccano committed on
Commit 8b8d3c2
1 Parent(s): d330815

Update README.md

Files changed (1):
  1. README.md +59 -0
README.md CHANGED
@@ -107,6 +107,61 @@ The description of each key (when the instance contains two questions) is as follows:
 - **question2_answer2:** Gold answer to question 2 according to passage 2.
 
 
+## Usage of the Dataset
+
+We provide the following starter code:
+
+
+```python
+from genai import Client, Credentials
+import datetime
+import pytz
+import logging
+import json
+import copy
+from dotenv import load_dotenv
+from genai.text.generation import CreateExecutionOptions
+from genai.schema import (
+    DecodingMethod,
+    LengthPenalty,
+    ModerationParameters,
+    ModerationStigma,
+    TextGenerationParameters,
+    TextGenerationReturnOptions,
+)
+
+try:
+    from tqdm.auto import tqdm
+except ImportError:
+    print("Please install tqdm to run this example.")
+    raise
+
+load_dotenv()
+client = Client(credentials=Credentials.from_env())
+
+logging.getLogger("bampy").setLevel(logging.DEBUG)
+fh = logging.FileHandler("bampy.log")
+fh.setLevel(logging.DEBUG)
+logging.getLogger("bampy").addHandler(fh)
+
+parameters = TextGenerationParameters(
+    max_new_tokens=250,
+    min_new_tokens=1,
+    decoding_method=DecodingMethod.GREEDY,
+    # length_penalty=LengthPenalty(start_index=5, decay_factor=1.5),
+    return_options=TextGenerationReturnOptions(
+        # if ordered is False, you can use return_options to retrieve the corresponding prompt
+        input_text=True,
+    ),
+)
+
+testingUnits = load_testingdata()
+print("question size", len(testingUnits))
+generateAnswers_bam_models(testingUnits)
+```
+
+
+
 ## Dataset Creation
 
 ### Curation Rationale
 
@@ -176,14 +231,18 @@ Since our dataset requires manual annotation, annotation noise is inevitably introduced
 
 <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
+If you use this dataset in your research, please cite the following paper:
+
 **BibTeX:**
 
+```
 @article{hou2024wikicontradict,
   title={{WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia}},
   author={Hou, Yufang and Pascale, Alessandra and Carnerero-Cano, Javier and Tchrakian, Tigran and Marinescu, Radu and Daly, Elizabeth and Padhi, Inkit and Sattigeri, Prasanna},
   journal={arXiv preprint arXiv:2406.13805},
   year={2024}
 }
+```
 
 **APA:**
 
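The starter code in the diff above ends by calling `load_testingdata()` and `generateAnswers_bam_models()`, which are not defined in the snippet. A minimal, self-contained sketch of what such helpers might look like, with the model call stubbed out (a real run would go through the `genai` client and the `parameters` object shown above; the `question` field name is an assumption for illustration):

```python
import json
from pathlib import Path
from typing import Callable


def load_testingdata(path: str = "test_instances.json") -> list[dict]:
    """Load evaluation instances from a JSON file containing a list of dicts."""
    return json.loads(Path(path).read_text())


def generate_answers(units: list[dict], generate: Callable[[str], str]) -> list[dict]:
    """Run the model over every instance and attach its answer."""
    results = []
    for unit in units:
        # Real code would call the genai client here with `parameters`.
        answer = generate(unit["question"])
        results.append({**unit, "model_answer": answer})
    return results


# Stubbed model for demonstration; swap in the genai client from the starter code.
units = [{"question": "Who founded X?"}]
out = generate_answers(units, generate=lambda q: "stub answer")
print(out[0]["model_answer"])  # → stub answer
```

The indirection through a `generate` callable keeps the evaluation loop independent of any particular model backend.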
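The context line in the first hunk documents the `question2_answer2` field. As a quick illustration of consuming such an instance (only `question2_answer2` appears in this diff; the sibling keys `question2` and `question2_answer1` are assumptions inferred from its naming scheme):

```python
# Hypothetical instance; only `question2_answer2` is confirmed by the diff above.
instance = {
    "question2": "Example question about two contradictory passages?",
    "question2_answer1": "Answer supported by passage 1",
    "question2_answer2": "Answer supported by passage 2",
}


def gold_answers(inst: dict) -> list[str]:
    """Collect all gold-answer fields from an instance, in key order."""
    return [v for k, v in sorted(inst.items()) if "_answer" in k]


print(gold_answers(instance))
# → ['Answer supported by passage 1', 'Answer supported by passage 2']
```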