---
dataset_info:
  features:
  - name: passages
    struct:
    - name: is_selected
      sequence: int32
    - name: passage_text
      sequence: string
    - name: url
      sequence: string
  - name: query
    dtype: string
  - name: query_id
    dtype: int32
  - name: query_type
    dtype: string
  - name: golden_passages
    sequence: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 326842258
    num_examples: 70616
  download_size: 168328467
  dataset_size: 326842258
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset was created by filtering the "v1.1" version of the [MSMARCO dataset](https://github.com/microsoft/MSMARCO-Question-Answering) and adding the columns needed to evaluate retrievers (most notably `golden_passages`). I am additionally providing the code used to filter the dataset in order to make everything clear.
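Concretely, each example in the resulting dataset is shaped as follows (all values below are illustrative placeholders, not real data):

```
example = {
    "passages": {                   # parallel arrays, one entry per candidate passage
        "is_selected": [0, 1, 0],
        "passage_text": ["...", "...", "..."],
        "url": ["...", "...", "..."],
    },
    "query": "what is ...",         # placeholder query text
    "query_id": 0,                  # placeholder id
    "query_type": "description",    # one of the MS MARCO query types
    "golden_passages": ["..."],     # passage_text entries where is_selected == 1
    "answer": "...",                # the single reference answer
}
```

The filtering code itself: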
```
from datasets import load_dataset

# Load the original MS MARCO v1.1 train split as a pandas DataFrame
msmarco = load_dataset("ms_marco", "v1.1", split="train").to_pandas()

# Gather the passages marked is_selected == 1 into a new golden_passages column
msmarco["golden_passages"] = [row["passages"]["passage_text"][row["passages"]["is_selected"] == 1] for _, row in msmarco.iterrows()]

# Keep only queries with exactly one answer and no well-formed answer
msmarco_correct_answers = msmarco[msmarco["answers"].apply(lambda x: len(x) == 1)]
msmarco_correct_answers = msmarco_correct_answers[msmarco_correct_answers["wellFormedAnswers"].apply(lambda x: len(x) == 0)]

# Drop rows with missing values, flatten the single answer into a string column,
# and remove the now-redundant columns
msmarco_correct_answers.dropna(inplace=True)
msmarco_correct_answers["answer"] = msmarco_correct_answers["answers"].apply(lambda x: x[0])
msmarco_correct_answers.drop(["wellFormedAnswers", "answers"], axis=1, inplace=True)
msmarco_correct_answers.reset_index(inplace=True, drop=True)
```
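
The `golden_passages` column is what makes retriever evaluation possible. As a minimal sketch (not part of the original filtering code), recall@k against the golden passages could look like this, where `retrieve` is a hypothetical function returning ranked passage strings for a query:

```
def recall_at_k(retrieved, golden, k=10):
    """Fraction of golden passages that appear among the top-k retrieved passages."""
    top_k = set(retrieved[:k])
    return sum(passage in top_k for passage in golden) / len(golden)

# Hypothetical usage over the dataset:
# scores = [recall_at_k(retrieve(row["query"]), row["golden_passages"]) for row in dataset]
```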