---
dataset_info:
features:
- name: passages
struct:
- name: is_selected
sequence: int32
- name: passage_text
sequence: string
- name: url
sequence: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: golden_passages
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 326842258
num_examples: 70616
download_size: 168328467
dataset_size: 326842258
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset was created by filtering the "v1.1" version of the [MSMARCO dataset](https://github.com/microsoft/MSMARCO-Question-Answering) and adding the columns needed to evaluate retrievers. I additionally provide the code used to filter the dataset below, so that everything is clear.
```python
from datasets import load_dataset

# Load the MS MARCO v1.1 train split as a pandas DataFrame
msmarco = load_dataset("ms_marco", "v1.1", split="train").to_pandas()
# Keep only the passages marked as selected (is_selected == 1) as golden passages
msmarco["golden_passages"] = [row["passages"]["passage_text"][row["passages"]["is_selected"] == 1] for _, row in msmarco.iterrows()]
# Keep examples with exactly one answer and no well-formed answer
msmarco_correct_answers = msmarco[msmarco["answers"].apply(lambda x: len(x) == 1)]
msmarco_correct_answers = msmarco_correct_answers[msmarco_correct_answers["wellFormedAnswers"].apply(lambda x: len(x) == 0)]
msmarco_correct_answers.dropna(inplace=True)
# Unpack the single answer into its own column and drop the original answer columns
msmarco_correct_answers["answer"] = msmarco_correct_answers["answers"].apply(lambda x: x[0])
msmarco_correct_answers.drop(["wellFormedAnswers", "answers"], axis=1, inplace=True)
msmarco_correct_answers.reset_index(inplace=True, drop=True)
```
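
As a minimal usage sketch (the repository id below is a placeholder; replace it with the actual id of this dataset on the Hub), the filtered dataset can be loaded directly with `datasets` and each example pairs a query with its golden passages and a single reference answer:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the real Hub id of this dataset
dataset = load_dataset("your-username/msmarco-v1.1-filtered", split="train")

example = dataset[0]
print(example["query"])            # natural-language question
print(example["golden_passages"])  # passages marked as relevant (is_selected == 1)
print(example["answer"])           # the single reference answer
```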