This dataset is a convenient way to access the ParlAI Dialogue NLI dataset. I do not own the rights; all credit goes to the original authors.
Dialogue Natural Language Inference
Sean Welleck, Jason Weston, Arthur Szlam, Kyunghyun Cho
arXiv link: https://arxiv.org/abs/1811.00671
Abstract: Consistency is a long standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model’s consistency.
How to use
from datasets import load_dataset
dataset = load_dataset('xksteven/dialogue_nli', split='train')
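To take a quick look at an individual example (a minimal sketch; the field names follow the features listed below):

# Inspect the first training example from the split loaded above.
example = dataset[0]
print(example['premise'])
print(example['hypothesis'])
print(example['label'])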
Label candidates:
- entailment
- contradiction
- neutral
Train dataset features:
Dataset({
    features: ['id', 'label', 'premise', 'hypothesis', 'dtype'],
    num_rows: 310110
})
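To check how the three labels are distributed across the training split, a simple tally works. This is a sketch that assumes the label column stores the strings listed above; adjust accordingly if the column is an integer ClassLabel.

from collections import Counter

# Count how many examples carry each label value.
label_counts = Counter(dataset['label'])
print(label_counts)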
Citation
@misc{welleck2019dialogue,
    title={Dialogue Natural Language Inference},
    author={Sean Welleck and Jason Weston and Arthur Szlam and Kyunghyun Cho},
    year={2019},
    eprint={1811.00671},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}