# dadrah_dataset.json
# persian-conversational-dataset
"""Loading script for the Persian conversational dataset (dadrah_dataset.json)."""

import json

import datasets

_DESCRIPTION = """\
persian-conversational-dataset
"""

# Left over from the empathetic_dialogues template this script was adapted
# from; it is not used by this loader.
_URL = "https://dl.fbaipublicfiles.com/parlai/empatheticdialogues/empatheticdialogues.tar.gz"


class persianConversation(datasets.GeneratorBasedBuilder):

    VERSION = datasets.Version("0.1.0")

    def _info(self):
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "title": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    # Each record stores its answers and keywords as lists of
                    # strings; `Value("list")` is not a valid feature type, so
                    # these are declared as sequences of strings instead.
                    "answers": datasets.Sequence(datasets.Value("string")),
                    "keywords": datasets.Sequence(datasets.Value("string")),
                }
            ),
            # There is no canonical (input, target) pair to expose via
            # as_supervised=True, so no supervised keys are declared.
            supervised_keys=None,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # The JSON file ships alongside this script, so nothing is downloaded
        # and dl_manager is unused. These kwargs are passed to
        # _generate_examples.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"files": ["dadrah_dataset.json"], "split_file": "dadrah_dataset.json"},
            ),
        ]

    def _generate_examples(self, files, split_file):
        """Yields examples."""
        # `files` is a list of plain path strings, so iterate over the paths
        # directly (unpacking `for path, f in files` would fail on strings).
        for path in files:
            if split_file == path:
                with open(split_file, "r", encoding="utf-8") as fmm:
                    data = json.load(fmm)
                for id_, row in enumerate(data):
                    # Each row is [title, question, answers, keywords].
                    title = row[0]
                    question = row[1]
                    answers = row[2]
                    keywords = row[3]
                    # Only the first 20 records are emitted (a cut-off kept
                    # from the original script).
                    if id_ == 20:
                        break
                    yield id_, {
                        "title": title,
                        "question": question,
                        "answers": answers,
                        "keywords": keywords,
                    }
                break
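

# A minimal usage sketch, assuming a `datasets` version that still supports
# local loading scripts and that this file sits next to dadrah_dataset.json;
# the file layout is an assumption, not something the script enforces.
if __name__ == "__main__":
    from datasets import load_dataset

    # Load the single TEST split defined above through the standard
    # entry point, then inspect the first record.
    ds = load_dataset(__file__, split="test")
    example = ds[0]
    print(example["title"])
    print(example["question"])
    print(example["answers"])   # list of answer strings
    print(example["keywords"])  # list of keyword strings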