Getting NonMatchingSplitsSizesError when loading Python subset #4
opened by cassanof
Hello! I am trying to load the Python subset of the dataset. Loading fails with a `NonMatchingSplitsSizesError` about 15% of the way through generating the train split:
```python
>>> import datasets
>>> ds = datasets.load_dataset("bigcode/the-stack-v2", data_dir="data/Python", split="train")
Generating train split:  15%|█▌        | 96448523/664910862 [02:09<12:41, 746839.37 examples/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/scratch/federicoc/.env/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
    builder_instance.download_and_prepare(
  File "/scratch/federicoc/.env/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
    self._download_and_prepare(
  File "/scratch/federicoc/.env/lib/python3.10/site-packages/datasets/builder.py", line 1118, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/scratch/federicoc/.env/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 101, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=303603153658, num_examples=664910862, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=42044267633, num_examples=96448523, shard_lengths=[1148000, 1148000, 1147000, 1148000, 1147000, 1148000, 1147000, 1147048, 1148000, 1147000, 1148000, 1148000, 1147000, 1147000, 1148000, 1147048, 1148000, 1148000, 1147000, 1148000, 1147000, 1147000, 1148048, 1148000, 1147000, 1148000, 1148000, 1147000, 1147000, 1148000, 1147048, 1147000, 1147000, 1147000, 1148000, 1147000, 1147000, 1148000, 1148048, 1148000, 1147000, 1148000, 1147000, 1147000, 1148000, 1147048, 1147000, 1147000, 1147000, 1148000, 1147000, 1148000, 1148000, 1147047, 1148000, 1148000, 1147000, 1147000, 1148000, 1148000, 1147000, 1147047, 1148000, 1147000, 1148000, 1148000, 1147000, 1147000, 1147047, 1147000, 1147000, 1148000, 1148000, 1148000, 1147000, 1147000, 1148047, 1148000, 1147000, 1148000, 1147000, 1147000, 1147000, 1147000, 62047], dataset_name='the-stack-v2')}]
```
Is this a known issue? I'm on the latest datasets release, `datasets==2.18.0`.
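For context, the error is raised by a post-generation consistency check: `verify_splits` compares the number of examples and bytes actually generated against the split sizes recorded in the repo's metadata. The sketch below is a simplified stand-in for that check (hypothetical reimplementation for illustration, not the `datasets` source), fed with the numbers from the traceback above:

```python
# Simplified stand-in for datasets' split-size verification, using the
# numbers from the traceback above. Not the actual library source.
from dataclasses import dataclass


@dataclass
class SplitInfo:
    name: str
    num_bytes: int
    num_examples: int


class NonMatchingSplitsSizesError(ValueError):
    pass


def verify_splits(expected: dict, recorded: dict) -> None:
    """Raise if any split's recorded size differs from the expected metadata."""
    bad_splits = [
        {"expected": expected[name], "recorded": recorded[name]}
        for name in expected
        if (expected[name].num_examples, expected[name].num_bytes)
        != (recorded[name].num_examples, recorded[name].num_bytes)
    ]
    if bad_splits:
        raise NonMatchingSplitsSizesError(str(bad_splits))


# From the traceback: ~665M examples expected, but generation stopped
# after ~96M, so the check fails.
expected = {"train": SplitInfo("train", 303603153658, 664910862)}
recorded = {"train": SplitInfo("train", 42044267633, 96448523)}
```

If the sizes recorded in the repo metadata are merely stale, recent `datasets` versions let you skip this check by passing `verification_mode="no_checks"` to `load_dataset` (whether that was the resolution here isn't shown in the thread).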
works! thanks!
cassanof changed discussion status to closed