Datasets:
[Help Wanted] Support for GigaSpeech 2 Splits
When I run dataset = load_dataset("speechcolab/gigaspeech2", split='data.th'), the program is interrupted by the following error:
Traceback (most recent call last):
File "/root/miniforge3/envs/audio_process/lib/python3.8/site-packages/datasets/builder.py", line 1894, in _prepare_split_single
writer.write_table(table)
File "/root/miniforge3/envs/audio_process/lib/python3.8/site-packages/datasets/arrow_writer.py", line 570, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/root/miniforge3/envs/audio_process/lib/python3.8/site-packages/datasets/table.py", line 2324, in table_cast
return cast_table_to_schema(table, schema)
File "/root/miniforge3/envs/audio_process/lib/python3.8/site-packages/datasets/table.py", line 2282, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
If you want to download specific files, you can use the following approach:
from datasets import load_dataset
train_ds = load_dataset("speechcolab/gigaspeech2", data_files={"train": "data/th/train/*.tar.gz"}, split="train")
train_raw_ds = load_dataset("speechcolab/gigaspeech2", data_files={"train_raw": "data/th/train_raw.tsv"}, split="train_raw")
train_refined_ds = load_dataset("speechcolab/gigaspeech2", data_files={"train_refined": "data/th/train_refined.tsv"}, split="train_refined")
dev_ds = load_dataset("speechcolab/gigaspeech2", data_files={"dev": "data/th/dev.tar.gz"}, split="dev")
dev_meta_ds = load_dataset("speechcolab/gigaspeech2", data_files={"dev_meta": "data/th/dev.tsv"}, split="dev_meta")
test_ds = load_dataset("speechcolab/gigaspeech2", data_files={"test": "data/th/test.tar.gz"}, split="test")
test_meta_ds = load_dataset("speechcolab/gigaspeech2", data_files={"test_meta": "data/th/test.tsv"}, split="test_meta")
print("Train dataset:", train_ds)
print("Train raw dataset:", train_raw_ds)
print("Train refined dataset:", train_refined_ds)
print("Dev dataset:", dev_ds)
print("Dev meta dataset:", dev_meta_ds)
print("Test dataset:", test_ds)
print("Test meta dataset:", test_meta_ds)
Hey @yfyeung ! Would it be simpler to incorporate the metadata and audio files directly into a single dataset? E.g. as we have for GigaSpeech v1? In doing so, users can download and pre-process the dataset in just 2 lines of code:
from datasets import load_dataset
# download the audio + metadata for the xs split
gigaspeech = load_dataset("speechcolab/gigaspeech", "xs")
I see. Maybe we should create a gigaspeech2.py to support this.
Hi Hugging Face Team @ylacombe @polinaeterna @lhoestq ,
We kindly request your assistance in creating a dataset script for our GigaSpeech 2 dataset. Our dataset contains data in three languages: Thai (th), Indonesian (id), and Vietnamese (vi). Each language subset contains train_raw, train_refined, dev, and test sets.
Here is the structure of our dataset:
GigaSpeech 2
├── data
│ ├── id
│ │ ├── md5
│ │ ├── dev.tar.gz
│ │ ├── dev.tsv
│ │ ├── test.tar.gz
│ │ ├── test.tsv
│ │ ├── train
│ │ │ ├── 0.tar.gz
│ │ │ ├── 1.tar.gz
│ │ │ └── ...
│ │ ├── train_raw.tsv
│ │ └── train_refined.tsv
│ ├── th
│ │ ├── md5
│ │ ├── dev.tar.gz
│ │ ├── dev.tsv
│ │ ├── test.tar.gz
│ │ ├── test.tsv
│ │ ├── train
│ │ │ ├── 0.tar.gz
│ │ │ ├── 1.tar.gz
│ │ │ └── ...
│ │ ├── train_raw.tsv
│ │ └── train_refined.tsv
│ └── vi
│ ├── md5
│ ├── dev.tar.gz
│ ├── dev.tsv
│ ├── test.tar.gz
│ ├── test.tsv
│ ├── train
│ │ ├── 0.tar.gz
│ │ ├── 1.tar.gz
│ │ └── ...
│ ├── train_raw.tsv
│ └── train_refined.tsv
├── metadata.json
└── README.md
We want to create a dataset script named gigaspeech2.py
that supports loading the three language subsets (th, id, vi) with their respective splits (train_raw, train_refined, dev, test).
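For illustration, the layout above maps onto explicit data_files patterns in the same way as the earlier Thai example. A rough sketch (the split names and patterns are taken from the tree; the helper itself is hypothetical):

from datasets import load_dataset

def load_gigaspeech2_split(lang: str, split: str):
    # map each split to the files shown in the directory tree above
    patterns = {
        "train": f"data/{lang}/train/*.tar.gz",
        "train_raw": f"data/{lang}/train_raw.tsv",
        "train_refined": f"data/{lang}/train_refined.tsv",
        "dev": f"data/{lang}/dev.tar.gz",
        "test": f"data/{lang}/test.tar.gz",
    }
    return load_dataset(
        "speechcolab/gigaspeech2",
        data_files={split: patterns[split]},
        split=split,
    )

dev_vi = load_gigaspeech2_split("vi", "dev")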
Could you help us?
Thank you in advance!
Best regards,
Yifan
Hi ! We stopped supporting dataset scripts for security reasons.
Have you considered uploading the dataset in a supported format like WebDataset instead ?
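For reference, a minimal sketch of what a WebDataset-style shard could look like (the file names, keys, and transcript field below are hypothetical, not the actual GigaSpeech 2 layout): files sharing the same key become one example, and the extensions become the columns.

import io
import json
import tarfile

# Hypothetical inputs: (key, path to audio file, transcript)
samples = [("utt_00000", "audio/utt_00000.wav", "example transcript")]

with tarfile.open("shard-00000.tar", "w") as tar:
    for key, wav_path, text in samples:
        # audio stored under <key>.wav
        tar.add(wav_path, arcname=f"{key}.wav")
        # metadata stored under <key>.json
        meta = json.dumps({"text": text}).encode("utf-8")
        info = tarfile.TarInfo(name=f"{key}.json")
        info.size = len(meta)
        tar.addfile(info, io.BytesIO(meta))

Shards built like this and uploaded to a dataset repo can then be loaded with load_dataset directly, without a loading script.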
Hi @lhoestq , thanks for your reply. I uploaded this dataset using Git LFS. The dataset is a bit large, around 4TB. Do you have any suggestions?
The only way is to re-upload in a supported format :/
I see. Thanks.
I generally use the local files method to create an audio dataset on a local machine with the desired format, then use .push_to_hub to push the dataset to the Hugging Face Hub.
Here's a skeleton script for how you can do this for the different langs/splits:
from datasets import DatasetDict
for lang in ["id", "th", "vi"]:
dataset_dict = DatasetDict()
for split in ["train_raw", "train_refined", "dev", "test"]:
# convert dataset to HF format
dataset_dict[split] = ...
# push dataset for language to the Hub in parquet format
dataset_dict.push_to_hub("gigaspeech2", config_name=lang)
For converting the dataset to HF format, the key here is defining two (or more) lists for each split:
- List of paths to the audio files
- List of transcriptions (in string form)
- (additional) lists for any other metadata to be included
It's then straightforward to convert the dataset to the correct format, e.g. for the audios + transcriptions:
from datasets import Dataset, Audio
dataset_dict[split] = Dataset.from_dict({"audio": list_of_audio_paths, "transcription": list_of_text_transcriptions})
# cast the audio paths to the Audio feature
dataset_dict[split] = dataset_dict[split].cast_column("audio", Audio(sampling_rate=16000))
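One way to build those two lists for a given split is to unpack the archive and pair each audio file with its transcript from the TSV. A rough sketch, where the extraction directory and the TSV column names (sid, text) are assumptions that may not match the real files:

import csv
import glob
import os

extracted_dir = "extracted/th/dev"  # where dev.tar.gz was unpacked (assumed path)

# build a segment-id -> transcript lookup from the metadata TSV
transcripts = {}
with open("data/th/dev.tsv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        transcripts[row["sid"]] = row["text"]  # column names are assumptions

list_of_audio_paths, list_of_text_transcriptions = [], []
for path in sorted(glob.glob(os.path.join(extracted_dir, "**", "*.wav"), recursive=True)):
    sid = os.path.splitext(os.path.basename(path))[0]
    if sid in transcripts:
        list_of_audio_paths.append(path)
        list_of_text_transcriptions.append(transcripts[sid])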
Thank you for your guidance and the detailed script. We would like to retain the .tar.gz format. Our plan includes creating mirror resources accessible to users in mainland China, and using a universal compression format is essential for this purpose.
How much storage is needed to download the complete dataset?
Around 4 TB.
I managed to write a loading script for the Vietnamese subset (easily adaptable to the Thai and Indonesian subsets), but I don't have enough disk space to download the data.
See my code: https://github.com/phineas-pta/fine-tune-whisper-vi/blob/main/misc/gigaspeech2.py
Is anyone able to help?
N.B. I'd be able to do this myself if IterableDataset supported .push_to_hub, so I wouldn't have to download all the files and then re-upload them to HF.
(Would be cool to have a .push_to_hub implementation for IterableDataset; happy to help if anyone wants to give it a shot, cc @albertvillanova for viz.)
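Until that exists, one possible workaround (just a sketch, not an official API) is to consume the IterableDataset in fixed-size chunks, write each chunk to a parquet shard, and upload the shards with huggingface_hub. Here streaming_dataset and the repo id are placeholders:

import itertools
from datasets import Dataset
from huggingface_hub import HfApi

api = HfApi()
shard_size = 10_000  # examples per parquet shard (arbitrary)
it = iter(streaming_dataset)  # an IterableDataset built elsewhere

for shard_idx in itertools.count():
    batch = list(itertools.islice(it, shard_size))
    if not batch:
        break
    shard = Dataset.from_list(batch)  # optionally cast the audio column back to Audio() here
    local_path = f"train-{shard_idx:05d}.parquet"
    shard.to_parquet(local_path)
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=f"data/train-{shard_idx:05d}.parquet",
        repo_id="username/gigaspeech2-vi",  # placeholder repo id
        repo_type="dataset",
    )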