Missing English dataset?

#8
by FengTing - opened

The ASR resources table on the official MLS website (via OpenSLR) includes English. However, I can't find English here. Is it missing, or has it been merged into the LibriSpeech ASR dataset?

Same issue. Where is the English dataset? Without it, this version of the dataset becomes unusable.

Same issue. Can anyone solve this?

Hey @FengTing , @apopkes and @dedongli ! Thanks for commenting here - you are correct that the English subset is missing. It must have been missed when we ported the dataset. Adding this to my TODOs.

cc @lhoestq

@sanchit-gandhi Any update on this? I very much need the English dataset. It would be great to have it uploaded this week :)

@sanchit-gandhi Any update on this? I very much need the English dataset. It would be great to have it uploaded very soon.

I'm also interested in having the English MLS dataset added here. Thanks!

Bumping this for visibility: English is the largest subset of this dataset, and it is missing.
Hope it gets added soon!

cc: @lhoestq @sanchit-gandhi

Hi @sanchit-gandhi , I can imagine that your TODO list is quite long. But since your message is a year old to this day, is there any chance you could finally port the missing English subset, or at least give a time estimate? Many people are waiting eagerly. Thank you for all your great (community) work! :)

cc @lhoestq

Hi! @ylacombe did an amazing job re-uploading this dataset recently and could help with the English subset.

Hey there, we actually already uploaded it in a different repository: https://huggingface.co/datasets/parler-tts/mls_eng

Adding the English subset back to this repository should be fairly easy to do, but since it's a big subset (705GB), I haven't had the time yet to actually do the transfer.

🐐

If you have the bandwidth and the compute to do it, here's a quick snippet that could help!

from datasets import load_dataset

# Download the English subset from the parler-tts mirror (~705GB); tune num_proc to your machine
dataset = load_dataset("parler-tts/mls_eng", num_proc=24)

# Retry until the push succeeds, since uploads this large can fail on transient network/Hub errors
pushed = False
while not pushed:
    try:
        dataset.push_to_hub("facebook/multilingual_librispeech", "english", create_pr=True)
        pushed = True
    except Exception as e:
        print(f"Push failed ({e}), retrying...")

There's also now a way to upload large folders more easily, cf. this PR, but I haven't had time to look into it.
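For reference, a minimal sketch of what that could look like with huggingface_hub's upload_large_folder (assuming a recent huggingface_hub version; the local folder path below is hypothetical and would need to contain the prepared English data):

from huggingface_hub import HfApi

api = HfApi()
# Resumable, multi-threaded upload of a large local folder to the Hub;
# "./mls_eng_local" is a hypothetical path to the locally prepared English subset
api.upload_large_folder(
    repo_id="facebook/multilingual_librispeech",
    repo_type="dataset",
    folder_path="./mls_eng_local",
)

Unlike push_to_hub, upload_large_folder is designed to resume interrupted uploads, so the manual retry loop from the snippet above shouldn't be necessary.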
