Update README.md

README.md CHANGED
@@ -376,26 +376,26 @@ Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basqu

### How to use

-
+To get started, you should be able to plug this dataset straight into your existing Machine Learning workflow.

-
+The entire dataset (or a particular split) can be downloaded to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset

CV_11_hi_train = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
```
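Common Voice clips are typically stored at 48 kHz, while many speech models expect 16 kHz input. As a minimal sketch (the 16 kHz target is an assumption about your downstream model, not something stated above), the audio column can be resampled on the fly with `Dataset.cast_column` and the `Audio` feature:

```python
from datasets import Audio, load_dataset

CV_11_hi_train = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")

# Decode and resample the audio lazily on access; 16 kHz is an assumed target
# sampling rate, adjust it to whatever your model expects.
CV_11_hi_train = CV_11_hi_train.cast_column("audio", Audio(sampling_rate=16_000))

print(CV_11_hi_train[0]["audio"]["sampling_rate"])  # 16000
```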

-Using datasets, you can
+Using the `datasets` library, you can stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode allows you to iterate over it without downloading it to disk.
```python
from datasets import load_dataset

CV_11_hi_train_stream = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)

-#
+# Iterate through the stream and fetch individual data points as you need them
print(next(iter(CV_11_hi_train_stream)))
```
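If you only want to inspect a handful of examples, `IterableDataset.take` keeps the download to a minimum. A small sketch, assuming the standard Common Voice `sentence` (transcript) column:

```python
from datasets import load_dataset

CV_11_hi_train_stream = load_dataset(
    "mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True
)

# Stream only the first three examples; nothing else is fetched.
for example in CV_11_hi_train_stream.take(3):
    print(example["sentence"])
```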

-Bonus
+*Bonus*: Create a PyTorch `DataLoader` directly from the downloaded or streamed dataset.
```python
from datasets import load_dataset
from torch.utils.data.sampler import BatchSampler, RandomSampler
@@ -405,7 +405,7 @@ batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
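The hunk above elides a few lines of this snippet, so here is a self-contained sketch of the same pattern. The `ds = load_dataset(...)` line and the identity `collate_fn` are assumptions added for completeness; Common Voice rows contain variable-length audio arrays that PyTorch's default collate function cannot stack.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

# Assumed to match the elided lines: load the full "hi" train split to disk.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")

# Draw batches of 32 random indices, keeping the final smaller batch.
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)

# Keep each batch as a plain list of rows; swap in a custom collate_fn that pads
# the audio arrays if you need stacked tensors.
dataloader = DataLoader(ds, batch_sampler=batch_sampler, collate_fn=lambda rows: rows)

batch = next(iter(dataloader))  # a list of 32 dataset rows (dicts)
```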

-
+Of course, you can do the same with a streaming dataset as well.
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
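This snippet is also cut off at the hunk boundary; the next hunk header shows the resulting `DataLoader(ds, batch_size=32)` call. A self-contained sketch, assuming a `datasets` version whose streaming `IterableDataset` can be passed straight to a PyTorch `DataLoader`, with the same collate caveat as above:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Assumed to match the elided lines: stream the "hi" train split.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)

# For an iterable (streamed) dataset the DataLoader does the batching itself;
# keep each batch as a list of rows to avoid collating variable-length audio.
dataloader = DataLoader(ds, batch_size=32, collate_fn=lambda rows: rows)

batch = next(iter(dataloader))  # a list of 32 streamed rows (dicts)
```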
@@ -416,7 +416,7 @@ dataloader = DataLoader(ds, batch_size=32)

### Example scripts

-
+Train your own CTC or Seq2Seq Automatic Speech Recognition models with `transformers` - see the example scripts [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

## Dataset Structure