YouTubeMix Dataset
YouTubeMix is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single-instrument piano music for training music generation models at a small scale.
The dataset uses the audio track from https://www.youtube.com/watch?v=EhO_MrRfftU, and was originally used in the SampleRNN GitHub repository from the Deep Sound Project (https://github.com/deepsound-project/samplernn-pytorch).
Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringement by users who download and use this data.
We include two versions of the dataset:
- youtubemix.zip is a zip file containing 241 one-minute audio clips (re)sampled at 16 kHz, generated by splitting the original audio track. This version is provided for use with the https://github.com/HazyResearch/state-spaces repository to reproduce the SaShiMi results, and is the version used in the paper.
- raw.wav is the raw audio track from the YouTube video, sampled at 44.1 kHz.
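For reference, the splitting step can be sketched as computing fixed-length sample ranges over the waveform. The helper below is illustrative (not part of the original preprocessing); the actual resampling from 44.1 kHz to 16 kHz would be done with a separate tool such as ffmpeg.

```python
# Sketch: compute (start, stop) sample indices for splitting a long
# waveform into fixed-length clips, as done to produce the one-minute
# 16 kHz clips. clip_ranges is a hypothetical helper, not original code.

def clip_ranges(total_samples, sample_rate=16000, clip_seconds=60):
    """Return consecutive (start, stop) sample index pairs for each clip."""
    clip_len = sample_rate * clip_seconds  # 960,000 samples per 1-minute clip
    ranges = []
    start = 0
    while start < total_samples:
        stop = min(start + clip_len, total_samples)  # last clip may be shorter
        ranges.append((start, stop))
        start = stop
    return ranges

# A recording long enough for exactly 241 one-minute clips at 16 kHz:
ranges = clip_ranges(241 * 60 * 16000)
print(len(ranges))  # 241
```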
We recommend (and follow) the following train-validation-test split for the audio files in youtubemix.zip:
- out000.wav to out211.wav for training
- out212.wav to out225.wav for validation
- out226.wav to out240.wav for testing
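The split above can be expressed programmatically. A minimal sketch, assuming the clip files follow the out000.wav–out240.wav naming pattern (the helper function itself is illustrative):

```python
# Sketch: partition the 241 clip file names into the recommended
# train/validation/test split. Index boundaries follow the split
# described above; split_files is a hypothetical helper.

def split_files(num_clips=241):
    names = [f"out{i:03d}.wav" for i in range(num_clips)]
    train = names[:212]    # out000.wav .. out211.wav
    val = names[212:226]   # out212.wav .. out225.wav
    test = names[226:]     # out226.wav .. out240.wav
    return train, val, test

train, val, test = split_files()
print(len(train), len(val), len(test))  # 212 14 15
```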
You can use the following BibTeX entries to cite the prior work if you use this data in your research:
@article{goel2022sashimi,
  title={It's Raw! Audio Generation with State-Space Models},
  author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
  journal={arXiv preprint arXiv:2202.09729},
  year={2022}
}

@misc{deepsound,
  author={DeepSound},
  title={SampleRNN},
  year={2017},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/deepsound-project/samplernn-pytorch}}
}