
This is a ControlNet model that turns main-melody spectrograms into accompaniment spectrograms. It is trained on top of Riffusion using music downloaded from YouTube Music.

The main melody and accompaniment are separated with Spleeter. The model assumes that the main melody is vocals; main melodies other than vocals have not been tested yet. The dataset contains vocals in Traditional Chinese, English, and Japanese.
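Riffusion-style models operate on spectrogram images rather than raw audio, so both the melody condition and the generated accompaniment are spectrograms. As background, here is a minimal sketch of computing a magnitude spectrogram with NumPy; the FFT size and hop length are illustrative values, not the settings actually used by this model or by Riffusion.

```python
import numpy as np

def magnitude_spectrogram(audio, n_fft=512, hop=128):
    # Frame the signal, apply a Hann window, and take the FFT magnitude.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack(
        [audio[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    # Result has shape (freq_bins, time_frames), like a spectrogram image.
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Example: one second of a 440 Hz sine wave at 22.05 kHz.
sr = 22050
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # → (257, 169)
```

In the actual pipeline, such a spectrogram would be rendered to an image, passed to the ControlNet as the conditioning input, and the generated accompaniment spectrogram inverted back to audio.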
