# Pipelines
Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and can be adapted to use different schedulers or even model components.
All pipelines are built from the base [`DiffusionPipeline`] class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example [`StableDiffusionPipeline`]) loaded with [`~DiffusionPipeline.from_pretrained`] are automatically detected and the pipeline components are loaded and passed to the `__init__` function of the pipeline.
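As a rough sketch of this loading flow (the checkpoint name below is only an example, and `DPMSolverMultistepScheduler` is just one of several interchangeable schedulers), a pipeline can be loaded, its components inspected, and its scheduler swapped like this:

```py
# Minimal sketch: the checkpoint name is only an example, substitute any
# compatible repository on the Hub.
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The individually loaded components are exposed on the pipeline instance.
print(pipe.components.keys())  # unet, vae, text_encoder, tokenizer, scheduler, ...

# Schedulers are interchangeable; here the default scheduler is replaced with
# a DPMSolverMultistepScheduler built from the same configuration.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```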
You shouldn't use the [`DiffusionPipeline`] class for training. Individual components of diffusion pipelines (for example, [`UNet2DModel`] and [`UNet2DConditionModel`]) are usually trained separately, so we suggest working with them directly instead.
Pipelines do not offer any training functionality. You'll notice PyTorch's autograd is disabled because the [`~DiffusionPipeline.__call__`] method is decorated with [`torch.no_grad`], since pipelines should not be used for training. If you're interested in training, please take a look at the Training guides instead!
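For example, here is a minimal sketch (the checkpoint name is only a placeholder) of the difference between calling a pipeline for inference and loading an individual component for training:

```py
# Minimal sketch: the pipeline call runs under torch.no_grad, so its outputs
# carry no gradients; for training, load the component you need directly.
from diffusers import DiffusionPipeline, UNet2DConditionModel

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("an astronaut riding a horse").images[0]  # inference only

# Load the UNet on its own from the pipeline checkpoint to train it.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.train()  # gradients flow through the model as usual
```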
The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper.
Pipeline | Tasks |
---|---|
AltDiffusion | image2image |
AnimateDiff | text2video |
Attend-and-Excite | text2image |
Audio Diffusion | image2audio |
AudioLDM | text2audio |
AudioLDM2 | text2audio |
BLIP Diffusion | text2image |
Consistency Models | unconditional image generation |
ControlNet | text2image, image2image, inpainting |
ControlNet with Stable Diffusion XL | text2image |
ControlNet-XS | text2image |
ControlNet-XS with Stable Diffusion XL | text2image |
Cycle Diffusion | image2image |
Dance Diffusion | unconditional audio generation |
DDIM | unconditional image generation |
DDPM | unconditional image generation |
DeepFloyd IF | text2image, image2image, inpainting, super-resolution |
DiffEdit | inpainting |
DiT | text2image |
GLIGEN | text2image |
InstructPix2Pix | image editing |
Kandinsky 2.1 | text2image, image2image, inpainting, interpolation |
Kandinsky 2.2 | text2image, image2image, inpainting |
Kandinsky 3 | text2image, image2image |
Latent Consistency Models | text2image |
Latent Diffusion | text2image, super-resolution |
LDM3D | text2image, text-to-3D, text-to-pano, upscaling |
LEDITS++ | image editing |
MultiDiffusion | text2image |
MusicLDM | text2audio |
Paint by Example | inpainting |
ParaDiGMS | text2image |
Pix2Pix Zero | image editing |
PixArt-α | text2image |
PNDM | unconditional image generation |
RePaint | inpainting |
Score SDE VE | unconditional image generation |
Self-Attention Guidance | text2image |
Semantic Guidance | text2image |
Shap-E | text-to-3D, image-to-3D |
Spectrogram Diffusion | |
Stable Diffusion | text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution |
Stable Diffusion Model Editing | model editing |
Stable Diffusion XL | text2image, image2image, inpainting |
Stable Diffusion XL Turbo | text2image, image2image, inpainting |
Stable unCLIP | text2image, image variation |
Stochastic Karras VE | unconditional image generation |
T2I-Adapter | text2image |
Text2Video | text2video, video2video |
Text2Video-Zero | text2video |
unCLIP | text2image, image variation |
Unconditional Latent Diffusion | unconditional image generation |
UniDiffuser | text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation |
Value-guided planning | value guided sampling |
Versatile Diffusion | text2image, image variation |
VQ Diffusion | text2image |
Wuerstchen | text2image |
## DiffusionPipeline
[[autodoc]] DiffusionPipeline
    - all
    - __call__
    - device
    - to
    - components
[[autodoc]] pipelines.StableDiffusionMixin.enable_freeu
[[autodoc]] pipelines.StableDiffusionMixin.disable_freeu
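A minimal usage sketch for the FreeU helpers above; the checkpoint name and scaling values are only illustrative, not tuned recommendations:

```py
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Enable FreeU with illustrative backbone (b1, b2) and skip (s1, s2) scales.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
image = pipe("a cozy cabin in the woods").images[0]

pipe.disable_freeu()  # restore the default UNet behavior
```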
## FlaxDiffusionPipeline
[[autodoc]] pipelines.pipeline_flax_utils.FlaxDiffusionPipeline
## PushToHubMixin
[[autodoc]] utils.PushToHubMixin