|
<!--Copyright 2024 The HuggingFace Team. All rights reserved. |
|
|
|
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with |
|
the License. You may obtain a copy of the License at |
|
|
|
http://www.apache.org/licenses/LICENSE-2.0 |
|
|
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on |
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the |
|
specific language governing permissions and limitations under the License. |
|
--> |
|
|
|
# Speed up inference |
|
|
|
There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. Memory-efficient attention implementations, such as [xFormers](xformers) and [scaled dot product attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) in PyTorch 2.0, reduce memory usage, which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times.
|
|
|
> [!TIP] |
|
> Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the [Reduce memory usage](memory) guide. |
|
|
|
The inference times below are obtained from generating a single 512x512 image from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps on an NVIDIA A100.
|
|
|
| setup    | latency | speed-up |
|----------|---------|----------|
| baseline | 5.27s   | x1       |
| tf32     | 4.14s   | x1.27    |
| fp16     | 3.51s   | x1.50    |
| combined | 3.41s   | x1.54    |
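
If you want to take comparable measurements on your own hardware, the sketch below shows one way to time a pipeline call. It is an illustration, not the exact harness used for the table above; it includes a warmup run so one-time setup costs aren't counted, and swaps in the DDIM scheduler to match the setup described above.

```py
import time

import torch
from diffusers import DDIMScheduler, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", use_safetensors=True
).to("cuda")
# match the benchmark setup: 50 DDIM steps
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

prompt = "a photo of an astronaut riding a horse on mars"

# warmup so one-time setup costs aren't included in the measurement
pipe(prompt, num_inference_steps=50)

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompt, num_inference_steps=50)
torch.cuda.synchronize()
print(f"latency: {time.perf_counter() - start:.2f}s")
```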
|
|
|
## TensorFloat-32 |
|
|
|
On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (tf32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables tf32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling tf32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. |
|
|
|
```python
import torch

# tf32 is already enabled for convolutions by default;
# this enables it for matrix multiplications too
torch.backends.cuda.matmul.allow_tf32 = True
```
|
|
|
Learn more about tf32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide. |
|
|
|
## Half-precision weights |
|
|
|
To save GPU memory and speed up inference, set `torch_dtype=torch.float16` to load and run the model weights directly in half precision.
|
|
|
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")
```
|
|
|
> [!WARNING] |
|
> Don't use [torch.autocast](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. |
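
The tf32 and fp16 optimizations are independent, so you can stack them; this corresponds to the "combined" row in the benchmark table above. A minimal sketch of that setup:

```py
import torch
from diffusers import DiffusionPipeline

# enable tf32 for any matrix multiplications that still run in float32
torch.backends.cuda.matmul.allow_tf32 = True

# load the rest of the pipeline in half precision
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
```

Since tf32 only applies to float32 matrix multiplications, the additional gain on top of fp16 alone is modest, as the table reflects.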
|
|
|
## Distilled model |
|
|
|
You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet's residual and attention blocks are shed to reduce the model size by 51% and improve latency on CPU/GPU by 43%. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model. |
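
You can verify the size reduction by comparing UNet parameter counts directly. The sketch below assumes the diffusers-format checkpoints used elsewhere in this guide:

```py
from diffusers import UNet2DConditionModel

original_unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
distilled_unet = UNet2DConditionModel.from_pretrained(
    "nota-ai/bk-sdm-small", subfolder="unet"
)

# the distilled UNet should report roughly half as many parameters
print(f"original UNet: {sum(p.numel() for p in original_unet.parameters()):,} parameters")
print(f"distilled UNet: {sum(p.numel() for p in distilled_unet.parameters()):,} parameters")
```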
|
|
|
> [!TIP] |
|
> Read the [Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny](https://huggingface.co/blog/sd_distillation) blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. |
|
|
|
The inference times below are obtained from generating 4 images from the prompt "a photo of an astronaut riding a horse on mars" with 25 PNDM steps on an NVIDIA A100. Each generation is repeated 3 times with the distilled Stable Diffusion v1.4 model by [Nota AI](https://hf.co/nota-ai).
|
|
|
| setup                        | latency | speed-up |
|------------------------------|---------|----------|
| baseline                     | 6.37s   | x1       |
| distilled                    | 4.18s   | x1.52    |
| distilled + tiny autoencoder | 3.83s   | x1.66    |
|
|
|
Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model. |
|
|
|
```py
from diffusers import StableDiffusionPipeline
import torch

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = distilled(prompt, num_inference_steps=25, generator=generator).images[0]
image
```
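
To generate the original-model image for the side-by-side comparison below, run the full Stable Diffusion pipeline with the same prompt and seed (a sketch using the checkpoint from earlier in this guide):

```py
import torch
from diffusers import StableDiffusionPipeline

original = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = original(prompt, num_inference_steps=25, generator=generator).images[0]
image
```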
|
|
|
<div class="flex gap-4"> |
|
<div> |
|
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/original_sd.png"/> |
|
<figcaption class="mt-2 text-center text-sm text-gray-500">original Stable Diffusion</figcaption> |
|
</div> |
|
<div> |
|
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/distilled_sd.png"/> |
|
<figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion</figcaption> |
|
</div> |
|
</div> |
|
|
|
### Tiny AutoEncoder |
|
|
|
To speed up inference even more, replace the autoencoder with a [distilled version](https://huggingface.co/sayakpaul/taesdxl-diffusers) of it.
|
|
|
```py
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
distilled.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = distilled(prompt, num_inference_steps=25, generator=generator).images[0]
image
```
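
Note that the `sayakpaul/taesd-diffusers` checkpoint loaded above targets Stable Diffusion v1.x latents; the linked [`sayakpaul/taesdxl-diffusers`](https://huggingface.co/sayakpaul/taesdxl-diffusers) repository is the equivalent distilled autoencoder for SDXL.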
|
|
|
<div class="flex justify-center"> |
|
<div> |
|
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/distilled_sd_vae.png" /> |
|
<figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion + Tiny AutoEncoder</figcaption> |
|
</div> |
|
</div> |
|
|