Is it possible to output the upscaled_image as an array of tiles in one diffusion run?
To upscale a large image with this program in practice, you have to dissect the low_res_img into an array of 128x128 tiles and upscale them individually with StableDiffusionUpscalePipeline.from_pretrained(). Because the tiles are upscaled in separate diffusion sessions, they cannot be pasted together seamlessly. Since upscaling seems to be the most GPU-demanding step, would it be possible to let the pipeline process the whole low_res_img, tile it into an array of 128x128 tiles, upscale each tile separately, and finally output the upscaled_image as an np.array?
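For reference, here is a minimal sketch of the manual tile-by-tile approach described above. The model id, empty prompt, image path, and the assumption that the image dimensions are exact multiples of 128 are all placeholders for illustration; the visible seams are exactly the problem being raised:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res_img = Image.open("low_res.png").convert("RGB")  # dims assumed multiples of 128
tile, scale = 128, 4  # the x4 upscaler outputs 4x the input resolution
w, h = low_res_img.size
upscaled = np.zeros((h * scale, w * scale, 3), dtype=np.uint8)

for y in range(0, h, tile):
    for x in range(0, w, tile):
        patch = low_res_img.crop((x, y, x + tile, y + tile))
        # each call is an independent diffusion run, hence the seams between tiles
        out = pipe(prompt="", image=patch).images[0]
        upscaled[y * scale:(y + tile) * scale, x * scale:(x + tile) * scale] = np.array(out)

Image.fromarray(upscaled).save("upscaled.png")
```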
Same thought here: most of the images we are using are a lot larger than 128x128. It would be great if the pipeline could incorporate a tiling technique.
We have tiling methods for the VAE model. Did you try applying them, e.g.:
`pipe.vae.enable_tiling()`
See: https://huggingface.co/docs/diffusers/api/models#diffusers.AutoencoderKL.enable_tiling
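A minimal sketch of where that call fits (model id and image path are illustrative placeholders):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
pipe.vae.enable_tiling()  # VAE encodes/decodes in tiles to cap peak VRAM

low_res_img = Image.open("low_res.png").convert("RGB")
upscaled_image = pipe(prompt="", image=low_res_img).images[0]
```

Note that this tiles only the VAE encode/decode step; the UNet still runs over the full latent, so it reduces memory pressure but does not eliminate it for very large inputs.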
Thanks for the information. I was trying to make it work with the ControlNet pipeline, but it looks like it isn't designed to fit together that way. I got this error:
```
The config of `pipeline.unet` expects 7 but received latent channels: 4, Please verify the config of `pipeline.unet` and the `pipeline.vae`
```
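For anyone debugging the same thing, the two numbers the check compares can be printed directly (assuming a loaded pipeline object named `pipeline`, as in the error message):

```python
# The x4 upscaler's UNet expects 7 input channels (4 latent channels plus
# 3 channels for the concatenated low-res RGB image), while a standard
# AutoencoderKL produces 4 latent channels; mixing in components from a
# ControlNet pipeline breaks this pairing and trips the check above.
print(pipeline.unet.config.in_channels)     # 7 for stable-diffusion-x4-upscaler
print(pipeline.vae.config.latent_channels)  # 4 for the standard AutoencoderKL
```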
Hey @patrickvonplaten, when enabling tiling I suddenly see weird exposure artifacts in the upscaled image. This doesn't happen with tiling disabled, and changing the number of steps (tried both 15 and 100) or the guidance value (tried between 7 and 12) had no effect. Any ideas?