As a work-around, this would be fine...

#17 opened by John6666

This is a workaround for a problem where the fp16 file produced by the conversion ends up the same size as the fp32 one. The same problem occurs in the SDXL conversion Space and in a local environment. It was already happening at least about six months ago when I first converted my files; I forgot to report it at the time.
https://huggingface.co/spaces/diffusers/sdxl-to-diffusers

The workaround with code like this is fine, but the real problem is that .to(torch.float16) does not seem to take effect.
I have not verified whether the saved weights are still 32-bit with only the in-memory precision at 16 bits, but it is probably not working as expected.
If this is indeed a bug, it would be better to fix the bug itself rather than merge this commit.
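For reference, one quick way to check what actually ended up on disk is to inspect the tensor dtypes inside a saved safetensors shard. This is just a sketch; the path below is a hypothetical local copy of a converted repo, not anything from the Space itself.

```python
# Sketch: inspect the dtypes stored in a converted UNet shard.
# The path is a placeholder for a locally saved diffusers folder.
from safetensors import safe_open

path = "./convtest2/unet/diffusion_pytorch_model.safetensors"  # hypothetical
with safe_open(path, framework="pt") as f:
    dtypes = {f.get_tensor(key).dtype for key in f.keys()}

print(dtypes)  # a true fp16 export should show only torch.float16
```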

Passing torch_dtype=torch.float16 at load time works fine, which is why this workaround works.
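A minimal sketch of the two paths (not the Space's actual code; the local folder names are made up): loading in fp16 and then saving produces half-size files, while casting an fp32 pipeline after loading is the step that does not seem to shrink them.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Workaround: cast at load time, then save; the saved shards come out fp16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "./convtest",               # placeholder for a converted diffusers folder
    torch_dtype=torch.float16,
)
pipe.save_pretrained("./convtest-fp16", safe_serialization=True)

# Reported problem: load in fp32, cast afterwards, then save.
# In principle .to(torch.float16) converts the weights, so if the saved
# files are still fp32-sized, the cast is being lost somewhere before saving.
pipe32 = StableDiffusionXLPipeline.from_pretrained("./convtest")
pipe32.to(torch.float16)
pipe32.save_pretrained("./convtest-fp16-after-cast", safe_serialization=True)
```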

Before the fix:

https://huggingface.co/John6666/convtest2

After the fix:

https://huggingface.co/John6666/convtest3

