---
license: mit
base_model: waifu-diffusion/wd-1-5-beta2
tags:
  - text-to-image
  - jax
  - pytorch
  - stable-diffusion
  - diffusers
  - jax-diffusers-event
inference: true
datasets:
  - animelover/danbooru2022
---

JAX <3 An experimental proof of concept made for the Hugging Face JAX/Diffusers community sprint.

My teammate's demo is available [here](https://huggingface.co/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2).

This is a ControlNet for the Stable Diffusion checkpoint Waifu Diffusion 1.5 beta 2. It aims to guide image generation by conditioning outputs on patches of images drawn from a common category of the training examples. The current checkpoint has been trained for approximately 100k steps on a filtered subset of Danbooru 2021, using artists as the conditioned category, with the aim of learning robust style transfer from an image example.

Prompt: "a chibi of hatsune miku".

Conditioning image (an image from the style class we want to generate): conditioning.jpg

Target: target.jpg
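The example above can be sketched with the standard diffusers ControlNet pipeline. This is a minimal PyTorch sketch, assuming the weights in this repository load as a regular `ControlNetModel`; the repository id placeholder, step count, and dtype are illustrative, not prescribed by this model card.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# "<this-repo-id>" is a placeholder for this repository's Hub id.
controlnet = ControlNetModel.from_pretrained(
    "<this-repo-id>", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "waifu-diffusion/wd-1-5-beta2",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition on an example image of the target style.
conditioning = load_image("conditioning.jpg")
image = pipe(
    "a chibi of hatsune miku",
    image=conditioning,
    num_inference_steps=30,
).images[0]
image.save("target.jpg")
```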

Major limitations:

  • The current checkpoint was trained on 768x768 crops without aspect ratio bucketing. A loss of coherence can be expected for non-square aspect ratios.
  • The training dataset is extremely noisy and used without filtering stylistic outliers from within each category, so performance may be less than ideal. A more diverse dataset with a larger variety of styles and categories would likely have better performance.
  • The Waifu Diffusion base model is a hybrid anime/photography model, and can unpredictably jump between those modalities.
  • As styling is sensitive to divergences between model checkpoints, this controlnet is not expected to transfer predictably to other SD 2.x checkpoints.
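Because the checkpoint was trained on 768x768 crops, squaring non-square conditioning images before inference sidesteps the coherence loss noted above. A minimal sketch with Pillow; the helper name and the LANCZOS resampling choice are assumptions, not part of this model's code.

```python
from PIL import Image


def prepare_conditioning(image: Image.Image, size: int = 768) -> Image.Image:
    """Center-crop to a square, then resize to the training resolution.

    Hypothetical preprocessing helper: the checkpoint saw only 768x768
    crops, so we match that shape before passing the image to the pipeline.
    """
    w, h = image.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    image = image.crop((left, top, left + side, top + side))
    return image.resize((size, size), Image.LANCZOS)
```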

Waifu Diffusion 1.5 beta 2 is licensed under Fair AI Public License 1.0-SD. This controlnet imposes no restrictions beyond the MIT license, but it cannot be used independently of a base model.