---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: >-
  One more step before getting this model.

  This model is open access and available to all, with a CreativeML OpenRAIL-M
  license further specifying rights and usage.

  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or
  harmful outputs or content
  2. CompVis claims no rights on the outputs you generate, you are free to use
  them and are accountable for their use which must not go against the
  provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as
  a service. If you do, please be aware you have to include the same use
  restrictions as the ones in the license and share a copy of the CreativeML
  OpenRAIL-M to all your users (please read the license entirely and carefully)

  Please read the full license here:
  https://huggingface.co/spaces/CompVis/stable-diffusion-license

  By clicking on "Access repository" below, you accept that your *contact
  information* (email address and username) can be shared with the model authors
  as well.
extra_gated_fields:
  I have read the License and agree with its terms: checkbox
---

## Model Details
- **License:** The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.
- **Training:** This model is fine-tuned from the VAE used in the Stable Diffusion checkpoint CompVis/stable-diffusion-v1-4.
  - **Dataset:** a subset of Danbooru2017, which can be downloaded from Kaggle.
  - **Compute:** training used a single RTX 3090 and was stopped after about 17 hours; the latest checkpoint was then exported.
  - **Training code:** the code used for training can be found in the GitHub repo cccntu/fine-tune-models.
## Usage
This model can be loaded using `stable_diffusion_jax`:
```python
import jax.numpy as jnp

from stable_diffusion_jax import AutoencoderKL

# Half precision saves memory; use jnp.float32 for full precision.
dtype = jnp.bfloat16

vae, vae_params = AutoencoderKL.from_pretrained(
    "ttj/stable-diffusion-vae-anime", _do_init=False, dtype=dtype, use_auth_token=True
)
```
For an example of using this model, please refer to the notebook in the GitHub repo.
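
As a rough illustration of what to do with the loaded VAE, the sketch below decodes a batch of latents back to image space. It is a minimal sketch under assumptions: that the module follows the Flax `apply` convention and exposes a `decode` method whose output carries a `.sample` field, as in diffusers' `FlaxAutoencoderKL`, and that latents use the usual Stable Diffusion scaling factor of 0.18215. Check the notebook for the exact API.

```python
# Hypothetical sketch, not the repo's confirmed API: it assumes the module
# follows the Flax `apply` convention and exposes a `decode` method like
# diffusers' FlaxAutoencoderKL. Verify the actual calls in the notebook.
import jax.numpy as jnp

# A dummy batch of latents shaped like Stable Diffusion's (B, 4, H/8, W/8),
# i.e. (1, 4, 64, 64) for a single 512x512 image.
latents = jnp.zeros((1, 4, 64, 64), dtype=dtype)

# Stable Diffusion v1 checkpoints conventionally scale latents by 0.18215
# before decoding (an assumption carried over from the base model).
decoded = vae.apply({"params": vae_params}, latents / 0.18215, method=vae.decode)

# In diffusers' Flax API the decoder output has a `.sample` field in [-1, 1];
# map it to [0, 1] for saving or display.
images = (decoded.sample / 2 + 0.5).clip(0, 1)
```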