ashllay committed on
Commit
3656316
1 Parent(s): 387527a

Update README.md


Updated download links and changed the GitHub link to the archived source.

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -44,11 +44,11 @@ Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of
 
 The **Stable-Diffusion-Inpainting** model was initialized with the weights of [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original): first 595k steps of regular training, then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+”, with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and mask the whole image in 25% of cases.
 
-[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
+[![Deprecated Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
 :-------------------------:|:-------------------------:|
 ## Examples:
 
-You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
+You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [archived RunwayML GitHub repository](https://github.com/ashllay/stable-diffusion-archive).
 
 ### Diffusers
 
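As an illustration of the paragraph above: the inpainting UNet sees 4 + 1 + 4 = 9 input channels. A minimal PyTorch sketch of how that input is assembled, with placeholder tensors; the channel ordering shown here is one possible layout and may differ from the actual implementation:

```python
import torch

# Placeholder tensors at latent resolution (a 512x512 image encodes to a 4x64x64 latent).
noisy_latents        = torch.randn(1, 4, 64, 64)  # standard SD latent input (4 channels)
mask                 = torch.rand(1, 1, 64, 64)   # inpainting mask, downsampled to 64x64 (+1 channel)
masked_image_latents = torch.randn(1, 4, 64, 64)  # VAE-encoded masked image (+4 channels)

# The 5 extra channels were zero-initialized when the non-inpainting checkpoint was restored.
unet_input = torch.cat([noisy_latents, mask, masked_image_latents], dim=1)
print(unet_input.shape)  # torch.Size([1, 9, 64, 64])
```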
@@ -79,8 +79,8 @@ image.save("./yellow_cat_on_park_bench.png")
 
 ### Original GitHub Repository
 
-1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt)
-2. Follow instructions [here](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion).
+1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/ashllay/stable-diffusion-v1-5-inpainting-archive/resolve/main/sd-v1-5-inpainting.ckpt)
+2. Follow instructions [here](https://github.com/ashllay/stable-diffusion-archive?tab=readme-ov-file#inpainting-with-stable-diffusion).
 
 ## Model Details
 - **Developed by:** Robin Rombach, Patrick Esser
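The `image.save("./yellow_cat_on_park_bench.png")` line in the hunk headers belongs to the README's Diffusers example, which falls between these hunks and is not touched by this commit. A minimal sketch of that usage path, assuming the `StableDiffusionInpaintPipeline` API from diffusers; the repo id and the image/mask URLs are placeholders to replace with whichever copy of the checkpoint and inputs you actually use:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Placeholder repo id: substitute the inpainting checkpoint you intend to load.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# `image` is the picture to edit; `mask_image` is white where content should be repainted.
image = load_image("https://example.com/input.png")       # placeholder URL
mask_image = load_image("https://example.com/mask.png")   # placeholder URL

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
result = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
result.save("./yellow_cat_on_park_bench.png")
```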
@@ -88,7 +88,7 @@ image.save("./yellow_cat_on_park_bench.png")
 - **Language(s):** English
 - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
 - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
-- **Resources for more information:** [GitHub Repository](https://github.com/runwayml/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
+- **Resources for more information:** [GitHub Repository](https://github.com/ashllay/stable-diffusion-archive), [Paper](https://arxiv.org/abs/2112.10752).
 - **Cite as:**
 
       @InProceedings{Rombach_2022_CVPR,
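For step 1 of the “Original GitHub Repository” instructions, the checkpoint can also be fetched programmatically. A sketch using `huggingface_hub`, with the repo id and filename taken from the new download link in the diff above:

```python
from huggingface_hub import hf_hub_download

# Repo id and filename as they appear in the updated download link.
ckpt_path = hf_hub_download(
    repo_id="ashllay/stable-diffusion-v1-5-inpainting-archive",
    filename="sd-v1-5-inpainting.ckpt",
)
print(ckpt_path)  # local cache path of the downloaded checkpoint
```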