|
<!--Copyright 2024 The HuggingFace Team. All rights reserved. |
|
|
|
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with |
|
the License. You may obtain a copy of the License at |
|
|
|
http://www.apache.org/licenses/LICENSE-2.0 |
|
|
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on |
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the |
|
specific language governing permissions and limitations under the License. |
|
--> |
|
|
|
# Community pipelines
|
|
|
> **For more information about community pipelines, take a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
|
|
|
**Community** examples consist of inference and training examples that have been added by the community.

Take a look at the following table for an overview of all community examples. Click on **Code Example** to get a copy-and-paste-ready code example that you can try out.

If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
|
|
|
| Example | Description | Code Example | Colab | Author |
|:--------|:------------|:-------------|:------|:-------|
| CLIP Guided Stable Diffusion | CLIP guided text-to-image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
| One Step U-Net (Dummy) | Example showing how community pipelines should be used (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
| Stable Diffusion Mega | **One** Stable Diffusion pipeline with all the functionality of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion pipeline without token length limits and support for parsing weights in prompts | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
| Speech to Image | Transcribe speech with automatic speech recognition and generate an image from the text with Stable Diffusion | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech) |
|
|
|
To load a custom pipeline, all you need to do is pass the `custom_pipeline` argument to `DiffusionPipeline`, set to the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipeline; we will merge it quickly.
|
```py
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder"
)
```
|
|
|
## Example usages
|
|
|
### CLIP Guided Stable Diffusion
|
|
|
CLIP Guided Stable Diffusion can generate more realistic images by guiding Stable Diffusion with an additional CLIP model at every denoising step.
|
|
|
The following code requires roughly 12GB of GPU RAM.
|
|
|
```python
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel
import torch


feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)


guided_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")

prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"

generator = torch.Generator(device="cuda").manual_seed(0)
images = []
for i in range(4):
    image = guided_pipeline(
        prompt,
        num_inference_steps=50,
        guidance_scale=7.5,
        clip_guidance_scale=100,
        num_cutouts=4,
        use_cutouts=False,
        generator=generator,
    ).images[0]
    images.append(image)

# save images locally
for i, img in enumerate(images):
    img.save(f"./clip_guided_sd/image_{i}.png")
```
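Note that `img.save` will fail with a `FileNotFoundError` if the `./clip_guided_sd/` directory doesn't exist yet. A minimal guard (standard library only) that you can put before the save loop:

```python
import os

# Create the output directory for the saved images if it is missing.
os.makedirs("./clip_guided_sd", exist_ok=True)
```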
|
|
|
The `images` list contains a list of PIL images that can be saved locally or displayed directly in Google Colab. Generated images tend to be of higher quality than when using Stable Diffusion natively. For example, the script above generates the following images:
|
|
|
![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg)
|
|
|
### One Step Unet |
|
|
|
The dummy "one-step-unet" can be run as follows:
|
|
|
```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
pipe()
```
|
|
|
**Note**: This community pipeline is not useful as a feature; it only serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
|
|
|
### Stable Diffusion Interpolation |
|
|
|
The following code can be run on a GPU with at least 8GB of VRAM and takes approximately 5 minutes.
|
|
|
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    safety_checker=None,  # Very important for videos...lots of false positives while interpolating
    custom_pipeline="interpolate_stable_diffusion",
).to("cuda")
pipe.enable_attention_slicing()

frame_filepaths = pipe.walk(
    prompts=["a dog", "a cat", "a horse"],
    seeds=[42, 1337, 1234],
    num_interpolation_steps=16,
    output_dir="./dreams",
    batch_size=4,
    height=512,
    width=512,
    guidance_scale=8.5,
    num_inference_steps=50,
)
```
|
|
|
The output of the `walk(...)` function is a list of images saved under the folder defined in `output_dir`. You can use these images to create videos of Stable Diffusion.
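As an illustration, the saved frames can be stitched into a video with `imageio` (this sketch assumes the `imageio` and `imageio-ffmpeg` packages are installed and reuses `frame_filepaths` from the snippet above):

```python
import imageio

# Append each interpolation frame to an mp4 at 8 frames per second.
with imageio.get_writer("./dreams/interpolation.mp4", fps=8) as writer:
    for filepath in frame_filepaths:
        writer.append_data(imageio.imread(filepath))
```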
|
|
|
> Please have a look at https://github.com/nateraw/stable-diffusion-videos for more details on how to create videos using Stable Diffusion as well as more complete functionality.
|
|
|
### Stable Diffusion Mega |
|
|
|
The Stable Diffusion Mega pipeline lets you use the main use cases of the Stable Diffusion pipeline in a single class.
|
```python
#!/usr/bin/env python3
from diffusers import DiffusionPipeline
import PIL.Image
import requests
from io import BytesIO
import torch


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.enable_attention_slicing()


### Text-to-Image

images = pipe.text2img("An astronaut riding a horse").images

### Image-to-Image

init_image = download_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)

prompt = "A fantasy landscape, trending on artstation"

images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images

### Inpainting

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
```
|
|
|
As shown above, you can run text-to-image, image-to-image, and inpainting all in a single pipeline.
|
|
|
### Long Prompt Weighting Stable Diffusion |
|
|
|
This pipeline lets you input prompts without the 77-token length limit. It also lets you increase a word's weight with "()" or decrease it with "[]". The pipeline also lets you use the main use cases of the Stable Diffusion pipeline in a single class.
|
|
|
#### pytorch |
|
|
|
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"

pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
```
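For example, the weighting syntax described above can be applied to any prompt. Here is a made-up prompt purely for illustration, reusing `pipe` from the snippet above:

```python
# "(word:1.5)" boosts the weight of "word" by 1.5x; "[word]" lowers it.
image = pipe.text2img("a (red:1.5) rose in a [cluttered] garden", width=512, height=512).images[0]
image.save("weighted_prompt.png")
```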
|
|
|
#### onnxruntime |
|
|
|
```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="lpw_stable_diffusion_onnx",
    revision="onnx",
    provider="CUDAExecutionProvider",
)

prompt = "a photo of an astronaut riding a horse on mars, best quality"
neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"

pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
```
|
|
|
If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, don't worry, this is normal.
|
### Speech to Image |
|
|
|
The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion.
|
```python
import torch

import matplotlib.pyplot as plt
from datasets import load_dataset
from diffusers import DiffusionPipeline
from transformers import (
    WhisperForConditionalGeneration,
    WhisperProcessor,
)


device = "cuda" if torch.cuda.is_available() else "cpu"

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

audio_sample = ds[3]

text = audio_sample["text"].lower()
speech_data = audio_sample["audio"]["array"]

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

diffuser_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="speech_to_image_diffusion",
    speech_model=model,
    speech_processor=processor,
    torch_dtype=torch.float16,
)

diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)

output = diffuser_pipeline(speech_data)
plt.imshow(output.images[0])
```
|
The example above produces the following image:
|
|
|
![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png) |
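If you run the script outside of a notebook, `plt.imshow` alone won't open a window. A minimal follow-up to the snippet above (`speech_to_image.png` is just an example filename):

```python
plt.show()  # display the figure when running as a plain script
output.images[0].save("speech_to_image.png")  # or save the generated PIL image to disk
```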