---
license: cc-by-2.0
language:
- en
library_name: diffusers
tags:
- art
- code
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers
- lora
- template:sd-lora
---
# Khabib Sketch SDXL LoRA

A LoRA adaptation of SDXL that produces sketches of the MMA fighter and G.O.A.T. Khabib Nurmagomedov.

<figure>
  <img src="https://i.imgur.com/eIn5oqJ.png" alt="Khabib" width="256" height="256">
  <figcaption>Sketch of Khabib fighting a Bengal Tiger</figcaption>
</figure>


These are LoRA adaptation weights for `stabilityai/stable-diffusion-xl-base-1.0`.
The weights were trained on hand-drawn sketches of Khabib by [ritwikraha](https://www.ritwikraha.com/) using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.

Special VAE used for training: `madebyollin/sdxl-vae-fp16-fix`.

Dataset: custom hand-drawn sketches by [ritwikraha](https://www.ritwikraha.com/).

## Usage

```python
# Install dependencies first (e.g. in a notebook): pip install diffusers accelerate -q
import torch
from PIL import Image
from diffusers import DiffusionPipeline, AutoencoderKL

# Load the fp16-fix VAE that was also used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Load the SDXL base pipeline in half precision
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)

# Attach the Khabib sketch LoRA weights and move to GPU
pipe.load_lora_weights("ritwikraha/khabib_sketch_LoRA")
pipe.to("cuda")

prompt = "a sketch of TOK khabib pointing at another khabib like the spiderman meme, monochrome, pen sketch"
negative_prompt = "ugly face, multiple bodies, bad anatomy, disfigured, extra fingers"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=3,
    num_inference_steps=50,
).images[0]
image  # displays inline in a notebook; use image.save("khabib.png") to write to disk
```
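
If the sketch style comes through too strongly or too weakly, the LoRA influence can be adjusted at inference time. The snippet below is a minimal sketch that assumes the `cross_attention_kwargs={"scale": ...}` mechanism available in recent `diffusers` releases; the `0.8` value is only an illustrative starting point, not a tuned recommendation.

```python
# Down-weight the LoRA to 80% of its trained strength (1.0 = full effect).
# Note: the exact LoRA-scaling API can vary between diffusers versions.
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=3,
    num_inference_steps=50,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("khabib_sketch.png")
```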


## Examples

| Image 1 | Image 2 |
|---|---|
| ![khabib sketch example 1](./1.png) | ![khabib sketch example 2](./2.png) |

| Image 3 | Image 4 |
|---|---|
| ![khabib sketch example 3](./3.png) | ![khabib sketch example 4](./4.png) |




## Tips

- The training examples are all sketches created in Procreate, so prompts containing words like "sketch" and "monochrome" work best.
- Use a negative prompt and a moderate guidance scale (see the example below).
- Images generated at 1024x1024 tend to look better than other resolutions.
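
Putting the tips together, here is a minimal sketch of a generation call that fixes the resolution at 1024x1024 and passes a negative prompt with a moderate guidance scale. It assumes `pipe` was set up as in the Usage section; the prompt text and parameter values are illustrative, not tuned recommendations.

```python
# Assumes `pipe` is the LoRA-loaded SDXL pipeline from the Usage section.
prompt = "a sketch of TOK khabib celebrating a win, monochrome, pen sketch"
negative_prompt = "ugly face, multiple bodies, bad anatomy, disfigured, extra fingers"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=1024,              # per the tip above, 1024x1024 works best
    height=1024,
    guidance_scale=3,        # lower values preserve the loose sketch look
    num_inference_steps=50,
).images[0]
image.save("khabib_tip_example.png")
```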