---
title: _
app_file: iasam_app.py
sdk: gradio
sdk_version: 4.41.0
---
# Inpaint Anything (Inpainting with Segment Anything)

Inpaint Anything performs Stable Diffusion inpainting on a browser UI using any mask selected from the output of [Segment Anything](https://github.com/facebookresearch/segment-anything).

Using Segment Anything enables users to specify masks by simply pointing to the desired areas, instead of manually filling them in. This can increase the efficiency and accuracy of the mask creation process, leading to potentially higher-quality inpainting results while saving time and effort.

[Extension version for AUTOMATIC1111's Web UI](https://github.com/Uminosachi/sd-webui-inpaint-anything)

![Explanation image](images/inpaint_anything_explanation_image_1.png)

## Installation

Please follow these steps to install the software:

* Create a new conda environment:

```bash
conda create -n inpaint python=3.10
conda activate inpaint
```

* Clone the software repository:

```bash
git clone https://github.com/Uminosachi/inpaint-anything.git
cd inpaint-anything
```

* For the CUDA environment, install the following packages:

```bash
pip install -r requirements.txt
```

* If you are using macOS, please install the packages from the following file instead:

```bash
pip install -r requirements_mac.txt
```

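After installing the requirements, you can optionally confirm that PyTorch can see an accelerator before launching the app. This is a minimal check, separate from the installation steps above; `torch` is installed by the requirements files.

```python
# Optional sanity check: report which compute backend PyTorch can use.
import torch

if torch.cuda.is_available():
    # CUDA install (requirements.txt) with an NVIDIA GPU
    print("CUDA available:", torch.cuda.get_device_name(0))
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    # macOS install (requirements_mac.txt) with Apple silicon
    print("Apple Metal (MPS) backend available")
else:
    print("No GPU backend detected")
```
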
## Running the application

```bash
python iasam_app.py
```

* Open http://127.0.0.1:7860/ in your browser.
* Note: If you have a privacy protection extension enabled in your web browser, such as DuckDuckGo, you may not be able to retrieve the mask from your sketch.

### Options

* `--save-seg`: Save the segmentation image generated by SAM.
* `--offline`: Run inpainting without a network connection (offline mode).
* `--sam-cpu`: Perform the Segment Anything operation on the CPU.

## Downloading the Model

* Launch this application.
* Click on the `Download model` button, located next to the [Segment Anything Model ID](https://github.com/facebookresearch/segment-anything#model-checkpoints). This includes the [SAM 2](https://github.com/facebookresearch/segment-anything-2), [Segment Anything in High Quality Model ID](https://github.com/SysCV/sam-hq), [Fast Segment Anything](https://github.com/CASIA-IVA-Lab/FastSAM), and [Faster Segment Anything (MobileSAM)](https://github.com/ChaoningZhang/MobileSAM).
* Please note that SAM is available in three sizes: Base, Large, and Huge. Remember, larger sizes consume more VRAM.
* Wait for the download to complete.
* The downloaded model file will be stored in the `models` directory of this application's repository.

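If you want to confirm from a script that a checkpoint was downloaded, you can list the contents of the `models` directory. The `.pth` extension filter below is an assumption about the checkpoint file format.

```python
# List model checkpoints downloaded into the repository's models directory.
from pathlib import Path

for ckpt in sorted(Path("models").glob("*.pth")):
    print(ckpt.name)
```
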
## Usage

* Drag and drop your image onto the input image area.
* Outpainting can be achieved via the `Padding options`: configure the scale and balance, then click on the `Run Padding` button.
* The `Anime Style` checkbox enhances segmentation mask detection, particularly in anime-style images, at the expense of a slight reduction in mask quality.
* Click on the `Run Segment Anything` button.
* Use sketching to point to the area you want to inpaint. You can undo and adjust the pen size.
* Hover over either the SAM image or the mask image and press the `S` key for Fullscreen mode, or the `R` key to Reset zoom.
* Click on the `Create mask` button. The mask will appear in the selected mask image area.

### Mask Adjustment

* `Expand mask region` button: Use this to slightly expand the area of the mask for broader coverage.
* `Trim mask by sketch` button: Clicking this will exclude the sketched area from the mask.
* `Add mask by sketch` button: Clicking this will add the sketched area to the mask.

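For intuition, expanding a mask amounts to a morphological dilation of the binary mask image. The sketch below only illustrates that idea with OpenCV; it is not the application's own implementation, and the kernel size and iteration count are assumptions.

```python
# Illustration only: grow a binary mask outward by morphological dilation.
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = masked region
kernel = np.ones((5, 5), np.uint8)                    # assumed structuring element
expanded = cv2.dilate(mask, kernel, iterations=1)     # expand the mask region
cv2.imwrite("mask_expanded.png", expanded)
```
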
### Inpainting Tab

* Enter your desired Prompt and Negative Prompt, then choose the Inpainting Model ID.
* Click on the `Run Inpainting` button (**Please note that it may take some time to download the model for the first time**).
* In the Advanced options, you can adjust the Sampler, Sampling Steps, Guidance Scale, and Seed.
* If you enable the `Mask area Only` option, modifications will be confined to the designated mask area.
* Adjust the iteration slider to perform inpainting multiple times with different seeds.
* The inpainting process is powered by [diffusers](https://github.com/huggingface/diffusers).

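Because the inpainting is powered by diffusers, the Advanced options correspond to the arguments of a standard diffusers inpainting call (the Sampler corresponds to the pipeline's scheduler, not shown here). The following is a minimal stand-alone sketch of that mapping rather than the application's own code; the model ID is the one from the Model Cache example below, the file names are placeholders, and a CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/dreamshaper_5-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB")  # placeholder input image
mask = Image.open("mask.png").convert("RGB")    # placeholder mask image

result = pipe(
    prompt="a vase of flowers on a wooden table",
    negative_prompt="low quality, blurry",
    image=image,
    mask_image=mask,
    num_inference_steps=30,                             # Sampling Steps
    guidance_scale=7.5,                                 # Guidance Scale
    generator=torch.Generator("cuda").manual_seed(42),  # Seed
).images[0]
result.save("inpainted.png")
```
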
#### Tips

* You can drag and drop the inpainted image directly into the input image field on the Web UI (useful with the Chrome and Edge browsers).

#### Model Cache
* The inpainting model, which is saved in Hugging Face's cache and includes `inpaint` (case-insensitive) in its repo_id, will also be added to the Inpainting Model ID dropdown list.
* If there's a specific model you'd like to use, you can cache it in advance using the following Python commands:
```bash
python
```
```python
from diffusers import StableDiffusionInpaintPipeline
pipe = StableDiffusionInpaintPipeline.from_pretrained("Uminosachi/dreamshaper_5-inpainting")
exit()
```
* The model downloaded by diffusers is typically stored in your home directory. You can find it at `/home/username/.cache/huggingface/hub` for Linux and macOS users, or at `C:\Users\username\.cache\huggingface\hub` for Windows users.
* When executing inpainting, if the following error is output to the console, try deleting the corresponding model from the cache folder mentioned above:
```
An error occurred while trying to fetch model name...
```

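To see which cached repositories would qualify for that dropdown, you can inspect the Hugging Face cache programmatically. This is a hedged sketch using the cache-scanning utility from `huggingface_hub`, independent of the application itself.

```python
# List cached Hugging Face model repos whose repo_id contains "inpaint".
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
for repo in cache_info.repos:
    if repo.repo_type == "model" and "inpaint" in repo.repo_id.lower():
        print(f"{repo.repo_id}  ({repo.size_on_disk / 1e9:.2f} GB)")
```
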
### Cleaner Tab

* Choose the Cleaner Model ID.
* Click on the `Run Cleaner` button (**Please note that it may take some time to download the model for the first time**).
* The cleaning process is performed using [Lama Cleaner](https://github.com/Sanster/lama-cleaner).

### Mask only Tab

* Provides the ability to save just the mask, without any other processing, so the mask can then be used in other graphics applications.
* `Get mask as alpha of image` button: Saves the mask as an RGBA image, with the mask placed in the alpha channel of the input image.
* `Get mask` button: Saves the mask as an RGB image.

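For reference, "mask as alpha of image" simply means writing the grayscale mask into the alpha channel of the input image. A minimal illustration with Pillow follows; it is not the application's own code, and the file names are placeholders.

```python
# Combine an input image and its mask into one RGBA file, mask as alpha.
from PIL import Image

image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # grayscale mask, white = opaque
image.putalpha(mask)                        # attach the mask as the alpha channel
image.save("input_with_mask_alpha.png")
```
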
![UI image](images/inpaint_anything_ui_image_1.png)

## Auto-saving images

* The inpainted image will be automatically saved in the folder that matches the current date within the `outputs` directory.

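If you need to pick up the auto-saved files from a script, they live under a date-named subdirectory of `outputs`. The exact date format used below is an assumption for illustration; check your own `outputs` directory for the actual naming.

```python
# Locate today's auto-save folder (the date format is an assumption).
from datetime import date
from pathlib import Path

today_dir = Path("outputs") / date.today().strftime("%Y-%m-%d")
if today_dir.is_dir():
    print(sorted(p.name for p in today_dir.iterdir()))
else:
    print(f"No auto-saved images found at {today_dir}")
```
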
## Development

With the [Inpaint Anything library](README_DEV.md), you can perform segmentation and create masks using sketches from other applications.

## License

The source code is licensed under the [Apache 2.0 license](LICENSE).

## References

* Ravi, N., Gabeur, V., Hu, Y.-T., Hu, R., Ryali, C., Ma, T., Khedr, H., Rädle, R., Rolland, C., Gustafson, L., Mintun, E., Pan, J., Alwala, K. V., Carion, N., Wu, C.-Y., Girshick, R., Dollár, P., & Feichtenhofer, C. (2024). [SAM 2: Segment Anything in Images and Videos](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/). arXiv preprint.
* Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., Dollár, P., & Girshick, R. (2023). [Segment Anything](https://arxiv.org/abs/2304.02643). arXiv:2304.02643.
* Ke, L., Ye, M., Danelljan, M., Liu, Y., Tai, Y.-W., Tang, C.-K., & Yu, F. (2023). [Segment Anything in High Quality](https://arxiv.org/abs/2306.01567). arXiv:2306.01567.
* Zhao, X., Ding, W., An, Y., Du, Y., Yu, T., Li, M., Tang, M., & Wang, J. (2023). [Fast Segment Anything](https://arxiv.org/abs/2306.12156). arXiv:2306.12156.
* Zhang, C., Han, D., Qiao, Y., Kim, J. U., Bae, S.-H., Lee, S., & Hong, C. S. (2023). [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/abs/2306.14289). arXiv:2306.14289.