Paint by Inpaint: Learning to Add Image Objects by Removing Them First
Abstract
Image editing has advanced significantly with the introduction of text-conditioned diffusion models. Despite this progress, seamlessly adding objects to images based on textual instructions, without requiring user-provided input masks, remains a challenge. We address this by leveraging the insight that removing objects (Inpaint) is significantly simpler than the inverse process of adding them (Paint), owing to the availability of segmentation mask datasets and inpainting models that can fill in the masked regions. Capitalizing on this realization, we implement an automated, large-scale pipeline to curate a filtered dataset of image pairs, each consisting of an image and its object-removed counterpart. Using these pairs, we train a diffusion model to invert the inpainting process, effectively adding objects to images. Unlike other editing datasets, ours features natural target images rather than synthetic ones; moreover, it maintains consistency between source and target by construction. Additionally, we use a large vision-language model to provide detailed descriptions of the removed objects and a large language model to convert these descriptions into diverse natural-language instructions. We show that the trained model surpasses existing ones both qualitatively and quantitatively, and we release the large-scale dataset alongside the trained models for the community.
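To make the data-curation idea concrete, here is a minimal sketch (not the authors' released code) of the pair-construction step: given an image and a segmentation mask of one object, an off-the-shelf inpainting model removes the object, yielding a (source = object-removed, target = original) training pair for the reverse "add object" direction. The model checkpoint, file names, and removal prompt below are illustrative assumptions.

```python
# Sketch of building one (source, target) pair by removing a masked object,
# per the paper's high-level pipeline. Checkpoint/prompt choices are assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("scene.jpg").convert("RGB").resize((512, 512))      # target
mask = Image.open("object_mask.png").convert("L").resize((512, 512))   # white = object to remove

# Prompting the inpainter toward background content encourages object removal
# rather than object replacement.
removed = pipe(
    prompt="background, empty scene",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]

# (source, target) pair: an editing model is later trained to map
# `removed` -> `image` given a natural-language "add <object>" instruction.
removed.save("scene_object_removed.png")
```

In the paper's full pipeline, pairs like this are filtered for quality, and a vision-language model plus an LLM generate the corresponding "add object" instructions.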
Community
Congratulations on the release, @navvew ! The examples are super impressive. It would be amazing to have a demo on Spaces; it could be an excellent way for the community to engage with the model and provide valuable feedback.
Thanks! We plan to release a demo soon!
Here's a plain-English rewrite of the paper (feedback welcome!): https://www.aimodels.fyi/papers/arxiv/paint-by-inpaint-learning-to-add-image
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- InstructGIE: Towards Generalizable Image Editing (2024)
- StyleBooth: Image Style Editing with Multimodal Instruction (2024)
- Locate, Assign, Refine: Taming Customized Image Inpainting with Text-Subject Guidance (2024)
- ByteEdit: Boost, Comply and Accelerate Generative Image Editing (2024)
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Revolutionizing Image Editing: Adding Objects by Removing Them First!
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/