OutfitAnyone: Ultra-high Quality Virtual Try-On for Any Clothing and Any Person
Abstract
Virtual Try-On (VTON) has become a transformative technology, empowering users to experiment with fashion without ever having to physically try on clothing. However, existing methods often struggle to generate high-fidelity, detail-consistent results. While diffusion models, such as the Stable Diffusion series, have shown their capability in creating high-quality and photorealistic images, they encounter formidable challenges in conditional generation scenarios like VTON. Specifically, these models struggle to maintain a balance between control and consistency when generating images for virtual clothing trials. OutfitAnyone addresses these limitations by leveraging a two-stream conditional diffusion model, enabling it to adeptly handle garment deformation for more lifelike results. It is further distinguished by its scalability, modulating factors such as pose and body shape, and by its broad applicability, extending from anime to in-the-wild images. OutfitAnyone's performance in diverse scenarios underscores its utility and readiness for real-world deployment. For more details and animated results, please see https://humanaigc.github.io/outfit-anyone/.
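The core idea named in the abstract is a two-stream conditional diffusion model for try-on. Below is a minimal, hypothetical PyTorch sketch of how such a denoiser could be wired: one stream encodes the noisy person latent together with pose/body-shape conditions, a second stream encodes the reference garment, and the two are fused before predicting the noise residual. The class names, fusion scheme, and tensor shapes are illustrative assumptions, not the architecture specified in the paper.

```python
import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """Tiny convolutional encoder standing in for one stream of the denoiser."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TwoStreamConditionalDenoiser(nn.Module):
    """Hypothetical two-stream denoiser: a person stream (noisy latent + pose map)
    and a garment stream (reference garment latent), fused channel-wise."""
    def __init__(self, latent_channels: int = 4, cond_channels: int = 3, hidden: int = 64):
        super().__init__()
        self.person_stream = StreamEncoder(latent_channels + cond_channels, hidden)
        self.garment_stream = StreamEncoder(latent_channels, hidden)
        self.fuse = nn.Conv2d(2 * hidden, hidden, 1)  # simple 1x1 fusion of the two streams
        self.out = nn.Conv2d(hidden, latent_channels, 3, padding=1)
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))

    def forward(self, noisy_latent, pose_map, garment_latent, t):
        # Embed the diffusion timestep and broadcast it over spatial locations.
        temb = self.time_embed(t.view(-1, 1).float())[:, :, None, None]
        person = self.person_stream(torch.cat([noisy_latent, pose_map], dim=1)) + temb
        garment = self.garment_stream(garment_latent)
        fused = self.fuse(torch.cat([person, garment], dim=1))
        return self.out(fused)  # predicted noise for the person latent


if __name__ == "__main__":
    model = TwoStreamConditionalDenoiser()
    noisy = torch.randn(1, 4, 32, 32)    # noisy person latent
    pose = torch.randn(1, 3, 32, 32)     # pose / body-shape condition map
    garment = torch.randn(1, 4, 32, 32)  # encoded reference garment
    t = torch.tensor([500])              # diffusion timestep
    print(model(noisy, pose, garment, t).shape)  # torch.Size([1, 4, 32, 32])
```

In this sketch the garment stream is kept separate from the person stream so that garment detail can be injected at fusion time rather than flattened into a single conditioning image; how the actual model performs this conditioning is described in the paper itself.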
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- IMAGDressing-v1: Customizable Virtual Dressing (2024)
- WildVidFit: Video Virtual Try-On in the Wild via Image-Based Controlled Diffusion Models (2024)
- AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario (2024)
- Self-Supervised Vision Transformer for Enhanced Virtual Clothes Try-On (2024)
- VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation (2024)