Improving Long-Text Alignment for Text-to-Image Diffusion Models
Abstract
The rapid advancement of text-to-image (T2I) diffusion models has enabled them to generate unprecedented results from text prompts. However, as text inputs become longer, existing encoding methods like CLIP face limitations, and aligning the generated images with long texts becomes challenging. To tackle these issues, we propose LongAlign, which includes a segment-level encoding method for processing long texts and a decomposed preference optimization method for effective alignment training. For segment-level encoding, long texts are divided into multiple segments and processed separately. This method overcomes the maximum input length limits of pretrained encoding models. For preference optimization, we provide decomposed CLIP-based preference models to fine-tune diffusion models. Specifically, to utilize CLIP-based preference models for T2I alignment, we delve into their scoring mechanisms and find that the preference scores can be decomposed into two components: a text-relevant part that measures T2I alignment and a text-irrelevant part that assesses other visual aspects of human preference. Additionally, we find that the text-irrelevant part contributes to a common overfitting problem during fine-tuning. To address this, we propose a reweighting strategy that assigns different weights to these two components, thereby reducing overfitting and enhancing alignment. After fine-tuning 512×512 Stable Diffusion (SD) v1.5 for about 20 hours using our method, the fine-tuned SD outperforms stronger foundation models in T2I alignment, such as PixArt-alpha and Kandinsky v2.2. The code is available at https://github.com/luping-liu/LongAlign.
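The segment-level encoding idea can be sketched as follows. This is an illustrative sketch, not the paper's actual implementation: `encode_fn` is a hypothetical stand-in for a pretrained text encoder such as CLIP's, and whitespace splitting stands in for a real subword tokenizer. The long text is split into segments that each fit within the encoder's input limit, each segment is encoded separately, and the per-segment outputs are concatenated.

```python
def segment_encode(text, encode_fn, max_tokens=77):
    """Encode a long text by splitting it into encoder-sized segments.

    `encode_fn` (hypothetical) maps a text segment to a list of
    per-token embeddings; 77 is CLIP's maximum input length.
    """
    tokens = text.split()  # stand-in for a real subword tokenizer
    segments = [
        " ".join(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
    # Encode each segment independently, then concatenate the
    # per-token embeddings along the sequence dimension.
    return [emb for seg in segments for emb in encode_fn(seg)]
```

In practice the concatenated sequence is then consumed by the diffusion model's cross-attention layers, which have no hard length limit.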
Community
We invite you to follow our work: Improving Long-Text Alignment for Text-to-Image Diffusion Models (LongAlign).
Q: How can we effectively enhance the text alignment of text-to-image models?
A: Using CLIP to provide additional training signals can improve text alignment, but it is prone to overfitting.
Q: How do you address the overfitting problem?
A: The overfitting arises from the conical distribution of CLIP representations (P1). To solve this, we develop a decomposition and reweighting of the CLIP representations, which significantly improves text alignment and reduces overfitting.
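A minimal sketch of the decomposition and reweighting idea, under stated assumptions: because CLIP text embeddings lie in a narrow cone, they share a common direction, and the part of the preference score carried by that direction is largely text-irrelevant. The function names, the `common_dir` input, and the weight `w=0.3` below are all illustrative assumptions, not the paper's exact formulation.

```python
def dot(u, v):
    # plain dot product over Python lists
    return sum(a * b for a, b in zip(u, v))

def reweighted_score(img_emb, txt_emb, common_dir, w=0.3):
    """Decompose a CLIP-style preference score and down-weight its
    text-irrelevant component (hypothetical sketch).

    `common_dir` is a unit vector along the shared direction of the
    conical text-embedding distribution; `w` < 1 reduces the
    contribution that causes overfitting.
    """
    # Text-irrelevant part: the score carried by the common direction
    # shared by (almost) all text embeddings.
    alpha = dot(txt_emb, common_dir)
    irrelevant = alpha * dot(img_emb, common_dir)
    # Text-relevant part: the remainder of the full score.
    relevant = dot(img_emb, txt_emb) - irrelevant
    return relevant + w * irrelevant
```

With `w=1.0` this reduces to the ordinary inner-product score; shrinking `w` emphasizes the text-relevant component during fine-tuning.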
Q: What are the final results of your method?
A: After training with LongAlign, SD v1.5 matches or even exceeds PixArt-alpha (P2), highlighting the effectiveness of our approach.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder (2024)
- CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization (2024)
- Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training (2024)
- UniFashion: A Unified Vision-Language Model for Multimodal Fashion Retrieval and Generation (2024)
- Learning to Customize Text-to-Image Diffusion In Diverse Context (2024)
Models citing this paper 1