arxiv:2410.11817

Improving Long-Text Alignment for Text-to-Image Diffusion Models

Published on Oct 15
· Submitted by luping-liu on Oct 17

Abstract

The rapid advancement of text-to-image (T2I) diffusion models has enabled them to generate unprecedented results from given texts. However, as text inputs become longer, existing encoding methods like CLIP face limitations, and aligning the generated images with long texts becomes challenging. To tackle these issues, we propose LongAlign, which includes a segment-level encoding method for processing long texts and a decomposed preference optimization method for effective alignment training. For segment-level encoding, long texts are divided into multiple segments and processed separately. This method overcomes the maximum input length limits of pretrained encoding models. For preference optimization, we provide decomposed CLIP-based preference models to fine-tune diffusion models. Specifically, to utilize CLIP-based preference models for T2I alignment, we delve into their scoring mechanisms and find that the preference scores can be decomposed into two components: a text-relevant part that measures T2I alignment and a text-irrelevant part that assesses other visual aspects of human preference. Additionally, we find that the text-irrelevant part contributes to a common overfitting problem during fine-tuning. To address this, we propose a reweighting strategy that assigns different weights to these two components, thereby reducing overfitting and enhancing alignment. After fine-tuning 512×512 Stable Diffusion (SD) v1.5 for about 20 hours using our method, the fine-tuned SD outperforms stronger foundation models in T2I alignment, such as PixArt-α and Kandinsky v2.2. The code is available at https://github.com/luping-liu/LongAlign.
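To make the segment-level encoding concrete, below is a minimal sketch assuming sentence-level splitting and the Hugging Face `transformers` CLIP text encoder; the paper's exact segmentation and segment-merging rules (e.g. how special tokens are handled across segments) may differ.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def encode_long_text(text: str, max_len: int = 77) -> torch.Tensor:
    """Split a long prompt into sentence-level segments, encode each one
    within CLIP's 77-token window, and concatenate the per-token hidden
    states so the diffusion model can cross-attend to all of them."""
    segments = [s.strip() for s in text.split(".") if s.strip()]
    states = []
    for seg in segments:
        tokens = tokenizer(seg, truncation=True, max_length=max_len,
                           padding="max_length", return_tensors="pt")
        states.append(text_encoder(**tokens).last_hidden_state)  # (1, 77, D)
    # (1, n_segments * 77, D): one long conditioning sequence
    return torch.cat(states, dim=1)
```

This sidesteps the 77-token limit because each segment is encoded independently; only the diffusion model's cross-attention sees the full concatenated sequence.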

Community

Paper author · Paper submitter

We invite you to follow our work: Improving Long-Text Alignment for Text-to-Image Diffusion Models (LongAlign).

Q: How can we effectively enhance the text alignment of text-to-image models?
A: Using CLIP to provide additional training signals can improve text alignment, but it is prone to overfitting.
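As a rough illustration of that training signal, a naive CLIP alignment loss might look like the sketch below. This is a hedged sketch of the general idea, not the paper's exact objective; used as-is, it also rewards the text-irrelevant component of the score, which is where the overfitting comes from.

```python
import torch
import torch.nn.functional as F

def clip_alignment_loss(image_features: torch.Tensor,
                        text_features: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between CLIP image and text embeddings.
    Minimizing this during fine-tuning pushes generations toward the
    prompt, but it optimizes the raw, undecomposed preference score."""
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)
    return -(img * txt).sum(-1).mean()
```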

Q: How do you address the overfitting problem?
A: It arises from the conical distribution of CLIP representations (P1). To address it, we develop a decomposition and reweighting of the CLIP representations, which significantly improves text alignment and reduces overfitting.
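A minimal sketch of the decomposition and reweighting, assuming L2-normalized CLIP embeddings and a mean text direction `mean_dir` estimated offline from many prompt embeddings; the weights here are illustrative, not the paper's values.

```python
import torch

def reweighted_preference_score(img_emb: torch.Tensor,
                                txt_emb: torch.Tensor,
                                mean_dir: torch.Tensor,
                                w_rel: float = 1.0,
                                w_irrel: float = 0.3) -> torch.Tensor:
    """Decompose <img, txt> into a text-irrelevant part (the projection of
    the text embedding onto the mean direction shared by all prompts, a
    consequence of the conical distribution) and a text-relevant residual,
    then recombine them with separate weights."""
    coef = txt_emb @ mean_dir                    # (B,)
    txt_irrel = coef[:, None] * mean_dir         # along the mean direction
    txt_rel = txt_emb - txt_irrel                # orthogonal residual
    score_rel = (img_emb * txt_rel).sum(-1)      # measures T2I alignment
    score_irrel = (img_emb * txt_irrel).sum(-1)  # text-agnostic preference
    return w_rel * score_rel + w_irrel * score_irrel
```

With `w_rel = w_irrel = 1` this reproduces the original score exactly, since the two parts sum to `<img, txt>`; down-weighting `w_irrel` removes the incentive to overfit text-agnostic visual features.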

Q: What are the final results of your method?
A: After training with LongAlign, SD v1.5 matched or even exceeded the level of PixArt-α (P2), highlighting the effectiveness of our approach.

[Image P1]

[Image P2]

