arxiv:2510.21583

Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation

Published on Oct 24
· Submitted by Yu Changqian on Oct 27
Authors:
Kai Wu, et al.
Abstract

Chunk-GRPO, a chunk-level optimization approach for text-to-image generation, improves preference alignment and image quality by addressing inaccurate advantage attribution and the neglect of temporal dynamics.

AI-generated summary

Group Relative Policy Optimization (GRPO) has shown strong potential for flow-matching-based text-to-image (T2I) generation, but it faces two key limitations: inaccurate advantage attribution and the neglect of the temporal dynamics of generation. In this work, we argue that shifting the optimization paradigm from the step level to the chunk level can effectively alleviate these issues. Building on this idea, we propose Chunk-GRPO, the first chunk-level GRPO-based approach for T2I generation. The key insight is to group consecutive steps into coherent chunks that capture the intrinsic temporal dynamics of flow matching, and to optimize policies at the chunk level. In addition, we introduce an optional weighted sampling strategy to further enhance performance. Extensive experiments show that Chunk-GRPO achieves superior results in both preference alignment and image quality, highlighting the promise of chunk-level optimization for GRPO-based methods.
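The abstract does not spell out the exact objective, but the core idea it describes (group-relative advantages attributed per chunk of consecutive sampling steps rather than per step) can be sketched as follows. This is a minimal illustration only, assuming a PPO-style clipped GRPO objective and per-step log-probabilities from the flow-matching sampler; all names (`chunk_grpo_loss`, `chunk_bounds`, etc.) are hypothetical and not taken from the paper's code.

```python
# Hypothetical sketch of chunk-level GRPO advantage attribution.
# Assumptions (not from the paper's released code):
#   - a group of G images was sampled for the same prompt,
#   - rewards: (G,) scalar preference scores, one per image,
#   - logp_new / logp_old: (G, T) per-step log-probs of the flow-matching
#     sampler under the current policy and the sampling policy,
#   - chunk_bounds: list of (start, end) step indices defining the chunks.
import torch

def chunk_grpo_loss(logp_new, logp_old, rewards, chunk_bounds, clip_eps=0.2):
    # Group-relative advantage: normalize rewards within the group (GRPO).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)      # (G,)

    losses = []
    for start, end in chunk_bounds:
        # Aggregate per-step log-probs over the chunk, so the importance
        # ratio and the clipping are applied once per chunk, not per step.
        ratio = (logp_new[:, start:end].sum(dim=1)
                 - logp_old[:, start:end].sum(dim=1)).exp()        # (G,)
        unclipped = ratio * adv
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
        losses.append(-torch.min(unclipped, clipped).mean())
    return torch.stack(losses).mean()
```

Setting each chunk to a single step recovers step-level GRPO; the chunk boundaries would instead be chosen to follow the temporal dynamics of the flow-matching trajectory, as the paper proposes.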


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2510.21583 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2510.21583 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2510.21583 in a Space README.md to link it from this page.

Collections including this paper 2