arxiv:2511.21541

Video Generation Models Are Good Latent Reward Models

Published on Nov 26
· Submitted by Guozhen Zhang on Nov 28
#1 Paper of the day
Abstract

PRFL optimizes video generation preferences in latent space, improving alignment with human preferences while reducing memory consumption and training time.

AI-generated summary

Reward feedback learning (ReFL) has proven effective for aligning image generation with human preferences. However, its extension to video generation faces significant challenges. Existing video reward models rely on vision-language models designed for pixel-space inputs, confining ReFL optimization to near-complete denoising steps after computationally expensive VAE decoding. This pixel-space approach incurs substantial memory overhead and increased training time, and its late-stage optimization lacks early-stage supervision, refining only visual quality rather than fundamental motion dynamics and structural coherence. In this work, we show that pre-trained video generation models are naturally suited for reward modeling in the noisy latent space, as they are explicitly designed to process noisy latent representations at arbitrary timesteps and inherently preserve temporal information through their sequential modeling capabilities. Accordingly, we propose Process Reward Feedback Learning (PRFL), a framework that conducts preference optimization entirely in latent space, enabling efficient gradient backpropagation throughout the full denoising chain without VAE decoding. Extensive experiments demonstrate that PRFL significantly improves alignment with human preferences, while achieving substantial reductions in memory consumption and training time compared to RGB ReFL.
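For a rough sense of why staying in latent space is cheaper, compare the number of elements a reward model must process per video. The shapes below assume a typical video VAE (8x spatial and 4x temporal compression, 16 latent channels); they are illustrative assumptions, not the paper's actual configuration.

```python
# Back-of-envelope comparison of tensor sizes seen by the reward model in
# pixel-space ReFL vs latent-space PRFL. Shapes are illustrative (a common
# 8x spatial / 4x temporal VAE compression with 16 latent channels), not
# the exact setup used in the paper.

def numel(shape):
    """Number of elements in a tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n

frames, height, width = 49, 480, 832          # decoded RGB video
pixel_shape = (frames, 3, height, width)      # what an RGB reward model sees

latent_shape = (                              # what a latent reward model sees
    (frames - 1) // 4 + 1,                    # 4x temporal compression
    16,                                       # latent channels
    height // 8,                              # 8x spatial compression
    width // 8,
)

ratio = numel(pixel_shape) / numel(latent_shape)
print(f"pixel elements:  {numel(pixel_shape):,}")   # 58,705,920
print(f"latent elements: {numel(latent_shape):,}")  # 1,297,920
print(f"ratio: {ratio:.0f}x")                       # 45x
```

Under these assumed compression rates, every reward evaluation (and every backward pass through the reward model) touches roughly 45x fewer elements in latent space, before even counting the cost of the VAE decode itself.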

Community

From the paper author and submitter:

🎬 PRFL: Efficient Video Generation Alignment in Latent Space

We introduce Process Reward Feedback Learning (PRFL), a novel framework that enables efficient human preference alignment for video generation models—entirely in latent space!

Key Innovation: Instead of relying on expensive pixel-space reward models, we demonstrate that pre-trained video generation models themselves are excellent reward models. They naturally understand noisy latent representations at any timestep and preserve temporal information.

Why it matters:
✨ Full denoising chain optimization without VAE decoding
⚡ Significantly reduced memory & training time vs RGB-based ReFL
🎯 Better alignment with human preferences

This opens up new possibilities for scaling video generation alignment! Check out our paper and project page for demos.
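The idea above can be caricatured in a few lines. The sketch below is a deliberately tiny stand-in (a 1-D "latent", a linear denoiser, finite-difference gradients, and hypothetical names like `denoise_step` and `latent_reward`), not the paper's implementation; it only shows the shape of the approach: score the final latent with a reward and push that signal back through every denoising step, never decoding to pixel space.

```python
# Toy 1-D illustration of Process Reward Feedback Learning (PRFL).
# The latent, the denoiser, and the reward are hypothetical simplifications
# chosen to make the idea runnable: the reward is evaluated directly on the
# final latent, and its gradient is propagated through the *entire*
# denoising chain (here via finite differences, standing in for backprop)
# without ever decoding to pixel space.

def denoise_step(z, theta, alpha=0.1):
    """One denoising step: move the latent toward the model's prediction theta."""
    return z + alpha * (theta - z)

def rollout(z0, theta, num_steps):
    """Run the full denoising chain in latent space (no VAE decoding)."""
    z = z0
    for _ in range(num_steps):
        z = denoise_step(z, theta)
    return z

def latent_reward(z, target):
    """Latent-space reward: higher when the final latent is closer to a target."""
    return -(z - target) ** 2

def prfl_update(theta, z0, target, num_steps, lr=0.05, eps=1e-4):
    """One gradient-ascent step on the reward, differentiated through the
    whole denoising chain with central finite differences."""
    r_plus = latent_reward(rollout(z0, theta + eps, num_steps), target)
    r_minus = latent_reward(rollout(z0, theta - eps, num_steps), target)
    grad = (r_plus - r_minus) / (2 * eps)
    return theta + lr * grad

theta, z0, target, num_steps = 0.0, 5.0, 2.0, 10
for _ in range(200):
    theta = prfl_update(theta, z0, target, num_steps)

final = rollout(z0, theta, num_steps)
print(round(final, 2))  # → 2.0: the chain's output is steered to the reward's optimum
```

Because the reward sees intermediate-free latent rollouts rather than decoded frames, supervision reaches every step of the chain, not just the near-complete ones, which is the property the paper argues enables shaping motion and structure rather than only final visual quality.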

📄 Paper: https://arxiv.org/abs/2511.21541
🌐 Project: https://kululumi.github.io/PRFL/



I am very interested in your work. I was wondering if you have any plans to open-source the code in the near future.

