MotionRAG: Motion Retrieval-Augmented Image-to-Video Generation
Abstract
MotionRAG enhances video generation by integrating motion priors from reference videos using a retrieval-augmented framework, improving motion realism with negligible computational overhead.
Image-to-video generation has made remarkable progress with advances in diffusion models, yet generating videos with realistic motion remains highly challenging. This difficulty arises from the complexity of accurately modeling motion, which involves capturing physical constraints, object interactions, and domain-specific dynamics that do not generalize easily across diverse scenarios. To address this, we propose MotionRAG, a retrieval-augmented framework that enhances motion realism by adapting motion priors from relevant reference videos through Context-Aware Motion Adaptation (CAMA). The key technical innovations include: (i) a retrieval-based pipeline that extracts high-level motion features using a video encoder and specialized resamplers to distill semantic motion representations; (ii) an in-context learning approach to motion adaptation, implemented with a causal transformer architecture; (iii) an attention-based motion injection adapter that seamlessly integrates the transferred motion features into pretrained video diffusion models. Extensive experiments demonstrate that our method achieves significant improvements across multiple domains and base models, all with negligible computational overhead at inference time. Furthermore, our modular design enables zero-shot generalization to new domains by simply updating the retrieval database, without retraining any components. This work strengthens a core capability of video generation systems by enabling the effective retrieval and transfer of motion priors, facilitating the synthesis of realistic motion dynamics.
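The abstract names three components: resampler-based motion feature extraction, a causal transformer for in-context motion adaptation (CAMA), and an attention-based adapter that injects motion into a frozen diffusion backbone. The sketch below is a minimal, hypothetical PyTorch illustration of how such a pipeline could be wired together; all class names, dimensions, and architectural details are illustrative assumptions and not the authors' released implementation.

```python
# Illustrative sketch only: module names, token counts, and dimensions are
# assumptions inferred from the abstract, not the paper's actual code.
import torch
import torch.nn as nn


class MotionResampler(nn.Module):
    """Distills frame-level video features into a fixed set of motion tokens
    via learned queries and cross-attention (Perceiver-style resampling)."""
    def __init__(self, dim=512, num_tokens=16, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, video_feats):                    # (B, T*P, dim)
        q = self.queries.unsqueeze(0).expand(video_feats.size(0), -1, -1)
        tokens, _ = self.attn(q, video_feats, video_feats)
        return tokens + self.ff(tokens)                # (B, num_tokens, dim)


class CausalMotionAdapter(nn.Module):
    """Context-Aware Motion Adaptation sketched as in-context learning:
    a causal transformer reads retrieved motion tokens followed by the
    target tokens and emits adapted motion tokens for the target clip."""
    def __init__(self, dim=512, depth=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)

    def forward(self, retrieved_tokens, target_tokens):
        seq = torch.cat([retrieved_tokens, target_tokens], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(
            seq.size(1)).to(seq.device)
        out = self.transformer(seq, mask=mask)
        return out[:, -target_tokens.size(1):]         # adapted motion tokens


class MotionInjectionAdapter(nn.Module):
    """Injects adapted motion tokens into a frozen video diffusion backbone
    through an additional residual cross-attention path."""
    def __init__(self, hidden_dim=320, motion_dim=512, num_heads=8):
        super().__init__()
        self.to_kv = nn.Linear(motion_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                          batch_first=True)

    def forward(self, hidden_states, motion_tokens):   # (B, L, hidden_dim)
        kv = self.to_kv(motion_tokens)
        out, _ = self.attn(hidden_states, kv, kv)
        return hidden_states + out                     # residual injection
```

In a design like this, the retrieval database sits outside the trained modules, which is consistent with the abstract's claim that zero-shot generalization to new domains only requires swapping the reference videos, with no retraining of any component.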
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MoCo: Motion-Consistent Human Video Generation via Structure-Appearance Decoupling (2025)
- VimoRAG: Video-based Retrieval-augmented 3D Motion Generation for Motion Language Models (2025)
- SSG-Dit: A Spatial Signal Guided Framework for Controllable Video Generation (2025)
- Lynx: Towards High-Fidelity Personalized Video Generation (2025)
- Versatile Transition Generation with Image-to-Video Diffusion (2025)
- MotionFlow: Learning Implicit Motion Flow for Complex Camera Trajectory Control in Video Generation (2025)
- MV-RAG: Retrieval Augmented Multiview Diffusion (2025)