WEAVE: Unleashing and Benchmarking the In-context Interleaved Comprehension and Generation
Abstract
WEAVE introduces a comprehensive suite, comprising a large-scale dataset and a benchmark, to assess and improve multi-turn, context-dependent image generation and editing in unified multimodal models.
Recent advances in unified multimodal models (UMMs) have enabled impressive progress in visual comprehension and generation. However, existing datasets and benchmarks focus primarily on single-turn interactions, failing to capture the multi-turn, context-dependent nature of real-world image creation and editing. To address this gap, we present WEAVE, the first suite for in-context interleaved cross-modality comprehension and generation. Our suite consists of two complementary parts. WEAVE-100k is a large-scale dataset of 100K interleaved samples spanning over 370K dialogue turns and 500K images, covering comprehension, editing, and generation tasks that require reasoning over historical context. WEAVEBench is a human-annotated benchmark of 100 tasks based on 480 images, featuring a hybrid VLM-judge evaluation framework that scores outputs against both the reference image and the original image paired with its editing instruction, and assesses models' abilities in multi-turn generation, visual memory, and world-knowledge reasoning across diverse domains. Experiments demonstrate that training on WEAVE-100k improves vision comprehension, image editing, and comprehension-generation collaboration capabilities. Furthermore, it helps UMMs develop emergent visual-memory capabilities, while extensive evaluations on WEAVEBench expose the persistent limitations and challenges of current approaches in multi-turn, context-aware image generation and editing. We believe WEAVE provides a perspective and foundation for studying in-context interleaved comprehension and generation in the multimodal community.
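The abstract describes a hybrid VLM-judge evaluation that scores each output from two complementary views: agreement with the reference image, and faithfulness to the original image plus its editing instruction. The minimal sketch below illustrates one way such a judging step could be wired up; the `query_vlm` helper, the `hybrid_judge` name, the 0-10 rubric, and the equal weighting are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of a hybrid VLM-judge scoring step.
# Assumes a caller-supplied `query_vlm(images, prompt)` that returns a numeric 0-10 score.

def hybrid_judge(original_image, edit_instruction, generated_image, reference_image, query_vlm):
    """Score a multi-turn edit from two complementary views and combine them."""
    # View 1: compare the model output against the human-annotated reference image.
    reference_score = query_vlm(
        images=[reference_image, generated_image],
        prompt="Rate from 0 to 10 how closely the second image matches the first (reference) image.",
    )
    # View 2: check the output against the original image plus the editing instruction,
    # which rewards faithful edits even when they differ stylistically from the reference.
    instruction_score = query_vlm(
        images=[original_image, generated_image],
        prompt=(
            "Rate from 0 to 10 how well the second image applies the following edit "
            f"to the first image: {edit_instruction}"
        ),
    )
    # Equal weighting of the two views is an assumption, not the paper's reported setting.
    return 0.5 * reference_score + 0.5 * instruction_score
```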
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning (2025)
- UniREditBench: A Unified Reasoning-based Image Editing Benchmark (2025)
- OpenGPT-4o-Image: A Comprehensive Dataset for Advanced Image Generation and Editing (2025)
- GIR-Bench: Versatile Benchmark for Generating Images with Reasoning (2025)
- ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation (2025)
- BEAR: Benchmarking and Enhancing Multimodal Language Models for Atomic Embodied Capabilities (2025)
- InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue (2025)