arxiv:2511.17487

Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models

Published on Nov 21 · Submitted by Mark Endo on Nov 24
Authors:

Abstract

Reducing the capacity of large language models disproportionately impacts visual capabilities in multimodal systems, but visual extraction tuning combined with step-by-step reasoning improves efficiency and performance.

AI-generated summary

Scaling up multimodal models has enabled remarkable advances in visual understanding and reasoning, but practical demands call for smaller, efficient systems. In this work, we conduct a principled analysis of downscaling intelligence in multimodal models, examining how reduced large language model (LLM) capacity affects multimodal capabilities. Our initial findings reveal an interesting trend: LLM downscaling disproportionately affects visual capabilities, rather than abilities inherited from the LLM. We then examine whether this drop mainly reflects the expected decline in visual reasoning or a more fundamental loss of perceptual abilities. Isolating the effect of LLM downscaling on perception, we find performance still drops sharply, often matching or exceeding the impact on reasoning. To address this bottleneck, we introduce visual extraction tuning, which explicitly trains the model to extract instruction-relevant visual details consistently across tasks. With these extracted visual details, we then apply step-by-step reasoning to generate answers. Together, these components form our Extract+Think approach, setting a new standard for efficiency and performance in this space.
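The abstract describes Extract+Think as a two-stage pipeline: first extract the instruction-relevant visual details (the skill targeted by visual extraction tuning), then apply step-by-step reasoning over those details to produce the answer. Below is a minimal inference-time sketch of that flow; the prompts, function names, and the `vlm_generate` interface are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the two-stage Extract+Think inference flow described
# in the abstract. Prompts and interfaces are assumptions for illustration.

from typing import Callable

# `vlm_generate(image, prompt) -> str` stands in for any small multimodal
# model's text-generation call, wrapped behind a single function.
VLMGenerate = Callable[[object, str], str]


def extract_visual_details(vlm_generate: VLMGenerate, image, instruction: str) -> str:
    """Stage 1: visual extraction -- list only the instruction-relevant
    visual details, before any reasoning."""
    prompt = (
        "List the visual details from the image that are relevant to the "
        f"following instruction, without answering it yet.\nInstruction: {instruction}"
    )
    return vlm_generate(image, prompt)


def think_and_answer(vlm_generate: VLMGenerate, image, instruction: str, details: str) -> str:
    """Stage 2: step-by-step reasoning over the extracted details."""
    prompt = (
        f"Extracted visual details:\n{details}\n\n"
        f"Instruction: {instruction}\n"
        "Reason step by step over the extracted details, then give the final answer."
    )
    return vlm_generate(image, prompt)


def extract_then_think(vlm_generate: VLMGenerate, image, instruction: str) -> str:
    """Extract+Think: perception first, reasoning second."""
    details = extract_visual_details(vlm_generate, image, instruction)
    return think_and_answer(vlm_generate, image, instruction, details)
```

Separating perception (extraction) from reasoning in this way matches the paper's finding that downscaled LLMs lose perceptual ability, not just reasoning ability, so surfacing the visual evidence explicitly before reasoning is the design choice the approach rests on.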


Models citing this paper 4

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 1