Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models
Abstract
Reducing the capacity of large language models disproportionately impacts visual capabilities in multimodal systems, but visual extraction tuning combined with step-by-step reasoning improves efficiency and performance.
Scaling up multimodal models has enabled remarkable advances in visual understanding and reasoning, but practical demands call for smaller, efficient systems. In this work, we conduct a principled analysis of downscaling intelligence in multimodal models, examining how reduced large language model (LLM) capacity affects multimodal capabilities. Our initial findings reveal an interesting trend: LLM downscaling disproportionately affects visual capabilities, rather than abilities inherited from the LLM. We then examine whether this drop mainly reflects the expected decline in visual reasoning or a more fundamental loss of perceptual abilities. Isolating the effect of LLM downscaling on perception, we find performance still drops sharply, often matching or exceeding the impact on reasoning. To address this bottleneck, we introduce visual extraction tuning, which explicitly trains the model to extract instruction-relevant visual details consistently across tasks. With these extracted visual details, we then apply step-by-step reasoning to generate answers. Together, these components form our Extract+Think approach, setting a new standard for efficiency and performance in this space.
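To make the two-stage Extract+Think idea concrete, below is a minimal sketch of an extract-then-reason inference loop. This is an illustration under assumptions, not the authors' implementation: `mm_generate` is a hypothetical placeholder for any multimodal model call, and the prompt wording is invented for demonstration.

```python
# Minimal sketch of a two-stage "Extract+Think" style pipeline (illustrative only).
# `mm_generate` is a hypothetical stand-in for a small multimodal model's
# generation call; plug in your own model here.

def mm_generate(prompt: str, image_path: str) -> str:
    """Placeholder for a multimodal model inference call."""
    raise NotImplementedError("Replace with your own model's generate() call.")


def extract_then_think(question: str, image_path: str) -> str:
    # Stage 1: visual extraction -- ask the model to list only the
    # instruction-relevant visual details, without answering yet.
    extraction_prompt = (
        "List the visual details in the image that are relevant to the "
        f"following question. Do not answer it yet.\nQuestion: {question}"
    )
    details = mm_generate(extraction_prompt, image_path)

    # Stage 2: step-by-step reasoning over the extracted details to
    # produce the final answer.
    reasoning_prompt = (
        f"Question: {question}\n"
        f"Relevant visual details: {details}\n"
        "Reason step by step over these details, then give the final answer."
    )
    return mm_generate(reasoning_prompt, image_path)
```

Separating extraction from reasoning in this way mirrors the paper's premise that perception, not just reasoning, is a bottleneck for small models; the actual visual extraction tuning described in the abstract is a training-time procedure, which this inference-time sketch does not reproduce.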
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Decoupling Reasoning and Perception: An LLM-LMM Framework for Faithful Visual Reasoning (2025)
- BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception (2025)
- VTPerception-R1: Enhancing Multimodal Reasoning via Explicit Visual and Textual Perceptual Grounding (2025)
- Diagnosing Visual Reasoning: Challenges, Insights, and a Path Forward (2025)
- From Perception to Cognition: A Survey of Vision-Language Interactive Reasoning in Multimodal Large Language Models (2025)
- Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training (2025)
- Agentic Jigsaw Interaction Learning for Enhancing Visual Perception and Reasoning in Vision-Language Models (2025)
Models citing this paper 4
Datasets citing this paper 1
Spaces citing this paper 0