# SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models
🔥 Please check out our newly released SenseNova-SI-1.2-InternVL3-8B, which achieves state-of-the-art performance among open-source models of comparable size across eight recent spatial intelligence benchmarks: VSI, MMSI, MindCube, ViewSpatial, SITE, BLINK, 3DSRBench, and EmbSpatial-Bench.
## Overview
Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to building high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples organized under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.7% on VSI-Bench, 43.3% on MMSI, 85.6% on MindCube, 54.6% on ViewSpatial, and 50.1% on SITE, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En).

More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization enabled by diverse data training, examine the risks of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate potential downstream applications. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction. In the future, SenseNova-SI will be integrated with larger-scale in-house models.
## Release Information
To facilitate research in this area, as a first step, we have released a highly effective subset, SenseNova-SI-800K. This subset captures a substantial portion of the performance gains of the full SenseNova-SI dataset while remaining at a manageable scale for experimentation.
Building on this, we now release SenseNova-SI-1.1-InternVL3-8B-800K, a model trained exclusively on the SenseNova-SI-800K subset. This model is provided as a reference for researchers working with the 800K-scale dataset, enabling experiments and validation of scaling behaviors observed in SenseNova-SI.
Models trained on this subset demonstrate notable improvements over the base model and achieve competitive performance against strong spatial intelligence baselines. However, we emphasize that SenseNova-SI-1.1-InternVL3-8B-800K is released solely for research and scaling-law analysis purposes: it is neither the primary recommended model of the SenseNova-SI series, nor does it represent the full performance achievable with the complete dataset.
| Model | SI Dataset | VSI | MMSI | MindCube-Tiny | ViewSpatial | SITE |
|---|---|---|---|---|---|---|
| InternVL3-8B | - | 42.1 | 28.0 | 41.5 | 38.6 | 41.1 |
| VST-7B-SFT | VST-P-4.1M | 60.6 | 32.0 | 39.7 | 50.5 | 39.6 |
| Cambrian-S-7B | VSI-590K | 67.5 | 25.8 | 39.6 | 40.9 | 33.0 |
| SenseNova-SI-1.1-InternVL3-8B-800K | SenseNova-SI-800K | 60.9 | 36.4 | 56.9 | 52.5 | 47.7 |
| SenseNova-SI-1.1-InternVL3-8B | SenseNova-SI-8M | 68.7 | 43.3 | 85.6 | 54.6 | 47.7 |
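As a quick sanity check on the claim that the 800K subset captures a substantial portion of the full dataset's gains, one can average the five benchmark columns above (an unweighted mean, shown for illustration only):

```python
# Per-model scores from the table above: VSI, MMSI, MindCube-Tiny, ViewSpatial, SITE.
scores = {
    "InternVL3-8B":                       [42.1, 28.0, 41.5, 38.6, 41.1],
    "SenseNova-SI-1.1-InternVL3-8B-800K": [60.9, 36.4, 56.9, 52.5, 47.7],
    "SenseNova-SI-1.1-InternVL3-8B":      [68.7, 43.3, 85.6, 54.6, 47.7],
}

mean = {name: sum(v) / len(v) for name, v in scores.items()}
base = mean["InternVL3-8B"]
gain_800k = mean["SenseNova-SI-1.1-InternVL3-8B-800K"] - base
gain_full = mean["SenseNova-SI-1.1-InternVL3-8B"] - base

# Fraction of the full 8M-scale average gain recovered by the 800K subset.
print(round(gain_800k / gain_full, 2))  # → 0.58
```

By this rough measure, training on roughly 10% of the data recovers a bit over half of the average improvement of the full SenseNova-SI-8M dataset.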
## Installation
We recommend using [uv](https://docs.astral.sh/uv/getting-started/installation/#installing-uv) to manage the environment.
```shell
git clone [email protected]:OpenSenseNova/SenseNova-SI.git
cd SenseNova-SI/
uv sync --extra cu124  # or one of [cu118|cu121|cu124|cu126|cu128|cu129], depending on your CUDA version
uv sync
source .venv/bin/activate
```
## Hello World
Run a simple image-free query to verify the environment setup and download the model:
```shell
python example.py \
    --question "Hello" \
    --model_path sensenova/SenseNova-SI-1.1-InternVL3-8B
```
## Evaluation
To reproduce the benchmark results above, please refer to EASI to evaluate SenseNova-SI on mainstream spatial intelligence benchmarks.
## 🖊️ Citation
```bibtex
@article{sensenova-si,
  title   = {Scaling Spatial Intelligence with Multimodal Foundation Models},
  author  = {Cai, Zhongang and Wang, Ruisi and Gu, Chenyang and Pu, Fanyi and Xu, Junxiang and Wang, Yubo and Yin, Wanqi and Yang, Zhitao and Wei, Chen and Sun, Qingping and Zhou, Tongxi and Li, Jiaqi and Pang, Hui En and Qian, Oscar and Wei, Yukun and Lin, Zhiqian and Shi, Xuanke and Deng, Kewang and Han, Xiaoyang and Chen, Zukai and Fan, Xiangyu and Deng, Hanming and Lu, Lewei and Pan, Liang and Li, Bo and Liu, Ziwei and Wang, Quan and Lin, Dahua and Yang, Lei},
  journal = {arXiv preprint arXiv:2511.13719},
  year    = {2025}
}
```
Base model: OpenGVLab/InternVL3-8B-Pretrained