SplAttN: Bridging 2D and 3D with Gaussian Soft Splatting and Attention for Point Cloud Completion
Abstract
SplAttN addresses cross-modal entropy collapse in point cloud completion by replacing hard projection with differentiable Gaussian splatting for dense image representation, demonstrating superior performance on multiple benchmarks.
Although multi-modal learning has advanced point cloud completion, the theoretical mechanisms remain unclear. Recent works attribute success to the connection between modalities, yet we identify that standard hard projection severs this connection: projecting a sparse point cloud onto the image plane yields an extremely sparse support, which hinders visual prior propagation, a failure mode we term Cross-Modal Entropy Collapse. To address this practical limitation, we propose SplAttN, which replaces hard projection with Differentiable Gaussian Splatting to produce a dense, continuous image-plane representation. By reformulating projection as continuous density estimation, SplAttN avoids collapsed sparse support, facilitates gradient flow, and improves cross-modal connection learnability. Extensive experiments show that SplAttN achieves state-of-the-art performance on PCN and ShapeNet-55/34. Crucially, we utilize the real-world KITTI benchmark as a stress test for multi-modal reliance. Counterfactual evaluation reveals that while baselines degenerate into unimodal template retrievers insensitive to visual removal, SplAttN maintains a robust dependency on visual cues, validating that our method establishes an effective cross-modal connection. Code is available at https://github.com/zay002/SplAttN.
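For intuition, the splatting step described in the abstract can be viewed as kernel density estimation on the image plane: each projected point deposits a 2D Gaussian rather than lighting a single hard pixel, so the resulting map is dense and differentiable in the point coordinates. The following is a minimal, hypothetical PyTorch sketch of that idea (not the authors' released implementation; `gaussian_soft_splat` and its parameters are illustrative):

```python
import torch

def gaussian_soft_splat(uv, H, W, sigma=2.0):
    """Splat N projected points (pixel coords) into a dense (H, W) density map.

    uv:    (N, 2) tensor of continuous pixel coordinates (u, v).
    sigma: Gaussian bandwidth in pixels; larger values widen the support.

    Unlike hard projection (one pixel per point), every point contributes
    a 2D Gaussian, so gradients flow back to `uv` from every pixel within
    a few sigmas of the point.
    """
    ys = torch.arange(H, dtype=uv.dtype, device=uv.device)
    xs = torch.arange(W, dtype=uv.dtype, device=uv.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")      # each (H, W)
    # Squared distance from every pixel center to every point: (N, H, W)
    d2 = (grid_x[None] - uv[:, 0, None, None]) ** 2 \
       + (grid_y[None] - uv[:, 1, None, None]) ** 2
    kernels = torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernels.sum(dim=0)       # (H, W): continuous density, not a binary mask

# Toy usage: 64 points splatted onto a 128x128 plane.
uv = (torch.rand(64, 2) * 128).requires_grad_()
density = gaussian_soft_splat(uv, 128, 128)
density.sum().backward()            # gradients reach every point position
```

Note that this naive version materializes an (N, H, W) tensor for clarity; a practical splatter would restrict each kernel to a local window around its point.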
Community
Hi everyone! I'm one of the authors of SplAttN. In this work, we tackle a common failure mode in image-guided point cloud completion: "Cross-Modal Entropy Collapse." We found that hard 3D-to-2D projection often makes the image plane too sparse, effectively breaking the 2D-3D connection.
Our solution is straightforward: we replace hard projection with differentiable Gaussian soft splatting. This produces dense, continuous multi-view maps that allow visual priors and gradients to flow much more reliably. Architecture-wise, we use a TinyViT image encoder and a two-stage SDG decoder for coarse-to-fine completion.
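To make the "entropy collapse" intuition concrete, here is a toy comparison of the two projection schemes (illustrative code only, not the released implementation; it reuses `gaussian_soft_splat` from the sketch above, and the point count and thresholds are arbitrary):

```python
import torch

def hard_project(uv, H, W):
    """Hard projection baseline: each point occupies exactly one pixel."""
    img = torch.zeros(H, W)
    u = uv[:, 0].long().clamp(0, W - 1)
    v = uv[:, 1].long().clamp(0, H - 1)
    img[v, u] = 1.0
    return img

uv = torch.rand(1024, 2) * 128                        # sparse partial scan, 128x128 plane
hard = hard_project(uv, 128, 128)
soft = gaussian_soft_splat(uv, 128, 128, sigma=3.0)   # from the sketch above

print(f"hard support: {(hard > 0).float().mean():.2%} of pixels")
print(f"soft support: {(soft > 1e-3).float().mean():.2%} of pixels")
```

With 1024 points on a 128x128 plane, the hard map activates only a few percent of pixels, while the soft map covers nearly all of them, giving the attention layers dense evidence to correlate with image features.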
We also included a counterfactual stress test on KITTI to prove the model genuinely leverages visual cues instead of just "hallucinating" from point cloud priors. We’ve released the code and checkpoints for our ICML 2026 Spotlight; feel free to check it out and let us know what you think! 🚀
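For readers curious what such a counterfactual probe looks like, a minimal sketch follows (hypothetical interface: `model` and `chamfer` stand in for a completion network and a Chamfer-distance metric, and zeroing the image is one possible visual-removal ablation; the paper defines the actual protocol):

```python
import torch

@torch.no_grad()
def visual_reliance(model, partial, image, chamfer):
    """Counterfactual probe: complete the same partial cloud with and
    without the image, then measure how far the two completions diverge."""
    with_vision    = model(partial, image)
    without_vision = model(partial, torch.zeros_like(image))
    return chamfer(with_vision, without_vision)

# A near-zero return value means the model ignores the image entirely
# (the "unimodal template retriever" failure mode); a robustly nonzero
# value indicates genuine dependency on visual cues.
```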
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, as recommended by the Semantic Scholar API:
- CFSR: Geometry-Conditioned Shadow Removal via Physical Disentanglement (2026)
- GSCompleter: A Distillation-Free Plugin for Metric-Aware 3D Gaussian Splatting Completion in Seconds (2026)
- LESV: Language Embedded Sparse Voxel Fusion for Open-Vocabulary 3D Scene Understanding (2026)
- SGR-OCC: Evolving Monocular Priors for Embodied 3D Occupancy Prediction via Soft-Gating Lifting and Semantic-Adaptive Geometric Refinement (2026)
- AirSplat: Alignment and Rating for Robust Feed-Forward 3D Gaussian Splatting (2026)
- ViewSplat: View-Adaptive Dynamic Gaussian Splatting for Feed-Forward Synthesis (2026)
- Ψ-Map: Panoptic Surface Integrated Mapping Enables Real2Sim Transfer (2026)