---
license: cc-by-nc-4.0
---

# SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes

**CVPR 2025**

[![Website](https://img.shields.io/badge/%F0%9F%A4%8D%20Project%20-Website-blue)](https://tudelft3d.github.io/SUMParts/) [![GitHub Code](https://img.shields.io/badge/GitHub-Code-181717?style=flat&logo=github)](https://github.com/tudelft3d/SUM-Parts-Benchmarks.git) [![YouTube Video](https://img.shields.io/badge/🎥%20YouTube%20-Video-red)](https://youtu.be/CUi1Hf_GSlQ?si=AvghBzWzSCtXCllk) [![arXiv](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2503.15300) [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-green.svg)](https://creativecommons.org/licenses/by-nc/4.0/)

-----
*(Figure: dataset overview)*
**SUM Parts** provides part-level semantic segmentation of urban textured meshes, covering 2.5 km² with 21 classes. In the figure above, from left to right: textured mesh, face-based annotations, and texture-based annotations. The 21 classes are:

| Class | Icon | Class | Icon | Class | Icon |
|-------|------|-------|------|-------|------|
| unclassified | ![unclassified](assets/icons/unclassified.png) | terrain | ![terrain](assets/icons/terrain.png) | high vegetation | ![high vegetation](assets/icons/high_vegetation.png) |
| water | ![water](assets/icons/water.png) | car | ![car](assets/icons/car.png) | boat | ![boat](assets/icons/boat.png) |
| wall | ![wall](assets/icons/wall.png) | roof surface | ![roof surface](assets/icons/roof_surface.png) | facade surface | ![facade surface](assets/icons/facade_surface.png) |
| chimney | ![chimney](assets/icons/chimney.png) | dormer | ![dormer](assets/icons/dormer.png) | balcony | ![balcony](assets/icons/balcony.png) |
| roof installation | ![roof installation](assets/icons/roof_installation.png) | window | ![window](assets/icons/window.png) | door | ![door](assets/icons/door.png) |
| low vegetation | ![low vegetation](assets/icons/low_vegetation.png) | impervious surface | ![impervious surface](assets/icons/impervious_surface.png) | road | ![road](assets/icons/road.png) |
| road marking | ![road marking](assets/icons/road_marking.png) | cycle lane | ![cycle lane](assets/icons/cycle_lane.png) | sidewalk | ![sidewalk](assets/icons/sidewalk.png) |

## 📊 Benchmark Datasets

Our benchmark datasets include textured meshes and semantic point clouds sampled on the mesh surfaces using different methods. The textured meshes are stored as ASCII PLY files, while the semantic point clouds are stored as binary PLY files to save space.

This repository contains all data used in the [SUM Parts](https://openaccess.thecvf.com/content/CVPR2025/html/Gao_SUM_Parts_Benchmarking_Part-Level_Semantic_Segmentation_of_Urban_Meshes_CVPR_2025_paper.html) paper (see the directory-listing sketch after this list):

1. **Textured mesh data** (`mesh/`):
   - Subdivided into `train`, `validate`, and `test` sets
   - `train`/`validate`: textured meshes with mesh face labels + semantic texture masks
   - `test`: unlabeled data
2. **Sampled point clouds** (`pcl/`):
   - **Face-labeling track** (`face_labeling/`):
     - `face_cen_pcl/`: face-centroid sampling
     - `random_pcl/`: random sampling
     - `possion_pcl/`: Poisson-disk sampling
     - `texsp_pcl/`: superpixel-based sampling (proposed)
   - **Pixel-labeling track** (`pixel_labeling/`):
     - `random_pcl/`: random sampling
     - `possion_pcl/`: Poisson-disk sampling
     - `texsp_pcl/`: superpixel-based sampling (proposed)
   - All point clouds follow the same `train`/`validate`/`test` split as the meshes, with `test` being unlabeled
3. **Example data** (`demo/`):
   - One example per data type from (1) and (2)
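As a quick orientation, the sketch below enumerates the splits described above and counts the PLY files in each. This is a minimal sketch, not official tooling: the local root path is hypothetical, and the exact nesting of the split folders inside each sampling directory is our assumption based on the list above.

```python
from pathlib import Path

# Hypothetical local root; adjust to wherever the dataset was downloaded.
root = Path("SUM-Parts")

# Directory layout as described in the list above (split nesting assumed).
tracks = {
    "mesh": ["."],
    "pcl/face_labeling": ["face_cen_pcl", "random_pcl", "possion_pcl", "texsp_pcl"],
    "pcl/pixel_labeling": ["random_pcl", "possion_pcl", "texsp_pcl"],
}

for track, subdirs in tracks.items():
    for sub in subdirs:
        for split in ("train", "validate", "test"):
            split_dir = root / track / sub / split
            n = len(list(split_dir.glob("*.ply"))) if split_dir.is_dir() else 0
            print(f"{split_dir}: {n} PLY files")
```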
### Semantic textured meshes

The semantic label types of a textured mesh are defined in its PLY header via `comment label` and `comment texlabel` entries, while the face semantic labels are stored as `property int label`. Texture labels are saved in semantic texture mask images named `mask_<texturefilename>.png` or `full_mask_<texturefilename>.png`: the former contains only the texture semantic information, while the latter additionally includes the face semantic information converted to texture semantics. The mask colors can be mapped to semantic categories using the header definitions.

Below is a PLY header example:

```
ply
format ascii 1.0
comment TextureFile Tile_+1991_+2695_0.jpg
comment label 0 unclassified
comment label 1 terrain
comment label 2 high_vegetation
comment label 3 facade_surface
comment label 4 water
comment label 5 car
comment label 6 boat
comment label 7 roof_surface
comment label 8 chimney
comment label 9 dormer
comment label 10 balcony
comment label 11 roof_installation
comment label 12 wall
comment texlabel 13 window 100 100 255
comment texlabel 14 door 150 30 60
comment texlabel 15 low_vegetation 200 255 0
comment texlabel 16 impervious_surface 100 150 150
comment texlabel 17 road 200 200 200
comment texlabel 18 road_marking 150 100 150
comment texlabel 19 cycle_lane 255 85 127
comment texlabel 20 sidewalk 255 255 170
element vertex 54890
property float x
property float y
property float z
element face 108322
property list uchar int vertex_indices
property list uchar float texcoord
property float r
property float g
property float b
property float nx
property float ny
property float nz
property int label
property int texnumber
end_header
```
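The header comments can be parsed to recover the label-id-to-name mapping (and, for `texlabel` entries, the mask colors), and the per-face labels can be read directly from the `label` property. Below is a minimal sketch using the `plyfile` package; the file name is taken from the header example above and is only illustrative.

```python
import numpy as np
from plyfile import PlyData

ply = PlyData.read("Tile_+1991_+2695_0.ply")  # illustrative file name

# Parse "comment label <id> <name>" and "comment texlabel <id> <name> <r> <g> <b>".
label_names, texlabel_colors = {}, {}
for comment in ply.comments:
    parts = comment.split()
    if parts and parts[0] == "label":
        label_names[int(parts[1])] = parts[2]
    elif parts and parts[0] == "texlabel":
        label_names[int(parts[1])] = parts[2]
        texlabel_colors[tuple(map(int, parts[3:6]))] = int(parts[1])

# Per-face semantic labels ("property int label").
face_labels = np.asarray(ply["face"]["label"])
print({i: label_names.get(i) for i in np.unique(face_labels)})
```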
Below are examples of texture mask images. In order: the original texture image, the texture image with semantic pixel labels, and the full-semantic texture image incorporating face semantic information.

*(Figures: original texture, semantic pixel-label mask, full-semantic mask)*
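With the `texlabel` colors parsed from the header (the `texlabel_colors` dictionary from the previous sketch), a semantic texture mask can be converted into a per-pixel class-id map. A minimal sketch; the mask file name is hypothetical and simply follows the `mask_<texturefilename>.png` convention:

```python
import numpy as np
from PIL import Image

# Hypothetical mask name following the mask_<texturefilename>.png convention.
mask = np.asarray(Image.open("mask_Tile_+1991_+2695_0.png").convert("RGB"))

# Map each RGB color from the "comment texlabel" entries to its class id;
# pixels whose colors are not listed in the header stay 0 (unclassified).
pixel_labels = np.zeros(mask.shape[:2], dtype=np.int32)
for (r, g, b), class_id in texlabel_colors.items():
    pixel_labels[np.all(mask == (r, g, b), axis=-1)] = class_id
```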
### Semantic colored point clouds

Our point clouds are sampled from mesh surfaces and contain semantic labels, texture colors, geometric positions, and normal vectors. We classify the sampled point clouds into two types: mesh face-sampled point clouds and texture pixel-sampled point clouds. For face-sampled point clouds, we evaluated four mesh sampling strategies: face centroid, random, Poisson-disk, and our proposed superpixel texture sampling. For pixel labels, we tested three sampling methods: random, Poisson-disk, and superpixel texture sampling. The number of random and Poisson-disk samples depends on the superpixel texture sampling count, while face-centroid sampling matches the number of mesh faces.

To enable bidirectional semantic information transfer between textured meshes and point clouds:

- Face centroids correspond to the semantic information of each mesh face.
- Random and Poisson-disk sampling use KNN to find the nearest mesh faces or texture pixels, transferring semantics via a voting mechanism (see the sketch below).
- Superpixel texture sampling maintains a one-to-one correspondence with the original texture pixels by preserving superpixel labels (texture pixels can compute their triangular face coordinates via texture coordinates).

A header example for the binary PLY point clouds:

```
ply
format binary_little_endian 1.0
element vertex 362516
property float x
property float y
property float z
property float nx
property float ny
property float nz
property float r
property float g
property float b
property int label
property int sp_id
end_header
```
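The voting-based transfer can be reproduced with a nearest-neighbour search. Below is a minimal sketch of KNN majority-vote label transfer from a labeled source set (e.g. face centroids or texture pixels) to sampled points, using `scipy`; the function name and the value of `k` are our own illustrative choices, not part of the dataset tooling.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_vote_labels(src_xyz, src_labels, dst_xyz, k=5):
    """Transfer labels from source to destination points by majority vote
    over the k nearest source neighbours."""
    tree = cKDTree(src_xyz)
    _, idx = tree.query(dst_xyz, k=k)  # (n_dst, k) neighbour indices
    neigh = src_labels[idx]            # (n_dst, k) neighbour labels
    # Majority vote per destination point.
    return np.array([np.bincount(row).argmax() for row in neigh])

# Example usage: transfer face-centroid labels to a randomly sampled cloud.
# dst_labels = knn_vote_labels(centroids, centroid_labels, random_points)
```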
### Visualization

#### Mapple

For rendering semantic textured meshes, use the 'Coloring' function in the Surface module of [Mapple](https://github.com/LiangliangNan/Easy3D/releases/tag/v2.6.1):

- `f:color` or `v:color` displays per-face or per-point colors.
- `scalar - f:label` or `scalar - v:label` shows legend colors for the different semantic labels.
- `h:texcoord` displays the mesh texture colors, with the corresponding texture images or semantic texture masks selectable via the 'Texture' dropdown.

*(Screenshot: semantic textured mesh rendered in Mapple)*
#### MeshLab

[MeshLab](https://www.meshlab.net/) can also visualize semantic textured meshes by displaying face colors or textures, but it **cannot process scalar values** (such as labels):
*(Screenshot: semantic textured mesh in MeshLab)*
## 🔍 Evaluation

Due to the diverse point cloud sampling methods and the dual-track annotations (mesh face labels and texture pixel labels), evaluation is complex. For now, please use the built-in ground-truth labels in each type of data for an initial evaluation; a minimal metric sketch is given below. For fine-grained test set evaluation consistent with the paper, send your predictions to us by email for local assessment. Auto-evaluation code will be added soon.
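For the initial evaluation against the built-in ground-truth labels, per-class IoU and mean IoU can be computed from a confusion matrix. A minimal sketch, assuming integer label arrays as loaded from the PLY files above; the choice to ignore class 0 (`unclassified`) is our assumption, not a rule stated by the benchmark.

```python
import numpy as np

def miou(gt, pred, num_classes=21, ignore=0):
    """Per-class IoU and mean IoU from integer label arrays."""
    keep = gt != ignore  # assumption: skip 'unclassified' in the evaluation
    cm = np.bincount(num_classes * gt[keep] + pred[keep],
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return iou, np.nanmean(iou)
```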
## ✏️ Annotation Service

To prevent potential cheating in benchmark evaluations and future competitions, the annotation tool and its source code are temporarily not publicly released; we will make them available later. The tool is designed for fine-grained annotation of textured meshes. Compared to 2D image or point cloud annotation tools, it is feature-complete but complex to operate, requiring at least 3 hours of professional training for proficiency. We will gradually create help documents and tutorial videos.

For users needing annotation services, we offer paid semantic annotation of textured meshes. Contact us via email for a quotation.

## 📋 TODOs

- [x] Project page, code, and dataset
- [ ] Evaluation script
- [ ] Annotation tools, code, and manuals

## 🎓 Citation

If you use SUM Parts or SUM in a scientific work, please consider citing the following papers:

```bibtex
@InProceedings{Gao_2025_CVPR,
    author    = {Gao, Weixiao and Nan, Liangliang and Ledoux, Hugo},
    title     = {SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {24474-24484}
}
```
```bibtex
@article{Gao_2021_ISPRS,
    author  = {Weixiao Gao and Liangliang Nan and Bas Boom and Hugo Ledoux},
    title   = {SUM: A benchmark dataset of Semantic Urban Meshes},
    journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
    volume  = {179},
    pages   = {108-120},
    year    = {2021},
    issn    = {0924-2716},
    doi     = {10.1016/j.isprsjprs.2021.07.008},
    url     = {https://www.sciencedirect.com/science/article/pii/S0924271621001854}
}
```

## ⚖️ License

SUM Parts (the dataset) is licensed under a [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/). You are free to share and adapt the material, provided that you give appropriate credit, provide a link to the license, and indicate if changes were made. You may not use the material for commercial purposes.

If you have any questions, comments, or suggestions, please contact me at gaoweixiaocuhk@gmail.com

[Weixiao GAO](https://3d.bk.tudelft.nl/weixiao/)

Jun. 21, 2025