yanglei18 committed · Commit 0796e51 · 1 Parent(s): a08c508

V2X-Radar released

README.md CHANGED

---
viewer: false
annotations_creators:
- expert-annotated
language:
- en
license: cc-by-nc-nd-4.0
multilinguality: monolingual
pretty_name: V2X-Radar
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- autonomous-driving
- cooperative-perception
- multimodal
- 4d-radar
- v2x
- lidar
- camera
task_categories:
- robotics
- object-detection
---

<p align="center">
  <h1 align="center">V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception</h1>
  <p align="center">
    <a href="https://scholar.google.com.hk/citations?user=EUnI2nMAAAAJ&hl=zh-CN&oi=sra"><strong>Lei Yang</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=0Q7pN4cAAAAJ&hl=zh-CN"><strong>Xinyu Zhang</strong></a>
    ·
    <a href="https://www.tsinghua.edu.cn/"><strong>Jun Li</strong></a>
    ·
    <a href="https://www.tsinghua.edu.cn/"><strong>Chen Wang</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=S3cQz1AAAAAJ&hl=zh-CN&oi=ao"><strong>Jiaqi Ma</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=joReSgYAAAAJ&hl=zh-CN&oi=sra"><strong>Zhiying Song</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=tTnWi_EAAAAJ&hl=zh-CN"><strong>Tong Zhao</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=tIjCAKEAAAAJ&hl=zh-CN"><strong>Ziying Song</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=pmzKjcUAAAAJ&hl=zh-CN"><strong>Li Wang</strong></a>
    ·
    <a href="https://www.tsinghua.edu.cn/"><strong>Mo Zhou</strong></a>
    ·
    <a href="https://www.tsinghua.edu.cn/"><strong>Yang Shen</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=ElfT3eoAAAAJ"><strong>Kai Wu</strong></a>
    ·
    <a href="https://scholar.google.com.hk/citations?user=UKVs2CEAAAAJ&hl=zh-CN"><strong>Chen Lv</strong></a>
  </p>

  <div align="center">
    <img src="./assets/teaser-v2.jpg" alt="V2X-Radar teaser" width="100%">
  </div>

  <p align="center">
    <a href="https://neurips.cc/virtual/2025/poster/121426"><img alt="website" src="https://img.shields.io/badge/Website-Explore%20Now-blueviolet?style=flat&logo=google-chrome"></a>
    <a href="https://arxiv.org/pdf/2411.10962"><img alt="paper" src="https://img.shields.io/badge/arXiv-Paper-red.svg"></a>
    <a href="https://github.com/yanglei18/V2X-Radar"><img alt="github" src="https://img.shields.io/badge/GitHub-Code-black?style=flat&logo=github"></a>
    <a href="https://youtu.be/nzmj_-9M_lg"><img alt="video" src="https://img.shields.io/badge/Video-Presentation-F9D371"></a>
  </p>
</p>

75
+ This is the official implementation of **"V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception"** (<span style="color:red">**NeuIPS 2025 Spotlight**</span>).
76
+
77
+ Supported by the [THU OpenMDP Lab](http://openmpd.com/column/V2X-Radar).
78
+
79
+ ## 📘 Dataset Summary
80
+ **V2X-Radar** is a large-scale cooperative perception dataset collected from complex urban intersections in mainland China. It is the **first public dataset** that integrates **4D imaging radar**, **LiDAR**, and **multi-view cameras** across **vehicle-to-everything (V2X)** configurations. The dataset aims to advance **multi-sensor fusion**, **cooperative 3D detection**, and **adverse-weather perception** research in autonomous driving.
81
+
82
+ ## 🧩 Supported Tasks
83
+ - **3D Object Detection** (Radar/LiDAR/Camera/V2X Fusion)
84
+ - **Cooperative Perception** (V2V / V2I / V2X)
85
+ - **Temporal Misalignment & Communication Delay Benchmarking**
86
+ - **Domain Adaptation and Sensor-Robust Learning**
87
+
88
+ ## 🗣️ Languages
89
+ All metadata and annotations are provided in **English**.
90
+ File paths and geographic identifiers are anonymized to comply with Chinese data export regulations.
91
+
92
+ ## 📊 Dataset Structure
93
+ ```
94
+ V2X-Radar
95
+ │ ├── V2X-Radar-I # KITTI Format
96
+ │ │ ├── training
97
+ │ │ │ ├── velodyne
98
+ │ │ │ ├── radar
99
+ │ │ │ ├── calib
100
+ │ │ │ ├── image_1
101
+ │ │ │ ├── image_2
102
+ │ │ │ ├── image_3
103
+ │ │ │ ├── label_2
104
+ │ │ ├── ImageSets
105
+ │ │ │ ├── train.txt
106
+ │ │ │ ├── trainval.txt
107
+ │ │ │ ├── val.txt
108
+ │ │ │ ├── test.txt
109
+ │ ├── V2X-Radar-V # KITTI Format
110
+ │ │ ├── training
111
+ │ │ │ ├── velodyne
112
+ │ │ │ ├── radar
113
+ │ │ │ ├── calib
114
+ │ │ │ ├── image_2
115
+ │ │ │ ├── label_2
116
+ │ │ ├── ImageSets
117
+ │ │ │ ├── train.txt
118
+ │ │ │ ├── trainval.txt
119
+ │ │ │ ├── val.txt
120
+ │ │ │ ├── test.txt
121
+ │ ├── V2X-Radar-C # OpenV2V Format
122
+ │ │ ├── train
123
+ │ │ │ ├── 2024-05-15-16-28-09
124
+ │ │ │ │ ├── -1 # RoadSide
125
+ │ │ │ │ │ ├── 00000.pcd - 00250.pcd # LiDAR point clouds from timestamp 0 to 250
126
+ │ │ │ │ │ ├── 00000_radar.pcd - 00250_radar.pcd # the 4D Radar point clouds from timestamp 0 to 250
127
+ │ │ │ │ │ ├── 00000.yaml - 00250.yaml # metadata for each timestamp
128
+ │ │ │ │ │ ├── 00000_camera0.jpg - 00250_camera0.jpg # left camera images
129
+ │ │ │ │ │ ├── 00000_camera1.jpg - 00250_camera1.jpg # front camera images
130
+ │ │ │ │ │ ├── 00000_camera2.jpg - 00250_camera2.jpg # right camera images
131
+ │ │ │ │ ├── 142 # Vehicle Side
132
+ │ │ ├── validate
133
+ │ │ ├── test
134
+ ```
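A minimal loading sketch for the KITTI-format splits (V2X-Radar-I / V2X-Radar-V) is shown below. The `.bin` extension, the float32 `(x, y, z, intensity)` LiDAR layout, and the frame id `000000` are illustrative assumptions not confirmed by this card; labels are parsed as standard KITTI `label_2` text.

```python
import numpy as np

def load_lidar_bin(path: str) -> np.ndarray:
    """Read a KITTI-style LiDAR binary, assumed float32 (x, y, z, intensity)."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def load_kitti_labels(path: str) -> list:
    """Parse a standard KITTI label_2 file: one object per line."""
    objects = []
    with open(path) as f:
        for line in f:
            v = line.strip().split(" ")
            objects.append({
                "type": v[0],                                   # e.g. Car, Pedestrian, Cyclist
                "bbox_2d": [float(x) for x in v[4:8]],          # image-plane box (left, top, right, bottom)
                "dimensions_hwl": [float(x) for x in v[8:11]],  # height, width, length (m)
                "location": [float(x) for x in v[11:14]],       # x, y, z in camera coordinates (m)
                "rotation_y": float(v[14]),                     # yaw angle (rad)
            })
    return objects

# Hypothetical frame id; real ids come from ImageSets/train.txt and friends.
frame = "000000"
root = "V2X-Radar/V2X-Radar-V/training"
lidar = load_lidar_bin(f"{root}/velodyne/{frame}.bin")
labels = load_kitti_labels(f"{root}/label_2/{frame}.txt")
print(lidar.shape, len(labels))
```

The same pattern applies to V2X-Radar-I, which additionally provides the `image_1` and `image_3` views.
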
## ⚙️ Data Fields

| Field | Type | Description |
|:------|:----:|:------------|
| `radar_points` | array(float) | 4D Radar point clouds (x, y, z, doppler, intensity) |
| `lidar_points` | array(float) | LiDAR point clouds |
| `images` | list(image) | Multi-view RGB frames |
| `calibration` | dict | Intrinsics + extrinsics |
| `timestamp` | float | Absolute timestamp (ms) |
| `annotations` | dict | 3D bounding boxes, categories, and track IDs |

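As a rough illustration of how these fields map onto files in the OpenV2V-format split (V2X-Radar-C), the sketch below assembles one frame. It assumes ASCII-encoded `.pcd` files, four LiDAR channels, and that calibration, timestamp, and annotations live in the per-timestamp YAML; none of these details are confirmed by this card.

```python
import numpy as np
import yaml  # PyYAML, for the per-timestamp .yaml metadata

def read_ascii_pcd(path: str, num_fields: int) -> np.ndarray:
    """Tiny .pcd reader: skip the header, parse the ASCII DATA section.
    Binary .pcd files would need a full reader such as open3d."""
    rows, in_data = [], False
    with open(path) as f:
        for line in f:
            if in_data:
                rows.append([float(v) for v in line.split()[:num_fields]])
            elif line.startswith("DATA"):
                in_data = True
    return np.asarray(rows, dtype=np.float32)

# Scenario, agent, and frame ids taken from the structure tree above.
root = "V2X-Radar/V2X-Radar-C/train/2024-05-15-16-28-09/-1"
sample = {
    "radar_points": read_ascii_pcd(f"{root}/00000_radar.pcd", num_fields=5),  # x, y, z, doppler, intensity
    "lidar_points": read_ascii_pcd(f"{root}/00000.pcd", num_fields=4),        # assumed 4 channels
    "images": [f"{root}/00000_camera{i}.jpg" for i in range(3)],              # left, front, right views
    # Calibration, timestamp, and annotations are assumed to live in the
    # per-timestamp YAML; the exact keys are not documented on this card.
    "metadata": yaml.safe_load(open(f"{root}/00000.yaml")),
}
```
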
## 🧭 Data Collection and Geographic Coverage
Data were recorded in **Chinese metropolitan cities** using research-licensed vehicles and roadside units.
All raw sensor data underwent **manual anonymization** and **privacy filtering**; no personal identities, license plates, or facial information remain.

## ⚖️ Licensing Information
This dataset is released under the **CC BY-NC-ND 4.0** license.

- **Attribution**: users must credit "V2X-Radar Dataset, 2025".
- **Non-Commercial**: use for research and education only.
- **No Derivatives**: do not redistribute modified versions.

Full license text: [https://creativecommons.org/licenses/by-nc-nd/4.0/](https://creativecommons.org/licenses/by-nc-nd/4.0/)

163
+ ## 🪪 Citation
164
+ ```bibtex
165
+ @article{yang2024v2x,
166
+ title={V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception},
167
+ author={Yang, Lei and Zhang, Xinyu and Li, Jun and Wang, Chen and Ma, Jiaqi and Song, Zhiying and Zhao, Tong and Song, Ziying and Wang, Li and Zhou, Mo and Shen, Yang and Lv, Chen},
168
+ journal={Advances in Neural Information Processing Systems (NeurIPS)},
169
+ year={2025}
170
+ }
V2X-Radar-C/.DS_Store ADDED
Binary file (8.2 kB)
 
V2X-Radar-C/test/test.tar.gz ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7ccb291cc12f6fb03e88b579f9b031a847212d7549de5cc1e314fe85015d7298
size 6778370399

V2X-Radar-C/train/train.tar.gz ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9791ec85a6d71416b041022b54f29c02339159b3abbceeb3e2483eaddbf3b3fc
size 34009172923

V2X-Radar-C/validate/validate.tar.gz ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e682c56f6729187b5fc7b57e7a1f232e9a377bf28a2bd570da07d75b5a678ec9
size 5748103705

V2X-Radar-I/ImageSets.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:acb064a97bd9a67e39b72f4c991f3d43eef37859c4f3c2ddfeea8f6948d194ee
size 41470

V2X-Radar-I/V2X-Radar-I.tar.gz ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5b781932c25afa56ac81f449547fa001bb736a24e9f30fc9d80cf194100ab3f4
size 27120710431

V2X-Radar-V/ImageSets.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:83cd25c352eb6abdd61e2f04953c5ef5445f5ef438026cc08420f35d114a6c91
size 49367

V2X-Radar-V/V2X-Radar-V.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3d7506ccd8716e70cccdb3ec034de42c439b2a01399da04bef3f6f6fafbd93df
size 23926089849

assets/teaser-v2.jpg ADDED

Git LFS Details

  • SHA256: cd9762fd46b0d6fc18de50f717bf2b9a894c7e7d1b04349405f72fefcbb3747f
  • Pointer size: 133 Bytes
  • Size of remote file: 13.8 MB
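
The split archives above are tracked with Git LFS. Below is a minimal sketch of fetching and unpacking one of them with `huggingface_hub`; the repo id `yanglei18/V2X-Radar` is an assumption based on the committer name, so substitute the actual dataset repository path.

```python
import tarfile
from huggingface_hub import hf_hub_download

# Assumed dataset repo id; replace with the actual repository.
REPO_ID = "yanglei18/V2X-Radar"

# Download one Git LFS split archive.
archive = hf_hub_download(
    repo_id=REPO_ID,
    repo_type="dataset",
    filename="V2X-Radar-C/validate/validate.tar.gz",
)

# Unpack into a local data directory.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path="data/V2X-Radar-C/validate")
```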