# Samples of the VR-Folding dataset

## Data structure

We provide one video sequence for the *Folding* task.

### RGB images

The following directory contains RGB images of the video sequence rendered with Unity. Note that these images are only for visualization, so both hands are additionally rendered in them.

- `Tshirt_folding_hands_rgb`
### Processed Data: Zarr data

All multi-view RGB-D images are transformed into point clouds and merged in [zarr](https://zarr.readthedocs.io/en/stable/) format; all other annotations are stored in the same zarr data. The following directory contains samples for the *Folding* task in zarr format.

- `VR_Folding/vr_simulation_folding_dataset_example.zarr/Tshirt`

Here is the detailed tree structure of a data example for one frame:
```
00068_Tshirt_000000_000000
├── grip_vertex_id
│   ├── left_grip_vertex_id (1,) int32
│   └── right_grip_vertex_id (1,) int32
├── hand_pose
│   ├── left_hand_euler (25, 3) float32
│   ├── left_hand_pos (25, 3) float32
│   ├── right_hand_euler (25, 3) float32
│   └── right_hand_pos (25, 3) float32
├── marching_cube_mesh
│   ├── is_vertex_on_surface (6410,) bool
│   ├── marching_cube_faces (12816, 3) int32
│   └── marching_cube_verts (6410, 3) float32
├── mesh
│   ├── cloth_faces_tri (8312, 3) int32
│   ├── cloth_nocs_verts (4434, 3) float32
│   └── cloth_verts (4434, 3) float32
└── point_cloud
    ├── cls (30000,) uint8
    ├── nocs (30000, 3) float16
    ├── point (30000, 3) float16
    ├── rgb (30000, 3) uint8
    └── sizes (4,) int64
```
## Visualization

We provide a simple script for visualizing data in [zarr](https://zarr.readthedocs.io/en/stable/) format. The script filters out static frames (i.e., frames in which the garment pose remains unchanged) and only visualizes the dynamic frames of the video.

### Setup

Requirements: Python >= 3.8

This code has been tested on Windows 10 and Ubuntu 18.04.
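The static-frame filtering could be sketched as follows; this is a minimal illustration assuming the check compares the `mesh/cloth_verts` positions of consecutive frames, not necessarily the exact criterion used by `vis_samples.py`.

```python
import numpy as np

def dynamic_frame_mask(verts_per_frame, tol=1e-6):
    """Mark frames whose garment vertices moved relative to the previous frame.

    verts_per_frame: sequence of (V, 3) arrays of cloth vertex positions.
    The first frame is always kept as the reference.
    """
    mask = [True]
    for prev, cur in zip(verts_per_frame, verts_per_frame[1:]):
        mask.append(not np.allclose(prev, cur, atol=tol))
    return np.array(mask)

# Toy sequence: frame 1 is identical to frame 0 (static), frame 2 moves.
f0 = np.zeros((5, 3), dtype=np.float32)
f1 = f0.copy()
f2 = f0 + 0.01
print(dynamic_frame_mask([f0, f1, f2]))  # [ True False  True]
```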
```
pip install -r requirements.txt
```
### Run

```
python vis_samples.py
```
This script uses Open3D to visualize the following elements:

- the input partial point cloud with colors
- the grasping points of both hands (represented by blue and red spheres)
- the complete ground-truth mesh colored with its NOCS coordinates

Note that our recorded data in zarr format contains complete hand poses (positions and Euler angles of 25 bones per hand). In this simplified 3D visualization script, we only visualize the valid grasping points on the garment surface instead of the complete hands, for simplicity of implementation.
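Assuming `grip_vertex_id` indexes into `mesh/cloth_verts`, the sphere centers could be recovered as below. The "no grasp" sentinel (a negative or out-of-range id) is an assumption of this sketch, not something stated above, and the per-frame arrays here are placeholders for the ones read from the zarr data.

```python
import numpy as np

# Hypothetical per-frame arrays, mirroring the zarr fields listed above.
cloth_verts = np.random.rand(4434, 3).astype(np.float32)
left_grip_vertex_id = np.array([120], dtype=np.int32)
right_grip_vertex_id = np.array([-1], dtype=np.int32)  # assumed "not grasping" sentinel

def grasp_point(verts, grip_id):
    """Return the grasped vertex position, or None when the id is invalid."""
    i = int(grip_id[0])
    if i < 0 or i >= len(verts):
        return None
    return verts[i]

left = grasp_point(cloth_verts, left_grip_vertex_id)    # (3,) center for the blue sphere
right = grasp_point(cloth_verts, right_grip_vertex_id)  # None: no right-hand grasp
```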