A Comprehensive View-Dependent Dataset for Object Reconstruction - Synthetic Set Part A

Published: 2 January 2024 | Version 1 | DOI: 10.17632/z88tpm3926.1
Contributors:

Description

The data were collected in a custom simulator that loads a random graspable object from the ShapeNet dataset and a random table. The graspable object is placed above the table at a random position, and the scene is then simulated with the PhysX engine to ensure that the resulting configuration is physically plausible. The simulator captures an image of the scene from a random camera pose and then takes a second image from a camera pose on the opposite side of the scene. The dataset contains RGB, depth, and segmentation images of the scenes and of the objects in them, together with the camera poses, which can be used to build a full 3D model of each scene and to develop methods that reconstruct objects from a single RGB-D camera view. Part A of the dataset covers four object categories: bottle, box, can, and cube.
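Because each scene ships with depth images for two opposite views and the corresponding camera poses, a full scene model can be assembled by back-projecting both depth images into a common world frame. The sketch below shows one way to do this; the intrinsic matrix K, the camera-to-world pose convention, and the usage names (depth_a, depth_b, T_a, T_b) are assumptions for illustration, not the dataset's documented API.

```python
"""Minimal sketch: fuse the two opposite RGB-D views into one point cloud.
The intrinsics and the camera-to-world pose convention are assumptions;
consult the dataset's metadata files for the actual layout."""
import numpy as np

def backproject(depth, K, T_world_cam):
    """Lift a depth image (in meters) into world coordinates.

    depth:        (H, W) float array, 0 where depth is invalid
    K:            (3, 3) pinhole intrinsics (assumed convention)
    T_world_cam:  (4, 4) camera-to-world transform (assumed convention)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    valid = z > 0
    # Pixel coordinates -> camera-frame points via the inverse intrinsics.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    # Camera frame -> world frame using the recorded camera pose.
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=-1)
    return (T_world_cam @ pts_h.T).T[:, :3]

# Hypothetical usage: merge a view with its opposite-side counterpart.
# depth_a, depth_b, T_a, T_b would be loaded from the dataset files.
# cloud = np.vstack([backproject(depth_a, K, T_a),
#                    backproject(depth_b, K, T_b)])
```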

Files

Steps to reproduce

The dataset is described in:

[1] Rafał Staszak, Dominik Belter, 3D Object Reconstruction: A Comprehensive View-Dependent Dataset, 2024 (under review).

The method for 3D scene reconstruction is described in:

[2] Rafał Staszak, Bartłomiej Kulecki, Witold Sempruch, Dominik Belter, What's on the Other Side? A Single-View 3D Scene Reconstruction, Proceedings of the 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), December 11-13, 2022, Singapore, pp. 173-180, 2022.

[3] Rafał Staszak, Bartłomiej Kulecki, Marek Kraft, Dominik Belter, MirrorNet: Hallucinating 2.5D Depth Images for Efficient 3D Scene Reconstruction, 2024 (under review).

Institutions

Politechnika Poznańska

Categories

Computer Science Applications, Robotics, RGB-D Image

Funding

Narodowe Centrum Nauki (National Science Centre, Poland)

UMO-2019/35/D/ST6/03959

Licence