A Comprehensive View-Dependent Dataset for Objects Reconstruction - Kinect Azure Set

Published: 5 January 2024| Version 2 | DOI: 10.17632/jd8w5r3ncw.2
Contributors:
Rafał Staszak, Dominik Belter

Description

The dataset was collected by moving a Kinect Azure camera above a set of real objects placed on an ArUco board, which was used to estimate the camera pose. The board consists of a grid of 7x5 unique ArUco markers. With an object positioned on the board, OpenCV library methods were used to compute the poses of the camera at viewpoints around the object and the markers; these poses were then used to build the 3D models of the objects.

Files

Steps to reproduce

The dataset is described in:
[1] Rafał Staszak, Dominik Belter, 3D Object Reconstruction: A Comprehensive View-Dependent Dataset, 2024 (under review).
The method for 3D scene reconstruction is described in:
[2] Rafał Staszak, Bartłomiej Kulecki, Witold Sempruch, Dominik Belter, What's on the Other Side? A Single-View 3D Scene Reconstruction, Proceedings of the 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), December 11-13, 2022, Singapore, pp. 173-180, 2022.
[3] Rafał Staszak, Bartłomiej Kulecki, Marek Kraft, Dominik Belter, MirrorNet: Hallucinating 2.5D Depth Images for Efficient 3D Scene Reconstruction, 2024 (under review).

Institutions

Politechnika Poznanska

Categories

Computer Science Applications, Robotics, RGB-D Image

Funding

National Science Centre, Poland

UMO-2019/35/D/ST6/03959

Licence