ViNU: Vision-Navigation Dataset for Unstructured Environment

Published: 15 May 2024 | Version 1 | DOI: 10.17632/gnsb8mwc73.1
Contributors:
Nour Elhouda Ben Saadi

Description

The dataset was gathered using the Robot Pomona, equipped with an RGBD camera, a lidar, an IMU, and a GPS. This sensor suite allowed the collection of color images, depth images, GNSS data, point clouds, and IMU data. All data were first captured and stored in ROS bag format. After collection, the data were synchronized offline and written to a new ROS bag to ensure accurate temporal alignment across the different data types. Finally, the synchronized data were extracted and organized into separate folders for each data type, resulting in distinct datasets for color images, depth images, GNSS, point clouds, and IMU data (a minimal extraction sketch is given below). This structured organization makes the data easier to access and use for research and development in navigating unstructured environments.

Potential applications:
- Autonomous navigation: enhancing the ability of autonomous vehicles to navigate complex, unstructured environments such as off-road terrain or disaster-stricken areas.
- Robotics research: providing a rich source of sensor data for developing and testing algorithms for simultaneous localization and mapping (SLAM), object recognition, and path planning.
- Augmented reality: supporting AR applications that require real-time environmental mapping and interaction.
- Environmental monitoring: assisting tasks that monitor changes in environments over time, useful in ecological research or urban development planning.
- Machine learning: serving as a training and validation dataset for machine learning models focused on sensor fusion, depth perception, and predictive analytics in dynamic scenarios.
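The per-type extraction step can be illustrated with a short Python sketch. This is not the authors' own tooling; it is a minimal sketch assuming ROS 1 with the rosbag and cv_bridge packages, and the bag filename and image topic names below are hypothetical placeholders.

```python
# Minimal sketch: extracting per-sensor image data from a synchronized ROS bag
# into separate folders, one per data type. Bag name and topics are assumed.
import os
import rosbag
import cv2
from cv_bridge import CvBridge

BAG_PATH = "vinu_synchronized.bag"              # hypothetical bag name
TOPICS = {
    "/camera/color/image_raw": "ColorImages",   # assumed topic names
    "/camera/depth/image_raw": "DepthImages",
}

bridge = CvBridge()
for folder in TOPICS.values():
    os.makedirs(folder, exist_ok=True)

with rosbag.Bag(BAG_PATH, "r") as bag:
    for topic, msg, t in bag.read_messages(topics=list(TOPICS)):
        # Convert the ROS Image message to an OpenCV array; "passthrough"
        # keeps the original encoding (color images may need conversion to
        # bgr8 before saving, depending on the camera driver).
        img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        # Use the message timestamp as the filename for traceability.
        out = os.path.join(TOPICS[topic], f"{t.to_nsec()}.png")
        cv2.imwrite(out, img)
```

The same loop can be extended with additional topic-to-folder mappings (GNSS, point clouds, IMU), writing each message type to its own format instead of PNG.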

Files

Steps to reproduce

Steps to reproduce the ViNU dataset:

1. Setup and configuration: Equip the Robot Pomona with the necessary sensors: an RGBD camera, a lidar, an IMU, and a GPS module. Configure each sensor for optimal performance and synchronization, and install and configure ROS (Robot Operating System) to handle data capture and storage.

2. Data collection: Deploy the Robot Pomona in a variety of unstructured environments to ensure diverse data capture. Record data from all sensors simultaneously to capture comprehensive environmental interactions, storing it in ROS bag format as it is collected.

3. Data synchronization and storage: After collection, use ROS tools to synchronize the data from the different sensors; this is crucial to align the timestamps of the RGBD camera, lidar, IMU, and GPS data. Write the synchronized data to a new ROS bag to consolidate the aligned streams (a sketch of one possible offline synchronization pass follows these steps).

4. Data extraction and organization: Extract each type of sensor data from the consolidated ROS bag and organize it into separate folders labeled Color Images, Depth Images, GNSS, Point Clouds, and IMU Data. Ensure each folder is clearly labeled and contains metadata on the data type, collection time, and sensor specifications.

5. Validation: Perform a basic validation check to ensure data integrity and consistency across the datasets (see the second sketch below), and manually review a subset of the data to confirm that the sensor data align with the environmental conditions reported during collection.
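The page does not specify which ROS tool performed step 3, so the following is only one common way to do offline timestamp alignment, using the message_filters package. The bag filenames, topic names, queue size, and slop value are assumptions for illustration.

```python
# Minimal sketch: offline approximate-time synchronization of two bag topics,
# writing matched pairs into a new bag. Names and parameters are assumed.
import rosbag
import message_filters

IN_BAG, OUT_BAG = "vinu_raw.bag", "vinu_synchronized.bag"  # hypothetical names
COLOR_TOPIC = "/camera/color/image_raw"                    # assumed topics
DEPTH_TOPIC = "/camera/depth/image_raw"

out_bag = rosbag.Bag(OUT_BAG, "w")

def on_match(color_msg, depth_msg):
    # Write the matched pair using the color image's timestamp so both
    # streams share one time index in the output bag.
    stamp = color_msg.header.stamp
    out_bag.write(COLOR_TOPIC, color_msg, stamp)
    out_bag.write(DEPTH_TOPIC, depth_msg, stamp)

# SimpleFilter instances act as offline stand-ins for live subscribers.
color_filter = message_filters.SimpleFilter()
depth_filter = message_filters.SimpleFilter()
sync = message_filters.ApproximateTimeSynchronizer(
    [color_filter, depth_filter], queue_size=50, slop=0.05)
sync.registerCallback(on_match)

with rosbag.Bag(IN_BAG, "r") as in_bag:
    for topic, msg, t in in_bag.read_messages(topics=[COLOR_TOPIC, DEPTH_TOPIC]):
        # Feed each message into its filter; the synchronizer fires on_match
        # whenever the timestamps agree within the slop window.
        if topic == COLOR_TOPIC:
            color_filter.signalMessage(msg)
        else:
            depth_filter.signalMessage(msg)

out_bag.close()
```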
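For the basic validation check in step 5, one simple option is to read the bag index and report message counts and rates per topic, so missing sensors or large gaps stand out before the manual review. The bag filename is again a hypothetical example.

```python
# Minimal sketch of a bag integrity check: message count and rate per topic.
import rosbag

BAG_PATH = "vinu_synchronized.bag"  # hypothetical name

with rosbag.Bag(BAG_PATH, "r") as bag:
    info = bag.get_type_and_topic_info()
    duration = bag.get_end_time() - bag.get_start_time()
    print(f"Recording length: {duration:.1f} s")
    for topic, meta in info.topics.items():
        # message_count and frequency come straight from the bag index.
        print(f"{topic}: {meta.message_count} messages "
              f"({meta.frequency or 0:.1f} Hz)")
```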

Institutions

Eotvos Lorand Tudomanyegyetem Informatikai Kar

Categories

Computer Vision, Robotics, Sensor Fusion, Navigation, Deep Learning

Licence