Multi-Environment Object Detection Dataset for Improved Indoor Navigation: Focusing on the Importance of Different Door Handle Types

Published: 12 June 2023| Version 1 | DOI: 10.17632/m9n4f6prn4.1
Contributors:

Description

The object detection dataset consists of 3,292 images captured in various environments using a simple data collection system. Images were taken from different perspectives, both handheld and from a camera mounted on an EPW (Electric Powered Wheelchair), to ensure diversity of viewpoints. The dataset is divided into 60% for training (1,975 images), 10% for validation (330 images), and 30% for testing (987 images). The image resolution is 512x512x3 pixels, which is lower than that of a previously proposed semantic segmentation dataset but still comparable to standard datasets. The dataset contains bounding-box-level annotations for eight object categories, including doors, fire extinguishers, key slots, push buttons, ID readers, and various types of door handles. The dataset addresses the absence of such infrequent objects in other publicly available datasets. Unclassifiable objects are left unannotated.

Folders description:
- images -> dataset images.

Files descriptions:
- AllDataTable -> all the images and the corresponding bounding boxes in a MATLAB table format.
- trainingData_datastore.mat -> training datastore in MATLAB format. The combined datastore holds the image file paths and the corresponding bounding boxes.
- testingData_datastore.mat -> testing datastore in MATLAB format. The combined datastore holds the image file paths and the corresponding bounding boxes.
- validationData_datastore.mat -> validation datastore in MATLAB format. The combined datastore holds the image file paths and the corresponding bounding boxes.

Note: the file paths in the training, validation and testing MATLAB files need to be modified to point to the location where the images are stored.

Classes: 'door', 'pull door handle', 'push button', 'moveable door handle', 'push door handle', 'fire extinguisher', 'key slot', 'id_reader'

Paper: Mohamed, E.; Sirlantzis, K.; Howells, G.; Hoque, S. Optimisation of Deep Learning Small-Object Detectors with Novel Explainable Verification. Sensors 2022, 22, 5596. https://doi.org/10.3390/s22155596
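Since the shipped datastores store paths from the original machine (see the note above), a minimal MATLAB sketch of rebuilding a combined datastore locally could look as follows. This is not the authors' script: the '.mat' extension on AllDataTable, the variable name stored inside it, and the table layout (first column of image file names, remaining columns of [x y w h] boxes per class) are assumptions that should be verified by inspecting the file.

% Minimal sketch: rebuild a combined training datastore from AllDataTable,
% repointing the image paths to a local copy of the images folder.
% Layout assumptions: column 1 = image paths, columns 2..end = boxes per class.

imageDir = fullfile(pwd, 'images');      % local path to the dataset images folder

s   = load('AllDataTable.mat');          % extension assumed; check fieldnames(s)
fn  = fieldnames(s);
tbl = s.(fn{1});                         % the stored MATLAB table

% Repoint the stored image paths to the local images folder
files = tbl{:, 1};                       % cell array of image file paths
for i = 1:numel(files)
    [~, name, ext] = fileparts(files{i});
    files{i} = fullfile(imageDir, [name, ext]);
end

% Build image and box-label datastores, then combine them
% (requires the Computer Vision Toolbox)
imds = imageDatastore(files);
blds = boxLabelDatastore(tbl(:, 2:end));
ds   = combine(imds, blds);              % each read returns {image, boxes, labels}

sample = read(ds);                       % quick sanity check
imshow(sample{1});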

Files

Steps to reproduce

The present research compiles an object detection dataset from varied indoor settings, employing a handheld camera and a camera affixed beneath the powered wheelchair joystick. Using two different setups enhances the dataset's diversity and captures images from multiple perspectives. For the powered wheelchair, the camera is positioned at a fixed height of 68 cm above the ground to maintain consistency across data collection, whereas the handheld camera increases the flexibility of collecting data in various environments. The acquired data are then processed in MATLAB, specifically with the Video Labeler and Image Labeler apps, to annotate video frames and images at the bounding-box level. The output of this data processing stage is the ground truth data used to train deep learning models.
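As a rough illustration of this labelling pipeline (an assumed workflow, not the authors' exact script; the file name 'myLabels.mat' and the variable name 'gTruth' are placeholders), a groundTruth object exported from the Image Labeler or Video Labeler app can be converted into training datastores analogous to the *_datastore.mat files provided with this dataset:

% Assumed workflow sketch: turn a groundTruth object exported from the
% MATLAB Image Labeler / Video Labeler app into detector training datastores.
% 'myLabels.mat' and the variable name 'gTruth' are placeholders.

s      = load('myLabels.mat');     % ground truth exported from the Labeler app
gTruth = s.gTruth;

% objectDetectorTrainingData (Computer Vision Toolbox) extracts the labelled
% images/frames and returns an image datastore plus a box label datastore.
[imds, blds] = objectDetectorTrainingData(gTruth);
ds = combine(imds, blds);          % analogous to the *_datastore.mat files above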

Institutions

Canterbury Christ Church University

Categories

Object Detection, Multi-Objective Optimization, Indoor Environment, Convolutional Neural Network

Licence