VARSew: A Visual Action Recognition Dataset on Garment Sewing
The VARSew dataset is designed for visual action recognition in the garment sewing industry. It addresses multiple research objectives in visual recognition, such as binary and multi-class action classification. The proposed data offers new research opportunities in visual pattern recognition and Internet of Things (IoT)-based manufacturing, as well as in the study of human activity and behavior during production time. To the best of our knowledge, no such industrial human action dataset is available to the computer vision community. The VARSew dataset consists of high-resolution trimmed videos of specific actions, with two levels of categorical labels: super action classes and action classes. The dataset includes 3,121 videos comprising 49,936 frames; each video is fixed to 16 frames. For binary classification of human sewing actions, the videos are grouped into value-added and non-value-added classes. For multi-class classification, the videos are grouped into eight classes: sew, release, handle, prepare, adjust, wait, check, and maintain. The videos were collected from multiple human operators: 26 Sewing Machine Operators (SMOs) and 8 Maintenance Machine Operators (MMOs). Subjective annotations of participant proficiency are provided across all 3,121 videos.
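The two-level label hierarchy and the fixed-length video format described above can be sketched as follows. This is a minimal illustrative sketch, not the dataset's actual API; the constant names are assumptions, and only the class names, counts, and the 16-frame video length come from the description (the mapping between the eight action classes and the two super classes is intentionally left unspecified, since it is not stated here).

```python
# Illustrative sketch of the VARSew label taxonomy (names of these
# constants are assumptions; the class lists and counts come from the
# dataset description).

SUPER_CLASSES = ["value-added", "non-value-added"]  # binary level
ACTION_CLASSES = ["sew", "release", "handle", "prepare",
                  "adjust", "wait", "check", "maintain"]  # multi-class level

NUM_VIDEOS = 3121
FRAMES_PER_VIDEO = 16  # each trimmed video is fixed to 16 frames


def total_frames(num_videos: int = NUM_VIDEOS,
                 frames_per_video: int = FRAMES_PER_VIDEO) -> int:
    """Total frame count implied by the fixed-length video format."""
    return num_videos * frames_per_video


# Sanity checks against the figures reported in the description.
assert len(SUPER_CLASSES) == 2
assert len(ACTION_CLASSES) == 8
assert total_frames() == 49936  # 3,121 videos x 16 frames
```

The assertions confirm that the reported frame count (49,936) is exactly the number of videos (3,121) times the fixed clip length (16 frames), which is a useful consistency check when loading the data.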