2024-DTU Risø Wind Turbine Blade Inspection Video Dataset Description
Description
AI-based blade identification in operational wind turbines through similarity analysis aided drone inspection

Shohreh Sheiati a,*, Xiaodong Jia a, Malcolm McGugan a, Kim Branner a, Xiao Chen a,*

a Technical University of Denmark, Department of Wind and Energy Systems, Roskilde 4000, Denmark

Abstract

Developing an automated method to monitor the structural health of each individual wind turbine blade over its lifespan is imperative for the timely warning of potential damage. This study proposes an AI-based image recognition model that identifies individual blades by their unique surface features. To eliminate the influence of the image background on the identification task, a deep learning segmentation method was used to isolate the blade from the complex background in optical images captured by a drone. Subsequently, we utilized and repurposed similarity-based Siamese networks to establish a blade search capability that identifies images of the same blade: given a single query image, the model automatically retrieves the corresponding blade images. The results show that our trained model successfully distinguishes similar images from non-similar images and identifies blades in unseen images. The developed similarity-based identification method enables continuous tracking and monitoring of the structural health of each individual blade over time.

Dataset description

We captured 29 videos of a wind turbine situated at the DTU Risø campus, Roskilde municipality, Denmark, using a DJI Mavic 2 drone. The videos were recorded under diverse circumstances, covering varying imaging conditions and different segments of the blades. The dataset is organized into a folder named "Dataset", which comprises 29 subfolders, each dedicated to one of the original videos along with its extracted image frames and their corresponding masks. The initial set of 24 videos in the collection was used for training and validation.
The remaining 5 videos, not used during training, were reserved for testing. Masks were obtained with two techniques: bi-level thresholding and the GrabCut foreground-background segmentation method. Bi-level thresholding was applied to images with a clear contrast between the foreground (blade) and the background (all videos except 6, 13, 15, and 25). For images with very complex backgrounds containing diverse objects of different colours and textures (videos 6, 13, 15, and 25), the GrabCut method was employed.

Contact: xiac@dtu.dk
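The train/test split and per-video mask technique described above can be expressed programmatically. A minimal sketch, assuming the "initial 24" videos are numbered 1–24 in recording order and the 5 held-out test videos are therefore 25–29 (the dataset description does not name the subfolders explicitly, so these identifiers are illustrative):

```python
# Sketch of the split and mask-technique assignment stated in the text.
# Video numbering 1-29 is an assumption; only the counts and the four
# GrabCut video IDs are given in the dataset description.
TEST_VIDEOS = {25, 26, 27, 28, 29}   # the 5 videos held out for testing
GRABCUT_VIDEOS = {6, 13, 15, 25}     # complex backgrounds -> GrabCut masks

def split_of(video_id: int) -> str:
    """Return which subset a video belongs to."""
    return "test" if video_id in TEST_VIDEOS else "train/val"

def mask_method(video_id: int) -> str:
    """Return the segmentation technique used to produce a video's masks."""
    return "GrabCut" if video_id in GRABCUT_VIDEOS else "bi-level thresholding"
```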
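Bi-level thresholding itself is straightforward: pixels brighter than a global threshold are marked as foreground (blade), the rest as background. A minimal NumPy sketch, where the threshold value is a placeholder since the actual per-video thresholds used to produce the published masks are not given (the GrabCut masks for videos 6, 13, 15, and 25 were instead produced with a foreground/background method such as OpenCV's cv2.grabCut):

```python
import numpy as np

def bilevel_mask(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Binary blade mask from a grayscale frame: 255 = foreground, 0 = background.

    The default threshold is illustrative only; in practice it would be
    chosen so the bright blade separates cleanly from the background.
    """
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

# Toy frame: a bright horizontal "blade" stripe on a dark background.
frame = np.zeros((6, 8), dtype=np.uint8)
frame[2:4, :] = 200                  # bright blade pixels
mask = bilevel_mask(frame, thresh=100)
```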