Dataset for AI-based optical-thermal video data fusion for near real-time blade segmentation in normal wind turbine operation
Description
Blade damage inspection without stopping the normal operation of wind turbines has significant economic value. This study proposes an AI-based method, AQUADA-Seg, which segments blade images from complex backgrounds by fusing optical and thermal videos taken during normal wind turbine operation. The method follows an encoder-decoder architecture and uses both optical and thermal videos to overcome the challenges associated with field application. A memory module placed between the encoder and decoder improves performance by exploiting the time-history information in the videos (temporal complementarity) and by sharing information between the optical and thermal modalities (multimodal complementarity). To train and test the method, we collected a large-scale dataset of optical-thermal videos of blades on operating wind turbines, comprising 100 video pairs and over 55,000 images. Experimental results show that AQUADA-Seg: i) achieves near real-time optical-thermal blade video segmentation and can analyze videos with complex backgrounds in real-world field applications; and ii) achieves a mean Intersection over Union (MIoU) of 0.996 on optical videos and 0.981 on thermal videos, outperforming state-of-the-art methods, particularly on videos with complex backgrounds. This study provides an essential step towards automated blade damage detection using computer vision without stopping the normal operation of wind turbines.
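For context, the MIoU figures quoted above follow the standard mean Intersection-over-Union definition: the per-class IoU averaged over classes. The sketch below is not the AQUADA-Seg evaluation code; it is a minimal NumPy illustration of how MIoU can be computed for a binary blade/background segmentation mask (class labels, array shapes, and the skip rule for absent classes are assumptions for this example).

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection over Union across classes.

    pred, target: integer arrays of per-pixel class labels
    (here: 0 = background, 1 = blade).
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent in both prediction and target: skip
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: a 4x4 frame whose blade mask is predicted perfectly
target = np.zeros((4, 4), dtype=int)
target[1:3, 1:3] = 1          # 2x2 "blade" region
pred = target.copy()
print(mean_iou(pred, target))  # -> 1.0
```

A prediction that misses part of the blade lowers both the blade-class and background-class IoU, so MIoU penalizes over- and under-segmentation symmetrically.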