Data for: Improved Weed Segmentation in UAV Imagery of Sorghum Fields with a Combined Deblurring Segmentation Model
Description
In this study, we propose a new deep-learning-based combined deblurring and segmentation model, "DeBlurWeedSeg", to detect weeds in both sharp and motion-blurred drone images. To this end, we generated a manually annotated, expert-curated drone image dataset for weed detection in sorghum fields, consisting of image pairs with and without motion blur. The UAV captures were acquired at an experimental sorghum field in Southern Germany using a "DJI Mavic 2 Pro" consumer-grade drone while the sorghum was at growth stage BBCH 13. The main weed species present was Chenopodium album L., with low quantities of Cirsium arvense (L.) Scop. The data.zip archive contains three subfolders:
- "gt": 1300 pairs of 128x128 px image patches of motion-blurred and sharp drone images, together with their corresponding ground truth. This is the complete dataset used in this study.
- "splits": the dataset split files (CSV) used for training, validating and testing our models.
- "gt_testset": a newly generated ground truth for the deblurring part of the model, used for an in-depth comparison of "DeBlurWeedSeg".
The proposed "DeBlurWeedSeg" model was trained with PyTorch and is saved in model.zip. Please refer to the GitHub repository for further information on how to load the model and use the data. GitHub link: https://github.com/grimmlab/DeBlurWeedSeg
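The split files in "splits" could be consumed along the following lines. This is only a minimal sketch: the actual CSV column names ("patch_id", "split") and split labels are assumptions made for illustration; the real file format is documented in the GitHub repository.

```python
import csv
import io

# Hypothetical split CSV contents; the real column names and values are
# assumptions for this sketch and are documented in the GitHub repository.
example_csv = """patch_id,split
patch_0001,train
patch_0002,train
patch_0003,val
patch_0004,test
"""

def read_split(csv_text, split_name):
    """Return the patch ids assigned to one split (train/val/test)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["patch_id"] for row in reader if row["split"] == split_name]

train_ids = read_split(example_csv, "train")
print(train_ids)  # ['patch_0001', 'patch_0002']
```

In practice, the returned ids would be mapped to the corresponding sharp/blurred patch and ground-truth files inside the "gt" subfolder before building a PyTorch dataset.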