GobhiSet: Dataset of raw, manually and automatically annotated RGB images across phenology of Brassica oleracea var. Botrytis

Published: 16 April 2024 | Version 3 | DOI: 10.17632/dcjjcwc5dh.3
Shubham Rana


This dataset comprises unprocessed aerial RGB images and orthomosaics of Brassica oleracea crops, captured with a DJI Phantom 4 Pro Obsidian across several dates. The images are uniformly distributed across the crop spaces and have been annotated both manually and automatically. The dataset is designed to support crop detection, segmentation, and growth modelling using the manually and automatically annotated pixel information. The publicly accessible repository contains 244 raw RGB images acquired on six distinct dates in October and November 2020 at an experimental farm in Portici, Italy. Each raw image measures 5472 × 3648 pixels.

The first three sets of images, captured on 8 October, 21 October, and 29 October 2020, were manually annotated with bounding boxes using the Visual Geometry Group Image Annotator (VIA), and the annotations were exported in the Common Objects in Context (COCO) segmentation format. The manual labels for these three dates, including region and shape attributes, are provided in JavaScript Object Notation (JSON). These three dates served as training data for the annotator, improving the automated labelling across all six dates: 8 October, 21 October, 29 October, 11 November, 18 November, and 25 November. The 21 October 2020 annotations serve as the benchmark for the quantitative assessment criteria. Seven classes, designated Row 1 through Row 7, are used to label the crops within the crop rows. Additional attributes, such as individual crop IDs and the recurrence of individual crop specimens across dates, are provided in the Comma Separated Values (CSV) version of the manual annotations. To generate the automated annotations, the manual annotations were used to train a Grounding DINO + Segment Anything Model (SAM) framework, and the resulting labels were archived in Pascal Visual Object Classes (PASCAL VOC) format.
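The exported COCO JSON can be inspected with a few lines of Python. The sketch below builds a minimal COCO-style structure in memory (the file name, IDs, and coordinates are illustrative placeholders, not values from the dataset) and counts annotations per Row class:

```python
# Minimal COCO-style structure standing in for the exported VIA annotations.
# File name, IDs, and coordinates below are illustrative, not the dataset's.
coco = {
    "images": [{"id": 1, "file_name": "DJI_0008.JPG",
                "width": 5472, "height": 3648}],
    "categories": [{"id": i, "name": f"Row {i}"} for i in range(1, 8)],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 2,
         "segmentation": [[100, 200, 150, 200, 150, 260, 100, 260]],
         "bbox": [100, 200, 50, 60]},
    ],
}

def annotations_per_row(coco_dict):
    """Count annotations per crop-row class (Row 1 .. Row 7)."""
    names = {c["id"]: c["name"] for c in coco_dict["categories"]}
    counts = {name: 0 for name in names.values()}
    for ann in coco_dict["annotations"]:
        counts[names[ann["category_id"]]] += 1
    return counts

counts = annotations_per_row(coco)
print(counts["Row 2"])  # 1
```

For the real files, the same traversal applies after loading the JSON with `json.load(open(path))`.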
The segmentation masks derived from the automated annotations are provided as Portable Network Graphics (PNG) images for three distinct scenarios: aerial images, individual crop rows, and orthomosaics. These automated annotations enable growth monitoring across the crop phenology through evaluation of the binary masks of individually identified crop rows captured on different dates. The code used for these processes is available to ensure transparency and reproducibility. Beyond the annotation information itself, the dataset can also assist in the refinement of various machine learning models.
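A minimal sketch of the growth evaluation, assuming the per-row PNG masks have been loaded as 0/1 arrays (toy masks stand in here for the real files, which one would read with an imaging library such as Pillow):

```python
# Toy binary masks standing in for two dates of the dataset's per-row
# PNG masks; 1 = crop pixel, 0 = background. Values are illustrative.
mask_oct_08 = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
mask_oct_21 = [
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]

def canopy_area(mask):
    """Number of crop pixels in a binary mask."""
    return sum(sum(row) for row in mask)

def growth_ratio(earlier, later):
    """Relative canopy growth between two dates for the same crop row."""
    return canopy_area(later) / canopy_area(earlier)

print(canopy_area(mask_oct_08))                 # 4
print(canopy_area(mask_oct_21))                 # 9
print(growth_ratio(mask_oct_08, mask_oct_21))   # 2.25
```

Pixel counts can be converted to physical canopy area using the ground sampling distance of the flight, which depends on the flying altitude reported below.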


Steps to reproduce

Image acquisition was carried out in field conditions on each date, usually around noon (12:00 to 12:25). The images were captured at nadir with a DJI Phantom 4 Pro Obsidian flying a linear grid path, and orthomosaics were generated from the raw images of each date. The flying altitude varied between 4.275 m and 4.749 m due to wind turbulence. Manual labelling was performed with VIA annotator version 1.0.6, and automated annotation with a combined framework of Grounding DINO and Segment Anything Model. The scripts that extract binary masks from the manual and automatic annotations, for entire images, crop rows, and specific crop specimens, were written in Python.
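As a rough illustration of the mask-extraction step, the sketch below parses a minimal PASCAL VOC annotation (the object name, image size, and coordinates are invented for the example) and rasterizes its bounding box into a binary mask:

```python
import xml.etree.ElementTree as ET

# Minimal PASCAL VOC annotation standing in for the automated labels;
# object name, image size, and box coordinates are illustrative.
voc_xml = """
<annotation>
  <size><width>10</width><height>8</height><depth>3</depth></size>
  <object>
    <name>Row 3</name>
    <bndbox><xmin>2</xmin><ymin>1</ymin><xmax>5</xmax><ymax>4</ymax></bndbox>
  </object>
</annotation>
"""

def voc_to_mask(xml_text):
    """Rasterize VOC bounding boxes into a 0/1 mask (list of lists)."""
    root = ET.fromstring(xml_text)
    w = int(root.find("size/width").text)
    h = int(root.find("size/height").text)
    mask = [[0] * w for _ in range(h)]
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
        xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
        for y in range(ymin, ymax):
            for x in range(xmin, xmax):
                mask[y][x] = 1
    return mask

mask = voc_to_mask(voc_xml)
print(sum(map(sum, mask)))  # 9 crop pixels from the 3x3 box
```

The dataset's own scripts work on full-resolution images and polygon masks; this example only shows the bounding-box case for brevity.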


Università degli Studi di Napoli Federico II, Dipartimento di Agraria; Università degli Studi della Campania Luigi Vanvitelli, Dipartimento di Ingegneria; Università degli Studi della Campania Luigi Vanvitelli, Dipartimento di Scienze e Tecnologie Ambientali, Biologiche e Farmaceutiche


Computer Vision, Annotation, Image Segmentation, Object Detection, Automated Segmentation, Pattern Recognition, Crop Management, Precision Agriculture, Deep Learning