UAV-based multispectral images of spring wheat
The experiments were conducted at an agricultural experimental station located in southern Brazil, from May to October 2018. The study area contains several 2.5 m x 1 m rectangular parcels, referred to as plots, sown with two Brazilian wheat varieties. The genotypes used were TBIO Toruk and BRS Parrudo (48 Toruk plots and 40 Parrudo plots). Different nitrogen (N) rates were applied to the plots to generate crop growth variability and to evaluate the response of biomass and grain yield to N availability; we refer to this as spatial variability. The database consists of images captured by an Unmanned Aerial Vehicle (UAV) and manual biomass measurements, obtained through two different processes.

In the first process, biomass was collected to build the ground truth. This step was performed manually and destructively. Shoot dry biomass was determined at three growth stages: six fully expanded leaves (referred to herein as V6), three nodes, and flowering, by collecting the plants from a 0.27 m² area in each plot. Sampling at multiple stages creates temporal variability. The collected plants were oven-dried at 65 °C until constant weight and then weighed. The value was then extrapolated to kilograms per hectare (kg/ha); this is the BIOMASS value in the CSV file.

In the second process, images were acquired at a height of 50 m above ground using a camera coupled to a DJI Matrice 100 quadcopter, yielding a ground sample distance (GSD) of 0.0465 m. The camera mounted on the UAV was a Parrot Sequoia MicaSense (2018) multispectral camera for agricultural applications, which operates independently of the UAV systems. This multispectral camera has four independent monochrome channels (1280 x 960 pixels, 10 bits per pixel).
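The extrapolation from the 0.27 m² sample to kg/ha can be sketched as follows. This is a minimal illustration of the arithmetic described above, not the authors' processing code; the function name and example value are hypothetical.

```python
SAMPLE_AREA_M2 = 0.27        # harvested area per plot (from the text)
M2_PER_HECTARE = 10_000.0    # 1 ha = 10,000 m^2

def biomass_kg_per_ha(dry_mass_g: float) -> float:
    """Extrapolate a plot sample's oven-dry mass (grams) to kg/ha."""
    dry_mass_kg = dry_mass_g / 1000.0
    return dry_mass_kg * (M2_PER_HECTARE / SAMPLE_AREA_M2)

# Example: a 54 g dry sample corresponds to about 2000 kg/ha.
print(biomass_kg_per_ha(54.0))
```

Each gram of dry matter in the sample thus corresponds to roughly 37 kg/ha, which is why small sampling errors are amplified in the per-hectare values.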
A single-grid flight plan was used for image acquisition, a common setup in UAV flight control applications for mapping agricultural terrain Pix4d (2019). The frontal and lateral overlaps between images were set to 80% and 60%, respectively, with image triggering controlled by the camera itself through its internal GNSS. Post-processing of the acquired images included georeferencing and orthomosaic generation.
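The flight geometry above can be cross-checked with the standard pinhole GSD formula. The Sequoia sensor parameters below (3.75 µm pixel pitch, ~3.98 mm focal length for the monochrome bands) are nominal values assumed for illustration; they are not stated in this dataset description.

```python
ALTITUDE_M = 50.0          # flight height above ground (from the text)
PIXEL_PITCH_M = 3.75e-6    # assumed sensor pixel size
FOCAL_LENGTH_M = 3.98e-3   # assumed lens focal length
IMG_W, IMG_H = 1280, 960   # monochrome channel resolution (from the text)

# Ground sample distance: ground size covered by one pixel.
gsd = PIXEL_PITCH_M * ALTITUDE_M / FOCAL_LENGTH_M
print(f"GSD = {gsd:.4f} m")  # close to the reported 0.0465 m

# Ground footprint of one image, and the spacing implied by
# 80% frontal and 60% lateral overlap.
footprint_along = IMG_H * gsd
footprint_across = IMG_W * gsd
trigger_spacing = footprint_along * (1 - 0.80)
line_spacing = footprint_across * (1 - 0.60)
print(f"trigger spacing = {trigger_spacing:.1f} m, line spacing = {line_spacing:.1f} m")
```

Under these assumed optics, the camera triggers roughly every 9 m along track and the flight lines sit roughly 24 m apart, which is consistent with a dense single-grid mapping mission at 50 m altitude.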