Viewpoints analysis for sweet peppers maturity classification

Published: 3 June 2020 | Version 1 | DOI: 10.17632/ttntwwxkfd.1
Ben Harel


‘Photocell’ dataset

97 red sweet peppers (cultivar: Banji; seed company: Efal) and 102 yellow sweet peppers (cultivar: Liri; seed company: Hazera) from maturity classes 1–4 were harvested from a commercial greenhouse in Kmehin, Israel, in January 2019. Each pepper was manually placed inside a photocell to ensure uniform illumination; each of the four photocell sides included three light-emitting diode (LED) spots of 35 W each [46], resulting in a total illumination of 49 lux. Images were acquired with an IDS UI-5250RE RGB color camera at a resolution of 1600 × 1200 pixels, placed 38 cm above the black cell floor. Images of each pepper were acquired from four viewpoints: three from the sides of the pepper, taken in no particular order, and one from the bottom.

‘Robotic’ dataset

69 red and 70 yellow peppers from maturity classes 2, 3, and 4 were harvested from the same commercial greenhouse in Kmehin, Israel, in November 2019. No peppers were collected from class 1, since we assumed that entirely green, immature peppers would not be detected by a harvesting robot. Each pepper was individually attached to a pepper plant at a random orientation in a lab environment without controlled illumination. The peppers were attached to the plant hanging straight down, in a way that avoids occlusion by leaves or stems and prevents overlap between peppers. Images of each pepper were acquired from three side viewpoints at the same height. No bottom-viewpoint images were taken, since this viewpoint is not always feasible for harvesting robots: plant parts often prevent the robot from reaching the pepper from below. The images were acquired with an Intel RealSense D435 RGB-D (color + depth) camera mounted on a Sawyer 7-degree-of-freedom torque-controlled robotic arm from Rethink Robotics. Automatic exposure and white balance were enabled.
The robotic arm made it possible to acquire the three viewpoints from the same poses for all peppers. The resulting dataset, published as part of this paper, contains 417 RGB-D images of the different sides of the peppers, together with ground-truth maturity-class labels for each whole pepper.

‘Orientation’ dataset

As part of the ‘robotic’ dataset collection, 14 red and 14 yellow peppers from maturity classes 2 and 3 (seven from each class) were taken from the peppers harvested for the robotic dataset. Each pepper was placed hanging straight down at a random orientation on a pepper plant in the lab. Images from three viewpoints were acquired three times for each pepper; between acquisitions the pepper was rotated 120° clockwise around the z-axis, so that a different surface of the pepper faced the camera from the initial viewpoint. This collection produced a dataset of 252 RGB images.
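The stated image totals follow directly from the collection protocol (number of peppers × viewpoints × repetitions). A minimal sanity check of this arithmetic, using only the counts given above:

```python
# Sanity-check the image counts stated in the dataset descriptions.

# 'Robotic' dataset: 69 red + 70 yellow peppers, 3 side viewpoints each.
robotic_peppers = 69 + 70
robotic_images = robotic_peppers * 3
assert robotic_images == 417  # matches the 417 RGB-D images reported

# 'Orientation' dataset: 14 red + 14 yellow peppers, 3 viewpoints,
# acquired 3 times (pepper rotated 120 degrees between acquisitions).
orientation_peppers = 14 + 14
orientation_images = orientation_peppers * 3 * 3
assert orientation_images == 252  # matches the 252 RGB images reported
```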



Ben-Gurion University of the Negev