TreePillars: An end-to-end 3D target recognition algorithm for nursery and orchard spray robot

Published: 23 December 2024 | Version 1 | DOI: 10.17632/ymg89yyj4j.1
Contributors:

Description

The dataset used in this study includes 1000 frames of point cloud data featuring scenes of osmanthus and cherry blossom trees. The data were collected with a Velodyne 16-line LiDAR during three distinct acquisition periods, November 2022, May 2023, and April 2024, to capture seasonal variation in tree structure. This schedule was chosen deliberately to ensure a comprehensive and diverse dataset, with each period reflecting a different stage of the trees' growth cycle. The initial collection in November 2022, referenced from Liu et al. (2024b, 2024a), provides a baseline for observing changes in tree characteristics over time.

The trained TreePillars algorithm achieves an mAP of 61.34% with a computational complexity of 7.34 GFLOPs and a parameter count of 4.24 million. Compared to PointPillars, the average precision is increased by 10.94%, while the computational complexity and parameter count are reduced by 4.6% and 12.4%, respectively. TreePillars also recognizes sparse and occluded point-cloud targets more accurately.

Experiments on the intelligent robot show that TreePillars achieves a location relative error for trees of 1.17%, an mAP of 77.44%, and an average single-frame detection time of 19.14 ms. Compared to the improved DBSCAN combined with lightweight PointNet, the location relative error is reduced by 1.88%, the detection results show more precise target bounding boxes, and the detection time is shortened by 76.77%. Relative to the VoteNet algorithm, TreePillars reduces the positioning error by 0.7%, improves mAP by 26.33%, and shortens the detection time by 98.29%, meeting the requirements of real-time target detection.
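For readers who want to inspect the point cloud frames, the snippet below is a minimal sketch of loading one Velodyne frame. It assumes a KITTI-style .bin layout (float32 x, y, z, intensity per point) and a hypothetical file name; the actual file format and directory structure of this dataset may differ, so adjust accordingly.

import numpy as np

def load_frame(path: str) -> np.ndarray:
    """Read one LiDAR frame into an (N, 4) array of x, y, z, intensity (assumed layout)."""
    points = np.fromfile(path, dtype=np.float32)
    return points.reshape(-1, 4)

if __name__ == "__main__":
    frame = load_frame("frames/000001.bin")  # hypothetical path; replace with an actual dataset file
    print(f"points: {frame.shape[0]}")
    print(f"x range: {frame[:, 0].min():.2f} .. {frame[:, 0].max():.2f} m")
    print(f"mean intensity: {frame[:, 3].mean():.3f}")

Printing the point count and coordinate ranges is a quick sanity check that the assumed layout matches the files; if the values look implausible, the per-point field order or count likely differs from this sketch.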

Files

Institutions

Jiangsu University

Categories

Pedestrian, Tree

Funding

National Natural Science Foundation of China

32171908

Licence