Background: Plant architecture can influence crop yield and quality. Manual extraction of architectural traits, however, is time-consuming, tedious, and error-prone. Trait estimation from 3D data can address occlusion issues through the availability of depth information, while deep learning approaches enable learning features without manual feature design. The goal of this study was to develop a data processing workflow that leverages 3D deep learning models and a novel 3D data annotation tool to segment cotton plant parts and derive important architectural traits.

Results: The Point Voxel Convolutional Neural Network (PVCNN), which combines point- and voxel-based representations of 3D data, achieved lower inference time and better segmentation performance than point-based networks. The best mIoU (89.12%) and accuracy (96.19%), with an average inference time of 0.88 s, were achieved by PVCNN compared with PointNet and PointNet++. For the seven architectural traits derived from the segmented parts, an R² value greater than 0.8 and a mean absolute percentage error below 10% were attained.

Conclusion: This 3D deep learning-based plant part segmentation method enables effective and efficient measurement of architectural traits from point clouds, which could be useful for advancing plant breeding programs and characterizing in-season developmental traits. The plant part segmentation code is available at https://github.com/UGA-BSAIL/plant_3d_deep_learning.
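The abstract reports segmentation quality as mIoU and accuracy, and trait-estimation quality as R² and mean absolute percentage error. The sketch below is an illustrative Python/NumPy implementation of two of those standard metrics (mIoU over per-point class labels and MAPE over derived traits); it is not taken from the authors' repository, and the function and variable names are placeholders chosen here.

```python
import numpy as np

def mean_iou(pred_labels, true_labels, num_classes):
    """Mean intersection-over-union across plant-part classes (e.g., from per-point predictions)."""
    ious = []
    for c in range(num_classes):
        pred_c = pred_labels == c
        true_c = true_labels == c
        union = np.logical_or(pred_c, true_c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            intersection = np.logical_and(pred_c, true_c).sum()
            ious.append(intersection / union)
    return float(np.mean(ious))

def mape(estimated, measured):
    """Mean absolute percentage error between estimated and manually measured trait values."""
    estimated = np.asarray(estimated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs((estimated - measured) / measured)) * 100.0)

# Hypothetical usage with toy data:
# pred = np.array([0, 1, 1, 2]); true = np.array([0, 1, 2, 2])
# print(mean_iou(pred, true, num_classes=3), mape([10.2, 4.9], [10.0, 5.0]))
```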
CITATION STYLE
Saeed, F., Sun, S., Rodriguez-Sanchez, J., Snider, J., Liu, T., & Li, C. (2023). Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks. Plant Methods, 19(1). https://doi.org/10.1186/s13007-023-00996-1