Application of Plant Phenotype Extraction Using Virtual Data with Deep Learning

Abstract

Deep learning can enable image-based high-throughput phenotype analysis of plants. However, deep learning methods require large amounts of manually annotated data. For plant phenotyping applications, the available data sets are usually small; generating new data is expensive, and improving model accuracy with limited data is challenging. In this study, an L-system was used to generate virtual image data for training deep learning models. Trained on a combination of virtual and real data, the image segmentation model reached a precision (P), recall (R), and F-score (F) of 0.95, 0.91, and 0.93, respectively; the object detection model reached a mean Average Precision (mAP) of 0.96 and an Intersection over Union (IoU) of 0.92; and the leaf count model reached a coefficient of determination (R²) of 0.94 and a standardized root mean square error score of 0.93. All of these results outperformed training with only real data. We thus demonstrated that virtual data improves the prediction accuracy of deep neural network models, and the findings of this study can provide technical support for high-throughput phenotype analysis.

Citation (APA)
Chen, G., Huang, S., Cao, L., Chen, H., Wang, X., & Lu, Y. (2022). Application of Plant Phenotype Extraction Using Virtual Data with Deep Learning. In Journal of Physics: Conference Series (Vol. 2356). Institute of Physics. https://doi.org/10.1088/1742-6596/2356/1/012039
