3D plant modeling: Localization, mapping and segmentation for plant phenotyping using a single hand-held camera


Abstract

Functional-structural modeling and high-throughput phenomics demand tools for 3D measurements of plants. In this work, structure from motion is employed to estimate the position of a hand-held camera moving around plants and to recover a sparse 3D point cloud sampling the plants' surfaces. Multiple-view stereo is then employed to extend the sparse model to a dense 3D point cloud. The model is automatically segmented by spectral clustering, properly separating the plants' leaves, whose surfaces are estimated by fitting trimmed B-splines to their 3D points. These models are accurate snapshots of the aerial part of the plants at the moment of image acquisition and allow the measurement of different features of the specimen phenotype. Such state-of-the-art computer vision techniques are able to produce accurate 3D models of plants using data from a single freely moving camera, properly handling occlusions and diversity in size and structure for specimens presenting sparse canopies. A data set formed by the input images and the resulting camera poses and 3D point clouds is available, including data for sunflower and soybean specimens.
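The segmentation step described above can be illustrated with a minimal spectral-clustering sketch over a 3D point cloud. This is not the paper's implementation: the Gaussian affinity, the bandwidth `sigma`, the farthest-point initialization, and the simple Lloyd's k-means step in the spectral embedding are all assumptions made here for a self-contained example.

```python
import numpy as np

def spectral_segment(points, k=2, sigma=0.5):
    """Split a 3D point cloud into k clusters (e.g. individual leaves).

    Illustrative only; affinity choice and sigma are assumptions, not
    the parameters used in the paper.
    """
    # Pairwise squared distances and a Gaussian affinity matrix
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    L = np.eye(len(points)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

    # Embed each point with the k eigenvectors of smallest eigenvalue
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]

    # Simple k-means (Lloyd's algorithm) in the spectral embedding,
    # initialized with a farthest-point heuristic (an assumption here)
    centers = np.stack([emb[0],
                        emb[np.argmax(((emb - emb[0]) ** 2).sum(-1))]])
    for _ in range(50):
        labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = emb[labels == j].mean(axis=0)
    return labels
```

On two well-separated synthetic "leaf" blobs, the two smallest-eigenvalue eigenvectors are near-constant on each connected component of the affinity graph, so the k-means step recovers the two groups.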

Citation (APA)

Santos, T. T., Koenigkan, L. V., Barbedo, J. G. A., & Rodrigues, G. C. (2015). 3D plant modeling: Localization, mapping and segmentation for plant phenotyping using a single hand-held camera. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8928, pp. 247–263). Springer Verlag. https://doi.org/10.1007/978-3-319-16220-1_18
