Increasingly, natural history museum collections are being used to generate large-scale morphological datasets to address a range of macroecological and macroevolutionary questions. One challenge to this approach is that large numbers of individuals, either from a single species or from taxonomically broad sets of species, may be necessary to characterize morphology at the relevant spatial, phylogenetic or temporal scales. We present ‘Skelevision’, a method for rapidly handling, photographing and measuring skeletal specimens with a computer vision approach that uses a deep neural network to segment photographs of specimens into individual bones and to identify and measure functional aspects of those bones.

We demonstrate the scale of what is feasible with Skelevision by estimating 11 functional traits from 11 different bones for 12,450 bird skeletal specimens spanning 1,882 species of passerines (~32% of all passerine diversity). We quantify the accuracy of Skelevision estimates by comparing them to handmade measurements for 174 specimens from 115 species across 79 genera that span 59 families. Skelevision is precise, with a mean standard deviation of 0.86 mm for repeated independent measurements of individual bones, and is extremely accurate, with a mean RMSE of 0.89 mm across all traits when compared to handmade measurements. There is minimal phylogenetic signal in the measurement error (mean Pagel's λ across traits = 0.13), and Skelevision estimates are robust to variation in the degree to which specimens remain articulated.

This approach has several important advantages over traditional methods for building large-scale morphological datasets (e.g. measurements from long-term field-based operations or handmade measurements of museum specimens). First, measuring new specimens only requires the collection of photographs, which can then be measured automatically, and effectively instantaneously, with the neural network. This is a significant departure from the time and skill required to measure skeletal specimens by hand. Second, the measurements are repeatable. Third, even as the dataset of photographed specimens expands, the amount of annotation data needed to measure new traits on all of the photographed specimens using the neural network will remain fixed, and new traits can be measured without re-capturing images.
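The precision and accuracy metrics reported above can be illustrated with a short sketch. This is not the authors' code; the values are made up for illustration, and only the two statistics named in the abstract (RMSE against handmade reference measurements, and the mean standard deviation across repeated independent measurements of each bone) are computed:

```python
import numpy as np

# Hypothetical validation data (mm), not from the paper: automated
# (Skelevision-style) estimates vs. handmade reference measurements
# of the same trait on five specimens.
automated = np.array([21.4, 18.9, 25.1, 30.2, 17.6])
handmade = np.array([21.0, 19.3, 24.6, 30.9, 17.1])

# Accuracy: root-mean-square error of automated estimates against
# the handmade reference.
rmse = np.sqrt(np.mean((automated - handmade) ** 2))

# Precision: sample standard deviation across repeated independent
# measurements of each bone (rows = bones, columns = repeats),
# averaged over bones.
repeats = np.array([[21.4, 21.9, 20.8],
                    [18.9, 19.4, 18.5]])
mean_sd = np.std(repeats, axis=1, ddof=1).mean()

print(f"RMSE: {rmse:.2f} mm, mean SD: {mean_sd:.2f} mm")
```

In the paper's validation, these statistics are computed per trait and then averaged, giving the reported mean RMSE of 0.89 mm and mean standard deviation of 0.86 mm.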
Citation
Weeks, B. C., Zhou, Z., O’Brien, B. K., Darling, R., Dean, M., Dias, T., … Fouhey, D. F. (2023). A deep neural network for high-throughput measurement of functional traits on museum skeletal specimens. Methods in Ecology and Evolution, 14(2), 347–359. https://doi.org/10.1111/2041-210X.13864