Crowdsourcing Experiment and Fully Convolutional Neural Networks for Coastal Remote Sensing of Seagrass and Macroalgae

Abstract

Recently, convolutional neural networks and fully convolutional neural networks (FCNs) have been successfully used for monitoring coastal marine ecosystems, in particular vegetation. However, even with recent advances in computational modeling and data acquisition, deep learning models require substantial amounts of good-quality reference data to effectively learn internal representations of input imagery. The classical approach for coastal mapping requires experts to transcribe in situ records and delineate polygons from high-resolution imagery so that FCNs can learn from them. Labeling by a single individual limits the volume of training data, whereas crowdsourcing labels can increase that volume but may compromise label quality and consistency. In this article, we assessed the reliability of crowdsourced labels on a complex multiclass problem domain covering estuarine vegetation and unvegetated sediment. An interobserver variability experiment was conducted to assess the statistical differences in crowdsourced annotations of plant species and sediment. Participants were grouped by discipline and level of expertise, and the statistical differences between groups were evaluated using Cochran's Q-test together with each group's annotation accuracy to determine observation biases. Given the crowdsourced labels, FCNs were trained with majority-vote annotations from each group to check whether observation biases propagated to FCN performance. Two scenarios were examined: first, FCNs trained with transcribed in situ labels were compared directly with FCNs trained with crowdsourced labels from each group; then, transcribed in situ labels were supplemented with crowdsourced labels to investigate the feasibility of training FCNs with crowdsourced labels for coastal mapping applications. We show that annotations sourced from discipline experts (ecologists and geomorphologists) familiar with the study site were more accurate than those from experts with no prior knowledge of the site and from nonexperts, and our results confirm that biases in participant annotation propagated to FCN performance. Furthermore, FCNs trained with a combined dataset of in situ and crowdsourced labels performed better than FCNs trained on the same imagery with in situ labels alone.
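The abstract names two concrete methodological steps: majority-vote fusion of crowdsourced per-pixel annotations and Cochran's Q-test for interobserver differences. The following minimal Python sketch illustrates both steps under stated assumptions; the array shapes, variable names (masks, reference), and helper functions (majority_vote, cochrans_q) are illustrative, not the authors' released code.

import numpy as np
from scipy.stats import chi2


def majority_vote(annotations: np.ndarray) -> np.ndarray:
    """Per-pixel majority vote over annotator label masks.

    annotations: (n_annotators, H, W) array of integer class indices
    (e.g. seagrass, macroalgae, unvegetated sediment). Ties are broken
    in favor of the lowest class index.
    """
    n_classes = int(annotations.max()) + 1
    # Count votes per class at every pixel, then take the winning class.
    votes = np.stack([(annotations == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)


def cochrans_q(correct: np.ndarray) -> tuple[float, float]:
    """Cochran's Q-test on a binary matrix of shape (n_items, k_annotators),
    where correct[i, j] = 1 if annotator j labeled item i correctly.

    Null hypothesis: all annotators share the same success proportion.
    """
    k = correct.shape[1]
    col_totals = correct.sum(axis=0)      # successes per annotator
    row_totals = correct.sum(axis=1)      # successes per item
    grand_total = correct.sum()
    q = (k - 1) * (k * np.sum(col_totals ** 2) - grand_total ** 2) \
        / (k * grand_total - np.sum(row_totals ** 2))
    p_value = chi2.sf(q, df=k - 1)        # Q is approx. chi-square, k-1 dof
    return q, p_value


# Toy example: three annotators label a 4x4 patch with three classes.
rng = np.random.default_rng(0)
masks = rng.integers(0, 3, size=(3, 4, 4))
consensus = majority_vote(masks)

# Binary correctness against a reference (in situ) mask, one row per pixel.
reference = rng.integers(0, 3, size=(4, 4))
correct = (masks == reference).reshape(3, -1).T.astype(int)
q, p = cochrans_q(correct)
print(consensus)
print(f"Q = {q:.2f}, p = {p:.3f}")

Here Cochran's Q operates on a binary correctness matrix (one row per pixel, one column per annotator), matching the test's usual formulation for k related dichotomous samples; a significant Q suggests the annotators, or annotator groups, differ in labeling accuracy.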

Citation (APA)

Hobley, B., Mackiewicz, M., Bremner, J., Dolphin, T., & Arosio, R. (2023). Crowdsourcing Experiment and Fully Convolutional Neural Networks for Coastal Remote Sensing of Seagrass and Macroalgae. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16, 8734–8746. https://doi.org/10.1109/JSTARS.2023.3312820
