Crowdsourcing satellite imagery analysis: Study of parallel and iterative models

Abstract

In this paper we investigate how a crowdsourcing approach, i.e. the involvement of non-experts, could support the efforts of experts in analysing satellite imagery, e.g. geo-referencing objects. An underlying challenge in crowdsourcing, and especially in volunteered geographic information (VGI), is the strategy used to allocate volunteers so as to optimize a set of criteria, in particular data quality. We study two main organizational strategies: the parallel and iterative models. In the parallel model, a set of volunteers independently performs the same task and an aggregation function generates a collective output. In the iterative model, a chain of volunteers successively improves the work of previous workers. We first study their qualitative differences. We then introduce the use of the Mechanical Turk service as a simulator for VGI to benchmark both models. We ask volunteers to identify buildings on three maps and investigate the relationship between the number of untrained volunteers and the accuracy and consistency of the result. For the parallel model we propose a new clustering algorithm, the democratic clustering algorithm (DCA), which takes spatial and democratic constraints into account to form clusters. While both strategies are sensitive to their parameters and implementations, we find that the parallel model tends to reduce type I errors (fewer false identifications) by retaining only consensual results, whereas the iterative model tends to reduce type II errors (better completeness) and outperforms the parallel model on difficult or complex areas thanks to knowledge accumulation. In terms of consistency, however, the parallel model is better than the iterative one. Second, Linus's law, studied for OpenStreetMap [7] (an iterative model), is of limited validity for the parallel model: beyond a given threshold, adding more volunteers does not change the consensual output. As a side analysis, we also investigate the use of spatial inter-agreement as an indicator of the intrinsic difficulty of analysing an area. © 2012 Springer-Verlag.
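To make the parallel model's aggregation step concrete, the sketch below (not the authors' code) shows one way to combine independent building annotations: points are grouped by a spatial radius (spatial constraint) and a group is kept only if enough distinct volunteers contributed to it (democratic constraint). The greedy clustering, the `radius` and `min_support` parameters, and the `aggregate_parallel` function name are illustrative assumptions; the paper's DCA is not specified in the abstract.

```python
from math import hypot

def aggregate_parallel(annotations, radius=10.0, min_support=0.5):
    """annotations: {volunteer_id: [(x, y), ...]} -> list of consensus points.

    Hypothetical sketch of a parallel-model aggregation function: cluster
    volunteer marks spatially, then keep only consensual clusters.
    """
    # Flatten all points, remembering which volunteer placed each one.
    points = [(x, y, vid) for vid, pts in annotations.items() for (x, y) in pts]
    n_volunteers = len(annotations)
    clusters = []  # each cluster: list of (x, y, volunteer_id)

    # Greedy spatial clustering: attach each point to the first cluster whose
    # centroid lies within `radius`, otherwise start a new cluster.
    for x, y, vid in points:
        for cluster in clusters:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            if hypot(x - cx, y - cy) <= radius:
                cluster.append((x, y, vid))
                break
        else:
            clusters.append([(x, y, vid)])

    # Democratic constraint: keep a cluster only if a sufficient fraction of
    # volunteers marked it; report its centroid as the consensual location.
    consensus = []
    for cluster in clusters:
        support = len({vid for _, _, vid in cluster}) / n_volunteers
        if support >= min_support:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            consensus.append((cx, cy))
    return consensus

if __name__ == "__main__":
    # Three volunteers mark buildings; only the location near (100, 200)
    # reaches majority support and survives the aggregation.
    marks = {
        "v1": [(100.0, 200.0), (400.0, 50.0)],
        "v2": [(103.0, 198.0)],
        "v3": [(99.0, 202.0), (700.0, 300.0)],
    }
    print(aggregate_parallel(marks))
```

Filtering by consensus in this way illustrates why the parallel model reduces type I errors (isolated false identifications are dropped) but not type II errors: a hard-to-spot building missed by most volunteers never reaches the support threshold, which is where the iterative model's knowledge accumulation helps.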

CITATION STYLE: APA

Maisonneuve, N., & Chopard, B. (2012). Crowdsourcing satellite imagery analysis: Study of parallel and iterative models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7478 LNCS, pp. 116–131). https://doi.org/10.1007/978-3-642-33024-7_9
