As computer vision datasets grow larger, the community increasingly relies on crowdsourced annotations to train and test its algorithms. Because the capabilities of online annotators are heterogeneous and unpredictable, various strategies have been proposed to "clean" crowdsourced annotations. However, these strategies typically involve collecting additional annotations, sometimes of a different type (e.g., a grading task), rather than computationally assessing the annotation or the image content. In this paper we propose and evaluate several strategies for automatically estimating the quality of a spatial object annotation. We show that combining multiple image-based annotation assessment strategies significantly outperforms simple baselines, such as the one used by LabelMe.
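The abstract does not spell out the individual cues, so the sketch below is only a minimal illustration of the general idea of scoring a polygon annotation from image evidence. It assumes two hypothetical cues, agreement between the annotation boundary and Canny edges and a simple polygon-complexity term, combined with arbitrary weights; none of the function names, cues, or weights come from the paper.

```python
"""Illustrative sketch only: scores a crowdsourced polygon annotation from
image evidence. The specific cues and the fixed-weight combination are
assumptions for demonstration, not the method described in the paper."""
import numpy as np
from scipy.ndimage import binary_dilation
from skimage import draw, feature


def edge_agreement_score(image_gray, poly_rows, poly_cols):
    """Fraction of annotation-boundary pixels that lie near an image edge."""
    edges = feature.canny(image_gray, sigma=2.0)  # binary edge map
    # Dilate the edge map slightly so "near an edge" tolerates small offsets.
    edges = binary_dilation(edges, iterations=2)
    rr, cc = draw.polygon_perimeter(
        poly_rows, poly_cols, shape=image_gray.shape, clip=True
    )
    return float(edges[rr, cc].mean())


def annotation_quality(image_gray, poly_rows, poly_cols, weights=(0.7, 0.3)):
    """Combine cue scores into a single quality estimate in [0, 1]."""
    s_edge = edge_agreement_score(image_gray, poly_rows, poly_cols)
    # Hypothetical second cue: penalize degenerate, very-few-vertex polygons.
    s_complexity = min(len(poly_rows) / 20.0, 1.0)
    w_edge, w_cplx = weights
    return w_edge * s_edge + w_cplx * s_complexity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((200, 200))          # stand-in grayscale image
    rows = np.array([50, 50, 150, 150])   # square annotation, demo only
    cols = np.array([50, 150, 150, 50])
    print(f"estimated quality: {annotation_quality(img, rows, cols):.3f}")
```

In practice the cue weights or combination rule would be chosen by validation against trusted annotations rather than fixed by hand; the hand-picked weights above are only placeholders.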
Vittayakorn, S., & Hays, J. (2011). Quality assessment for crowdsourced object annotations. In BMVC 2011 - Proceedings of the British Machine Vision Conference 2011. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.25.109