Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

Abstract

We present a pipeline for the visual localization and classification of agricultural pest insects that combines a saliency map with deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing the targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then used for self-learning of local image features, which were, in turn, used for classification by the DCNN. DCNN learning optimized the critical parameters, including the size, number, and convolutional stride of the local receptive fields, the dropout ratio, and the final loss function. To demonstrate the practical utility of the DCNN, we explored different architectures by shrinking depth and width, and found effective smaller sizes that can serve as alternatives in practical applications. On the test set of paddy field images, our architecture achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
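As a rough, hypothetical sketch of the two-stage pipeline the abstract describes (saliency-based localization followed by DCNN classification), the Python code below pairs an off-the-shelf saliency detector with a small convolutional classifier. It is not the authors' implementation: the spectral-residual detector from opencv-contrib stands in for their global contrast region-based saliency method, and the network depth, kernel sizes, strides, dropout ratio, class count, and all function and file names are illustrative assumptions.

# Hypothetical sketch: saliency-based localization of pest candidates,
# followed by classification with a small DCNN.
# Requires opencv-contrib-python (for cv2.saliency) and PyTorch.

import cv2
import numpy as np
import torch
import torch.nn as nn

def localize_pests(image_bgr, patch_size=128, thresh=0.6):
    """Compute a saliency map, threshold it, and crop square patches
    around salient connected components (candidate pest regions)."""
    # Spectral-residual saliency as a stand-in for the paper's
    # global contrast region-based method.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image_bgr)
    sal_map = (sal_map * 255).astype(np.uint8)
    _, mask = cv2.threshold(sal_map, int(thresh * 255), 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    patches = []
    h, w = image_bgr.shape[:2]
    for i in range(1, n):  # label 0 is the background
        cx, cy = centroids[i]
        half = max(stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]) // 2
        x0, y0 = max(int(cx - half), 0), max(int(cy - half), 0)
        x1, y1 = min(int(cx + half), w), min(int(cy + half), h)
        crop = image_bgr[y0:y1, x0:x1]
        if crop.size:  # resize every candidate to a fixed input size
            patches.append(cv2.resize(crop, (patch_size, patch_size)))
    return patches

class PestCNN(nn.Module):
    """Small DCNN classifier; kernel sizes, strides, the 0.5 dropout ratio,
    and the 12-class output are placeholder choices, not the paper's values."""
    def __init__(self, num_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example usage: crop candidate regions, then score them with the classifier.
# image = cv2.imread("paddy_field.jpg")                      # hypothetical file
# patches = localize_pests(image)
# model = PestCNN()
# batch = torch.stack([torch.from_numpy(p).permute(2, 0, 1).float() / 255
#                      for p in patches])
# logits = model(batch)  # cross-entropy loss would be used during training

In the paper's setting, the cropped and resized patches would populate the Pest ID database used to train the DCNN, rather than being classified directly as shown in the usage comments.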

Citation (APA)
Liu, Z., Gao, J., Yang, G., Zhang, H., & He, Y. (2016). Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network. Scientific Reports, 6. https://doi.org/10.1038/srep20410
