A hybrid vegetation detection framework: Integrating vegetation indices and convolutional neural network

Abstract

Vegetation inspection and monitoring are time-consuming tasks. In the era of Industrial Revolution 4.0 (IR 4.0), unmanned aerial vehicles (UAV), commercially known as drones, are increasingly being adopted for vegetation inspection and monitoring activities. However, most off-the-shelf drones are the least favoured by vegetation maintenance departments for on-site inspection because their cameras offer a limited number of spectral bands, which restricts advanced vegetation analysis. These drones are typically equipped with a standard red, green, and blue (RGB) camera. Additional spectral bands yield more accurate vegetation analysis, but they require more advanced camera hardware, such as a multispectral camera. Vegetation indices (VI) are a technique to maximize sensitivity to vegetation characteristics while minimizing the influence of non-vegetation factors. The emergence of machine learning has gradually influenced existing vegetation analysis techniques, improving detection accuracy. This study explores VI techniques for identifying vegetation objects. The selected VIs are the Visible Atmospherically Resistant Index (VARI), Green Leaf Index (GLI), and Vegetation Index Green (VIgreen). The chosen machine learning technique is You Only Look Once (YOLO), a convolutional neural network (CNN) that performs object detection in real time. The CNN model has a symmetrical structure along the direction of the tensor flow. Several series of data collection were conducted at identified locations to obtain aerial images, and the proposed hybrid methods were tested on the captured images to assess vegetation detection performance. In image analysis, segmentation partitions the targeted pixels for further detection testing. Based on our findings, more than 70% of the vegetation objects in the images were accurately detected, reducing the misdetection issue faced by previous VI techniques. Among the hybrid segmentation methods, the combination of VARI and YOLO performed best, achieving 84% detection accuracy.
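For reference, the three indices named above are standard visible-band formulations computed per pixel from the red (R), green (G), and blue (B) channels: VARI = (G - R) / (G + R - B), GLI = (2G - R - B) / (2G + R + B), and VIgreen = (G - R) / (G + R). The sketch below is a minimal illustration, not the authors' implementation, of how such indices might be computed and thresholded to pre-segment vegetation candidates before a YOLO detector confirms objects; the function names, the epsilon guard, and the zero threshold are illustrative assumptions.

import numpy as np

def visible_band_indices(rgb):
    # rgb: float array of shape (H, W, 3) with channels ordered R, G, B
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9  # assumed guard against division by zero on neutral pixels
    vari = (g - r) / (g + r - b + eps)             # Visible Atmospherically Resistant Index
    gli = (2 * g - r - b) / (2 * g + r + b + eps)  # Green Leaf Index
    vigreen = (g - r) / (g + r + eps)              # Vegetation Index Green
    return vari, gli, vigreen

def vegetation_candidates(rgb, threshold=0.0):
    # Pixels whose VARI exceeds the (assumed) threshold are treated as
    # vegetation candidates; in a hybrid pipeline these regions would then
    # be passed to a YOLO detector for object-level confirmation.
    vari, _, _ = visible_band_indices(rgb)
    return vari > threshold

The threshold value and the choice of VARI as the segmentation index are hypothetical here; the study compares VARI, GLI, and VIgreen and reports the VARI and YOLO combination as the most accurate.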

Citation (APA)

Hashim, W., Eng, L. S., Alkawsi, G., Ismail, R., Alkahtani, A. A., Dzulkifly, S., … Hussain, A. (2021). A hybrid vegetation detection framework: Integrating vegetation indices and convolutional neural network. Symmetry, 13(11). https://doi.org/10.3390/sym13112190
