Because weeds compete directly with crops for moisture, nutrients, space, and sunlight, monitoring and controlling them is essential in agriculture. The most important step in choosing an effective and time-saving weed control method is the detection of weed species. Deep learning approaches have proven effective in smart-agriculture tasks such as plant classification and disease detection.

The performance of deep learning classification models is often influenced by the complexity of the feature-extraction backbone. The limited availability of data in weed classification poses a challenge when increasing the number of backbone parameters: a substantial increase may yield only marginal performance improvements while also leading to overfitting and greater training difficulty.

In this study, we explore how adjusting architecture depth and width affects the performance of deep neural networks for weed classification from unmanned aerial vehicle (UAV) imagery. Specifically, we compare well-known convolutional neural networks of varying complexity, including heavy and light architectures. By investigating the impact of scaling deep layers, we seek to understand how it influences attention mechanisms, enhances the learning of meaningful representations, and ultimately improves performance on weed classification tasks with UAV images.

Data were collected with a high-resolution camera on a UAV flying at low altitude over a winter wheat field. Using a transfer-learning strategy, we trained deep learning models and performed species-level classification for the weed species observed in that field: Lithospermum arvense, Spergula arvensis, Stellaria media, Chenopodium album, and Lamium purpureum.
The results obtained from this study reveal that networks with deeper layers do not effectively learn meaningful representations, thereby hindering the expected performance gain in the context of the specific weed classification task addressed in this study.
Alirezazadeh, P., Schirrmann, M., & Stolzenburg, F. (2024). A comparative analysis of deep learning methods for weed classification of high-resolution UAV images. Journal of Plant Diseases and Protection, 131(1), 227–236. https://doi.org/10.1007/s41348-023-00814-9