Tea Disease Recognition Based on Image Segmentation and Data Augmentation


Abstract

Accurate identification of tea leaf diseases is crucial for intelligent tea cultivation and monitoring. However, the complex environment of tea plantations, with its weather variations and uneven lighting, makes it difficult to build effective disease recognition models from raw field-captured images. To address this, we propose a method that combines two-stage image segmentation with an improved conditional generative adversarial network (IC-GAN). The two-stage segmentation, which integrates graph cuts and a support vector machine (SVM), effectively isolates disease regions from complex backgrounds. The IC-GAN then augments the dataset by generating high-quality synthetic disease images for model training. Finally, an Inception Embedded Pooling Convolutional Neural Network (IDCNN) is developed for disease recognition. Experimental results show that the segmentation method improves recognition accuracy from 53.36% to 75.63%, while the IC-GAN expands the training set with synthetic samples. The IDCNN achieves 97.66% accuracy, 97.36% recall, and a 96.98% F1 score across three types of tea disease. Comparative evaluations on two additional datasets further confirm the method's robustness and accuracy, offering a practical way to reduce tea production losses and improve quality.
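The abstract reports accuracy, recall, and F1 for a three-class disease recognition task. As a minimal illustration of how such multi-class metrics are typically defined (macro-averaged over classes), the sketch below computes them in plain Python; the disease labels and predictions here are hypothetical toy data, not from the paper.

```python
def classification_metrics(y_true, y_pred, classes):
    """Accuracy, macro-averaged recall, and macro-averaged F1 for a multi-class task."""
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    recalls, f1s = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        recalls.append(recall)
        f1s.append(f1)
    return accuracy, sum(recalls) / len(classes), sum(f1s) / len(classes)

# Toy example with three hypothetical disease labels
y_true = ["blight", "blight", "rust", "rust", "spot", "spot"]
y_pred = ["blight", "rust",   "rust", "rust", "spot", "spot"]
acc, rec, f1 = classification_metrics(y_true, y_pred, ["blight", "rust", "spot"])
```

In practice one would use `sklearn.metrics.f1_score(..., average="macro")` and friends rather than hand-rolling these, but the definitions above make explicit what the reported percentages measure.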

Citation (APA)
Li, J., & Liao, C. (2025). Tea Disease Recognition Based on Image Segmentation and Data Augmentation. IEEE Access, 13, 19664–19677. https://doi.org/10.1109/ACCESS.2025.3534024
