Integrated Image Sensor and Deep Learning Network for Fabric Pilling Classification


Abstract

Manufacturers test fabrics for abrasion resistance before they leave the factory, and the fabrics are graded by manual visual inspection to ensure that there are no defects. However, manual visual classification consumes considerable human resources, and prolonged visual inspection often causes occupational eye injuries, reducing overall efficiency. To overcome these problems, we devised an image preprocessing technique and a deep learning network for classifying the pilling level of knitted fabrics. In the first step, fabric images are collected using an image optical sensor, and the fast Fourier transform (FFT) and a Gaussian filter are used for image preprocessing to strengthen the pilling characteristics in the fabric images. In the second step, the pilling characteristics are automatically extracted and classified using a deep learning network. The experimental results show that the average accuracy of the proposed method for pilling level classification is 100%, which is 0.3% and 2.7% higher than those of the deep principal component analysis-based neural network (DPCANN) and the type-2 fuzzy cerebellar model articulation controller (T2FCMAC), respectively, demonstrating the superiority of the proposed model.
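The abstract describes the preprocessing step only at a high level (FFT plus a Gaussian filter to strengthen pilling texture). The sketch below is a minimal, illustrative interpretation of that step, assuming NumPy/OpenCV; the function name, the Gaussian high-pass mask form, and the sigma value are assumptions for illustration, not the authors' implementation.

import numpy as np
import cv2

def preprocess_fabric_image(path, sigma=30):
    """Hypothetical FFT + Gaussian-filter preprocessing sketch.

    Reads a fabric image, moves it to the frequency domain with the FFT,
    attenuates low-frequency illumination and weave structure with a
    Gaussian high-pass mask, and returns the enhanced spatial-domain
    image in which pilling stands out more clearly.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # 2-D FFT, shifted so the zero frequency sits at the image center.
    f = np.fft.fftshift(np.fft.fft2(img))

    # Gaussian high-pass mask: 1 minus a Gaussian low-pass centered on DC.
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    high_pass = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))

    # Filter in the frequency domain and transform back.
    enhanced = np.abs(np.fft.ifft2(np.fft.ifftshift(f * high_pass)))

    # Normalize to 8 bits for the downstream deep learning classifier.
    enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
    return enhanced.astype(np.uint8)

In the proposed pipeline, an image enhanced in this way would then be passed to the deep learning network of the second step for pilling level classification.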

Citation (APA)

Shih, C. H., Lin, C. J., & Lee, C. L. (2022). Integrated Image Sensor and Deep Learning Network for Fabric Pilling Classification. Sensors and Materials, 34(1), 93–104. https://doi.org/10.18494/SAM3548
