A large category of textile images is characterized by a high repetition rate of local features and complex background information. These images exhibit large intra-class differences and small inter-class differences, which makes classification-based training infeasible and prevents existing methods from retrieving textile images accurately. To improve retrieval accuracy, this paper defines the multiple repeated local fine-grained features in a textile image as textile 'feature components', extracts multiple 'feature components' from each image, and fuses them to form a textile 'fingerprint'. We propose a retrieval method that uses a pre-trained Mask R-CNN model to extract the 'feature components' of a textile image, extracts deep features from these components with a convolutional neural network, and fuses the extracted deep features into the textile 'fingerprint'. The resulting 'fingerprint' effectively suppresses interference from large background areas and from the many locally repeated features in a textile image, improving retrieval efficiency. A series of comparative experiments on textile image datasets with repeated features shows that the proposed method is broadly effective.
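The fusion and retrieval steps described above can be sketched in a minimal form. This is an illustrative assumption, not the paper's exact method: the Mask R-CNN and CNN stages are stubbed out, and each 'feature component' is represented by an already-extracted deep feature vector. The fusion scheme shown (L2-normalize each component feature, average, re-normalize) and the cosine-similarity ranking are common choices assumed here for illustration.

```python
import numpy as np

def fuse_fingerprint(component_features):
    """Fuse per-component deep features into one 'fingerprint' vector.

    component_features: iterable of feature vectors, one per extracted
    'feature component' (in the paper these would come from a CNN applied
    to Mask R-CNN detections; here they are given directly).
    """
    feats = np.asarray(component_features, dtype=float)
    # L2-normalize each component feature so no single component dominates.
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    feats = feats / np.maximum(norms, 1e-12)
    # Average the normalized features and re-normalize the result.
    fp = feats.mean(axis=0)
    return fp / max(np.linalg.norm(fp), 1e-12)

def cosine_retrieve(query_fp, gallery_fps):
    """Rank gallery fingerprints by cosine similarity to the query.

    gallery_fps: 2-D array of unit-norm fingerprints, one per image.
    Returns gallery indices sorted from most to least similar.
    """
    sims = gallery_fps @ query_fp
    return np.argsort(-sims)
```

Because the fingerprint is built only from detected component regions, background pixels never enter the representation, which is the mechanism by which the method suppresses background interference.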
Citation:
Tan, S., Dong, L., Zhang, M., & Zhang, Y. (2023). Fine-Grained Retrieval Method of Textile Image. IEEE Access, 11, 70525–70533. https://doi.org/10.1109/ACCESS.2023.3287630