Image-Based Recommendation Engine Using VGG Model

Abstract

Image retrieval is broadly classified into two approaches: description-based image retrieval (DBIR) and content-based image retrieval (CBIR). DBIR involves annotating images and then applying text retrieval techniques to retrieve them. This requires a vast amount of labor and is expensive for large image databases. The semantic gap is another major issue in DBIR: different people may perceive the contents of an image differently and hence annotate it differently; e.g., a bright, shiny silver car to one person may be a boring, dull gray car to another. Thus, no objective search keywords can be defined. CBIR, in contrast, aims at indexing and retrieving images based on their visual contents, such as color, texture, and shape features. The performance of a content-based image retrieval system depends on how the feature vector is represented and which similarity metric is chosen. The disadvantage of visual content descriptors is that they are very domain specific, e.g., color correlogram and color histogram for color feature extraction, Tamura and GLCM for texture feature extraction, local binary patterns and HOG for face recognition, SIFT and SURF for object detection, etc. Hence, a convolutional neural network, which is not domain specific, would be a better approach.
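To make the CBIR pipeline concrete, the sketch below extracts VGG16 features and ranks catalog images by cosine similarity to a query image. It is a minimal sketch, assuming the Keras VGG16 implementation with ImageNet weights and global average pooling; the pooled convolutional features, the cosine metric, and the file paths are illustrative assumptions, not details taken from the paper.

```python
# Minimal CBIR sketch: VGG16 features + cosine similarity (assumptions noted above).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Use the convolutional base with global average pooling as a generic,
# domain-agnostic feature extractor (output: a 512-dimensional vector).
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(img_path):
    """Load an image, preprocess it for VGG16, and return its feature vector."""
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x, verbose=0)[0]

def cosine_similarity(a, b):
    """Similarity metric between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical catalog: rank images by similarity to a query image.
catalog = ["img1.jpg", "img2.jpg", "img3.jpg"]  # placeholder paths
features = {p: extract_features(p) for p in catalog}
query = extract_features("query.jpg")           # placeholder query image
ranked = sorted(catalog,
                key=lambda p: cosine_similarity(query, features[p]),
                reverse=True)
print(ranked)  # most similar catalog images first
```

Because the VGG features are learned rather than hand-designed, the same extractor can serve color, texture, and shape queries without swapping in a domain-specific descriptor; only the similarity metric and the indexed catalog change.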

Citation (APA)

Vasudevan, S., Chauhan, N., Sarobin, V., & Geetha, S. (2021). Image-Based Recommendation Engine Using VGG Model. In Lecture Notes in Electrical Engineering (Vol. 668, pp. 257–265). Springer. https://doi.org/10.1007/978-981-15-5341-7_21
