When the organization of images in a database is well described by pre-defined semantic categories, it can be exploited for category-specific searching. In this work, we investigate a supervised learning approach that associates low-dimensional image features with their high-level semantic categories and utilizes category-specific feature distribution information in statistical similarity matching. A multi-class support vector classifier (SVC) is trained to predict the categories of query and database images. Based on this online prediction, pre-computed category-specific first- and second-order statistical parameters are used in similarity measure functions, on the assumption that the feature distributions are multivariate Gaussian. A high-dimensional feature vector would increase the computational complexity and the logical database size and, moreover, introduce inaccuracy in parameter estimation. We therefore also propose a fusion-based (early, late, and no fusion) principal component analysis (PCA) to reduce the dimensionality under both independence and dependence assumptions on the image features. Experimental results on the reduced feature dimensions are reported for a generic image database with ground-truth (known) categories. The performance of two statistical distance measures (Bhattacharyya and Mahalanobis) is evaluated and compared with the commonly used Euclidean distance, demonstrating the effectiveness of the proposed technique. © Springer-Verlag Berlin Heidelberg 2005.
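The two statistical distance measures named in the abstract have standard closed forms under the multivariate Gaussian assumption. The sketch below is an illustrative implementation of those textbook formulas, not code from the paper; the function and variable names are our own, and the per-category means and covariances would in practice come from the pre-computed database statistics the authors describe.

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance from feature vector x to a category
    modeled as a Gaussian with mean mu and covariance cov."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians
    N(mu1, cov1) and N(mu2, cov2)."""
    cov = (cov1 + cov2) / 2.0          # averaged covariance
    d = mu1 - mu2
    term1 = 0.125 * d @ np.linalg.inv(cov) @ d
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return float(term1 + term2)
```

With identity covariances the Mahalanobis distance reduces to the Euclidean distance, which is the degenerate case the paper compares against.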
CITATION
Rahman, M. M., Bhattacharya, P., & Desai, B. C. (2005). Similarity searching in image retrieval with statistical distance measures and supervised learning. In Lecture Notes in Computer Science (Vol. 3686, pp. 315–324). Springer-Verlag. https://doi.org/10.1007/11551188_34