Abstract
Similarity analysis is a powerful tool for shape matching/retrieval and other computer vision tasks. In the literature, various shape (dis)similarity measures have been introduced, with different measures specializing in different aspects of the data. In this paper, we consider the problem of improving retrieval accuracy by systematically fusing several different measures. To this end, we propose the locally constrained mixed-diffusion method, which partly fuses the given measures into one and propagates on the resulting locally dense data space. Furthermore, we advocate the use of self-adaptive neighborhoods to automatically determine the appropriate neighborhood size in the diffusion process, with which the retrieval performance is comparable to that of the best manually tuned kNNs. The superiority of our approach is empirically demonstrated on both shape and image datasets. Our approach achieves a score of 100% in the bull's eye test on the MPEG-7 shape dataset, which is the best reported result to date.
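The core idea behind diffusion-based retrieval described above can be illustrated with a minimal sketch: starting from a pairwise affinity matrix, restrict propagation to each point's k nearest neighbors (the local constraint) and iteratively diffuse affinities through that sparse transition matrix. This is a generic illustration of locally constrained diffusion, not the paper's exact algorithm; the function name and parameters here are hypothetical.

```python
import numpy as np

def locally_constrained_diffusion(W, k=5, n_iter=10):
    """Diffuse a symmetric pairwise affinity matrix W on a kNN-restricted graph.

    Generic sketch: affinities are propagated only through each point's
    k nearest neighbors, keeping the diffusion on the locally dense part
    of the data manifold rather than across spurious global links.
    """
    n = W.shape[0]
    # Local constraint: keep only each row's k largest affinities.
    P = np.zeros_like(W, dtype=float)
    for i in range(n):
        nn = np.argsort(W[i])[-k:]          # indices of the k nearest neighbors
        P[i, nn] = W[i, nn]
    P /= P.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    A = W.astype(float).copy()
    for _ in range(n_iter):
        A = P @ A @ P.T                     # propagate affinities locally
    return A
```

Fusing several measures would amount to combining their affinity matrices (e.g. averaging them) before, or interleaving them during, the diffusion iterations; the self-adaptive neighborhoods advocated in the paper would replace the fixed k with a per-point size.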
Citation
Luo, L., Shen, C., Zhang, C., & Van Den Hengel, A. (2013). Shape similarity analysis by self-tuning locally constrained mixed-diffusion. IEEE Transactions on Multimedia, 15(5), 1174–1183. https://doi.org/10.1109/TMM.2013.2242450