Ontology alignment is the process in which two ontologies, usually describing similar domains, are 'aligned', i.e. a set of correspondences between their entities with respect to semantic equivalence is determined. Several methods have been proposed in the literature to identify these correspondences, most commonly relying on string-, lexical-, structure- and semantic-based features, for which numerous approaches have been developed. However, the use of visual features for determining entity similarity has not been investigated. Nowadays, the existence of several resources that map lexical concepts onto images makes it possible to exploit visual features for this purpose. In this paper, a novel method defining a visual similarity metric for ontology matching is presented. Each ontological entity is associated with sets of images, and state-of-the-art visual feature extraction, clustering and indexing techniques are employed to compute the visual similarity between entities. An adaptation of a WordNet-based matching algorithm that exploits this visual similarity is also proposed. The visual similarity approach is compared with standard metrics and demonstrates promising results.
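The core idea of comparing entities through their associated image sets can be illustrated with a minimal sketch. This is not the paper's actual metric: it assumes each entity has already been mapped to a set of image feature vectors (e.g. descriptors produced by some visual feature extractor), and scores a pair of entities by the average best-match cosine similarity between their two sets. All names and the aggregation scheme are illustrative assumptions, not taken from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors; 0.0 if either is a zero vector.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def visual_similarity(feats_a, feats_b):
    """Hypothetical set-to-set similarity between two ontological entities,
    each represented by a set of image feature vectors.

    Each vector in one set is matched to its most similar vector in the
    other set; the per-vector best scores are then averaged symmetrically.
    """
    if not feats_a or not feats_b:
        return 0.0
    best_a = [max(cosine(u, v) for v in feats_b) for u in feats_a]
    best_b = [max(cosine(u, v) for u in feats_a) for v in feats_b]
    return (sum(best_a) + sum(best_b)) / (len(feats_a) + len(feats_b))
```

With identical image sets the score is 1.0, and with visually unrelated (orthogonal) features it drops to 0.0, so the value can be plugged into a matcher in the same place a string- or WordNet-based similarity would go.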
CITATION STYLE
Doulaverakis, C., Vrochidis, S., & Kompatsiaris, I. (2016). A visual similarity metric for ontology alignment. In Communications in Computer and Information Science (Vol. 631, pp. 175–190). Springer Verlag. https://doi.org/10.1007/978-3-319-52758-1_11