Set Semantic Similarity for Image Prosthetic Knowledge Exchange

Abstract

Concept information can be expressed by text, images, or general objects whose semantic meaning is clear to a human in a specific cultural context. For a computer, text with its associated semantics (e.g., metadata, comments, captions), when available, can convey the concept underlying an object more precisely than images or general objects described only by low-level features (e.g., color distribution, shapes, sound peaks). Among semantic measures, web-based proximity measures (e.g., confidence, PMING, NGD, Jaccard, Dice) are particularly useful for concept evaluation, as they exploit statistical data returned by search engines for the terms and expressions found in texts associated with the object. Where Artificial Intelligence can support impaired individuals, e.g., people with disabilities related to vision or hearing, understanding the concept underlying an object can be critical for an intelligent artificial assistant. In this work we propose to use the set semantic distance, already employed in the literature for semantic similarity measurement of web objects, as a tool for artificial assistants to support knowledge extraction; in other words, as prosthetic knowledge.
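As a rough illustration of the corpus-based measures named in the abstract, the sketch below computes confidence, Jaccard, Dice, and NGD from search-engine result counts. The hit counts and the corpus size N are mocked assumptions; in practice they would come from a web search API. PMING and the paper's set semantic distance (which aggregates such pairwise values over sets of terms) are not reproduced here, since their exact formulations are specific to the authors' work.

```python
import math

# Assumed total number of indexed pages (a mocked stand-in for the
# search engine's corpus size used by NGD).
N = 1e10


def confidence(f_x, f_xy):
    """Confidence of y given x: co-occurrence count over the count of x."""
    return f_xy / f_x


def jaccard(f_x, f_y, f_xy):
    """Jaccard coefficient on search-engine result counts."""
    return f_xy / (f_x + f_y - f_xy)


def dice(f_x, f_y, f_xy):
    """Dice coefficient on search-engine result counts."""
    return 2 * f_xy / (f_x + f_y)


def ngd(f_x, f_y, f_xy, n=N):
    """Normalized Google Distance (lower values mean more related terms)."""
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))


if __name__ == "__main__":
    # Mocked counts: pages containing "dog", "cat", and both terms together.
    f_dog, f_cat, f_both = 2.1e8, 1.8e8, 4.5e7
    print("confidence:", confidence(f_dog, f_both))
    print("jaccard:   ", jaccard(f_dog, f_cat, f_both))
    print("dice:      ", dice(f_dog, f_cat, f_both))
    print("NGD:       ", ngd(f_dog, f_cat, f_both))
```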

Citation (APA)

Franzoni, V., Li, Y., & Milani, A. (2019). Set Semantic Similarity for Image Prosthetic Knowledge Exchange. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11624 LNCS, pp. 513–525). Springer Verlag. https://doi.org/10.1007/978-3-030-24311-1_37
