The diverse and distributed nature of the information published on the World Wide Web has made it difficult to collate and track information related to specific topics. Whereas most existing work on web information fusion has focused on multi-document summarization, this paper presents a novel approach for discovering associations between images and text segments, which can subsequently be used to support cross-media web content summarization. Specifically, we employ a similarity-based multilingual retrieval model and adopt a vague transformation technique for measuring the information similarity between visual features and textual features. Experimental results on a terrorism-domain document set suggest that combining visual and textual features provides a promising approach to image and text fusion. © Springer-Verlag Berlin Heidelberg 2006.
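The abstract only names the vague transformation technique, so the following Python sketch illustrates one common formulation from the cross-lingual retrieval literature that such approaches adapt: visual features are mapped into the textual term space through an association matrix and then compared by cosine similarity. The function name, the matrix M, and the feature dimensions are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def cross_media_similarity(text_vec, image_vec, M):
        """Similarity between a text segment and an image via a vague
        transformation: the image's visual-feature vector is mapped into
        the textual term space through the (hypothetical) association
        matrix M, then compared with the text vector by cosine similarity."""
        mapped = M @ image_vec  # project visual features onto textual terms
        denom = np.linalg.norm(text_vec) * np.linalg.norm(mapped)
        return float(text_vec @ mapped / denom) if denom else 0.0

    # Illustrative usage with random data (all sizes are assumptions):
    rng = np.random.default_rng(0)
    n_terms, n_visual = 500, 64                  # vocabulary / visual feature sizes
    M = rng.random((n_terms, n_visual))          # term-to-visual association weights
    text_vec = rng.random(n_terms)               # e.g. tf-idf weights of a text segment
    image_vec = rng.random(n_visual)             # e.g. colour/texture feature vector
    print(cross_media_similarity(text_vec, image_vec, M))

In this formulation, the association matrix plays the role that a bilingual similarity thesaurus plays in multilingual retrieval, which is consistent with the abstract's framing of image-text matching as a similarity-based multilingual retrieval problem.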
CITATION STYLE
Jiang, T., & Tan, A. H. (2006). Discovering image-text associations for cross-media web information fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4213 LNAI, pp. 561–568). Springer Verlag. https://doi.org/10.1007/11871637_56