Multiplying concept sources for graph modeling

Abstract

The main idea in this paper is to incorporate medical knowledge into the language modeling approach to information retrieval (IR). Our model makes use of the textual part of the ImageCLEFmed corpus and of the medical knowledge found in the Unified Medical Language System (UMLS) knowledge sources. UMLS allows us to create a conceptual representation of each sentence in the corpus, and from these representations we build a graph model for each document. As in the standard language modeling approach, we evaluate the probability that a document graph model generates the query graph. Graphs are created from the medical texts and queries, and are built for different languages and with different methods. After developing the graph model, we present our experiments, which mix different concept sources (i.e. languages and methods) when matching the query and text graphs. Results show that using a language model over concepts yields good IR performance, and that multiplying the concept sources further improves the results. Lastly, using relations between concepts (provided by the graphs under consideration) improves results when only a few conceptual sources are used to analyze the query. © 2008 Springer-Verlag Berlin Heidelberg.
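For orientation, a minimal sketch of the standard query-likelihood formulation over concepts, with Jelinek-Mercer smoothing, which the graph model described above builds on (this is the generic formulation, not necessarily the exact scoring function of the paper, which further accounts for relations between concepts and for multiple concept sources):

P(Q_c \mid M_d) = \prod_{c \in Q_c} \Big[ (1 - \lambda)\, P(c \mid M_d) + \lambda\, P(c \mid M_C) \Big]

Here Q_c is the set of concepts detected in the query, P(c | M_d) is the maximum-likelihood estimate of concept c in document d, M_C is the collection-wide concept model, and λ is the smoothing parameter; documents are ranked by this likelihood, and a graph-based extension additionally scores the probability of generating the query's concept relations from the document graph.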

Citation (APA)

Maisonnasse, L., Gaussier, E., & Chevallet, J. P. (2008). Multiplying concept sources for graph modeling. In Lecture Notes in Computer Science (Vol. 5152, pp. 585–592). Springer. https://doi.org/10.1007/978-3-540-85760-0_73
