Learning to harvest information for the semantic web

  • F. Ciravegna
  • S. Chapman
  • A. Dingli
  • Y. Wilks
Readers: 48 (Mendeley users who have this article in their library)
Citations: 40

Abstract

In this paper we describe a methodology for harvesting information from large distributed repositories (e.g. large Web sites) with minimal user intervention. The methodology is based on a combination of information extraction, information integration and machine learning techniques. Learning is seeded by extracting information from structured sources (e.g. databases and digital libraries) or a user-defined lexicon. The retrieved information is then used to partially annotate documents. These annotated documents bootstrap learning for simple Information Extraction (IE) methodologies, which in turn produce further annotations over more documents; these are used to train more complex IE engines, and so on. In this paper we describe the methodology and its implementation in the Armadillo system, compare it with the current state of the art, and describe the details of an implemented application. Finally we draw some conclusions and highlight some challenges and future work. © Springer-Verlag Berlin Heidelberg 2004.
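The bootstrapping loop the abstract describes (seed annotations from a lexicon or database, then let progressively richer IE engines grow the annotation set) can be sketched as follows. This is a minimal illustrative sketch, not Armadillo's actual implementation; all names (`seed_annotate`, `bootstrap`, the toy pattern extractor) are hypothetical.

```python
import re

def seed_annotate(documents, lexicon):
    """Seed step: annotate occurrences of terms already known from a
    structured source (e.g. a database) or a user-defined lexicon."""
    annotations = {}
    for doc_id, text in documents.items():
        found = [term for term in lexicon if term in text]
        if found:
            annotations[doc_id] = sorted(found)
    return annotations

def bootstrap(documents, lexicon, extractors):
    """Run the bootstrapping loop: seed annotations feed simple IE engines,
    whose output annotations feed the next (more complex) engine, and so on.
    `extractors` is ordered from simplest to most complex."""
    annotations = seed_annotate(documents, lexicon)
    for extract in extractors:
        for doc_id, text in documents.items():
            found = extract(text, annotations)
            if found:
                merged = set(annotations.get(doc_id, [])) | set(found)
                annotations[doc_id] = sorted(merged)
    return annotations

def toy_pattern_extractor(text, annotations):
    """A stand-in for a learned 'simple IE engine': extracts names matching
    a contextual pattern generalized from the seed occurrences."""
    return [m.group(1) for m in
            re.finditer(r"([A-Z]\. [A-Z][a-z]+) (?:also )?works at", text)]

docs = {
    "d1": "F. Ciravegna works at the University of Sheffield.",
    "d2": "S. Chapman also works at the University of Sheffield.",
}
result = bootstrap(docs, {"F. Ciravegna"}, [toy_pattern_extractor])
print(result)  # d2 gains an annotation never present in the seed lexicon
```

Note how document `d2` ends up annotated with a name that was not in the seed lexicon: the seed annotations only justify the extraction pattern, which then generalizes to unseen entities. This is the sense in which annotation "bootstraps" learning with minimal user intervention.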


