Crowdsourcing fact extraction from scientific literature

  • Seifert C
  • Granitzer M
  • Höfler P
 et al. 


Scientific publications constitute an extremely valuable body of knowledge and can be seen as the roots of our civilisation. However, with the exponential growth of written publications, comparing facts and findings across different research groups and communities becomes nearly impossible. In this paper, we present a conceptual approach and a first implementation for creating an open knowledge base of scientific knowledge mined from research publications. This requires extracting facts - mostly empirical observations - from unstructured texts (mainly PDFs). Because extracting facts with high accuracy is essential and automatic methods remain imprecise, human quality control is of utmost importance. To establish such quality control mechanisms, we rely on intelligent visual interfaces and on a toolset for crowdsourcing fact extraction, text mining, and data integration tasks.
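The "triplification" the keywords refer to can be illustrated with a minimal sketch (not the authors' implementation): an empirical observation extracted from a paper is encoded as RDF-style triples, the format used by linked-open-data knowledge bases. All URIs, predicate names, and the example fact below are invented for illustration.

```python
# Hypothetical namespace for the knowledge base.
EX = "http://example.org/facts/"

def triplify(subject, predicate, obj):
    """Render one triple in N-Triples syntax.

    URIs are wrapped in angle brackets; anything else is treated
    as a quoted string literal.
    """
    rendered_obj = "<%s>" % obj if obj.startswith("http") else '"%s"' % obj
    return "<%s> <%s> %s ." % (subject, predicate, rendered_obj)

# Invented example of a mined observation:
# "Method X reached 0.87 F1 on Dataset Y", traced back to its source paper.
fact = EX + "fact-001"
triples = [
    triplify(fact, EX + "method", "Method X"),
    triplify(fact, EX + "dataset", "Dataset Y"),
    triplify(fact, EX + "metric", "F1"),
    triplify(fact, EX + "value", "0.87"),
    triplify(fact, EX + "extractedFrom", "http://example.org/papers/42"),
]
print("\n".join(triples))
```

Once facts are in this form, observations from different papers can be compared by querying on shared predicates (e.g. all values of `metric` and `value` for a given `dataset`), which is the comparison task the abstract argues is infeasible over unstructured text.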

Author-supplied keywords

  • crowdsourcing
  • linked-open-data
  • triplification
  • web 2.0
  • web-based visual analytics




  • Christin Seifert

  • Michael Granitzer

  • Patrick Höfler

  • Belgin Mutlu

  • Vedran Sabol

  • Kai Schlegel
