Information extraction from web sources based on multi-aspect content analysis

Abstract

Information extraction from web pages is often recognized as a difficult task, mainly due to the loose structure and insufficient semantic annotation of their HTML code. Since web pages are primarily created to be viewed by human readers, their authors usually do not pay much attention to the structure, or even the validity, of the HTML code itself. The CEUR Workshop Proceedings pages are a good illustration of this: their code ranges from invalid HTML markup to fully valid and semantically annotated documents, while preserving a largely unified visual presentation of the contents. In this paper, as a contribution to the ESWC 2015 Semantic Publishing Challenge, we present an information extraction approach based on analyzing the rendered pages rather than their code. The documents are represented by an RDF-based model that allows the results of different page analysis methods, such as layout analysis and visual and textual feature classification, to be combined. This makes it possible to specify a set of generic rules for extracting a particular piece of information from the page independently of its code.
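The core idea of the abstract, describing the rendered page as RDF-style triples of visual features and then running generic, code-independent extraction rules over them, can be sketched in a few lines. This is only an illustrative sketch: the box names, property names, and the threshold used in the rule are hypothetical, not the authors' actual model or vocabulary.

```python
# Illustrative sketch: a rendered page represented as (subject,
# predicate, object) triples of visual features, mirroring an RDF
# graph, instead of relying on the underlying HTML structure.
triples = [
    ("box1", "text", "Information Extraction from Web Sources"),
    ("box1", "fontSize", 24.0),
    ("box1", "positionY", 40),
    ("box2", "text", "John Doe and Jane Roe"),
    ("box2", "fontSize", 14.0),
    ("box2", "positionY", 90),
    ("box3", "text", "Abstract. Information extraction ..."),
    ("box3", "fontSize", 11.0),
    ("box3", "positionY", 140),
]

def prop(subject, predicate):
    """Look up a single property value for a subject in the triple set."""
    for s, p, o in triples:
        if s == subject and p == predicate:
            return o
    return None

def subjects():
    return sorted({s for s, _, _ in triples})

def extract_title():
    """A generic rule over visual features: the title is the box with
    the largest font size near the top of the rendered page (the
    100-pixel cutoff is an arbitrary assumption for this example)."""
    candidates = [s for s in subjects() if prop(s, "positionY") < 100]
    best = max(candidates, key=lambda s: prop(s, "fontSize"))
    return prop(best, "text")

print(extract_title())  # -> Information Extraction from Web Sources
```

Because the rule refers only to rendered features (position, font size), it works identically whether the underlying page is hand-written invalid HTML or a fully annotated document, which is exactly the robustness the paper motivates with the CEUR Workshop Proceedings pages.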

Citation (APA)

Milicka, M., & Burget, R. (2015). Information extraction from web sources based on multi-aspect content analysis. In Communications in Computer and Information Science (Vol. 548, pp. 81–92). Springer Verlag. https://doi.org/10.1007/978-3-319-25518-7_7
