Sampling the national deep web

Abstract

A huge portion of today's Web consists of web pages filled with information drawn from a myriad of online databases. This part of the Web, known as the deep Web, remains relatively unexplored, and even major characteristics, such as the number of searchable databases on the Web or their subject distribution, are still disputed. In this paper, we revisit a problem of deep Web characterization: how to estimate the total number of online databases on the Web? We propose the Host-IP clustering sampling method to address the drawbacks of existing approaches to deep Web characterization and report our findings based on a survey of the Russian Web. The obtained estimates, together with the proposed sampling technique, could be useful for further studies of data in the deep Web. © 2011 Springer-Verlag Berlin Heidelberg.
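The core idea behind Host-IP clustering sampling, as the abstract describes it, is to estimate the total number of web databases from a sample of hosts. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes hosts are grouped by the IP address they resolve to (so that virtually hosted near-duplicates on one server fall into the same cluster), a random sample of IP clusters is probed for searchable interfaces, and the observed per-host hit rate is scaled up to the full host population. The data structures, function names, and the `has_database` probe are all invented for illustration.

```python
import random
from collections import defaultdict

def host_ip_cluster_estimate(host_to_ip, has_database, n_clusters, seed=0):
    """Hypothetical sketch: estimate the total number of deep-web
    databases by sampling IP clusters rather than individual hosts."""
    # Group hosts by resolved IP address. With virtual hosting, many
    # hostnames share one IP, so clustering groups co-hosted sites.
    clusters = defaultdict(list)
    for host, ip in host_to_ip.items():
        clusters[ip].append(host)
    ips = sorted(clusters)

    # Draw a uniform random sample of IP clusters.
    rng = random.Random(seed)
    sampled_ips = rng.sample(ips, min(n_clusters, len(ips)))

    # Probe every host in each sampled cluster for a search interface.
    hits = sum(1 for ip in sampled_ips
                 for host in clusters[ip] if has_database(host))
    sampled_hosts = sum(len(clusters[ip]) for ip in sampled_ips)

    # Scale the per-host database rate up to the whole host population.
    return len(host_to_ip) * hits / sampled_hosts

# Toy usage with fabricated data: 10 hosts over 5 IPs, 2 of which
# expose a (pretend) searchable database.
hosts = {f"h{i}.example.ru": f"10.0.0.{i % 5}" for i in range(10)}
probe = lambda h: h in {"h0.example.ru", "h1.example.ru"}
estimate = host_ip_cluster_estimate(hosts, probe, n_clusters=5)
```

When the sample covers every cluster, as in the toy run above, the estimate reduces to the exact count of database-bearing hosts; with fewer sampled clusters it becomes a noisy extrapolation, which is the usual variance/cost trade-off in such surveys.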


Shestakov, D. (2011). Sampling the national deep web. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6860 LNCS, pp. 331–340). https://doi.org/10.1007/978-3-642-23088-2_24
