Data-parallel web crawling models

Abstract

The need to quickly locate, gather, and store the vast amount of material on the Web necessitates parallel computing. In this paper, we propose two models, based on multi-constraint graph partitioning, for efficient data-parallel Web crawling. The models aim to balance both the amount of data downloaded and stored by each processor and the number of page requests made by the processors, while minimizing the total volume of communication during link exchange between the processors. To evaluate the performance of the models, experimental results are presented on a sample Web repository containing around 915,000 pages. © Springer-Verlag 2004.
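To illustrate the idea behind the models, the sketch below partitions pages between two crawler processors under the two constraints the abstract names: each page (vertex) carries two weights, its stored size and its request count, and hyperlinks are edges whose cut corresponds to inter-processor link exchange. This is only a hypothetical greedy heuristic, not the paper's method; the paper uses proper multi-constraint graph partitioning (e.g., as in tools like METIS). All function and variable names here are illustrative.

```python
from collections import defaultdict

def greedy_bipartition(pages, links):
    """Hypothetical two-constraint bipartitioner.

    pages: {page_id: (size_bytes, requests)}
    links: [(u, v), ...] hyperlink edges
    Returns {page_id: 0 or 1}, assigning pages to two crawlers while
    balancing both weight totals and preferring the side holding more
    already-assigned neighbors (to reduce cut links, i.e., link exchange).
    """
    adj = defaultdict(set)
    for u, v in links:
        adj[u].add(v)
        adj[v].add(u)

    # Assign heavier pages first so balance is easier to maintain.
    order = sorted(pages, key=lambda p: sum(pages[p]), reverse=True)
    part = {}
    totals = [[0, 0], [0, 0]]  # per-part sums of (size, requests)

    for p in order:
        # Neighbor affinity: how many of p's neighbors sit in each part.
        nbr = [sum(1 for q in adj[p] if part.get(q) == k) for k in (0, 1)]
        # Combined load per part (sum of both constraint weights).
        load = [totals[k][0] + totals[k][1] for k in (0, 1)]
        # Follow neighbors only while the parts stay roughly balanced
        # (10% imbalance tolerance); otherwise pick the lighter part.
        if nbr[0] != nbr[1] and abs(load[0] - load[1]) <= 0.1 * (load[0] + load[1] + 1):
            side = 0 if nbr[0] > nbr[1] else 1
        else:
            side = 0 if load[0] <= load[1] else 1
        part[p] = side
        totals[side][0] += pages[p][0]
        totals[side][1] += pages[p][1]
    return part
```

A real multi-constraint partitioner optimizes the edge cut globally rather than greedily, but the trade-off is the same: minimize cut hyperlinks (communication volume) subject to balance in every weight dimension simultaneously.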

Cambazoglu, B. B., Turk, A., & Aykanat, C. (2004). Data-parallel web crawling models. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3280, 801–809. https://doi.org/10.1007/978-3-540-30182-0_80
