Parallel web spiders for cooperative information gathering

Abstract

The web spider is a widely used means of gathering information for search engines. As the Web grows, parallelizing the spider's crawling process becomes a natural choice. This paper presents a parallel web spider model based on a multi-agent system for cooperative information gathering. It uses a dynamic assignment mechanism to eliminate the redundant web pages caused by parallelization. Experiments show that the parallel spider effectively improves information-gathering performance at an acceptable interaction-efficiency cost for control. This approach offers a novel perspective for the next generation of advanced search engines. © Springer-Verlag Berlin Heidelberg 2005.
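The abstract does not give the mechanism's details, but the core idea it names (dynamic assignment of URLs to parallel spiders, with redundant pages filtered out) can be sketched as follows. This is a minimal illustration under stated assumptions: a simulated in-memory link graph (`LINK_GRAPH`) stands in for live web pages, and a shared frontier queue plus a global visited set stand in for the paper's multi-agent coordination; none of these names come from the paper.

```python
import threading
from queue import Queue, Empty

# Simulated link graph standing in for real web pages (assumption:
# the paper's spiders fetch live pages; here we avoid network I/O).
LINK_GRAPH = {
    "a": ["b", "c"],
    "b": ["c", "d"],
    "c": ["d"],
    "d": ["a"],
}

def crawl_parallel(seed, num_spiders=3):
    """Crawl with several spider threads sharing one frontier.

    Dynamic assignment: any idle spider takes the next URL from the
    shared queue, and a global visited set filters duplicates so no
    page is processed twice despite parallelization.
    """
    frontier = Queue()
    frontier.put(seed)
    visited = set()
    lock = threading.Lock()
    fetched = []  # order in which pages were actually processed

    def spider():
        while True:
            try:
                url = frontier.get(timeout=0.2)
            except Empty:
                return  # frontier drained: this spider retires
            with lock:
                if url in visited:  # redundant page caused by parallelization
                    continue
                visited.add(url)
                fetched.append(url)
            # "Fetch" the page and enqueue its outgoing links.
            for link in LINK_GRAPH.get(url, []):
                frontier.put(link)

    threads = [threading.Thread(target=spider) for _ in range(num_spiders)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return fetched
```

With the graph above, `crawl_parallel("a")` visits each of the four pages exactly once, whichever spider happens to claim it first; the lock-guarded visited set is what "wipes off" the duplicate URLs that multiple spiders would otherwise enqueue.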

Citation (APA)

Luo, J., Shi, Z., Wang, M., & Wang, W. (2005). Parallel web spiders for cooperative information gathering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3795 LNCS, pp. 1192–1197). https://doi.org/10.1007/11590354_143
