Parallel web crawler architecture for clickstream analysis

Abstract

The tremendous growth of the Web poses many challenges for single-process crawlers, including the presence of irrelevant answers among search results as well as coverage and scaling issues. As a result, more robust algorithms are needed to produce more precise and relevant search results in a timely manner. Existing Web crawlers mostly implement link-dependent Web page importance metrics. One barrier to applying these metrics is the considerable communication overhead they impose on multi-agent crawlers. Moreover, they depend heavily on the size of the crawler's own index, which prevents them from ranking Web pages with complete accuracy. Hence, improved metrics need to be devised in this area. Proposing a new Web page importance metric requires defining a new architecture as a framework in which to implement it. The aim of this paper is to propose an architecture for a focused parallel crawler. In this framework, the decision on Web page importance is based on a metric that combines clickstream analysis with analysis of contextual similarity to the issued queries. © 2012 Springer-Verlag.
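
The abstract does not specify how the clickstream and context-similarity signals are combined. A minimal sketch is given below, assuming a simple weighted linear combination; the weighting parameter and both scoring functions (visit share for clickstream, cosine similarity over term counts for context) are illustrative assumptions, not the metric defined in the paper.

```python
# Hypothetical sketch of a combined page-importance score: a weighted mix of a
# clickstream-based signal and a context-similarity signal. Both scoring
# functions and the weight alpha are assumptions for illustration only.

from collections import Counter
from math import sqrt


def clickstream_score(page_visits: int, total_visits: int) -> float:
    """Assumed clickstream signal: share of recorded visits that hit this page."""
    return page_visits / total_visits if total_visits else 0.0


def context_similarity(page_terms: list[str], query_terms: list[str]) -> float:
    """Assumed context signal: cosine similarity between term-frequency vectors."""
    p, q = Counter(page_terms), Counter(query_terms)
    dot = sum(p[t] * q[t] for t in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0


def combined_importance(page_visits: int, total_visits: int,
                        page_terms: list[str], query_terms: list[str],
                        alpha: float = 0.5) -> float:
    """Hypothetical linear combination of the two signals (alpha is illustrative)."""
    return (alpha * clickstream_score(page_visits, total_visits)
            + (1 - alpha) * context_similarity(page_terms, query_terms))


# Example: a page seen in 30 of 200 recorded sessions, ranked against a query.
score = combined_importance(30, 200, ["web", "crawler", "parallel"], ["parallel", "crawler"])
print(f"combined importance: {score:.3f}")
```

Such a score could let a focused parallel crawler prioritize its frontier locally, without the inter-agent link exchange that link-dependent metrics require.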

Citation (APA)

Ahmadi-Abkenari, F., & Selamat, A. (2012). Parallel web crawler architecture for clickstream analysis. In Communications in Computer and Information Science (Vol. 295 CCIS, pp. 123–132). https://doi.org/10.1007/978-3-642-32826-8_13
