Speeding Up the Web Crawling Process on a Multi-Core Processor Using Virtualization

  • Al-Bahadili H
  • Qtishat H
  • Naoum R. S.

Abstract

A Web crawler is an important component of a Web search engine. It demands a large amount of hardware resources (CPU and memory) to crawl data from the rapidly growing and changing Web, so crawling must be repeated from time to time to keep the crawled data up to date. This paper develops and investigates the performance of a new approach to speeding up the crawling process on a multi-core processor through virtualization. In this approach, the multi-core processor is divided into a number of virtual machines (VMs) that run in parallel (concurrently), performing different crawling tasks on different data. The paper presents the description, implementation, and evaluation of a VM-based distributed Web crawler. To estimate the speedup factor achieved by the VM-based crawler over a non-virtualized crawler, extensive crawling experiments were carried out to measure the crawling times for various numbers of documents. Furthermore, the average crawling rate in documents per unit time is computed, and the effect of the number of VMs on the speedup factor is investigated. For example, on an Intel® Core™ i5-2300 CPU @ 2.80 GHz with 8 GB of memory, a speedup factor of ~1.48 is achieved when crawling 70,000 documents on 3 and 4 VMs.
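The speedup factor cited in the abstract is the standard ratio of sequential crawling time to parallel (VM-based) crawling time. A minimal sketch of that calculation, using hypothetical timings chosen only to reproduce the reported ~1.48 figure (the paper's actual measured times are not given here):

```python
def speedup_factor(t_sequential: float, t_parallel: float) -> float:
    """Speedup of the VM-based crawler over the non-virtualized crawler.

    t_sequential: crawling time without virtualization (any time unit).
    t_parallel:   crawling time with the workload split across VMs
                  (same unit as t_sequential).
    """
    return t_sequential / t_parallel


# Hypothetical example: if crawling 70,000 documents sequentially took
# 100 minutes and the same crawl split across VMs took ~67.6 minutes,
# the speedup factor would be ~1.48, matching the abstract's figure.
print(round(speedup_factor(100.0, 67.6), 2))  # → 1.48
```

Note that the speedup is sub-linear in the number of VMs (1.48 on 3–4 VMs rather than 3–4×), which is expected: the crawler is I/O-bound and the VMs share the same physical cores, memory, and network link.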

Citation (APA)

Al-Bahadili, H., Qtishat, H., & Naoum, R. S. (2013). Speeding Up the Web Crawling Process on a Multi-Core Processor Using Virtualization. International Journal on Web Service Computing, 4(1), 19–37. https://doi.org/10.5121/ijwsc.2013.4102
