Seamlessly selecting the best copy from internet-wide replicated web servers


Abstract

The explosion of the web has led to a situation where a majority of the traffic on the Internet is web related. Today, practically all of the popular web sites are served from single locations. This necessitates frequent long-distance network transfers of data (potentially repeatedly), which results in a high response time for users and is wasteful of the available network bandwidth. Moreover, it commonly creates a single point of failure between the web site and its Internet provider. This paper presents a new approach to web replication, where each of the replicas resides in a different part of the network, and the browser is automatically and transparently directed to the “best” server. Implementing this architecture for popular web sites will result in a better response time and a higher availability of these sites. Equally important, this architecture will potentially cut down a significant fraction of the traffic on the Internet, freeing bandwidth for other uses.
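The core idea, directing each browser to the "best" replica, can be illustrated with a small sketch. The scoring rule, replica hostnames, and metrics below are illustrative assumptions for exposition, not the selection algorithm the paper itself proposes.

```python
# Hypothetical sketch of "best replica" selection: given per-replica
# network metrics, pick the server likely to yield the lowest response
# time. Hostnames, metrics, and the weighting are assumed for the example.

def best_replica(metrics):
    """metrics maps replica hostname -> (rtt_ms, load_fraction).

    Score each replica by its round-trip time inflated by current
    server load; the lowest score wins.
    """
    def score(item):
        rtt_ms, load = item[1]
        return rtt_ms * (1.0 + load)

    host, _ = min(metrics.items(), key=score)
    return host

# Example: three geographically spread replicas with measured RTTs
# (milliseconds) and load fractions, as a nearby client might observe them.
probes = {
    "us.example.com": (120.0, 0.30),
    "eu.example.com": (35.0, 0.80),
    "asia.example.com": (200.0, 0.10),
}
print(best_replica(probes))  # → eu.example.com
```

In a real deployment the redirection would have to be transparent to the browser (for instance, performed at name-resolution or connection time), which is exactly the "seamless" aspect the paper addresses.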

Citation (APA)

Amir, Y., Peterson, A., & Shaw, D. (1998). Seamlessly selecting the best copy from internet-wide replicated web servers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1499, pp. 22–33). Springer Verlag. https://doi.org/10.1007/bfb0056471
