Appraising the delay accuracy in browser-based network measurement

Abstract

Conducting network measurement in a web browser (e.g., speedtest and Netalyzr) enables end users to understand their network and application performance. However, very little is known about the (in)accuracy of the various methods used in these tools. In this paper, we evaluate the accuracy of ten HTTP-based and TCP-socket-based methods for measuring the round-trip time (RTT) with the five most popular browsers on Linux and Windows. Our measurement results show that the delay overheads incurred by most of the HTTP-based methods are too large to ignore. Moreover, the overheads incurred by some methods (such as Flash GET and POST) vary significantly across browsers and systems, making them very difficult to calibrate out. The socket-based methods, on the other hand, incur much smaller overhead. Another important finding is that Date.getTime(), a typical timing API in JavaScript, does not provide the millisecond resolution assumed by many measurement tools on some operating systems (e.g., Windows 7), resulting in serious under-estimation of the RTT. Conversely, some tools over-estimate the RTT by including the TCP handshake.
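
As a rough illustration of the two pitfalls the abstract raises (not the authors' implementation), consider the TypeScript sketch below. measureHttpRtt is a generic HTTP GET probe of the kind the paper evaluates; the target URL, function names, and sample count are placeholders chosen for this example. The second helper empirically estimates the timer tick by recording the smallest increment that Date.now() (equivalent to new Date().getTime()) ever reports.

// Generic HTTP-based RTT probe (a sketch; the target URL is a placeholder).
async function measureHttpRtt(url: string): Promise<number> {
  const probeUrl = `${url}?t=${Date.now()}`; // cache-buster so the GET actually leaves the browser
  const start = Date.now();                  // wall-clock timestamp, millisecond granularity at best
  await fetch(probeUrl, { method: "GET", cache: "no-store" });
  const elapsed = Date.now() - start;
  // This interval includes browser overhead (request construction, HTTP
  // parsing, event-loop scheduling) on top of the network RTT, and, on a
  // cold connection, the TCP three-way handshake as well -- the
  // over-estimation sources the abstract describes.
  return elapsed;
}

// Estimate the clock's real granularity by busy-waiting and recording the
// smallest observed increment. On systems whose timer ticks coarsely (the
// abstract cites Windows 7), the result is well above 1 ms, so short RTTs
// are rounded down -- the under-estimation the abstract warns about.
function estimateTimerGranularityMs(samples: number = 50): number {
  let smallest = Infinity;
  let prev = Date.now();
  for (let seen = 0; seen < samples; ) {
    const now = Date.now();
    if (now > prev) {
      smallest = Math.min(smallest, now - prev);
      prev = now;
      seen++;
    }
  }
  return smallest;
}

On browsers that support it, performance.now() provides a monotonic, sub-millisecond timer that sidesteps the Date.getTime() resolution problem, though it postdates several of the tools the paper examines.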

Citation (APA)

Li, W., Mok, R. K. P., Chang, R. K. C., & Fok, W. W. T. (2013). Appraising the delay accuracy in browser-based network measurement. In Proceedings of the ACM SIGCOMM Internet Measurement Conference, IMC (pp. 361–367). https://doi.org/10.1145/2504730.2504760
