Conducting network measurement in a web browser (e.g., Speedtest and Netalyzr) enables end users to understand their network and application performance. However, very little is known about the (in)accuracy of the various methods used in these tools. In this paper, we evaluate the accuracy of ten HTTP-based and TCP socket-based methods for measuring the round-trip time (RTT) with the five most popular browsers on Linux and Windows. Our measurement results show that the delay overheads incurred by most of the HTTP-based methods are too large to ignore. Moreover, the overheads incurred by some methods (such as Flash GET and POST) vary significantly across browsers and systems, making them very difficult to calibrate. The socket-based methods, on the other hand, incur much smaller overhead. Another interesting and important finding is that Date.getTime(), a typical timing API in Java, does not provide the millisecond resolution assumed by many measurement tools on some OSes (e.g., Windows 7), resulting in a serious underestimation of RTT. On the other hand, some tools overestimate the RTT by including the TCP handshake phase.
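As a rough illustration of the socket-based approach the abstract favors, the sketch below (not taken from the paper; host, port, and probe payload are placeholders) times a small request/response exchange over an already-established TCP connection, so the handshake is excluded from the measured RTT, and uses System.nanoTime() for timestamps rather than Date.getTime(), whose effective resolution can be coarser than one millisecond on some Windows versions.

```java
// Minimal sketch of a socket-based RTT probe (illustrative only).
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class RttProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com"; // hypothetical target
        int port = 80;

        try (Socket socket = new Socket(host, port)) {            // TCP handshake completes here
            socket.setTcpNoDelay(true);                            // keep the probe from being buffered
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            byte[] probe = ("HEAD / HTTP/1.1\r\nHost: " + host + "\r\n\r\n").getBytes("US-ASCII");

            long start = System.nanoTime();                        // high-resolution timer, not Date.getTime()
            out.write(probe);
            out.flush();
            int firstByte = in.read();                             // stop the clock at the first response byte
            long elapsedNs = System.nanoTime() - start;

            if (firstByte >= 0) {
                System.out.printf("Application-level RTT: %.3f ms%n", elapsedNs / 1e6);
            }
        }
    }
}
```

Starting the timer after the connection is open and reading only the first response byte keeps the measurement close to one network round trip; starting it before connect(), or timing with a coarse-grained clock, reproduces the over- and under-estimation errors the paper reports.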
CITATION STYLE
Li, W., Mok, R. K. P., Chang, R. K. C., & Fok, W. W. T. (2013). Appraising the delay accuracy in browser-based network measurement. In Proceedings of the ACM SIGCOMM Internet Measurement Conference, IMC (pp. 361–367). https://doi.org/10.1145/2504730.2504760