On heavy-user bias in A/B testing

Abstract

Online experimentation (also known as A/B testing) has become an integral part of software development. To incorporate user feedback promptly and improve products continuously, many software companies have adopted a culture of agile deployment, requiring online experiments to be run and concluded on a limited set of users over a short period. While conceptually efficient, the results observed during the experiment can deviate from what is seen after the feature is deployed, which makes the A/B test result biased. In this paper, we provide a theoretical analysis showing that heavy users can contribute significantly to this bias, and we propose a re-sampling estimator for bias adjustment.
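The mechanism behind the bias can be illustrated with a toy simulation. In a short experiment window, heavy users generate disproportionately many events, so an event-level average overweights their (possibly different) treatment response relative to the per-user effect seen after full deployment. The sketch below uses made-up numbers and a simplified resampling step (drawing users uniformly rather than events); it is not the estimator proposed in the paper, only a minimal demonstration of the idea under these assumptions.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 10% heavy users who generate far more events
# during a short experiment window, and who respond less to the treatment.
users = []
for i in range(1000):
    heavy = i < 100
    users.append({
        "events": 50 if heavy else 2,    # events observed in the short window
        "effect": 0.1 if heavy else 0.5, # per-event treatment effect (illustrative)
    })

# Naive event-level estimate: heavy users dominate because they
# contribute many more events during the experiment.
total_events = sum(u["events"] for u in users)
naive = sum(u["events"] * u["effect"] for u in users) / total_events

# Re-sampling sketch: draw users uniformly so each user contributes
# equally, approximating the per-user average effect after deployment.
resampled = [random.choice(users)["effect"] for _ in range(10_000)]
adjusted = statistics.mean(resampled)

# Ground truth per-user average effect.
per_user = statistics.mean(u["effect"] for u in users)

print(f"naive (event-level): {naive:.3f}")    # biased toward heavy users
print(f"resampled estimate:  {adjusted:.3f}") # close to per-user average
print(f"per-user average:    {per_user:.3f}")
```

With these illustrative numbers, the event-level average sits near 0.21 while the per-user average is 0.46, showing how a short window dominated by heavy users can understate (or overstate) the deployed effect.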

Citation (APA)

Wang, Y., Gupta, S., Lu, J., Mahmoudzadeh, A., & Liu, S. (2019). On heavy-user bias in A/B testing. In International Conference on Information and Knowledge Management, Proceedings (pp. 2425–2428). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358143
