In prior work, a machine-learning approach was used to develop a suggestion system for 80 privacy settings, based on a limited sample of five user preferences. Such suggestion systems may help reduce the user burden of preference selection. However, such a system could also be exploited by a malicious provider to manipulate users’ preference selections by nudging the output of the algorithm. This paper reports an experiment on such manipulation to clarify its impact and users’ resistance or susceptibility to it. Users are shown to be highly accepting of suggestions, even when the suggestions are random (though less so than for nudged suggestions).
CITATION STYLE
Nakamura, T., Adams, A. A., Murata, K., Kiyomoto, S., & Suzuki, N. (2019). The effects of nudging a privacy setting suggestion algorithm’s outputs on user acceptability. Journal of Information Processing, 27, 787–801. https://doi.org/10.2197/ipsjjip.27.787