The concept of learned index structures relies on the idea that the input-output functionality of a database index can be viewed as a prediction task and, thus, implemented with a machine learning model instead of traditional algorithmic techniques. This novel angle on a decades-old problem has inspired exciting results at the intersection of machine learning and data structures. However, the advantage of learned index structures, i.e., the ability to adjust to the data at hand via the underlying ML model, can become a disadvantage from a security perspective, as an adversary can exploit it. In this work, we present the first study of data poisoning attacks on learned index structures. Our poisoning approach differs from all previous works in that the model under attack is trained on a cumulative distribution function (CDF); thus, every injection into the training set has a cascading impact on multiple data values. We formulate the first poisoning attacks on linear regression models trained on a CDF, a basic building block of the proposed learned index structures. We generalize our poisoning techniques to attack the advanced two-stage design of learned index structures known as the recursive model index (RMI), which has been shown to outperform traditional B-Trees. We evaluate our attacks under a variety of parameterizations of the model and show that the error of the RMI increases by up to 300x and the error of its second-stage models increases by up to 3000x.
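To make the attacked mechanism concrete, below is a minimal sketch (not the authors' implementation) of the basic building block the abstract describes: a linear regression trained on the empirical CDF of sorted keys, serving as a learned index, together with a toy illustration of how a cluster of injected keys inflates the model's position error. The key-injection heuristic, the parameter values, and all names here are illustrative assumptions.

```python
# Minimal sketch: a learned index as a linear model over the empirical CDF,
# plus a toy poisoning illustration. Illustrative only, not the paper's attack.
import numpy as np

def fit_cdf_model(keys):
    """Fit position ~ a*key + b on sorted keys; each key's rank is its
    empirical-CDF value scaled by the dataset size."""
    keys = np.sort(np.asarray(keys, dtype=float))
    positions = np.arange(len(keys))           # rank of each key
    a, b = np.polyfit(keys, positions, deg=1)  # least-squares linear fit
    return keys, a, b

def max_error(keys, a, b):
    """Worst-case |predicted - true| position, i.e., the bound the index
    must search around each prediction."""
    pred = a * keys + b
    return np.max(np.abs(pred - np.arange(len(keys))))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1000, size=1000)
keys, a, b = fit_cdf_model(clean)
print("clean max error:   ", max_error(keys, a, b))

# Toy poisoning: concentrate injected keys in a narrow range. Because the
# model is trained on the CDF, each injection shifts the target position of
# every larger key, which is the cascading effect the abstract refers to.
poison = rng.uniform(400, 410, size=100)
keys_p, a_p, b_p = fit_cdf_model(np.concatenate([clean, poison]))
print("poisoned max error:", max_error(keys_p, a_p, b_p))
```

Since a larger maximum error directly widens the range the index must scan after each model prediction, the printed clean-versus-poisoned gap is a rough stand-in for the degradation the paper measures on the full RMI.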
Kornaropoulos, E. M., Ren, S., & Tamassia, R. (2022). The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures. In Proceedings of the ACM SIGMOD International Conference on Management of Data (pp. 1331–1344). Association for Computing Machinery. https://doi.org/10.1145/3514221.3517867