In many real-world data mining and machine learning applications, data are supplied by multiple providers, each maintaining private records of different feature sets about common entities. Training on such vertically partitioned data effectively and efficiently while preserving data privacy is challenging for traditional data mining and machine learning algorithms. In this paper, we focus on nonlinear learning with kernels and propose a federated doubly stochastic kernel learning (FDSKL) algorithm for vertically partitioned data. Specifically, we use random features to approximate the kernel mapping function and doubly stochastic gradients to update the solution, all computed federatedly without disclosing the data. Importantly, we prove that FDSKL has a sublinear convergence rate and can guarantee data security under the semi-honest assumption. Extensive experimental results on a variety of benchmark datasets show that FDSKL is significantly faster than state-of-the-art federated learning methods for kernels while retaining similar generalization performance.
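To make the core ingredients concrete, the following is a minimal, single-machine sketch of doubly stochastic kernel learning with random Fourier features: each iteration samples both a random data point and a fresh random feature (regenerated later from its seed), and the model is the accumulated sum of per-iteration coefficients. This is only an illustration of the underlying technique; it omits the federated, vertically partitioned protocol and secure coordination that FDSKL adds, and all names, constants, and the squared-loss setup are assumptions for the sketch.

```python
import numpy as np

def phi(x, seed, sigma, D=10):
    """Random Fourier features of x for the RBF kernel, drawn from a fixed seed
    so they can be replayed at prediction time."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0 / sigma, size=(D, len(x)))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(w @ x + b)

def predict(x, coeffs, sigma):
    """f(x) = sum_t alpha_t . phi_t(x), replaying the random feature of round t."""
    return sum(a @ phi(x, t, sigma) for t, a in coeffs.items())

# Toy regression data (illustrative only).
rng = np.random.default_rng(42)
n, d, sigma, T = 100, 3, 1.0, 400
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])

coeffs = {}  # round t -> coefficient vector alpha_t
for t in range(T):
    i = rng.integers(n)                      # first source of randomness: a data point
    f_xi = predict(X[i], coeffs, sigma)      # current prediction at x_i
    eta = 1.0 / np.sqrt(t + 1)               # decaying step size (sublinear rate)
    # Second source of randomness: a fresh random feature, seeded by t.
    # Functional SGD step for squared loss: alpha_t = -eta * (f(x_i) - y_i) * phi_t(x_i).
    coeffs[t] = -eta * (f_xi - y[i]) * phi(X[i], t, sigma)
```

In FDSKL the analogous random-feature and gradient computations are carried out jointly by the parties holding different feature subsets, without revealing raw data; the seed-replay trick above is what keeps the model representable without storing the sampled features.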
CITATION STYLE
Gu, B., Dang, Z., Li, X., & Huang, H. (2020). Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 2483–2493). Association for Computing Machinery. https://doi.org/10.1145/3394486.3403298