Privacy-preserving vertical federated learning

Abstract

Many federated learning (FL) proposals follow the structure of horizontal FL, where each party holds all the information needed to train a model on its own. However, in important real-world FL scenarios, not all parties have access to the same information, and not all have what is required to train a machine learning model. In what are known as vertical scenarios, multiple parties provide disjoint sets of information that, when brought together, form a full feature set with labels that can be used for training. Legislation, practical considerations, and privacy requirements prevent moving all data to a single place or sharing it freely among parties, and horizontal FL techniques cannot be applied to vertical settings. This chapter discusses the use cases and challenges of vertical FL. It introduces the most important approaches for vertical FL and describes in detail FedV, an efficient solution for performing secure gradient computation for popular ML models. FedV is designed to overcome some of the pitfalls inherent in applying existing state-of-the-art techniques. Using FedV substantially reduces training time and the amount of data transferred, enabling the use of vertical FL in more real-world use cases.
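To make the vertical setting concrete, the toy sketch below shows how a logistic-regression gradient decomposes over vertically partitioned features: each party computes a partial inner product on its own feature columns, and the per-party gradient blocks depend only on that party's local features plus a shared residual. This is purely an illustration of the decomposition that a secure protocol such as FedV would protect; the party names, data shapes, and the in-the-clear summation are assumptions for illustration, not FedV's actual encrypted aggregation.

```python
import numpy as np

# Toy illustration (not FedV's secure protocol): with a linear model,
# the inner product w.x splits across vertically partitioned features,
# so each party can compute a partial result on its own columns.

rng = np.random.default_rng(0)
n_samples = 4

x_party_a = rng.normal(size=(n_samples, 3))   # party A holds 3 features
x_party_b = rng.normal(size=(n_samples, 2))   # party B holds 2 other features
y = rng.integers(0, 2, size=n_samples)        # labels held by one party (assumption)

w_a = rng.normal(size=3)                      # weight slice for party A's features
w_b = rng.normal(size=2)                      # weight slice for party B's features

# Each party computes its partial inner products locally.
partial_a = x_party_a @ w_a
partial_b = x_party_b @ w_b

# In a secure scheme these partial results would be combined under encryption;
# here they are summed in the clear purely to show the decomposition.
logits = partial_a + partial_b
preds = 1.0 / (1.0 + np.exp(-logits))         # logistic-regression prediction

# The gradient of the logistic loss w.r.t. each party's weight slice
# needs only that party's local features and the shared residual.
residual = preds - y
grad_a = x_party_a.T @ residual / n_samples
grad_b = x_party_b.T @ residual / n_samples

print("party A gradient block:", grad_a)
print("party B gradient block:", grad_b)
```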

Citation (APA)

Xu, R., Baracaldo, N., Zhou, Y., Abay, A., & Anwar, A. (2022). Privacy-preserving vertical federated learning. In Federated Learning: A Comprehensive Overview of Methods and Applications (pp. 417–438). Springer International Publishing. https://doi.org/10.1007/978-3-030-96896-0_18
