FedKC: Federated Knowledge Composition for Multilingual Natural Language Understanding

Abstract

Multilingual natural language understanding (NLU), which aims to comprehend multilingual documents, is an important task. Existing efforts have focused on the analysis of centrally stored text data, but in practice multilingual data is usually distributed. Federated learning is a promising paradigm for this setting: it trains local models on decentralized data held by local clients and aggregates the local models on a central server to obtain a good global model. However, existing federated learning methods assume that data are independent and identically distributed (IID) and cannot handle multilingual data, which are usually non-IID with severely skewed distributions. First, multilingual data is stored on local client devices, so each client typically holds only monolingual or bilingual data. This makes it difficult for local models to access information about documents in other languages. Second, the distribution over languages can be skewed: high-resource language data is far more abundant than low-resource language data, so a model trained on such skewed data may focus on high-resource languages and fail to capture key information from low-resource languages. To address these challenges of multilingual federated NLU, we propose a plug-and-play knowledge composition (KC) module, called FedKC, which exchanges knowledge among clients without sharing raw data. Specifically, we propose an effective way to calculate a consistency loss defined on the knowledge shared across clients, which encourages models trained on different clients to make similar predictions on similar data. Leveraging this consistency loss, joint training is conducted on distributed data while respecting privacy constraints. We also analyze the potential privacy risk of FedKC and provide a theoretical bound showing that it is difficult to recover the original data from the corrupted data. We conduct extensive experiments on three public multilingual datasets covering three typical NLU tasks: paraphrase identification, question answering matching, and news classification. The experimental results show that the proposed FedKC significantly outperforms state-of-the-art baselines on all three datasets.
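To make the consistency idea concrete, below is a minimal sketch of how a cross-client consistency term might be combined with the usual supervised loss on each client. It assumes the shared knowledge arrives as soft prediction distributions from other clients and uses a KL-divergence penalty; the function names, the temperature, and the weighting coefficient `lam` are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def consistency_loss(local_logits, shared_probs, temperature=1.0):
        # KL divergence between the local model's predictions and the
        # soft predictions (shared knowledge) composed from other clients.
        # Illustrative stand-in for FedKC's consistency term.
        log_p_local = F.log_softmax(local_logits / temperature, dim=-1)
        return F.kl_div(log_p_local, shared_probs, reduction="batchmean")

    def local_objective(task_loss, local_logits, shared_probs, lam=0.5):
        # Joint per-client objective: supervised task loss plus a consistency
        # penalty pulling local predictions toward the shared knowledge,
        # without exchanging any raw text between clients.
        return task_loss + lam * consistency_loss(local_logits, shared_probs)

In this sketch, only prediction distributions (not raw documents) cross client boundaries, which is the property that lets joint training proceed under the privacy constraints described above.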

Cite

APA

Wang, H., Zhao, H., Wang, Y., Yu, T., Gu, J., & Gao, J. (2022). FedKC: Federated Knowledge Composition for Multilingual Natural Language Understanding. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 1839–1850). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3511988
