Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification


Abstract

Lifelong learning capabilities are crucial for sentiment classifiers to process continuous streams of opinionated information on the Web. However, performing lifelong learning is non-trivial for deep neural networks, as continual training on incrementally available information inevitably results in catastrophic forgetting or interference. In this paper, we propose a novel iterative network pruning with uncertainty regularization method for lifelong sentiment classification (IPRLS), which leverages the principles of network pruning and weight regularization. By performing network pruning with uncertainty regularization in an iterative manner, IPRLS adapts a single BERT model to continuously arriving data from multiple domains while avoiding catastrophic forgetting and interference. Specifically, we leverage an iterative pruning method to remove redundant parameters in large deep networks so that the freed-up space can then be employed to learn new tasks, tackling the catastrophic forgetting problem. Instead of keeping old-task weights fixed when learning new tasks, we also use an uncertainty regularization based on the Bayesian online learning framework to constrain the update of old-task weights in BERT, which enables positive backward transfer, i.e., learning new tasks improves performance on past tasks while protecting old knowledge from being lost. In addition, we propose a task-specific low-dimensional residual function in parallel to each layer of BERT, which makes IPRLS less prone to losing the knowledge stored in the base BERT network when learning a new task. Extensive experiments on 16 popular review corpora demonstrate that the proposed IPRLS method significantly outperforms strong baselines for lifelong sentiment classification. For reproducibility, we release the code and data at: https://github.com/siat-nlp/IPRLS
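The iterative-pruning idea described above can be sketched in a few lines: after training on a task, the smallest-magnitude weights belonging to that task are zeroed out and marked as free capacity for future tasks. This is a minimal magnitude-pruning illustration only, not the authors' released implementation; the `owner` mask and the `prune_frac` parameter are assumptions introduced for the sketch.

```python
import numpy as np

def iterative_prune(weights, owner, task_id, prune_frac=0.5):
    """Free the smallest-magnitude fraction of weights owned by `task_id`.

    weights: parameter matrix for one layer.
    owner:   integer mask of the same shape; -1 marks a free (unclaimed)
             weight, any other value is the task that owns it.
    Returns copies in which the pruned weights are zeroed and marked free,
    so the next task can train them without disturbing retained weights.
    """
    owned = (owner == task_id)
    if not owned.any():
        return weights.copy(), owner.copy()
    # Magnitude threshold: the prune_frac-quantile of |w| over owned weights.
    threshold = np.quantile(np.abs(weights[owned]), prune_frac)
    to_free = owned & (np.abs(weights) <= threshold)
    weights, owner = weights.copy(), owner.copy()
    weights[to_free] = 0.0
    owner[to_free] = -1  # freed-up capacity for future tasks
    return weights, owner

# Example: task 0 initially owns the whole layer; pruning at 0.5 frees
# roughly half of its weights for the next task to learn with.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
owner = np.zeros((8, 8), dtype=int)
W_pruned, owner_pruned = iterative_prune(W, owner, task_id=0, prune_frac=0.5)
```

In the paper's full method, this pruning step alternates with retraining, and the uncertainty regularizer (rather than a hard freeze) governs how much the retained old-task weights may still move.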

Citation (APA)

Geng, B., Yang, M., Yuan, F., Wang, S., Ao, X., & Xu, R. (2021). Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification. In SIGIR 2021 - Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1229–1238). Association for Computing Machinery, Inc. https://doi.org/10.1145/3404835.3462902
