GA-SAM: Gradient-Strength based Adaptive Sharpness-Aware Minimization for Improved Generalization

Abstract

Recently, the Sharpness-Aware Minimization (SAM) algorithm has shown state-of-the-art generalization ability in vision tasks, demonstrating that flat minima tend to imply better generalization. However, applying SAM to some natural language tasks is difficult, especially for models with drastic gradient changes, such as RNNs. In this work, we analyze the relation between the flatness of a local minimum and its generalization ability from a novel and straightforward theoretical perspective. We propose that the shift between the training and test distributions can be equivalently seen as a virtual parameter corruption or perturbation, which explains why flat minima that are robust against parameter corruptions or perturbations generalize better. On this basis, we propose a Gradient-Strength based Adaptive Sharpness-Aware Minimization (GA-SAM) algorithm to help learning algorithms find flat minima that generalize better. Results on various language benchmarks validate the effectiveness of the proposed GA-SAM algorithm on natural language tasks.
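For readers unfamiliar with the baseline that GA-SAM builds on, the sketch below shows the standard two-step SAM update (perturb the weights toward the locally worst direction, then descend using the gradient at the perturbed point). It is a minimal illustration only: the toy loss, learning rate, and perturbation radius are assumptions, and the gradient-strength based adaptation that distinguishes GA-SAM is not reproduced here; see the paper for the actual method.

```python
import numpy as np

# Toy quadratic loss with an anisotropic Hessian (illustrative assumption,
# not from the paper); the sharper direction mimics a "sharp" minimum.
A = np.diag([1.0, 25.0])

def loss(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

def sam_step(w, lr=0.05, rho=0.05):
    """One standard SAM update:
    1) take an ascent step of radius rho along the normalized gradient,
    2) descend using the gradient evaluated at the perturbed weights."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation of norm rho
    g_sharp = grad(w + eps)                      # gradient at the perturbed point
    return w - lr * g_sharp

w = np.array([2.0, 1.0])
for _ in range(100):
    w = sam_step(w)
print("final weights:", w, "loss:", loss(w))
```

GA-SAM, per its title, adapts this perturbation based on gradient strength so that models with drastic gradient changes (e.g., RNNs) are handled more stably; the exact adaptation rule is given in the paper.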

Cite

APA

Zhang, Z., Luo, R., Su, Q., & Sun, X. (2022). GA-SAM: Gradient-Strength based Adaptive Sharpness-Aware Minimization for Improved Generalization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 3888–3903). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.257
