In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function augmented with an entropic regularization term. This loss accounts for the contribution of adversarial samples drawn from a specially designed distribution over the data space, one that assigns high probability to points that have high loss and lie in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) robust classification accuracy compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
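To make the idea concrete, the sketch below illustrates one plausible reading of this training scheme in PyTorch: perturbations are sampled from a distribution proportional to exp(loss/temperature) restricted to a small neighborhood of each training point, using a few Langevin-style (noisy gradient-ascent) steps, and the network is then updated on the sampled adversarial points. The L-infinity neighborhood, the Langevin sampler, and all hyperparameter values are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch of entropy-regularized adversarial training (hedged illustration).
# Assumptions not stated in the abstract: L-infinity neighborhood of radius EPS,
# Langevin-style sampling of perturbations with temperature TEMP, toy hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS, STEP, TEMP, K = 0.3, 0.1, 0.05, 5  # illustrative hyperparameters


def sample_adversarial(model, x, y):
    """Draw a perturbation favoring high-loss points near x (noisy ascent on the loss)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(K):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        noise = torch.randn_like(delta) * (2 * STEP * TEMP) ** 0.5
        with torch.no_grad():
            delta += STEP * grad + noise   # ascend the loss; noise induces the entropic spread
            delta.clamp_(-EPS, EPS)        # stay in the neighborhood of the clean sample
    return delta.detach()


def train_step(model, optimizer, x, y):
    """One weight update on adversarial samples drawn around the current batch."""
    delta = sample_adversarial(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Stand-in model and batch shaped like MNIST, purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
    print(train_step(model, optimizer, x, y))
```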