Unforgeability in Stochastic Gradient Descent

Abstract

Stochastic Gradient Descent (SGD) is a popular training algorithm and a cornerstone of modern machine learning systems. Several security applications benefit from determining whether SGD executions are forgeable, i.e., whether the model parameters seen at a given step are obtainable from more than one distinct set of data samples. In this paper, we present the first attempt at proving the impossibility of such forgery. We furnish a set of conditions, efficiently checkable on concrete checkpoints seen during training runs, under which a checkpoint is provably unforgeable at that step. Our experiments show that the conditions are mild and were satisfied at every checkpoint we sampled. Our results sharply contrast with prior findings: checkpoints that we prove unforgeable have been deemed forgeable under the same methodology and experimental setup suggested in prior work. This discrepancy arises from unspecified subtleties in the definitions. We experimentally confirm that the distinction matters, i.e., small errors amplify during training and produce significant, observable differences in the final trained models. We hope our results serve as a cautionary note on the role of algebraic precision in forgery definitions and related security arguments.
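
To make the forgery notion concrete, here is a minimal NumPy sketch. It is illustrative only and not the paper's construction: the toy linear model, the functions sgd_step and reproduces_checkpoint, and the tolerance parameter tol are our assumptions. A checkpoint transition is "forged" if some batch other than the genuine one also maps w_t to w_{t+1}; whether equality is required to be exact (tol = 0) or only approximate (tol > 0) is precisely the kind of definitional subtlety the abstract highlights.

```python
import numpy as np

def sgd_step(w, X, y, lr=0.1):
    """One mini-batch SGD step on a least-squares loss (toy linear model)."""
    grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean((Xw - y)^2)
    return w - lr * grad

def reproduces_checkpoint(w_t, w_next, X_alt, y_alt, lr=0.1, tol=0.0):
    """Does the alternative batch (X_alt, y_alt) map checkpoint w_t to w_next?

    tol == 0 demands exact equality (a strict notion of forgery);
    tol > 0 accepts any batch landing within an error ball, the looser
    notion the abstract attributes to prior work.
    """
    w_forged = sgd_step(w_t, X_alt, y_alt, lr)
    if tol == 0.0:
        return np.array_equal(w_forged, w_next)
    return float(np.linalg.norm(w_forged - w_next)) <= tol
```

Under the approximate notion, a nearby-but-different batch can pass this check at every step; the abstract's point is that such per-step tolerances compound over a training run, yielding observably different final models.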

Citation (APA)

Baluta, T., Nikolić, I., Jain, R., Aggarwal, D., & Saxena, P. (2023). Unforgeability in Stochastic Gradient Descent. In CCS 2023 - Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (pp. 1138–1152). Association for Computing Machinery, Inc. https://doi.org/10.1145/3576915.3623093
