Abstract
Machine learning is a well-established tool used in a variety of applications. As training advanced models requires considerable amounts of meaningful data in addition to specific knowledge, a new business model separates model creators from model users: pre-trained models are sold or made available as a service. This raises several security challenges, among others that of intellectual property protection. Therefore, a new research track actively seeks to provide techniques for model watermarking that would enable model identification in case of suspected model theft or misuse. In this paper, we focus on the problem of secure watermark verification, which affects all of the proposed techniques and until now was barely tackled. First, we revisit the existing threat model. In particular, we explain the possible threats related to a semi-honest or dishonest verification authority. Second, we show how to reduce trust requirements between participants by performing watermark verification on encrypted data. Finally, we describe a novel secure verification protocol and detail its possible implementation using Multi-Party Computation. The proposed solution not only preserves the confidentiality of the watermarks but also helps detect evasion attacks. It could be adapted to work with other authentication schemes based on watermarking, especially image watermarking schemes.
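The abstract does not specify the protocol's internals, but the core idea of verifying watermarks on encrypted data can be illustrated with a toy two-party sketch based on additive secret sharing, one standard building block of Multi-Party Computation. All names and values below are illustrative assumptions, not the paper's actual protocol: the owner's secret trigger-set labels and the suspect model's outputs are split into shares, each pairwise difference is masked by a random factor, and only whether the labels matched is revealed.

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative choice)

def share(x):
    """Split integer x into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reveal(s1, s2):
    """Recombine two additive shares."""
    return (s1 + s2) % P

# Hypothetical trigger set: the labels the owner embedded as a watermark,
# and the labels returned by the suspect model on the same trigger inputs.
owner_labels  = [3, 1, 4, 1, 5]
model_outputs = [3, 1, 4, 2, 5]

matches = 0
for y_owner, y_model in zip(owner_labels, model_outputs):
    a1, a2 = share(y_owner)
    b1, b2 = share(y_model)
    # Each party locally subtracts its shares of the two labels, then
    # multiplies by a common random nonzero mask r.  Revealing the masked
    # difference leaks only whether the labels were equal (result == 0),
    # not the labels themselves.  (In a real protocol r would itself be
    # generated securely rather than by one party.)
    r = 1 + secrets.randbelow(P - 1)
    d1 = ((a1 - b1) * r) % P
    d2 = ((a2 - b2) * r) % P
    matches += (reveal(d1, d2) == 0)

print(matches)  # 4 of the 5 trigger inputs match
```

In a verification setting, the match count would then be compared against a decision threshold: a count close to the trigger-set size indicates the watermark is present, while a mismatching run can signal an evasion attempt.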
Kapusta, K., Thouvenot, V., Bettan, O., Beguinet, H., & Senet, H. (2021). A Protocol for Secure Verification of Watermarks Embedded into Machine Learning Models. In IH and MMSec 2021 - Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security (pp. 171–176). Association for Computing Machinery, Inc. https://doi.org/10.1145/3437880.3460409