Distillation-Resistant Watermarking for Model Protection in NLP

5 citations · 27 Mendeley readers

Abstract

How can we protect the intellectual property of trained NLP models? Modern NLP models are vulnerable to theft: an adversary can query their publicly exposed APIs and distill the responses into a copy. However, existing protection methods such as watermarking were designed for images and are not directly applicable to text. We propose Distillation-Resistant Watermarking (DRW), a novel technique to protect NLP models from being stolen via distillation. DRW protects a model by injecting a watermark, tied to a secret key, into the victim's prediction probabilities, and detects that key by probing a suspect model. We prove that a protected model still retains its original accuracy within a certain bound. We evaluate DRW on a diverse set of NLP tasks including text classification, part-of-speech tagging, and named entity recognition. Experiments show that DRW protects the original model and detects stealing suspects at 100% mean average precision on all four tasks, while the prior method fails on two.
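
The abstract describes the watermarking mechanism only at a high level. As a rough, hypothetical illustration of the general idea (not the authors' exact formulation), the sketch below adds a small key-dependent sinusoidal perturbation to the class probabilities an API returns, and later tests a suspect model by correlating its outputs with the key-derived signal. The function names (watermark_probs, detect_watermark), the perturbation size eps, and the choice of a sinusoid are all assumptions made for this example.

# Hypothetical sketch of a key-dependent probability watermark; illustrative only,
# not the authors' code or exact method.
import hashlib
import numpy as np

def _key_phase(key: str, text: str) -> float:
    # Map (secret key, input text) deterministically to a phase in [0, 2*pi).
    digest = hashlib.sha256((key + "\x00" + text).encode("utf-8")).hexdigest()
    return (int(digest[:8], 16) / 0x100000000) * 2.0 * np.pi

def watermark_probs(probs, text, key, eps=0.05):
    # Add a small key-dependent signal to class 0 and renormalize the rest,
    # so the returned vector still sums to 1 and stays within eps of the original.
    phase = _key_phase(key, text)
    out = np.asarray(probs, dtype=float).copy()
    out[0] = np.clip(out[0] + eps * np.sin(phase), 0.0, 1.0)
    rest = out[1:].sum()
    if rest > 0.0:
        out[1:] *= (1.0 - out[0]) / rest
    return out

def detect_watermark(query_fn, probe_texts, key):
    # Query the suspect model on probe inputs and correlate its class-0
    # probability with the key-derived sinusoid; a clearly positive correlation
    # suggests the suspect was distilled from watermarked outputs.
    expected = np.array([np.sin(_key_phase(key, t)) for t in probe_texts])
    observed = np.array([query_fn(t)[0] for t in probe_texts])
    return float(np.corrcoef(expected, observed)[0, 1])

In practice one would compare the correlation score of a suspect model against scores from independently trained models and flag the suspect when its score clearly exceeds that baseline; the exact test statistic and threshold are design choices beyond this sketch.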

Cite (APA)

Zhao, X., Li, L., & Wang, Y. X. (2022). Distillation-Resistant Watermarking for Model Protection in NLP. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5073–5084). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.370
