Machine learning (ML)-based techniques for electronic design automation (EDA) have boosted the performance of modern integrated circuits (ICs). This success makes ML models valuable assets for the EDA industry. Moreover, EDA models are widely considered expensive to develop because generating their training data is time-consuming and complicated, so protecting their confidentiality is a critical issue. However, an adversary can mount a model extraction attack to steal a model, in the sense of obtaining a copy whose performance is comparable to the victim's. Since model extraction attacks have already posed serious threats in other application domains, e.g., computer vision and natural language processing, in this paper we study model extraction attacks on EDA models under two real-world scenarios. To the best of our knowledge, this is the first work that (1) introduces model extraction attacks on EDA models and (2) proposes attack methods for both the unlimited and the limited query-budget scenarios. Our results show that the extracted model achieves performance competitive with the well-trained victim model, without any performance degradation. Based on these results, we demonstrate that model extraction attacks are a genuine threat to EDA model privacy, and we hope to raise awareness of ML security issues in EDA.
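To make the threat model concrete, the sketch below shows the generic query-and-distill loop that model extraction attacks follow: the adversary submits inputs to the victim model's prediction interface within a query budget, then trains a surrogate on the returned outputs. This is an illustrative outline only, not the paper's specific attack; the victim function, query distribution, and surrogate architecture are placeholder assumptions.

    # Minimal sketch of a generic model extraction attack (illustrative,
    # not the method from the paper). The adversary queries a black-box
    # victim regressor (e.g., an EDA quality-of-results predictor) and
    # fits a surrogate on the (input, victim output) pairs.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def victim_predict(x: np.ndarray) -> np.ndarray:
        """Stand-in for the victim model's query API (assumption)."""
        return np.sin(x[:, 0]) + 0.5 * x[:, 1]  # placeholder black box

    # 1. Adversary synthesizes queries within a limited budget.
    query_budget = 1000
    queries = rng.uniform(-1.0, 1.0, size=(query_budget, 2))

    # 2. Collect the victim's responses (labels) for those queries.
    labels = victim_predict(queries)

    # 3. Train a surrogate that mimics the victim's input-output behavior.
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(queries, labels)

    # 4. Evaluate fidelity: how closely the surrogate tracks the victim.
    test = rng.uniform(-1.0, 1.0, size=(200, 2))
    fidelity_mse = np.mean((surrogate.predict(test) - victim_predict(test)) ** 2)
    print(f"Surrogate-vs-victim MSE: {fidelity_mse:.4f}")

In practice, the limited-budget scenario hinges on step 1: the adversary must choose informative queries rather than sample uniformly, which is where the paper's two attack methods differ from this naive sketch.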
Citation:
Chang, C. C., Pan, J., Xie, Z., Hu, J., & Chen, Y. (2023). Rethink before Releasing Your Model: ML Model Extraction Attack in EDA. In Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC (pp. 252–257). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3566097.3567896