Deep learning has shown impressive performance across a variety of domains, including data-driven prognostics. However, deep neural networks are known to be susceptible to adversarial perturbations: small, specially crafted modifications to normal inputs that degrade the quality of a machine learning predictor's output. We study the impact of such perturbations in data-driven prognostics, where sensor readings are used to predict system health, including health-status classification and remaining-useful-life regression. We find that adding imperceptible noise to a normal input can induce obvious prognostic errors, and that a hybrid model combining randomization with structural contexts is more robust to adversarial perturbations than a conventional deep neural network. Our work exposes limitations of current deep learning techniques in purely data-driven prognostics and indicates a potential technical path forward. To the best of our knowledge, this work is the first to investigate the use of randomization and semantic structural contexts against current adversarial attacks on deep learning-based prognostics.
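The abstract does not specify the attack used, but the idea of an imperceptible, gradient-sign perturbation that inflates a regressor's error can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a toy linear stand-in for a deep RUL regressor and an FGSM-style step (sign of the loss gradient with a small budget `eps`); all names (`predict`, `fgsm`, the weight vector `w`) are hypothetical.

```python
import numpy as np

# Hypothetical linear stand-in for a deep RUL regressor: rul = w . x + b.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.5

def predict(x):
    return float(w @ x + b)

def input_gradient(x, y):
    # Gradient of squared-error loss (pred - y)^2 with respect to the
    # input x; for this linear model it is 2 * (pred - y) * w.
    return 2.0 * (predict(x) - y) * w

def fgsm(x, y, eps=0.05):
    # FGSM-style perturbation: step in the sign of the input gradient,
    # so each sensor channel changes by at most eps (imperceptible noise).
    return x + eps * np.sign(input_gradient(x, y))

x = rng.normal(size=8)   # a "normal" sensor reading
y = 10.0                 # true remaining useful life (hypothetical)

clean_err = abs(predict(x) - y)
x_adv = fgsm(x, y, eps=0.05)
adv_err = abs(predict(x_adv) - y)
# adv_err exceeds clean_err even though no channel moved by more than 0.05.
```

For this linear model the perturbation shifts the prediction away from the true label by exactly `eps * sum(|w|)`; against a deep network the same one-step attack only approximates the worst case, which is why iterative variants are often stronger.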
CITATION STYLE
Zhou, X., Canady, R., Li, Y., & Gokhale, A. (2020). Overcoming adversarial perturbations in data-driven prognostics through semantic structural context-driven deep learning. In Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM (Vol. 12). Prognostics and Health Management Society. https://doi.org/10.36001/phmconf.2020.v12i1.1182