Adaptive neural recovery for highly robust brain-like representation

Abstract

Today's machine learning platforms face major robustness challenges when running on insecure and unreliable memory systems. In conventional data representations, bit flips caused by noise or attack can produce value explosion, which in turn leads to incorrect learning predictions. In this paper, we propose RobustHD, a robust and noise-tolerant learning system based on HyperDimensional Computing (HDC) that mimics important brain functionalities. Unlike traditional binary representation, RobustHD exploits a redundant, holographic representation in which every bit has the same impact on the computation. RobustHD also includes a runtime framework that adaptively identifies and regenerates faulty dimensions in an unsupervised way. Our solution not only provides security against possible bit-flip attacks but also delivers learning with high robustness to memory noise. We performed a cross-stacked evaluation spanning a conventional platform and an emerging processing in-memory architecture. Our evaluation shows that under a 10% random bit-flip attack, RobustHD incurs at most 0.53% quality loss, whereas deep learning solutions lose over 26.2% accuracy.
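The abstract does not spell out the encoding details, but the contrast it draws can be illustrated with a minimal sketch. The snippet below (an assumption-based illustration, not the paper's implementation) compares a conventional positional binary value, where flipping one high-order bit causes value explosion, with a bipolar hypervector in the style of HDC, where every dimension carries equal weight, so a 10% random flip only slightly perturbs the cosine similarity. The dimensionality D = 10,000 and the bipolar/cosine conventions are common HDC choices assumed here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice, not from the paper)

# --- Conventional positional binary: one high-order bit flip explodes the value ---
value = 1000
corrupted = value ^ (1 << 31)      # flip the most significant bit of a 32-bit word
print(value, corrupted)             # 1000 vs 2147484648: "value explosion"

# --- Holographic (HDC-style) representation: all dimensions have equal impact ---
hv = rng.choice([-1, 1], size=D)    # bipolar hypervector standing in for a learned class vector
noisy = hv.copy()
idx = rng.choice(D, size=int(0.10 * D), replace=False)
noisy[idx] *= -1                    # 10% random "bit flips" across the hypervector
cosine = hv @ noisy / D             # bipolar vectors have norm sqrt(D), so cosine = dot / D
print(round(cosine, 2))             # ~0.80: the match to the original vector barely shifts
```

Because no single dimension dominates, corrupted dimensions can also be detected and regenerated at runtime, which is the role of RobustHD's unsupervised recovery framework described above.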

Citation (APA)

Poduval, P., Ni, Y., Kim, Y., Ni, K., Kumar, R., Cammarota, R., & Imani, M. (2022). Adaptive neural recovery for highly robust brain-like representation. In Proceedings - Design Automation Conference (pp. 367–372). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3489517.3530659
