Today's machine learning platforms face major robustness issues when operating on insecure and unreliable memory systems. With conventional data representations, bit flips caused by noise or attack can trigger value explosion, leading to incorrect predictions. In this paper, we propose RobustHD, a robust and noise-tolerant learning system based on HyperDimensional Computing (HDC), which mimics important brain functionalities. Unlike traditional binary representation, RobustHD exploits a redundant, holographic representation that ensures all bits have the same impact on the computation. RobustHD also introduces a runtime framework that adaptively identifies and regenerates faulty dimensions in an unsupervised way. Our solution not only provides security against possible bit-flip attacks but also yields a learning system that is highly robust to memory noise. We performed a cross-stack evaluation spanning a conventional platform and an emerging processing-in-memory architecture. Our evaluation shows that under a 10% random bit-flip attack, RobustHD incurs a maximum quality loss of 0.53%, while deep learning solutions lose over 26.2% accuracy.
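The contrast between value explosion in binary representation and graceful degradation in a holographic one can be illustrated with a minimal Python sketch. This is not the paper's implementation: the 32-bit weight, the dimensionality D, and the random bipolar hypervector are illustrative assumptions.

```python
import random

random.seed(0)

# Conventional binary representation (assumed 32-bit integer weight):
# flipping a single high-order bit explodes the stored value.
weight = 1000
flipped = weight ^ (1 << 30)  # flip bit 30 -> value changes by 2**30

# Holographic representation (sketch): the information is spread over
# D bipolar dimensions, so every bit contributes equally and a 10%
# random flip only mildly perturbs the vector's similarity to itself.
D = 10_000
hv = [random.choice([-1, 1]) for _ in range(D)]
noisy = hv[:]
for i in random.sample(range(D), D // 10):  # flip 10% of dimensions
    noisy[i] = -noisy[i]

# Cosine similarity of bipolar vectors: flipping exactly 10% of the
# dimensions gives 1 - 2 * 0.10 = 0.80, regardless of which bits flip.
cos_sim = sum(a * b for a, b in zip(hv, noisy)) / D
```

Here the integer's error is 2^30 from one flipped bit, while the hypervector keeps 80% similarity after 1,000 flipped bits, which is the intuition behind the "all bits have the same impact" claim.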
CITATION STYLE
Poduval, P., Ni, Y., Kim, Y., Ni, K., Kumar, R., Cammarota, R., & Imani, M. (2022). Adaptive neural recovery for highly robust brain-like representation. In Proceedings - Design Automation Conference (pp. 367–372). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3489517.3530659