In this study, we address the evolving threat of Maleficent Neural Networks, also known as 'Evil' Neural Networks: neural networks that have malware embedded within them. In the absence of effective detection mechanisms, these malicious models remain undetected, posing significant challenges to the security of users and systems in the rapidly expanding field of Artificial Intelligence and Machine Learning. This research provides a comprehensive examination of Maleficent Neural Networks and their detection, mitigation, and associated security issues, based on recent foundational studies. A discussion of the ethical and legal concerns surrounding the deliberate embedding of malware into neural networks is also included, emphasising the need for collaborative efforts among experts in AI, machine learning, and cyber security. The study shows that this new threat poses several risks, and the limited number of works we identified on the topic confirms that more research is needed in this direction. Moreover, we propose promising future directions, including the creation of advanced adversarial defence mechanisms and the development of new methods to detect malware within neural networks.
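To make the embedding idea concrete, below is a minimal, hypothetical Python sketch (not the specific scheme of the survey or of any work it covers): it hides arbitrary payload bytes in the least significant mantissa byte of each float32 weight, a perturbation small enough to leave model behaviour essentially unchanged, which is one reason such payloads are hard to detect. The function names and the NumPy-only setup are illustrative assumptions.

import numpy as np

# Illustrative sketch only: hide bytes in the low-order byte of each float32
# weight. On little-endian machines byte 0 is the mantissa LSB, so each weight
# value changes only marginally.
def embed_payload(weights, payload):
    flat = weights.astype(np.float32).ravel()      # fresh copy of the weights
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)       # 4 raw bytes per float32
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights, length):
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Demo: the weights barely move, so model accuracy is largely preserved.
w = np.random.randn(1000).astype(np.float32)
secret = b"example payload"
w_stego = embed_payload(w, secret)
assert extract_payload(w_stego, len(secret)) == secret
print("max absolute weight change:", np.abs(w_stego - w).max())

In practice an attacker could additionally encrypt the payload and spread it across many tensors, which is part of what makes detection difficult.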
Portales, S. Z., & Riegler, M. A. (2024). Maleficent Neural Networks, the Embedding of Malware in Neural Networks: A Survey. IEEE Access, 12, 69753–69764. https://doi.org/10.1109/ACCESS.2024.3401578