When ChatGPT goes rogue: exploring the potential cybersecurity threats of AI-powered conversational chatbots


Abstract

ChatGPT has garnered significant interest since its release in November 2022, showcasing strong versatility across a wide range of industries and domains. Defensive cybersecurity is one area where ChatGPT has demonstrated considerable potential, thanks to its ability to provide customized cybersecurity awareness training and to assess security vulnerabilities and offer concrete remediation recommendations. However, the offensive use of ChatGPT (and of AI-powered conversational agents in general) remains an underexplored research topic. This preliminary study aims to shed light on the potential weaponization of ChatGPT to facilitate and initiate cyberattacks. We briefly review the defensive use of ChatGPT in cybersecurity, then, through practical examples and use-case scenarios, illustrate its potential misuse to launch hacking and cybercrime activities. We discuss the practical implications of our study and provide recommendations for future research.

Citation (APA)

Iqbal, F., Samsom, F., Kamoun, F., & MacDermott, Á. (2023). When ChatGPT goes rogue: exploring the potential cybersecurity threats of AI-powered conversational chatbots. Frontiers in Communications and Networks, 4. https://doi.org/10.3389/frcmn.2023.1220243
