Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation

Citations: 5
Mendeley readers: 9

Abstract

Password generation techniques have recently been explored by leveraging deep-learning natural language processing (NLP) algorithms. Previous work has significantly advanced the state of the art in password guessing by approaching the problem with either variational autoencoders using CNN-based encoder and decoder architectures, or with transformer-based architectures (namely GPT-2) for text generation. In this work we aim to combine both paradigms, introducing a novel architecture that couples the expressive power of transformers with the natural sampling-based approach to text generation of variational autoencoders. We show that our architecture achieves state-of-the-art password matching performance across multiple benchmark datasets.
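To make the combination concrete, the sketch below shows one plausible way to wire a variational autoencoder around transformer blocks in PyTorch: an encoder stack is pooled into the parameters of a latent Gaussian, and an autoregressive decoder stack cross-attends to the sampled latent code while reconstructing the password characters. This is an illustrative sketch only, not the authors' implementation; every hyperparameter, the mean-pooling of encoder states, the single-token latent "memory", and the begin-of-sequence convention are assumptions.

```python
# Illustrative sketch: a character-level VAE whose encoder and decoder are
# transformer stacks. All sizes and design choices here are assumptions.
import torch
import torch.nn as nn

VOCAB, MAX_LEN, D_MODEL, D_LATENT = 100, 16, 128, 64  # assumed sizes

class TransformerVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, MAX_LEN, D_MODEL))  # learned positions
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True), num_layers=2)
        self.to_mu = nn.Linear(D_MODEL, D_LATENT)      # latent Gaussian parameters
        self.to_logvar = nn.Linear(D_MODEL, D_LATENT)
        self.from_z = nn.Linear(D_LATENT, D_MODEL)     # latent -> decoder memory
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(D_MODEL, nhead=4, batch_first=True), num_layers=2)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def encode(self, tokens):
        h = self.encoder(self.embed(tokens) + self.pos[:, :tokens.size(1)])
        pooled = h.mean(dim=1)                         # crude pooling over positions
        return self.to_mu(pooled), self.to_logvar(pooled)

    def decode(self, z, tokens):
        # The autoregressive decoder cross-attends to the latent code,
        # treated here as a single memory token.
        memory = self.from_z(z).unsqueeze(1)
        tgt = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.out(self.decoder(tgt, memory, tgt_mask=mask))

    def forward(self, tokens):
        mu, logvar = self.encode(tokens)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        # Standard VAE objective: reconstruction cross-entropy plus this KL term.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        return self.decode(z, tokens), kl

# Generation: sample z from the standard-normal prior and decode greedily,
# assuming token index 0 acts as a begin-of-sequence marker.
model = TransformerVAE().eval()
z = torch.randn(1, D_LATENT)
tokens = torch.zeros(1, 1, dtype=torch.long)
with torch.no_grad():
    for _ in range(MAX_LEN - 1):
        nxt = model.decode(z, tokens)[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, nxt], dim=1)
```

Sampling many independent latent vectors z then yields a stream of candidate passwords without any seed text, which is the "natural sampling approach to text generation" of variational autoencoders that the abstract refers to.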

Citation (APA)

Biesner, D., Cvejoski, K., & Sifa, R. (2022). Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3538969.3539000
