Recent work has explored password generation with deep-learning natural language processing (NLP) models, significantly raising the state of the art in password guessing by approaching the problem with either variational autoencoders using CNN-based encoder and decoder architectures or transformer-based language models (namely GPT-2) for text generation. In this work we combine both paradigms, introducing a novel architecture that pairs the expressive power of transformers with the natural latent-space sampling of variational autoencoders. We show that our architecture achieves state-of-the-art password-matching performance across multiple benchmark datasets.
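The abstract gives no implementation details, but the architecture it describes, a VAE whose encoder and decoder are transformers and whose passwords are generated by sampling the latent space, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; all class names, layer sizes, the mean-pooled encoder summary, the single-vector decoder memory, and the choice of token id 0 as BOS are assumptions.

```python
# Illustrative sketch (not the paper's code): a character-level transformer VAE.
import torch
import torch.nn as nn

class TransformerVAE(nn.Module):
    def __init__(self, vocab_size=100, d_model=128, latent_dim=64,
                 nhead=4, num_layers=2, max_len=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.from_z = nn.Linear(latent_dim, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def encode(self, tokens):
        h = self.encoder(self.embed(tokens) + self.pos[:, :tokens.size(1)])
        h = h.mean(dim=1)                      # pool over positions (an assumption)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, the standard VAE reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, tokens):
        memory = self.from_z(z).unsqueeze(1)   # latent vector as decoder memory
        tgt = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        L = tokens.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf"),
                                       device=tokens.device), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=causal))

    def forward(self, tokens):
        mu, logvar = self.encode(tokens)
        z = self.reparameterize(mu, logvar)
        # teacher forcing: decoder input is tokens shifted right (id 0 = BOS here)
        bos = torch.zeros_like(tokens[:, :1])
        logits = self.decode(z, torch.cat([bos, tokens[:, :-1]], dim=1))
        # KL divergence of q(z|x) from the standard normal prior
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl

model = TransformerVAE()
tokens = torch.randint(0, 100, (32, 16))       # a batch of tokenized passwords
logits, kl = model(tokens)
recon = nn.functional.cross_entropy(logits.transpose(1, 2), tokens)
loss = recon + kl                              # ELBO; the KL weight is often annealed
```

At generation time one would sample z from the standard normal prior and decode autoregressively, which is the natural sampling approach to text generation the abstract attributes to variational autoencoders.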
Biesner, D., Cvejoski, K., & Sifa, R. (2022). Combining Variational Autoencoders and Transformer Language Models for Improved Password Generation. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3538969.3539000