Interpretable Quantum Advantage in Neural Sequence Learning

Citations: 7 | Mendeley readers: 27

Abstract

Quantum neural networks have been widely studied in recent years, given their potential practical utility and recent results regarding their ability to efficiently express certain classical data. However, analytic results to date rely on assumptions and arguments from complexity theory. Because of this, there is little intuition as to the source of the expressive power of quantum neural networks or for which classes of classical data any advantage can be reasonably expected to hold. Here, we study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements. We explicitly show that quantum contextuality is the source of an unconditional memory separation in the expressivity of the two model classes. We use this intuition to study the relative performance of our introduced model on a standard translation data set exhibiting linguistic contextuality. In doing so, we demonstrate that our introduced quantum models are able to outperform state-of-the-art classical models even in practice.
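The abstract identifies quantum contextuality as the source of the memory separation. As a self-contained illustration of what contextuality means, the short Python sketch below uses the standard Mermin-Peres magic square (a textbook example of contextuality, not the particular construction analyzed in the paper): it enumerates all 2^9 noncontextual assignments of fixed values +1 or -1 to the nine magic-square observables and confirms that none reproduces the product constraints that quantum mechanics realizes.

from itertools import product

# Illustrative check of quantum contextuality via the Mermin-Peres magic square
# (a standard example; not the specific construction used in the paper).
# Quantum mechanics supplies nine two-qubit observables arranged in a 3x3 grid
# such that the product of the observables in each row is +I, while the
# column products are +I, +I, -I.  A noncontextual (classical) model must
# pre-assign a value of +1 or -1 to every observable and satisfy the same
# six product constraints simultaneously.

def satisfies_constraints(grid):
    """grid: 3x3 nested list of +/-1 values, one per observable."""
    rows_ok = all(grid[r][0] * grid[r][1] * grid[r][2] == +1 for r in range(3))
    cols_ok = (
        grid[0][0] * grid[1][0] * grid[2][0] == +1
        and grid[0][1] * grid[1][1] * grid[2][1] == +1
        and grid[0][2] * grid[1][2] * grid[2][2] == -1
    )
    return rows_ok and cols_ok

count = 0
for flat in product([+1, -1], repeat=9):
    grid = [list(flat[0:3]), list(flat[3:6]), list(flat[6:9])]
    if satisfies_constraints(grid):
        count += 1

# Prints 0: no noncontextual value assignment reproduces the quantum constraints.
print("Satisfying noncontextual assignments:", count)

The count comes out to zero, reflecting the usual parity argument: multiplying all nine values row by row forces the grand product to +1, while multiplying column by column forces it to -1, so no noncontextual hidden-variable assignment exists.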

Cite

APA: Anschuetz, E. R., Hu, H. Y., Huang, J. L., & Gao, X. (2023). Interpretable Quantum Advantage in Neural Sequence Learning. PRX Quantum, 4(2), 020338. https://doi.org/10.1103/PRXQuantum.4.020338
