Scrambling ability of quantum neural network architectures

Abstract

In this Letter, we propose a guiding principle for designing the architecture of a quantum neural network so as to achieve high learning efficiency. This principle is inspired by the equivalence between extracting information from the input state to the readout qubit and scrambling information from the readout qubit to the input qubits. We characterize quantum information scrambling by operator size growth and, by Haar random averaging over operator sizes, propose an averaged operator size that describes the information scrambling ability of a given quantum neural network architecture. The key conjecture of this Letter is that this quantity is positively correlated with the learning efficiency of the architecture. To support this conjecture, we apply several different architectures to two typical learning tasks: a regression task on a quantum problem and a classification task on classical images. In both cases, we find that for architectures with a larger averaged operator size, the loss function decreases faster or the prediction accuracy increases faster as the training epoch increases, indicating higher learning efficiency. Our results can be generalized to more complicated quantum versions of machine learning algorithms.
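The operator size underlying the abstract's conjecture can be made concrete with a small numerical sketch. Under Heisenberg evolution, the readout operator O(t) = U† O U is expanded in the Pauli-string basis, and its size is the average number of non-identity sites weighted by the squared expansion coefficients. The sketch below (an illustration, not the paper's implementation; the 3-qubit system, the Haar-random unitary standing in for a QNN circuit, and all function names are assumptions) computes this quantity for O = Z on the readout qubit:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 3           # toy number of qubits (qubit 0 = readout qubit); an assumption
dim = 2 ** n

# Single-qubit Pauli matrices
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. ('Z','I','I')."""
    m = np.array([[1.0 + 0j]])
    for l in labels:
        m = np.kron(m, paulis[l])
    return m

def haar_unitary(d, rng):
    """Haar-random unitary from the QR decomposition of a Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phases so the distribution is exactly Haar
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Heisenberg-evolve the readout operator O = Z (x) I (x) I under a circuit U
O = pauli_string(("Z", "I", "I"))
U = haar_unitary(dim, rng)        # stand-in for a QNN circuit unitary
O_t = U.conj().T @ O @ U

# Expand O(t) in the Pauli basis: c_P = Tr(P O(t)) / 2^n.
# Operator size = sum_P |c_P|^2 * (number of non-identity sites in P).
size = 0.0
norm = 0.0
for labels in product("IXYZ", repeat=n):
    c = np.trace(pauli_string(labels) @ O_t) / dim
    w = abs(c) ** 2
    norm += w
    size += w * sum(l != "I" for l in labels)

print(f"normalization        = {norm:.6f}")   # = 1 since Tr(O^2) = 2^n
print(f"average operator size = {size:.3f}")   # between 0 and n
```

A deep scrambling circuit drives the size of the initially 1-local readout operator toward its Haar value of roughly 3n/4, whereas a shallow or poorly connected architecture keeps it small; the paper's conjecture is that the former trains faster.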

Citation (APA)

Wu, Y., Zhang, P., & Zhai, H. (2021). Scrambling ability of quantum neural network architectures. Physical Review Research, 3(3). https://doi.org/10.1103/PhysRevResearch.3.L032057
