Group-Invariant Quantum Machine Learning

Abstract

Quantum machine learning (QML) models aim to learn from data encoded in quantum states. Recently, it has been shown that models with little to no inductive biases (i.e., with no assumptions about the problem embedded in the model) are likely to have trainability and generalization issues, especially for large problem sizes. As such, it is fundamental to develop schemes that encode as much information as is available about the problem at hand. In this work we present a simple, yet powerful, framework where the underlying invariances in the data are used to build QML models that, by construction, respect those symmetries. These so-called group-invariant models produce outputs that remain invariant under the action of any element of the symmetry group G associated with the dataset. We present theoretical results underpinning the design of G-invariant models, and exemplify their application through several paradigmatic QML classification tasks, including cases when G is a continuous Lie group and also when it is a discrete symmetry group. Notably, our framework allows us to recover, in an elegant way, several well-known algorithms from the literature, as well as to discover new ones. Taken together, we expect that our results will help pave the way towards a more geometric and group-theoretic approach to QML model design.
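As a minimal sketch of the invariance requirement described above (notation introduced here for illustration and not taken from the abstract): suppose the model output is an expectation value $h_{\theta}(\rho) = \mathrm{Tr}\!\left[\rho\, O_{\theta}\right]$ of some parametrized observable $O_{\theta}$, and let $R(g)$ denote a unitary representation of the symmetry group $G$ on the data Hilbert space. G-invariance of the model then amounts to the condition

$$
h_{\theta}\!\left(R(g)\,\rho\,R(g)^{\dagger}\right) = h_{\theta}(\rho)
\qquad \text{for all } g \in G \text{ and all input states } \rho,
$$

which is guaranteed whenever the measured observable commutes with the representation, i.e. $\left[O_{\theta}, R(g)\right] = 0$ for all $g \in G$, since then $\mathrm{Tr}\!\left[R(g)\,\rho\,R(g)^{\dagger} O_{\theta}\right] = \mathrm{Tr}\!\left[\rho\, O_{\theta}\right]$ by cyclicity of the trace.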

Cite

APA

Larocca, M., Sauvage, F., Sbahi, F. M., Verdon, G., Coles, P. J., & Cerezo, M. (2022). Group-Invariant Quantum Machine Learning. PRX Quantum, 3(3). https://doi.org/10.1103/PRXQuantum.3.030341
