Structural risk minimization for quantum linear classifiers


Abstract

Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing’s near-term “killer application”. However, our understanding of the empirical and generalization performance of these models is still in its infancy. In this paper we study how to balance training accuracy against generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havlíček et al. [1] and Schuld and Killoran [2]. First, using relationships to well-understood classical models, we prove that two model parameters – i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model – closely control the models’ complexity and therefore their generalization performance. Second, using ideas inspired by process tomography, we prove that these model parameters also closely control the models’ ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.
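To make the abstract's central idea concrete, the following minimal sketch trains a toy linear classifier of the form f(x) = Tr[O ρ(x)], where ρ(x) is a density matrix encoding the data point and O is a Hermitian observable, and penalizes the Frobenius norm of O. A larger penalty shrinks the hypothesis class, trading training accuracy for generalization, which is the structural-risk-minimization trade-off the paper studies. The feature map, hinge loss, and hyperparameters here are illustrative assumptions for the sketch, not the constructions of Havlíček et al. or Schuld and Killoran.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(x):
    """Encode a 2D point as a pure-state density matrix (toy feature map)."""
    psi = np.array([np.cos(x[0]), np.sin(x[0]) * np.exp(1j * x[1])])
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def predict(O, x):
    """Linear classifier value f(x) = Tr[O rho(x)] (real for Hermitian O)."""
    return np.real(np.trace(O @ rho(x)))

def train(X, y, lam=0.1, lr=0.05, epochs=200):
    """Hinge-loss gradient descent with a Frobenius-norm penalty lam * ||O||_F^2.

    Larger lam restricts the observable's norm, i.e. selects a smaller
    hypothesis class in the structural-risk-minimization hierarchy.
    """
    d = rho(X[0]).shape[0]
    O = np.zeros((d, d), dtype=complex)
    for _ in range(epochs):
        grad = 2 * lam * O                          # gradient of lam * ||O||_F^2
        for x, label in zip(X, y):
            if label * predict(O, x) < 1:           # hinge margin violated
                grad -= label * rho(x).T / len(X)   # d Tr[O rho] / dO = rho^T
        O -= lr * grad
        O = (O + O.conj().T) / 2                    # project back onto Hermitian matrices
    return O
```

With two well-separated clusters, a small penalty yields an observable with high training accuracy and a comparatively large Frobenius norm, while a large penalty produces a much smaller-norm (more strongly regularized) observable.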

Citation (APA)

Gyurik, C., van Vreumingen, D., & Dunjko, V. (2023). Structural risk minimization for quantum linear classifiers. Quantum, 7. https://doi.org/10.22331/q-2023-01-13-893
