A quantum extension of SVM-perf for training nonlinear SVMs in almost linear time

Abstract

We propose a quantum algorithm for training nonlinear support vector machines (SVMs) for feature space learning, where classical input data is encoded in the amplitudes of quantum states. Based on the classical SVM-perf algorithm of Joachims [1], our algorithm has a running time which scales linearly in the number of training examples m (up to polylogarithmic factors) and applies to the standard soft-margin ℓ1-SVM model. In contrast, while classical SVM-perf has demonstrated impressive performance on both linear and nonlinear SVMs, its efficiency is guaranteed only in certain cases: it achieves linear m scaling only for linear SVMs, where classification is performed in the original input data space, or for the special cases of low-rank or shift-invariant kernels. Similarly, previously proposed quantum algorithms either have super-linear scaling in m, or else apply to different SVM models, such as the hard-margin or least-squares ℓ2-SVM, which lack certain desirable properties of the soft-margin ℓ1-SVM model. We classically simulate our algorithm and give evidence that it can perform well in practice, and not only for asymptotically large data sets.
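For context, the soft-margin ℓ1-SVM model referred to in the abstract is conventionally written as the following optimization problem (this is the standard textbook formulation, not a formula reproduced from the paper itself; φ denotes the feature map and C the regularization parameter):

```latex
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\|w\|^2 \;+\; \frac{C}{m}\sum_{i=1}^{m}\xi_i
\qquad \text{s.t.}\qquad
y_i\bigl(w^{\top}\phi(x_i) + b\bigr) \;\ge\; 1 - \xi_i,\quad \xi_i \ge 0,\;\; i = 1,\dots,m.
```

The least-squares ℓ2-SVM mentioned for contrast penalizes the squared slacks, ∑ξ_i², and (in the LS-SVM variant) replaces the inequality constraints with equalities, which is what removes the sparsity and robustness properties of the ℓ1 model noted in the abstract.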

Citation (APA)

Allcock, J., & Hsieh, C. Y. (2020). A quantum extension of SVM-perf for training nonlinear SVMs in almost linear time. Quantum, 4. https://doi.org/10.22331/Q-2020-10-15-342
