Structure from randomness in halfspace learning with the zero-one loss


Abstract

We prove risk bounds for halfspace learning when the data dimensionality is allowed to exceed the sample size, using a notion of compressibility by random projection. In particular, we give upper bounds for the empirical risk minimizer learned efficiently from randomly projected data, as well as uniform upper bounds in the full high-dimensional space. Our main findings are as follows: (i) In both settings, the bounds discover and exploit benign geometric structure, which turns out to depend on the cosine similarities between the classifier and points of the input space, and they provide a new interpretation of margin-distribution-type arguments. (ii) Our bounds also let us draw new connections between several existing successful classification algorithms, and we demonstrate that the theory is predictive of empirically observed performance in numerical simulations and experiments. (iii) Taken together, these results suggest that the study of compressive learning can improve our understanding of which benign structural traits, if possessed by the data generator, make it easier to learn an effective classifier from a sample.
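To illustrate the compressive-learning setting the abstract describes, the sketch below generates high-dimensional data (d much larger than n) with a benign margin structure, compresses it with a Gaussian random projection, and fits a linear classifier in the low-dimensional space. This is only an illustrative sketch: the data model, the projection dimension k, and the use of a perceptron as a cheap stand-in for the projected-space empirical risk minimizer are all assumptions made here, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with benign structure (an illustrative assumption): each point
# has a sizeable cosine similarity with the true normal vector w_true, which is
# the kind of geometric property the paper's bounds are sensitive to.
n, d, k = 100, 1000, 25                  # sample size, ambient dim, projected dim
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
y = rng.choice([-1.0, 1.0], size=n)
noise = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))  # ||noise_i|| ~ 1
X = y[:, None] * w_true + noise                           # margin ~ 1 along w_true

# Gaussian random projection: inner products are preserved in expectation,
# so the compressed sample stays (nearly) separable with a similar margin.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
Xp = X @ R                               # compressed data, shape (n, k)

# Perceptron on the projected data -- a simple stand-in for the ERM over
# halfspaces in the compressed space (not the paper's actual algorithm).
w = np.zeros(k)
for _ in range(100):
    mistakes = 0
    for xi, yi in zip(Xp, y):
        if yi * (xi @ w) <= 0:
            w += yi * xi
            mistakes += 1
    if mistakes == 0:                    # separator found in projected space
        break

train_err = np.mean(np.sign(Xp @ w) != y)  # empirical zero-one loss
print(f"zero-one training error on projected data: {train_err:.3f}")
```

Because the projection preserves the large margins of this toy distribution, the classifier learned in only k = 25 dimensions attains a small empirical zero-one loss, which is the phenomenon the paper's upper bounds quantify.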

Kaban, A., & Durrant, R. J. (2020). Structure from randomness in halfspace learning with the zero-one loss. Journal of Artificial Intelligence Research, 69, 733–764. https://doi.org/10.1613/JAIR.1.11506
