Optimal margin distribution learning in dynamic environments

Abstract

Recently, a promising research direction in statistical learning has been advocated: optimal margin distribution learning, whose central idea is that the margin distribution, rather than the minimal margin, is crucial to generalization performance. Although the superiority of this new learning paradigm has been verified in batch learning settings, it remains open for online learning settings, in particular dynamic environments in which the underlying decision function varies over time. In this paper, we propose the dynamic optimal margin distribution machine and theoretically analyze its regret. Although the obtained bound has the same order as the best known one, our method significantly relaxes the restrictive assumption that the function variation must be given ahead of time, resulting in better applicability in practical scenarios. We also derive an excess risk bound for the special case in which the underlying decision function undergoes several discrete changes rather than varying continuously. Extensive experiments on both synthetic and real data sets demonstrate the superiority of our method.
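The abstract does not spell out the update rule, but the core idea of margin distribution learning can be illustrated with a hedged sketch: instead of maximizing the minimal margin, an online learner penalizes each example's margin for deviating from a target band, which keeps the margins concentrated. The function name, loss form, and all parameters below (`theta` for the band width, `lam` for regularization, `eta` for the step size) are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def dynamic_odm_sketch(stream, dim, eta=0.1, theta=0.5, lam=0.01):
    """Illustrative online update in the spirit of optimal margin
    distribution learning: penalize the squared deviation of each
    example's margin y * <w, x> outside a band around a target of 1,
    rather than maximizing only the minimal margin.

    `stream` yields (x, y) pairs with y in {-1, +1}. This is a sketch
    under assumed names and loss, not the paper's actual method.
    """
    w = np.zeros(dim)
    for x, y in stream:
        margin = y * (w @ x)
        # gradient of lam/2 * ||w||^2 plus squared deviation outside
        # the zero-loss band [1 - theta, 1 + theta]
        if margin < 1 - theta:
            grad = lam * w - 2 * (1 - theta - margin) * y * x
        elif margin > 1 + theta:
            grad = lam * w + 2 * (margin - 1 - theta) * y * x
        else:
            grad = lam * w
        w -= eta * grad  # online gradient step
    return w
```

In a dynamic environment, a step size schedule (or restarts) would additionally track the drifting decision function; the sketch above shows only the stationary margin-distribution update.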

APA

Zhang, T., Zhao, P., & Jin, H. (2020). Optimal margin distribution learning in dynamic environments. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6821–6828). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6162