Effective monotone knowledge integration in kernel support vector machines


Abstract

In many machine learning applications there exists prior knowledge that the response variable should be increasing (or decreasing) in one or more of the features. This is the knowledge of ‘monotone’ relationships. This paper presents two new techniques for incorporating monotone knowledge into non-linear kernel support vector machine classifiers. Incorporating monotone knowledge is useful because it can improve predictive performance and satisfy user requirements. While this is relatively straightforward for linear margin classifiers, for kernel SVMs it is more challenging to achieve efficiently. We apply the new techniques to real datasets and investigate the impact of monotonicity and sample size on predictive accuracy. The results show that the proposed techniques can significantly improve accuracy when the unconstrained model is not already fully monotone, which often occurs at smaller sample sizes. In contrast, existing techniques demonstrate a significantly lower capacity to increase monotonicity or achieve the resulting accuracy improvements.

Cite

CITATION STYLE

APA

Bartley, C., Liu, W., & Reynolds, M. (2016). Effective monotone knowledge integration in kernel support vector machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10086 LNAI, pp. 3–18). Springer Verlag. https://doi.org/10.1007/978-3-319-49586-6_1