Regularized learning with flexible constraints

Abstract

By its very nature, inductive inference performed by machine learning methods is mainly data-driven. Still, the consideration of background knowledge - if available - can help to make inductive inference more efficient and to improve the quality of induced models. Fuzzy set-based modeling techniques provide a convenient tool for making expert knowledge accessible to computational methods. In this paper, we exploit such techniques within the context of the regularization (penalization) framework of inductive learning. The basic idea is to express knowledge about an underlying data-generating model in terms of flexible constraints and to penalize those models violating these constraints. Within this framework, an optimal model is one that achieves the best trade-off between fitting the data and satisfying the constraints. © Springer-Verlag Berlin Heidelberg 2003.
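
The following is a minimal sketch of the general idea described in the abstract, not the paper's actual formulation: a model is fit by minimizing an empirical loss plus a penalty proportional to the degree to which a flexible (fuzzy) constraint is violated. All names and parameter values here (fit_with_soft_constraint, violation, the tolerance, lam) are illustrative assumptions.

# Sketch only: empirical loss + lambda * (fuzzy degree of constraint violation).
import numpy as np
from scipy.optimize import minimize

def violation(slope, tolerance=0.5):
    # Fuzzy violation of the constraint "the slope is (approximately) nonnegative":
    # 0 if fully satisfied, 1 if fully violated, graded in between.
    return float(np.clip(-slope / tolerance, 0.0, 1.0))

def fit_with_soft_constraint(x, y, lam=10.0):
    # Least-squares line fit penalized by the fuzzy constraint violation.
    def objective(theta):
        intercept, slope = theta
        residuals = y - (intercept + slope * x)
        data_fit = np.mean(residuals ** 2)   # empirical (squared) loss
        penalty = lam * violation(slope)     # flexible-constraint penalty
        return data_fit + penalty
    result = minimize(objective, x0=np.zeros(2))
    return result.x  # (intercept, slope) trading off data fit vs. constraint satisfaction

# Example: noisy data whose unconstrained fit has a slightly negative slope;
# the penalty nudges the solution toward (approximately) nonnegative slopes.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 - 0.05 * x + rng.normal(scale=0.2, size=x.size)
print(fit_with_soft_constraint(x, y))

Because the constraint is soft, a model that mildly violates it can still be optimal if it fits the data much better; the weight lam governs that trade-off.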

Citation (APA)

Hüllermeier, E. (2003). Regularized learning with flexible constraints. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2810, 13–24. https://doi.org/10.1007/978-3-540-45231-7_2
