Learning low cost multi-target models by enforcing sparsity


Abstract

We consider how to lower the cost of making predictions in multi-target learning problems by enforcing sparsity on the matrix containing the coefficients of the linear models. We formalize four types of sparsity patterns, as well as a greedy forward selection framework for enforcing these patterns in the coefficients of learned models. We discuss how these patterns relate to costs in different types of application scenarios, introducing the concepts of extractor and extraction costs of features. We experimentally demonstrate on two real-world data sets that, to achieve the lowest possible prediction costs while maintaining acceptable predictive accuracy, it is crucial to match the type of sparsity constraints enforced to the scenario in which the model is to be applied.
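The greedy forward selection idea mentioned in the abstract can be sketched for one of the possible sparsity patterns: row-wise sparsity, where all targets share the same selected features, so an unselected feature never needs to be extracted at prediction time. The following is a minimal illustrative sketch, not the paper's implementation; the function name, the fixed feature budget, and the plain least-squares refit at each step are assumptions.

```python
import numpy as np

def greedy_row_sparse(X, Y, budget):
    """Greedy forward selection of shared features for a multi-target
    linear model. At each step, add the feature (i.e. activate a row of
    the coefficient matrix W) that most reduces the total squared error
    summed over all targets, until `budget` features are selected.

    Illustrative sketch of row-wise sparsity only; names and the
    stopping rule (a fixed budget) are hypothetical, not from the paper.
    """
    n, d = X.shape
    selected = []
    for _ in range(budget):
        best_j, best_err = None, np.inf
        for j in range(d):
            if j in selected:
                continue
            cols = selected + [j]
            # Refit all targets jointly on the candidate feature set.
            W, *_ = np.linalg.lstsq(X[:, cols], Y, rcond=None)
            err = np.sum((X[:, cols] @ W - Y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected
```

Because every target uses the same rows of W, the prediction cost of this pattern is governed by the extraction cost of the selected features alone; the other patterns discussed in the paper (e.g. target-specific sparsity) would trade this off differently.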

Citation (APA)

Naula, P., Airola, A., Salakoski, T., & Pahikkala, T. (2015). Learning low cost multi-target models by enforcing sparsity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9101, pp. 252–261). Springer Verlag. https://doi.org/10.1007/978-3-319-19066-2_25
