Interpretable MTL from Heterogeneous Domains using Boosted Tree

7 citations · 13 Mendeley readers
Abstract

Multi-task learning (MTL) aims to improve the generalization performance of several related tasks by leveraging the useful information they share. In industrial scenarios, however, interpretability is often demanded, and the data of different tasks may lie in heterogeneous domains, making existing methods unsuitable or unsatisfactory. In this paper, following the philosophy of the boosted tree, we propose a two-stage method. In stage one, a common model is built to learn the commonalities from the common features of all instances. Unlike the training of a conventional boosted tree model, we propose a regularization strategy and an early-stopping mechanism to optimize the multi-task learning process. In stage two, starting from the residual error of the common model, a specific model is constructed from the task-specific instances to further boost performance. Experiments on both benchmark and real-world datasets validate the effectiveness of the proposed method. Moreover, interpretability is naturally obtained from the tree-based method, satisfying industrial needs.
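The two-stage scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses scikit-learn's `GradientBoostingRegressor` as the boosted tree, synthetic data for two hypothetical tasks, and built-in early stopping as a stand-in for the paper's regularization and early-stopping strategy.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical setup: two tasks share 3 common features; each task also has
# 2 task-specific features (heterogeneous domains).
def make_task(n, bias):
    Xc = rng.normal(size=(n, 3))   # common features
    Xs = rng.normal(size=(n, 2))   # task-specific features
    y = Xc @ np.array([1.0, -2.0, 0.5]) + bias * Xs[:, 0] \
        + rng.normal(scale=0.1, size=n)
    return Xc, Xs, y

tasks = {name: make_task(200, bias) for name, bias in [("A", 1.5), ("B", -1.0)]}

# Stage 1: a common model on the pooled common features of all instances,
# with early stopping to avoid overfitting any single task.
Xc_all = np.vstack([Xc for Xc, _, _ in tasks.values()])
y_all = np.concatenate([y for _, _, y in tasks.values()])
common = GradientBoostingRegressor(n_estimators=300, n_iter_no_change=10,
                                   validation_fraction=0.2, random_state=0)
common.fit(Xc_all, y_all)

# Stage 2: each task-specific model starts from the residual error of the
# common model and uses common + task-specific features.
specific = {}
for name, (Xc, Xs, y) in tasks.items():
    residual = y - common.predict(Xc)
    model = GradientBoostingRegressor(n_estimators=100, random_state=0)
    model.fit(np.hstack([Xc, Xs]), residual)
    specific[name] = model

def predict(name, Xc, Xs):
    """Final prediction = common model + task-specific residual model."""
    return common.predict(Xc) + specific[name].predict(np.hstack([Xc, Xs]))
```

Because both stages are tree ensembles, feature importances and individual tree splits remain inspectable, which is the source of the interpretability the abstract refers to.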

Citation (APA)
Zhang, Y. L., & Li, L. (2019). Interpretable MTL from Heterogeneous Domains using Boosted Tree. In International Conference on Information and Knowledge Management, Proceedings (pp. 2053–2056). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358072
