Tree-based transforms for privileged learning

Abstract

In many machine learning applications, samples are characterized by a variety of data modalities. In some instances, the training and testing data might include overlapping, but not identical, sets of features. In this work, we describe a versatile decision forest methodology to train a classifier on data that includes several modalities and then deploy it on test data that presents only a subset of those modalities. To this end, we introduce the concept of cross-modality tree feature transforms: feature transformations guided by how a different feature partitions the training data. We use the case of staging cognitive impairment to show the benefits of this approach. We train a random forest model that uses both MRI and PET features and can be tested on data that includes only MRI features. We show that the model provides an 8% improvement in accuracy in separating progressive cognitive impairment from stable impairment, compared to a model that uses MRI only for both training and testing.
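The abstract describes cross-modality tree feature transforms only at a high level. The sketch below is one hedged interpretation in Python with scikit-learn, not the authors' exact algorithm: each privileged (PET) feature is used to partition the training data with a shallow tree, and a surrogate tree then learns to reproduce that partition from the MRI features alone, so the partition-derived features stay computable when PET is unavailable at test time. All data, dimensions, and function names here are illustrative.

```python
# Illustrative sketch of a cross-modality tree feature transform.
# Assumptions (not from the paper): PET is the privileged modality seen only
# at training time; the "guiding partition" is taken to be the leaves of a
# shallow class-supervised tree grown on a single PET feature.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for the real cohort: 200 training subjects with 10 MRI and
# 5 PET (privileged) features, and a binary label (progressive vs. stable).
n, d_mri, d_pet = 200, 10, 5
X_mri = rng.normal(size=(n, d_mri))
X_pet = rng.normal(size=(n, d_pet))
y = rng.integers(0, 2, size=n)


def fit_cross_modality_transforms(X_avail, X_priv, y, depth=2):
    """One transform per privileged feature:
    1) a shallow tree splits the training set using only that privileged
       feature (this is the partition that 'guides' the transform), and
    2) a surrogate tree learns to predict the resulting partition (leaf id)
       from the available-modality features, so the partition can be
       approximated at test time without the privileged modality."""
    surrogates = []
    for j in range(X_priv.shape[1]):
        part_tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        part_tree.fit(X_priv[:, [j]], y)
        partition = part_tree.apply(X_priv[:, [j]])   # leaf index per sample
        surrogate = DecisionTreeClassifier(max_depth=depth, random_state=0)
        surrogate.fit(X_avail, partition)             # MRI -> PET-induced partition
        surrogates.append(surrogate)
    return surrogates


def apply_cross_modality_transforms(X_avail, surrogates):
    """Augment the available-modality features with the predicted
    partition-membership probabilities from each surrogate tree."""
    extra = [s.predict_proba(X_avail) for s in surrogates]
    return np.hstack([X_avail] + extra)


# Training uses both modalities to fit the transforms and the forest.
surrogates = fit_cross_modality_transforms(X_mri, X_pet, y)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(apply_cross_modality_transforms(X_mri, surrogates), y)

# At test time only MRI is present, yet the same augmentation still applies.
X_test_mri = rng.normal(size=(20, d_mri))
predictions = forest.predict(apply_cross_modality_transforms(X_test_mri, surrogates))
```

The design point this sketch is meant to convey is that the privileged modality influences only how the transforms are fitted; once fitted, every augmented feature is a function of the available modality alone, which is what allows MRI-only deployment.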

Citation (APA)

Moradi, M., Syeda-Mahmood, T., & Hor, S. (2016). Tree-based transforms for privileged learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10019 LNCS, pp. 188–195). Springer Verlag. https://doi.org/10.1007/978-3-319-47157-0_23
