Extensions to quantile regression forests for very high-dimensional data


Abstract

This paper describes new extensions to Quantile Regression Forests (QRF), a state-of-the-art regression random forest method, for applications to high-dimensional data with thousands of features. We propose a new subspace sampling method that randomly samples a subset of features from two separate feature sets, one containing important features and the other containing less important ones. The two feature sets partition the input features according to their importance measures. The partition is generated by first using feature permutation to produce raw importance scores and then applying a p-value assessment to separate the important features from the less important ones. The new subspace sampling method makes it possible to grow trees from bagged sample data with smaller regression errors. For point regression, we choose the predicted value of Y from the range between the two quantiles Q0.05 and Q0.95 instead of using the conditional mean, as regression random forests do. Our experimental results show that random forests with these extensions outperformed both regression random forests and quantile regression forests in reducing root-mean-square residuals. © 2014 Springer International Publishing.
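As an illustration of the two ideas in the abstract, below is a minimal Python sketch (not the authors' implementation) built on scikit-learn. The feature partition uses permutation importance with an assumed normal-approximation p-value test at an assumed significance level ALPHA; the subspace sampler draws from both feature sets; and the point prediction is a crude QRF-style stand-in that pools training responses from the leaves a query point falls into and picks a value inside the [Q0.05, Q0.95] band (the pooled median is an assumed choice, not the paper's selection rule).

import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

ALPHA = 0.05  # assumed significance level for the p-value assessment

def partition_features(X, y, n_estimators=100, random_state=0):
    """Split feature indices into 'important' and 'less important' sets
    via permutation importance and a one-sided p-value test (assumed
    normal approximation of the raw importance scores)."""
    rf = RandomForestRegressor(n_estimators=n_estimators,
                               random_state=random_state).fit(X, y)
    pi = permutation_importance(rf, X, y, n_repeats=10,
                                random_state=random_state)
    z = pi.importances_mean / (pi.importances_std + 1e-12)
    pvals = stats.norm.sf(z)  # one-sided: large importance => small p
    important = np.where(pvals < ALPHA)[0]
    rest = np.where(pvals >= ALPHA)[0]
    return important, rest

def sample_subspace(important, rest, n_imp, n_rest, rng):
    """Draw a candidate feature subspace from both sets, so every tree
    sees a mix of strong and weak features (sizes n_imp and n_rest are
    assumed tuning parameters)."""
    return np.concatenate([
        rng.choice(important, size=min(n_imp, len(important)), replace=False),
        rng.choice(rest, size=min(n_rest, len(rest)), replace=False),
    ])

def qrf_point_prediction(rf, X_train, y_train, x, q_low=0.05, q_high=0.95):
    """QRF-style point prediction: pool the training responses from the
    leaves x falls into across all trees, then return a value inside the
    [Q0.05, Q0.95] band (here the pooled median, clipped to the band)."""
    train_leaves = rf.apply(X_train)            # shape (n_samples, n_trees)
    x_leaves = rf.apply(x.reshape(1, -1))[0]    # shape (n_trees,)
    pooled = np.concatenate([
        y_train[train_leaves[:, t] == x_leaves[t]]
        for t in range(train_leaves.shape[1])
    ])
    q05, q95 = np.quantile(pooled, [q_low, q_high])
    return float(np.clip(np.median(pooled), q05, q95))

A typical use would be to call partition_features once on the training data, then use sample_subspace inside a custom tree-growing loop; how the sampled subspace is wired into each split is an implementation detail the abstract does not specify.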

Citation (APA)

Tung, N. T., Huang, J. Z., Khan, I., Li, M. J., & Williams, G. (2014). Extensions to quantile regression forests for very high-dimensional data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8444 LNAI, pp. 247–258). Springer Verlag. https://doi.org/10.1007/978-3-319-06605-9_21
