Random forests are a popular classification method based on an ensemble of a single type of decision tree. The literature offers many decision tree algorithms, including C4.5, CART and CHAID, and each type of decision tree algorithm may capture different information and structure. In this paper, we propose a novel random forest algorithm, called a hybrid random forest. We ensemble multiple types of decision trees into a random forest and exploit the diversity of the trees to enhance the resulting model. We conducted a series of experiments on six text classification datasets to compare our method with traditional random forest methods and some other text categorization methods. The results show that our method consistently outperforms these compared methods. © 2012 Springer-Verlag.
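A minimal sketch of the idea described above, not the authors' implementation: it mixes two tree "types" by alternating the split criterion (`entropy` as a stand-in for C4.5-style trees, `gini` for CART-style trees), trains each tree on a bootstrap sample with random feature subspaces, and predicts by majority vote. All class and parameter names here are illustrative assumptions.

```python
# Hypothetical hybrid random forest sketch (not the paper's code):
# alternate between "entropy" (C4.5-like) and "gini" (CART-like)
# trees to increase ensemble diversity, then combine by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class HybridRandomForest:
    def __init__(self, n_trees=20, random_state=0):
        self.n_trees = n_trees
        self.rng = np.random.RandomState(random_state)
        self.trees = []

    def fit(self, X, y):
        self.trees = []
        n = len(X)
        for i in range(self.n_trees):
            # Alternate tree types: the source of the extra diversity.
            criterion = "entropy" if i % 2 == 0 else "gini"
            idx = self.rng.randint(0, n, n)  # bootstrap sample
            tree = DecisionTreeClassifier(
                criterion=criterion,
                max_features="sqrt",  # random feature subset per split
                random_state=self.rng.randint(1 << 30),
            )
            tree.fit(X[idx], y[idx])
            self.trees.append(tree)
        return self

    def predict(self, X):
        # Majority vote over the per-tree predictions.
        votes = np.stack([t.predict(X) for t in self.trees])
        return np.array(
            [np.bincount(col.astype(int)).argmax() for col in votes.T]
        )
```

CHAID has no standard scikit-learn implementation, so this sketch uses only the two criteria available there; the paper's actual method combines more tree algorithms.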
CITATION STYLE
Xu, B., Huang, J. Z., Williams, G., Li, M. J., & Ye, Y. (2012). Hybrid random forests: Advantages of mixed trees in classifying text data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7301 LNAI, pp. 147–158). https://doi.org/10.1007/978-3-642-30217-6_13