A Detailed Review on Decision Tree and Random Forest

  • Talekar B
Citations: N/A
Mendeley readers: 92

Abstract

The decision tree method works by recursively partitioning the feature space into disjoint regions, so that each region provides the basis for a separate prediction. Decision trees apply to various predictive tasks such as classification and regression, and these methods are popular in the field of data science because of their various benefits. However, they suffer from limitations such as instability: a slight change in the data can produce a major change in the structure of the tree, with detrimental effects on prediction. To improve on the accuracy of a single base classifier or regressor, multiple decision trees can be trained in parallel and combined for prediction; this ensemble method is known as a random forest. A random forest comprises several decision trees, each trained on a subset of the data or on a random feature subspace; once all the trees are trained, their results are aggregated to produce the final prediction. Because a random forest is more stable than a single decision tree, it has become popular in data science and machine learning. In this paper, we provide a detailed introduction to the decision tree and random forest methods, how they work, and the types of problems for which they are suitable.
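The ensemble idea described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's method: it assumes a toy one-dimensional dataset, uses depth-1 decision trees ("stumps") as the base learners, and combines them by bootstrap aggregation (bagging) with a majority vote, as in a simplified random forest. All function names here are invented for the example.

```python
# Minimal sketch: decision stumps + bagging as a simplified random forest.
import random
from collections import Counter

def train_stump(X, y):
    """Depth-1 decision tree: pick the threshold on the single feature
    that splits the space into two regions with the fewest errors."""
    best = None
    for t in sorted(set(X)):
        for left, right in ((0, 1), (1, 0)):
            preds = [left if x <= t else right for x in X]
            err = sum(p != v for p, v in zip(preds, y))
            if best is None or err < best[0]:
                best = (err, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def train_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the
    data, and predictions are combined by majority vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(train_stump([X[i] for i in idx],
                                  [y[i] for i in idx]))
    return lambda x: Counter(s(x) for s in stumps).most_common(1)[0][0]

# Toy data: class 0 below the value 5, class 1 above it.
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]
forest = train_forest(X, y)
print(forest(0), forest(100))  # points far on either side of the split
```

A real random forest additionally samples a random feature subspace at each split and uses fully grown trees; with one feature and stumps, bagging alone carries the idea: each tree sees slightly different data, so the vote averages away the instability of any single tree.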

Citation (APA)

Talekar, B. (2020). A Detailed Review on Decision Tree and Random Forest. Bioscience Biotechnology Research Communications, 13(14), 245–248. https://doi.org/10.21786/bbrc/13.14/57
