Concept of Random Forest | Mathematics | Machine Learning | ML Algorithm | Data Science
Photo by David Kovalenko on Unsplash
A random forest is built from more than one decision tree, and the process of choosing the root node and the subsequent split nodes of each tree involves randomness. That combination of randomness and many trees is why it is called a Random Forest.
Ensemble Technique:
Sometimes we combine more than one model to improve efficiency and prediction accuracy. This is called an ensemble technique. It has two main types: bagging and boosting.
Bagging is also known as bootstrap aggregation. In bagging, each base model is trained on a different sample drawn from the main dataset. After all the models are trained, the test dataset is fed to every trained model, and a voting classifier picks the output predicted by the majority.
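As a rough sketch of the idea (not code from this post), here is what bagging decision trees by hand could look like with NumPy and scikit-learn. The toy dataset, the number of base models, and the random seeds are all arbitrary choices made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy dataset, purely for illustration
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_models = 25
models = []

# Train each base model on its own bootstrap sample (drawn with replacement)
for _ in range(n_models):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
    models.append(tree)

# Aggregate: every model votes on the test set, the majority class wins
votes = np.array([m.predict(X_test) for m in models])  # shape (n_models, n_test)
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

print("bagged accuracy:", (majority == y_test).mean())
```

The key steps are the bootstrap sampling (each model sees a slightly different dataset) and the majority vote at the end, which is exactly the "bootstrap" and "aggregation" in bootstrap aggregation.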
Random Forest:
Random forest is an improvement over the bagging technique: multiple decision trees are each trained on a randomly selected sample of the dataset and then used together to predict the output. Let's have a look at how it works.
In a random forest we draw multiple samples at random from the main dataset and feed each one to its own decision tree to generate an output. The output predicted by the majority of the trees is then selected as the final output.
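In practice you rarely code this loop yourself; a minimal usage sketch with scikit-learn's RandomForestClassifier is shown below (the toy dataset and parameter values are assumptions for the example, not from this post).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Same kind of toy dataset as before, for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 trees is trained on its own bootstrap sample;
# predictions are combined by majority vote inside predict()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print("forest accuracy:", forest.score(X_test, y_test))
```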
Aggregating predictions from multiple models in an ensemble works best when the predictions are uncorrelated, so random forest changes the algorithm so that the predictions made by the individual trees are less correlated. This is its advantage over plain bagging: in bagging the trees' predictions tend to be highly correlated, whereas random forest adds a twist to decorrelate them by letting each split consider only a random subset of the features.
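One way to see this twist in scikit-learn is the max_features parameter: setting it to None lets every split consider all features (so the forest behaves much like plain bagging of trees), while "sqrt" restricts each split to a random subset of features, which decorrelates the trees. The dataset below is made up for illustration and the exact scores will vary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# max_features=None: every split sees all features -> trees look alike (bagging-like)
bagged_like = RandomForestClassifier(n_estimators=100, max_features=None, random_state=0)

# max_features="sqrt": each split sees a random feature subset -> less correlated trees
decorrelated = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

for name, model in [("all features per split", bagged_like),
                    ("sqrt features per split", decorrelated)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```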