Random Forest

Random Forest is a supervised ensemble learning method that constructs a multitude of decision trees during training and outputs the class that is the mode of the trees' predicted classes (classification) or their mean prediction (regression). It is known for its flexibility and is used for both classification and regression tasks.

A Random Forest combines hundreds or thousands of decision trees, training each tree on a different random sample of the observations and splitting the nodes in each tree using only a limited random subset of the features. The final output of the forest is obtained by aggregating the predictions of the individual trees. Increasing the number of trees generally improves accuracy and helps counter the problem of overfitting.

The key difference between the Random Forest algorithm and the decision tree algorithm is that in Random Forest, the choice of the root node and the feature splits in each tree involves randomness: candidate features are drawn from a random subset rather than searched exhaustively.
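Before walking through the steps, here is a minimal sketch of training a Random Forest with scikit-learn. The dataset (the bundled Iris data) and the hyperparameter values (100 trees, a 25% test split) are illustrative choices, not recommendations:

```python
# Minimal Random Forest sketch with scikit-learn, using the Iris dataset
# purely for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# n_estimators is the number of trees; each tree is trained on a bootstrap
# sample and considers a random subset of features at every split.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

print("test accuracy:", forest.score(X_test, y_test))
```

The same API shape applies to regression via `RandomForestRegressor`, which averages the trees' numeric predictions instead of taking a majority vote.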

Building Blocks of a Random Forest

A Random Forest is fundamentally composed of decision trees. A decision tree is a flowchart-like structure where an internal node represents a feature (or attribute), the branch represents a decision rule, and each leaf node represents the outcome.

How Does Random Forest Work?

Step 1: Random Sampling of Training Data

  • For each tree in the forest, a subset of the training data is selected at random with replacement, so some observations may appear more than once while others are left out. This sampling technique is known as bootstrapping.
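Step 1 can be sketched in a few lines of NumPy. Each tree receives a bootstrap sample of the same size as the training set, drawn with replacement; the rows a given tree never sees are its "out-of-bag" samples:

```python
# Bootstrap sampling sketch: draw n rows with replacement from n rows.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10
data = np.arange(n_samples)  # stand-in for 10 training rows

bootstrap_idx = rng.integers(0, n_samples, size=n_samples)  # with replacement
bootstrap_sample = data[bootstrap_idx]
out_of_bag = np.setdiff1d(data, bootstrap_sample)  # rows this tree never sees

print("bootstrap sample:", bootstrap_sample)
print("out-of-bag rows: ", out_of_bag)
```

On average about a third of the rows end up out-of-bag, which is what makes out-of-bag error estimates possible.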

Step 2: Building a Decision Tree

  • For each subset of the data, a decision tree is constructed. However, instead of using the best feature to split the data at each level of the tree, a random subset of features is chosen, and the best feature from this subset is used to make the split. This process is repeated until the tree is fully grown.
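The feature-subset trick in Step 2 can be sketched as follows. Taking the square root of the number of features is a common default for classification; this is illustrative logic, not a full tree builder:

```python
# At each split, only a random subset of features is considered.
import math
import random

random.seed(1)
n_features = 16
max_features = int(math.sqrt(n_features))  # 4 candidate features per split

def candidate_features():
    """Features eligible for the next split, sampled without replacement."""
    return random.sample(range(n_features), max_features)

# Consecutive splits usually consider different feature subsets,
# which decorrelates the trees in the forest.
print(candidate_features())
print(candidate_features())
```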

Step 3: Repeat

  • Steps 1 and 2 are repeated, and many decision trees are constructed. The number of trees is a parameter chosen by the user and is typically large, making the ensemble of trees the “forest”.

Step 4: Making a Prediction

  • To make a prediction for a new data point, the data point is passed down to each of the trees in the forest. Each tree gives its prediction (a class label in classification or a continuous value in regression).

Step 5: Aggregating the Predictions

  • The predictions from all the trees are combined: in classification, the forest outputs the class with the most votes (the mode); in regression, it outputs the mean of the trees' predictions.

Key Terms Used in the Trees:

  1. Entropy: A measure of the degree of randomness (impurity) in a given dataset.
  2. Information gain: The reduction in entropy achieved when the dataset is split; splits with higher information gain are preferred.
  3. Leaf node: A terminal node that holds the final classification or decision.
  4. Decision node: A node with two or more branches, where the data is split on a feature.
  5. Root node: The leading or uppermost decision node, where all the data is available.
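The aggregation in Step 5 can be sketched directly, assuming each tree has already produced its prediction for one data point (the per-tree outputs below are made up):

```python
# Majority vote for classification, mean for regression.
from collections import Counter

# Hypothetical per-tree outputs for a single data point.
class_votes = ["cat", "dog", "cat", "cat", "dog"]
regression_preds = [3.1, 2.9, 3.4, 3.0]

majority_class = Counter(class_votes).most_common(1)[0][0]
mean_prediction = sum(regression_preds) / len(regression_preds)

print(majority_class)   # "cat" (3 of 5 votes)
print(mean_prediction)  # 3.1
```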

Key Advantages of Random Forest:

  1. Robustness to Overfitting: Random Forest is less prone to overfitting than an individual decision tree.
    Overfitting is a particular concern on imbalanced datasets, where a model can become biased toward the majority class.
    The ensemble nature of Random Forest, with its averaging or voting, helps mitigate overfitting and yields more reliable predictions for the minority class. Overfitting can be further controlled by tuning hyperparameters such as the maximum depth of the trees, the minimum number of samples per leaf node, and the number of trees in the forest; cross-validation helps in choosing these values.
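Tuning these overfitting-related hyperparameters with cross-validation might look as follows; the grid values and the Iris dataset are arbitrary examples:

```python
# Cross-validated grid search over a few overfitting-related hyperparameters.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, None],       # None lets each tree grow fully
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
```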
  2. Handling Large Data Sets: It can handle large, high-dimensional datasets while still delivering strong performance.
  3. Handling Missing Values: Some implementations have a built-in method for handling missing values. When a feature with missing values is selected for splitting at a node, the algorithm can assign a weighted average of the outcomes (class labels in classification, target values in regression) based on the proportions of non-missing values among the samples at that node. For example, in a classification problem with an “age” feature where some samples have missing ages, the algorithm calculates the weighted average of the class labels (e.g., “Yes” or “No”) over the samples with known ages, with each label weighted in proportion to the number of samples carrying it.
    In a regression problem, the algorithm analogously computes the weighted average of the target values of the samples with known values of that feature. A separate mechanism, class weighting, addresses class imbalance: class weights are typically set inversely proportional to the class frequencies, so the minority class is assigned a higher weight than the majority class.
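Note that scikit-learn's `RandomForestClassifier` does not accept NaN inputs directly, so a common practical workaround is to impute missing values first. This sketch fills NaNs with the column mean inside a pipeline; the tiny dataset is invented for illustration:

```python
# Impute missing values before fitting, since sklearn's forests reject NaNs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

X = np.array([[25.0, 1.0],
              [np.nan, 0.0],    # missing "age"
              [47.0, 1.0],
              [52.0, np.nan]])  # missing second feature
y = np.array([0, 0, 1, 1])

model = make_pipeline(SimpleImputer(strategy="mean"),
                      RandomForestClassifier(n_estimators=25, random_state=0))
model.fit(X, y)
print(model.predict(X))
```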
  4. Handling Categorical values:
    1. Categorical Features: Many Random Forest implementations can natively handle categorical features. During the construction of the individual trees, when a categorical feature is selected to split the data, the algorithm creates branches for the categories, effectively separating the data by category.
    2. Categorical Targets: Random Forest is also adept at handling categorical target variables, making it a strong choice for classification problems. It predicts the mode (most frequent class) of the target variable based on the individual trees’ votes.
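In scikit-learn specifically, the trees require numeric inputs, so categorical features are typically encoded first. This sketch one-hot encodes a toy “color” column before fitting; the data is made up for illustration:

```python
# One-hot encode a categorical feature before fitting the forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

X = np.array([["red"], ["blue"], ["green"], ["blue"], ["red"]])
y = np.array([1, 0, 0, 0, 1])

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      RandomForestClassifier(n_estimators=25, random_state=0))
model.fit(X, y)
print(model.predict(np.array([["red"], ["blue"]])))
```

Categorical targets need no special handling: passing string class labels as `y` works directly, and the forest predicts the majority-vote label.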


Limitations of Random Forest:

  1. Complexity: Random Forests can be computationally intensive and may require more memory and training time than a single decision tree.
  2. Less Interpretability: Unlike a single decision tree, a Random Forest model is harder to visually interpret.
  3. Imprecise Regression Values: Since the final prediction is the mean of the predictions from the individual trees, the regression output is effectively an average over training targets and cannot extrapolate beyond their range, so it may not give precise values.


Think of Random Forest as a team of experts (the individual trees), where each expert gets a say in the final decision. They each look at slightly different aspects of the data, and then their decisions are pooled together to make the final prediction, reducing the chance of a single tree’s bias or error affecting the outcome significantly.

Random Forest is like forming a group of trees to make a decision. Instead of relying on one tree, we gather opinions from many. Each tree looks at a few aspects of the data and gives an opinion. In the end, we combine all their advice for a final decision. This makes Random Forest reliable and accurate, making it a popular choice for predicting or classifying information in many areas.

Some practical applications of the Random Forest algorithm:

  1. Healthcare:
    • Disease Identification and Diagnosis: Random Forest can be used to identify diseases by analyzing the patient’s medical records.
    • Drug Discovery: Predicting the potential success of drug compounds in early phases of drug development.
  2. Finance:
    • Credit Scoring: Determining the creditworthiness of loan applicants.
    • Fraud Detection: Spotting patterns and anomalies that could indicate fraudulent transactions.
  3. E-commerce:
    • Product Recommendation: Based on a user’s browsing history and purchase behavior.
    • Customer Lifetime Value Prediction: Predicting how much a customer will spend over time.
  4. Real Estate:
    • Property Value Estimation: Predicting the selling price of homes based on attributes like location, size, age, and amenities.
