# Naive Bayes in Machine Learning

## Introduction

Naive Bayes is a supervised learning algorithm in Machine Learning. It is primarily based on the concepts of conditional probability and Bayes' theorem, and it is mostly used to solve classification problems.

## Table of Contents

- What is Naive Bayes? Why is it called so?
- Bayes Theorem and Conditional Probability
- Types of Naive Bayes Model
- Working of Naive Bayes Classifier

## What is Naive Bayes? Why is it called so?

As already introduced, Naive Bayes is a supervised machine learning algorithm that helps in solving classification problems. It not only makes quick predictions but also helps in building fast machine learning models, thereby letting us solve classification problems quickly.

It is a probabilistic classifier, which means that it makes its predictions based on the probability of an event, that is, the likelihood that an object belongs to a particular class.

Coming to the question of why it is called so: this can be understood by analyzing the two terms, Naive and Bayes, separately.

Naive: The word naive is used because the algorithm assumes that the occurrence of a certain feature is independent of the occurrence of other features.

For example, if we have to classify an animal into some species based on features such as color, number of legs, and smelly nature, then each feature individually contributes to identifying the species, without the features depending on each other.

Bayes: This term is used because the algorithm depends on the principle of Bayes' Theorem of probability to determine the predictive outputs. We will discuss Bayes' theorem in detail in the next section.
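The independence assumption above can be sketched in a few lines of Python: the joint likelihood of all features given a class is simply the product of the per-feature likelihoods. The probability values below are hypothetical, for illustration only.

```python
from math import prod

# Naive independence assumption: the joint likelihood of all features given
# a class factors into a product of per-feature likelihoods.
# Hypothetical per-feature probabilities for one species (illustration only).
p_feature_given_species = {"color=green": 0.5, "legs=2": 0.25, "smelly=no": 0.25}

# P(color, legs, smelly | species)
#   ~ P(color | species) * P(legs | species) * P(smelly | species)
joint_likelihood = prod(p_feature_given_species.values())
print(joint_likelihood)  # 0.03125
```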

## Bayes Theorem and Conditional Probability

Bayes' Theorem is a rule used to determine the probability of an event or hypothesis based on prior knowledge of some other, related event. It depends on the concept of conditional probability.

The mathematical formula for Bayes' theorem is given as follows:

P(A|B) = P(B|A) * P(A) / P(B)

where,

- P(A|B) is the posterior probability: the probability of hypothesis A given the observed event B.
- P(B|A) is the likelihood: the probability of the evidence B given that hypothesis A is true.
- P(A) is the prior probability: the probability of the hypothesis before observing the evidence.
- P(B) is the marginal probability: the probability of the evidence.
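The formula above can be turned directly into a small function. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
def posterior(likelihood, prior, evidence):
    """Return the posterior P(A|B) from P(B|A), P(A), and P(B)."""
    return likelihood * prior / evidence

# Hypothetical values for illustration only.
p_b_given_a = 0.8   # P(B|A), the likelihood
p_a = 0.5           # P(A), the prior
p_b = 0.6           # P(B), the marginal probability of the evidence
print(posterior(p_b_given_a, p_a, p_b))  # 0.6666...
```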

## Types of Naive Bayes Model

There are three major types of Naive Bayes model. These are:

- Gaussian Model: The Gaussian model assumes that the attributes follow a normal distribution. That is, if the predictors take continuous values instead of discrete ones, the model assumes that these values are sampled from a Gaussian distribution.
- Multinomial Model: The multinomial model is used when the data is multinomially distributed rather than normally distributed. It is primarily applied to document classification problems, such as classifying documents into categories like Sports, Politics, and Education.
- Bernoulli Model: The Bernoulli model works similarly to the multinomial classifier, with the difference that its predictor variables are independent boolean variables. This model is also used to solve document classification tasks.
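For the Gaussian model, the per-feature likelihood is computed from the normal density using the class-wise mean and variance of that feature. A minimal sketch, with a hypothetical mean and variance chosen only for illustration:

```python
import math

# Gaussian likelihood used by the Gaussian Naive Bayes model:
# P(x | class) = 1 / sqrt(2*pi*var) * exp(-(x - mean)^2 / (2*var))
def gaussian_likelihood(x, mean, var):
    """Normal density of x for a class with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical continuous feature: a height of 170 cm, evaluated for a class
# whose heights have mean 170 and variance 25 (assumed numbers).
print(gaussian_likelihood(170.0, 170.0, 25.0))  # density at the class mean
```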

## Working of Naive Bayes Classifier

Now that we know the concepts used in the Naive Bayes classifier, we will see how it works using the example dataset shown below:

The data above shows various attributes such as color, number of legs, height, and smelly nature. Based on these features, we have to predict whether a new instance belongs to the M or the H species.

The data along with the new instance is shown below:

So, we will approach our problem by first calculating the prior probability of H and M over the whole dataset.

P(H)=4/8=0.5

P(M)=4/8=0.5

Now, we will calculate the probability of the new instance for each species, H and M, using the data provided in the tables above and the formula we studied earlier. Since the marginal probability of the evidence is the same for both species, we can drop it and simply compare the numerators.

P(M | New instance) = P(M) * P(color=green | M) * P(Legs=2 | M) * P(Height=tall | M) * P(Smelly=no | M)

Putting the corresponding values in place, we get:

P(M | New instance) = (0.5) * (1/2) * (1/4) * (3/4) * (1/4) = 0.0117

P(H | New instance) = P(H) * P(color=green | H) * P(Legs=2 | H) * P(Height=tall | H) * P(Smelly=no | H)

P(H | New instance) = (0.5) * (1/4) * (1) * (1/2) * (3/4) = 0.047

As we see,

**P(H | New instance) > P(M | New instance)**

Thus, we can say that the new instance belongs to the H species.
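The worked example above can be reproduced in a few lines of Python, using the priors and conditional probabilities stated in the text:

```python
from math import prod

# Priors and per-feature conditional probabilities from the worked example.
p_prior = {"M": 4 / 8, "H": 4 / 8}
likelihoods = {
    # P(color=green|·), P(Legs=2|·), P(Height=tall|·), P(Smelly=no|·)
    "M": [1 / 2, 1 / 4, 3 / 4, 1 / 4],
    "H": [1 / 4, 1, 1 / 2, 3 / 4],
}

# Score each species: prior times the product of feature likelihoods.
scores = {s: p_prior[s] * prod(likelihoods[s]) for s in p_prior}
print(scores)                        # {'M': 0.01171875, 'H': 0.046875}
print(max(scores, key=scores.get))   # H
```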

## End Notes

I hope this blog gave you a deeper understanding of the Naive Bayes classifier, as it covers several dimensions of this algorithm.