# Linear Classifier and Nonlinear Classifier

**- Linear Classifier**

A linear classifier is a supervised machine learning algorithm that separates two types of objects using a line or hyperplane. It's a model that classifies data points into a discrete class based on a linear combination of its explanatory variables.

For example, a linear classifier could use a dog's weight, height, color, and other features to determine its breed.

In a binary linear classifier, the observation is classified into one of two possible classes using a linear boundary in the input feature space.

Linear classifiers are among the simplest classifiers: they use a linear function of the input features to assign observations to categories.
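As a minimal sketch of the idea above, a binary linear classifier computes a weighted sum of the features and thresholds it: a point is assigned class 1 when w·x + b > 0. The weights and the AND-function data below are hypothetical, chosen by hand for illustration.

```python
# A minimal binary linear classifier: predicts class 1 when w.x + b > 0.
# The weights below are hypothetical, chosen by hand for illustration.

def predict(w, b, x):
    """Return 1 if the point lies on the positive side of the hyperplane."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

# The hyperplane x1 + x2 - 1.5 = 0 separates the classes of the AND function.
w, b = [1.0, 1.0], -1.5
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [predict(w, b, p) for p in points]
print(labels)  # [0, 0, 0, 1]
```

Any linearly separable problem can be handled this way; only the choice of w and b changes.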

**- Nonlinear Classifier**

A nonlinear classifier predicts categorical outcomes based on input features when the decision boundary is not linear. It uses methods that can capture complex, nonlinear relationships between variables.

Nonlinear classifiers can capture intricate patterns and relationships within the data that linear classifiers might miss.

Here are some examples of nonlinear classifiers:

- Kernel Support Vector Machines
- Neural Networks
- Decision Trees and Random Forests
- K-Nearest Neighbors (KNN)
- Naive Bayes (under certain conditions)
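To make one of the listed methods concrete, here is a minimal K-Nearest Neighbors sketch in plain Python. The toy data is invented for illustration: one class clusters near the origin and the other sits farther out, a layout where the learned boundary is not a straight line.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy data: class "inner" near the origin, class "outer" farther away.
train = [((0.1, 0.0), "inner"), ((0.0, 0.2), "inner"), ((-0.1, -0.1), "inner"),
         ((2.0, 0.0), "outer"), ((1.8, 0.5), "outer"), ((2.2, -0.3), "outer")]
print(knn_predict(train, (0.05, 0.05)))  # inner
print(knn_predict(train, (1.9, 0.2)))    # outer
```

KNN needs no training step at all: the decision boundary emerges implicitly from the stored examples, which is why it can follow arbitrarily curved class borders.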

A nonlinear model is a function that has a variable rate of change. For example, y = x^2 + 3 is a nonlinear model, where y is the output and x is the input. The slope of the curve is not constant, but depends on the value of x.
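The variable rate of change can be checked numerically: a central-difference estimate of the derivative of y = x^2 + 3 gives a different slope at every x (analytically, 2x), whereas a linear model has the same slope everywhere.

```python
def f(x):
    return x**2 + 3

def slope(f, x, h=1e-6):
    """Central-difference estimate of the derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The slope of y = x^2 + 3 depends on x (analytically it is 2x).
for x in [-2, 0, 1, 3]:
    print(x, round(slope(f, x), 3))  # slopes approximately -4, 0, 2, 6
```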

Nonlinear classification refers to categorizing instances that are not linearly separable. Classifiers that use nonlinear functions to separate classes include the Quadratic Discriminant Classifier, Multi-Layer Perceptron (MLP), Decision Trees, Random Forests, and K-Nearest Neighbors (KNN).

**- Linear Classifier vs. Nonlinear Classifier**

Linear classifiers categorize data points into a discrete class based on a linear combination of their explanatory variables. Nonlinear classifiers categorize instances that are not linearly separable.

Linear classifiers can only separate data points using straight lines, planes, or higher-dimensional hyperplanes. Nonlinear classifiers can create nonlinear decision boundaries, allowing them to separate data points using curves, circles, or other nonlinear shapes.
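A circular decision boundary illustrates this difference. No single straight line in the raw (x, y) space separates "inside" from "outside" a circle, but one nonlinear feature, z = x^2 + y^2, turns the boundary into a simple linear threshold on z. The function and radius below are illustrative, not from any particular library.

```python
# Sketch: a circular decision boundary via a nonlinear feature map.
# In (x, y) space the boundary is the circle x^2 + y^2 = r^2, which no
# straight line matches; in the feature z = x^2 + y^2 it is the
# linear threshold z < r^2.

def circle_classify(x, y, radius=1.0):
    z = x**2 + y**2            # nonlinear feature
    return "inside" if z < radius**2 else "outside"

print(circle_classify(0.3, 0.4))   # inside  (0.09 + 0.16 = 0.25 < 1)
print(circle_classify(1.0, 1.0))   # outside (1 + 1 = 2 >= 1)
```

This is the intuition behind kernel methods: map the data into a space where a linear boundary suffices.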

Non-linear classifiers are often more accurate than linear classifiers if a problem is nonlinear and its class boundaries cannot be approximated well with linear hyperplanes. However, in many datasets that are not linearly separable, a linear classifier will still classify most instances correctly.

Some advantages of linear classifiers include:

- Different regularization schemes can be applied directly in the formulation.
- The training data is not needed once the model is trained.
- Linear classifiers are generally faster to train and evaluate than nonlinear classifiers.
- There are few hyperparameters to tune (often just the regularization strength).

A typical example of a nonlinearly separable problem is the Exclusive OR (XOR) Boolean function.
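XOR's four points cannot be split by any single line, but, as with the circle example, one added nonlinear feature fixes this. The sketch below uses the product x1·x2 as an extra feature; the particular weights are hand-picked for illustration.

```python
# XOR is not linearly separable in (x1, x2), but it becomes linearly
# separable once the nonlinear feature x1*x2 is added:
# score = x1 + x2 - 2*(x1*x2) - 0.5 is positive exactly on XOR-true inputs.

def xor_classifier(x1, x2):
    score = x1 + x2 - 2 * (x1 * x2) - 0.5
    return 1 if score > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_classifier(a, b))  # prints the XOR truth table
```

A multi-layer perceptron solves XOR the same way in spirit: its hidden layer learns nonlinear features that make the final linear layer's job easy.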

**[More to come ...]**