
Shallow Learning Research and Applications


- Overview

Shallow models are machine learning (ML) models with a simple structure, usually consisting of one or several layers of processing units. Shallow learning (SL), also known as “traditional ML,” refers to a class of algorithms that typically involve a limited number of layers or levels of abstraction in their models.

A layer is a set of units that performs some computation on input data, such as applying linear or nonlinear transformations or aggregating information from multiple sources. 

Examples of shallow models include linear regression, logistic regression, decision trees, and support vector machines. 

Shallow models are typically easy to interpret and train, but their expressiveness and generalization capabilities can be limited: they may fail to capture complex patterns and relationships in the data, or to adapt well to new or unseen data.
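
As a concrete illustration of the simple structure described above, the minimal sketch below trains a single shallow model (logistic regression) on synthetic data. It assumes scikit-learn is available; the dataset and parameter choices are illustrative only, not a recommended configuration.

  # Minimal sketch: one shallow (single-layer) model, logistic regression.
  # Assumes scikit-learn is installed; data and settings are illustrative.
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # A small synthetic binary classification problem.
  X, y = make_classification(n_samples=500, n_features=10, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.25, random_state=0)

  # A single linear transformation mapping input features to class probabilities.
  model = LogisticRegression(max_iter=1000)
  model.fit(X_train, y_train)
  print("test accuracy:", model.score(X_test, y_test))

Because the model is a single linear transformation followed by a sigmoid, its learned coefficients can be read directly as feature weights, which is one reason shallow models are considered easy to interpret.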

 

- Examples of Shallow Learning Models

Some common examples of shallow learning algorithms include the following (a brief training sketch comparing them appears after the list):

  • Logistic regression: A linear model used for binary or multi-class classification. It models the relationship between input features and predicted probabilities for different classes.
  • Support Vector Machine (SVM): A binary classification algorithm that finds the maximum-margin hyperplane separating the classes. SVM can also handle nonlinear classification tasks by using kernel functions.
  • Random Forest: An ensemble learning algorithm that combines multiple decision trees for prediction. Each tree is trained on a different subset of the data and features, and the final prediction is made by aggregating the predictions of the individual trees.
  • Naive Bayes: A probabilistic algorithm that uses Bayes' theorem to calculate the posterior probability of each class given input features. It assumes independence between features, resulting in simple and computationally efficient models.
  • k-Nearest Neighbor (k-NN): A nonparametric algorithm that classifies new data points based on the class labels of their k nearest neighbors in feature space. The choice of k determines the sensitivity of the model to local patterns.
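
As referenced above, the following sketch trains several of the listed shallow learners on one synthetic dataset and reports held-out accuracy. It assumes scikit-learn is available; the dataset, kernel, number of trees, and choice of k are illustrative defaults rather than tuned settings.

  # Compare several shallow learners on the same synthetic data.
  # Assumes scikit-learn is installed; all hyperparameters are illustrative.
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.naive_bayes import GaussianNB
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.svm import SVC

  X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.3, random_state=42)

  models = {
      "Logistic regression": LogisticRegression(max_iter=1000),
      "SVM (RBF kernel)": SVC(kernel="rbf"),
      "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
      "Naive Bayes (Gaussian)": GaussianNB(),
      "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
  }

  # Train each model and report its accuracy on the held-out test set.
  for name, model in models.items():
      model.fit(X_train, y_train)
      print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")

Each of these models trains quickly on small tabular data, which illustrates the practical appeal of shallow learning when interpretability and low training cost matter more than modeling very complex patterns.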
 

[More to come ...]

