Optimization Algorithms in ML
- (The University of Chicago, Alvin Wei-Cheng Wong)
- Overview
The concept of optimization is an integral part of machine learning (ML). Most ML models use training data to learn the relationship between input and output data, and the trained model can then be used to predict trends or classify new inputs. Training is itself an optimization process: each iteration adjusts the model to improve its accuracy and reduce its error on the training data.
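As a minimal sketch of training as iterative error reduction, the snippet below fits a simple linear model with gradient descent. The synthetic data, mean-squared-error loss, learning rate, and iteration count are illustrative assumptions rather than details from this section.

```python
import numpy as np

# Illustrative data: inputs x and targets y with a roughly linear relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # assumed "true" slope 3.0, intercept 0.5

# Model parameters (learned during training).
w, b = 0.0, 0.0

# Hyperparameters (chosen by the practitioner, not learned).
learning_rate = 0.1
n_iterations = 200

for _ in range(n_iterations):
    error = w * x + b - y
    grad_w = 2.0 * np.mean(error * x)   # d(loss)/dw for mean squared error
    grad_b = 2.0 * np.mean(error)       # d(loss)/db
    w -= learning_rate * grad_w         # each iteration nudges w, b to reduce the error
    b -= learning_rate * grad_b

final_loss = np.mean((w * x + b - y) ** 2)
print(f"learned w={w:.3f}, b={b:.3f}, final loss={final_loss:.5f}")
```

Each pass through the loop moves the parameters a small step in the direction that reduces the loss, which is the basic mechanism behind most ML training procedures.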
Optimization is a theme that runs through every step of ML, from the curation and labeling of training data by data scientists to the iterative training and refinement of models. Training an ML model is essentially an optimization problem, because the model must learn to perform its task as accurately and efficiently as possible. A central part of ML optimization is the adjustment and tuning of model configuration settings, known as hyperparameters.
Hyperparameters are settings of a model that a data scientist or developer chooses rather than learns from data. They include values such as the learning rate or the number of clusters in a clustering algorithm, and tuning them is a way to optimize a model for a specific data set.
In contrast, parameters are values that an ML model learns on its own during training. Choosing good hyperparameters is key to ensuring that an ML model is both accurate and efficient, as the sketch below illustrates.
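To make the distinction concrete, the hypothetical sketch below reuses the gradient-descent model from above: `w` and `b` are parameters learned from data, while the learning rate is a hyperparameter chosen by hand and compared via a simple grid search. The candidate values and data are assumptions for illustration only.

```python
import numpy as np

def train(x, y, learning_rate, n_iterations=200):
    """Gradient descent on a linear model; w and b are the parameters the model learns."""
    w, b = 0.0, 0.0
    for _ in range(n_iterations):
        error = w * x + b - y
        w -= learning_rate * 2.0 * np.mean(error * x)
        b -= learning_rate * 2.0 * np.mean(error)
    return w, b, np.mean((w * x + b - y) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

# The learning rate is a hyperparameter: it is set by the practitioner, not learned.
# A simple grid search compares candidate values on the same data set.
for lr in (0.001, 0.01, 0.1, 0.5):
    w, b, loss = train(x, y, learning_rate=lr)
    print(f"learning_rate={lr:<6} final loss={loss:.5f}")
```

In practice, libraries such as scikit-learn offer tools (e.g., grid or randomized search with cross-validation) that automate this kind of hyperparameter tuning.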
ML optimization is carried out by optimization algorithms, which iteratively adjust a model's parameters to minimize a loss (error) function; widely used examples include gradient descent, stochastic gradient descent (SGD), and adaptive methods such as Adam.
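As one example, the sketch below applies mini-batch SGD with momentum to the same linear model; the batch size, momentum coefficient, and learning rate are illustrative assumptions, not values taken from this section.

```python
import numpy as np

# Minimal sketch of one optimization algorithm: mini-batch SGD with momentum.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1000)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=1000)

w, b = 0.0, 0.0                      # parameters learned by the optimizer
vw, vb = 0.0, 0.0                    # momentum (velocity) terms
learning_rate, momentum = 0.05, 0.9  # hyperparameters (assumed values)
batch_size, n_epochs = 32, 20

for _ in range(n_epochs):
    order = rng.permutation(len(x))            # shuffle the data each epoch
    for start in range(0, len(x), batch_size):
        idx = order[start:start + batch_size]
        error = w * x[idx] + b - y[idx]
        grad_w = 2.0 * np.mean(error * x[idx])
        grad_b = 2.0 * np.mean(error)
        # Momentum accumulates a running average of past gradients,
        # smoothing out the noise of individual mini-batch updates.
        vw = momentum * vw - learning_rate * grad_w
        vb = momentum * vb - learning_rate * grad_b
        w += vw
        b += vb

print(f"w={w:.3f}, b={b:.3f}")
```

Updating on small random batches rather than the full data set makes each step cheaper and often helps large models converge faster, which is why SGD-style optimizers dominate modern ML training.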
[More to come ...]