
ML Research and Applications


Machine Learning - Discovering the New Era of Intelligence



- Machine Learning (ML) Today

Machine learning (ML) today is a method of data analysis that automates analytical model building. It is a branch of AI based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. While AI is the broad science of mimicking human abilities, ML is a specific subset of AI that trains a machine how to learn.

Does ML require coding? Yes. ML is built on algorithms, and mathematics is essential for understanding how those algorithms work, but implementing ML to solve real-world problems requires programming. Python and R are the programming languages of choice in AI and data science.

ML systems perform a function with the data given to them and get progressively better over time. ML overlaps heavily with statistics, since both fields study the analysis of data; but unlike statistics, ML is also concerned with the algorithmic complexity of computational implementations. Part of ML research is the development of tractable approximate inference algorithms.

Because of new computing technologies, ML today is not like ML of the past. It was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks; researchers interested in AI wanted to see if computers could learn from data. The iterative aspect of ML is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new – but one that has gained fresh momentum.


- Three Main Types of ML Algorithms 

Machine Learning is a broad field, but it is commonly divided into three classes: supervised, unsupervised, and reinforcement learning. All three paradigms are used everywhere to power intelligent applications.

  • Supervised Learning – Task Driven (Predict Next Value): Supervised learning is the most popular paradigm for performing machine learning operations. It is widely used for data where there is a precise mapping between input and output. The dataset in this case is labeled, meaning that the algorithm identifies the features explicitly and carries out predictions or classification accordingly. As training progresses, the algorithm learns the relationship between the two variables, so that it can predict a new outcome. Supervised learning is used, for example, to classify email as spam or non-spam and to detect fraud.
  • Unsupervised Learning – Data Driven (Identify Clusters): In unsupervised learning, the data is not explicitly labeled into different classes – there are no labels. The model learns from the data by finding implicit patterns. Unsupervised learning algorithms identify the data based on densities, structures, similar segments, and other such features; some of these algorithms are based on Hebbian learning. Unsupervised learning is where you’ll hear most of the excitement when people talk about ‘the future of AI,’ due to its seemingly limitless potential. It is used, for example, for market segmentation (i.e., clustering groups of customers based on common characteristics) and to provide product recommendations based on a shopper’s historical purchase behavior.
  • Reinforcement Learning – Learn from Mistakes: Reinforcement learning allows machines to interact with a dynamic environment in order to reach their goals. Machines and software agents evaluate the ideal behavior in a specific context with the help of reward feedback; this feedback is known as a reinforcement signal, and it lets agents learn the behavior and improve it over the long run. The agent must take actions based on the current state of the environment. This type of learning differs from supervised learning in that supervised training data comes with an output mapping, so the model can learn the correct answer. In reinforcement learning there is no answer key: with no training dataset, the agent learns from its own experience. The goal is to find the actions that maximize the long-term reward, so the algorithm learns by trial and error. An example is learning to play a computer game by playing against an opponent.
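As a concrete illustration of the supervised paradigm, the sketch below trains a nearest-centroid classifier in plain Python on labeled examples. The "spam vs. ham" feature vectors and their values are invented purely for illustration; real systems use far richer features and dedicated libraries.

```python
# A minimal supervised-learning sketch: a nearest-centroid classifier
# trained on labeled examples, then used to predict a new, unseen one.

def train(examples):
    """Training: compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Prediction: assign the label whose centroid is closest."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical features: [fraction of capital letters, number of links]
training_data = [
    ([0.9, 5.0], "spam"), ([0.8, 4.0], "spam"),
    ([0.1, 0.0], "ham"),  ([0.2, 1.0], "ham"),
]
model = train(training_data)
print(predict(model, [0.85, 4.5]))  # a new, unseen email -> spam
```

The precise mapping between input and output mentioned above is exactly what the labeled pairs in `training_data` provide.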

Big data acts as an ingredient. Think of baking a cake: the data is the flour, the process of baking is ML, and AI is the output – the cake, if you will.


- The Evolution of Machine Learning (ML) 

Biological evolution inspires ML. Evolution allows life to explore almost limitless diversity and complexity. Scientists hope to recreate such open-endedness in the laboratory or in computer simulations, but even sophisticated computational techniques like ML and AI can't provide the open-ended tinkering associated with evolution.

The earliest computers were designed to perform complex calculations, and their architecture allowed for the storage of not only data but also the instructions for manipulating that data. This evolved to the point where the computer processed data according to a structured model of the real world, expressible in mathematical terms. The computer did not learn but was merely following instructions.

The next step was to create a set of instructions that would allow the computer to learn from experience, i.e., to extract its own rules from large amounts of data and use those rules for classification and prediction. This was the beginning of ML and has led to the field that is collectively defined as AI. 

A major breakthrough came with the implementation of algorithms that were loosely modeled on brain architecture, with multiple interconnected units sharing weighted inputs among them, organized in computational layers (deep learning).
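To make the layered, weighted-input idea concrete, here is a toy forward pass through a two-layer network in plain Python. The weights, biases, and input values are fixed by hand purely for illustration; real deep learning adjusts the weights during training (e.g., by gradient descent), which this sketch omits.

```python
# Toy forward pass: units combine weighted inputs, layer by layer.
import math

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """Each unit takes a weighted sum of all inputs, plus a bias."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x):
    # Hidden layer: two units, each connected to both inputs.
    hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], relu)
    # Output layer: one unit squashed to (0, 1) by a sigmoid.
    output = layer(hidden, [[1.0, 1.0]], [0.0],
                   lambda z: 1 / (1 + math.exp(-z)))
    return output[0]

print(round(forward([2.0, 1.0]), 3))  # -> 0.924
```

Stacking more such layers, and learning the weights from data instead of hand-coding them, is what distinguishes deep learning from this toy.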


- Data Science and Machine Learning Models

Machine learning (ML) is a form of AI that enables a system to learn from data rather than through explicit programming. However, machine learning is not a simple process. As the algorithms ingest training data, it is then possible to produce more precise models based on that data. 

A ML model is the output generated when you train your machine-learning algorithm with data. After training, when you provide a model with an input, you will be given an output. For example, a predictive algorithm will create a predictive model. Then, when you provide the predictive model with data, you will receive a prediction based on the data that trained the model.
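The train-then-predict pattern described above can be sketched in a few lines of plain Python, here with ordinary least squares standing in as a (deliberately simple) learning algorithm; the numbers are invented for illustration.

```python
# Training turns data into a model; the model maps new inputs to outputs.

def fit(xs, ys):
    """Training: the algorithm turns data into a model (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Inference: the trained model produces an output for a new input."""
    slope, intercept = model
    return slope * x + intercept

model = fit([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
print(predict(model, 5.0))  # -> 10.0
```

Note the separation: `fit` runs once on training data and produces the model; `predict` can then be called on any new input, exactly the "provide a model with an input, get an output" flow described above.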

ML enables models to train on data sets before being deployed. Some ML models are online and continuous. This iterative process of online models leads to an improvement in the types of associations made between data elements. Due to their complexity and size, these patterns and associations could have easily been overlooked by human observation. 

After a model has been trained, it can be used in real time to learn from data. The improvements in accuracy are a result of the training process and automation that are part of machine learning.  ML techniques are required to improve the accuracy of predictive models. Depending on the nature of the business problem being addressed, there are different approaches based on the type and volume of the data. In this section, we discuss the categories of machine learning.



- ML Algorithms and Applications

While many ML algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development. In the last few years, we have witnessed a renaissance in ML and AI. AI broadly refers to the ability of machines to "think" like humans and perform tasks considered "smart," without explicitly being programmed to do so. ML is a subset of AI. ML algorithms build a mathematical model based on training data, and they leverage the model to make predictions when new data is provided. For example, a computer-vision ML model can be trained with millions of sample images that have been labeled by humans so that it can automatically classify objects in a new image. 

ML is about building intelligent artifacts that, almost by necessity, learn over time based on experience. One common approach is to train neural networks: the system “learns” through training algorithms and determines the probable outcome of a situation. The process requires humans to supply the data, along with hours of training, testing, and fixing issues in the outcomes. The important thing to remember about ML is that it can only output what follows from the data it is given – it can only draw on the knowledge it has been “taught.” If the necessary information is not available, it cannot create an outcome on its own, and it will settle on a solution whether or not that solution is optimal.

As many people have wisely observed, the dream of ML is not new. It has been around since the very earliest days of computing. Pioneers have always imagined ways to build intelligent learning machines. ML is one of the most disruptive technologies of the 21st century. In the coming years, we are likely to see more advanced applications that stretch its capabilities to unimaginable levels. Examples of ML and deep learning (DL) are everywhere. It's how Netflix knows which show you’ll want to watch next, how Facebook knows whose face is in a photo, what makes self-driving cars a reality, and how a customer service representative will know if you'll be satisfied with their support before you even take a customer satisfaction survey.


- The Purposes of Three Main Types of ML Algorithms

There are different kinds of Machine Learning (ML), including supervised learning, unsupervised learning, deep learning, and reinforcement learning. They are used for different purposes. The purpose of supervised learning is to establish a relationship between two datasets and to use one dataset to forecast the other. The purpose of unsupervised learning is to try to understand the structure of the data and to identify its main drivers. The purpose of deep learning is to use multi-layered neural networks to analyze a trend, while reinforcement learning encourages algorithms to explore and discover the best actions to yield the best results.

In supervised learning we have examples in the data that carry labels; in unsupervised learning we have only features for those examples, but no labels. Reinforcement learning is characterized by an agent continuously interacting with and learning from its stochastic environment: the agent learns its behavior based on the feedback it receives from the environment in the form of a reward. So in reinforcement learning, the agent can keep adapting its behavior as time goes by, based on its environment, to maximize this reward. Reinforcement learning is often described as learning from delayed reward, because the feedback may come several steps after the decisions you have actually made.
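Learning from delayed reward can be illustrated with tabular Q-learning, one standard reinforcement-learning algorithm, on a toy four-state corridor. The environment, constants, and episode count here are invented for illustration; the only reward arrives at the far end, yet the update rule propagates it back to the earlier moves.

```python
# Tabular Q-learning on a 4-state corridor with a single delayed reward.
import random

N_STATES, ACTIONS = 4, (-1, +1)        # move left or move right
ALPHA, GAMMA = 0.5, 0.9                # learning rate, discount factor

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for _ in range(200):                   # episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS)            # explore randomly
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Update toward reward plus discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])
        state = nxt

# After training, the greedy policy in every state is "move right" (+1).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

Even though the early moves earn zero immediate reward, their Q-values inherit credit from the final rewarding step through the discounted `best_next` term, which is the "delayed reward" mechanism in miniature.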

Let’s distinguish between two general categories of machine learning: supervised and unsupervised. We apply supervised ML techniques when we have a piece of data that we want to predict or explain. We do so by using previous data of inputs and outputs to predict an output based on a new input. For example, you could use supervised ML techniques to help a service business that wants to predict the number of new users who will sign up for the service next month. By contrast, unsupervised ML looks at ways to relate and group data points without the use of a target variable to predict. In other words, it evaluates data in terms of traits and uses the traits to form clusters of items that are similar to one another. For example, you could use unsupervised learning techniques to help a retailer that wants to segment products with similar characteristics — without having to specify in advance which characteristics to use.
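The retailer-segmentation example above can be sketched with k-means, a common clustering algorithm; the product "feature vectors" below (say, price and weekly sales) are hypothetical, and the naive initialization is chosen only to keep the sketch short.

```python
# Minimal k-means: group points by similarity, without any labels.

def kmeans(points, k, iters=10):
    centroids = points[:k]                      # naive initialization
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            [sum(xs) / len(c) for xs in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)]
    return centroids, clusters

points = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],    # cheap, low-volume
          [8.0, 9.0], [8.5, 8.7], [7.9, 9.2]]    # expensive, high-volume
centroids, clusters = kmeans(points, 2)
print(sorted(len(c) for c in clusters))  # two segments of 3 products each
```

No target variable is used anywhere: the two product segments emerge purely from the similarity of the feature vectors, which is what distinguishes this from the supervised examples earlier.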


[More to come ...]
