
Mathematical Logic


- Overview

Probability and statistics are used in artificial intelligence (AI) to evaluate and compare machine learning (ML) algorithms. These techniques can help assess the performance of different models and choose the most suitable one for a given task. 

Here are some ways probability and statistics are used in AI: 

  • Probabilistic AI models: These models incorporate uncertainty and probability distributions into AI systems. By outlining potential outcomes and their likelihoods, they enable AI to make informed decisions.
  • Probabilistic reasoning: This is a form of knowledge representation in which the concept of probability is used to indicate the degree of uncertainty in knowledge.
  • Statistical measures: These include accuracy, precision, recall, and F1 score.
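The statistical measures named above can be computed directly from a model's predictions. The following is a minimal sketch on hypothetical binary predictions (the toy labels are illustrative assumptions):

```python
# Hypothetical binary labels: 1 = positive class, 0 = negative class.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, f1)
```

In practice these metrics are usually obtained from a library such as scikit-learn, but the definitions are exactly the ratios shown here.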

Here are some other ways probability is used in AI: 

  • Logistic regression: This probabilistic method can learn the parameters that best fit your data and make predictions based on the probability of the outcome.
  • Building distributions: These can be used to draw samples for training and testing data sets.
  • Coin tossing: The probability of an event is the number of ways it can occur divided by the total number of possible outcomes. For example, the probability of heads is 1 (heads) / 2 (heads or tails) = 0.5.
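The coin-tossing example above can be checked by simulation: the empirical frequency of heads should approach the theoretical value of 0.5. A minimal sketch (the seed is an arbitrary choice for reproducibility):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
tosses = [random.choice(["H", "T"]) for _ in range(10_000)]

# Empirical estimate: fraction of tosses that came up heads.
p_heads = tosses.count("H") / len(tosses)

print(p_heads)  # close to the theoretical 1 / 2 = 0.5
```

This is the same idea that underlies building distributions to draw training and testing samples: repeated draws from a known distribution converge toward its true probabilities.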

- Mathematical Logic

Mathematical logic is a foundational framework that enables AI algorithms to reason, analyze, and make decisions. 

AI has the potential to aid new mathematical discoveries. Particularly as the amount of data available grows beyond what any person can study, AI can be useful in its power to identify patterns in data and refine relationships between properties. 

Mathematics also provides the foundation for AI algorithms and models, allowing machines to process, analyze, and interpret large amounts of data. 

Automated reasoning tools use mathematical techniques to answer questions about a program or logic formula. These tools can help determine what is true about a statement or expression. 

Symbolic AI systems like Wolfram Alpha and Mathematica are considered top choices for tasks that involve theorem proving and formal reasoning. These systems are good at simplifying equations, manipulating mathematical expressions, and providing step-by-step solutions. 

Symbolic reasoning is a form of reasoning where humans create the rules. To build a symbolic reasoning system, humans must first learn the rules by which two phenomena relate. They then hard-code those relationships into a static program.  
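The hard-coded-rules idea can be sketched as a tiny forward-chaining rule system. The facts and rules below are illustrative assumptions, not a real knowledge base:

```python
# Hand-written rules: humans encode how phenomena relate; nothing is learned.
facts = {"raining"}
rules = [
    ({"raining"}, "ground_wet"),    # if raining then the ground is wet
    ({"ground_wet"}, "slippery"),   # if the ground is wet then it is slippery
]

# Forward-chain: keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'raining', 'ground_wet', 'slippery'}
```

The program is static: to handle a new relationship, a human must add a new rule, which is exactly the contrast with statistical learning described later in this article.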

Abductive reasoning is a form of logical reasoning that starts with one or more observations. It then seeks to find the most likely explanation or conclusion for the observation. 


- Statistical Reasoning in AI

Statistical reasoning in AI involves learning from data to make predictions or decisions. It involves understanding uncertainty and building models that capture key aspects of real-world phenomena.

Statistical reasoning combines ideas about data and chance, which leads to making inferences and interpreting statistical results. It involves a conceptual understanding of important ideas, such as distribution, center, spread, association, uncertainty, randomness, and sampling. 

In AI, probabilistic models examine data using statistical methods. Probabilistic reasoning is a form of knowledge representation in which the rules of probability express the degree of uncertainty. By assigning probabilities to outcomes, such systems can both explain why an outcome occurred and predict the likelihood of future events.
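A standard example of probabilistic reasoning is updating a belief with Bayes' rule. The sketch below uses hypothetical numbers for a diagnostic test (prevalence, sensitivity, and false-positive rate are all illustrative assumptions):

```python
# Hypothetical diagnostic-test numbers.
p_disease = 0.01              # prior: 1% of the population has the disease
p_pos_given_disease = 0.95    # sensitivity: test is positive if diseased
p_pos_given_healthy = 0.05    # false-positive rate among the healthy

# Total probability of a positive test (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: posterior probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # about 0.161
```

Even with a fairly accurate test, the posterior is only about 16% because the disease is rare, which is the kind of non-obvious conclusion probabilistic reasoning makes explicit.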

Statistical AI is good at intuitive judgements, such as pattern recognition and object classification. One example of statistical AI is knowledge management, which can be implemented with artificial intelligence systems to allow users to find information more quickly.


- Statistical Models Vs. Machine Learning Models

Statistical models and machine learning (ML) models differ in their purposes:
  • Statistical models: Use statistics to create a representation of data, then analyze it to discover relationships between variables or insights.
  • ML models: Use mathematical and/or statistical models to gain a general understanding of data to make predictions.


Statistical models are more about finding relationships between variables and the significance of those relationships, while also supporting prediction. ML models are built for providing accurate predictions without explicit programming.

Statistical models are mathematics intensive and based on coefficient estimation. ML models often have strong predictive power because they learn from comprehensive data and make fewer distributional assumptions.

Statistical models are better for trying to prove a relationship between variables or make inferences from data. ML models are better for creating algorithms that can make predictions on topics such as the performance of an ad or real estate pricing. 

Some examples of each:
  • Statistical models: linear regression, logistic regression, ANOVA.
  • ML models: neural networks, random forests, support vector machines.
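The contrast between the two families can be sketched side by side: a statistical model (simple linear regression) yields interpretable coefficients, while an ML model (here, 1-nearest-neighbor) only produces predictions. The data is a toy assumption:

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

# Statistical model: estimate slope and intercept by least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x  # interpretable coefficients

# ML model: 1-nearest-neighbor -- no coefficients, only predictions.
def knn_predict(x_new):
    i = min(range(n), key=lambda i: abs(xs[i] - x_new))
    return ys[i]

print(round(slope, 2), round(intercept, 2), knn_predict(3.4))
```

The regression tells you *how* the variables relate (a slope near 2); the nearest-neighbor model answers prediction queries without offering any such explanation.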

In short, the biggest difference between statistics and ML is purpose: statistical models find and explain the relationships between variables, while ML models are optimized for predictive accuracy.


- Data-driven Modeling Vs. Machine Learning

Data-driven modeling and ML are both closely related fields that use historical data to create models that can identify patterns and make predictions.

Data-driven modeling is the process of using data to derive the functional form of a model or the parameters of an algorithm. Machine learning, on the other hand, fits a model's parameters to data so as to minimize a cost function; the "learning" requires data.
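"Fitting parameters to minimize a cost function" can be sketched as one-parameter gradient descent on mean squared error. The data and learning rate below are hypothetical:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by y = 2x, so the best w is 2

w = 0.0    # the parameter to learn
lr = 0.01  # learning rate (step size)

for _ in range(500):
    # Gradient of the cost (1/n) * sum((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill on the cost surface

print(round(w, 3))  # converges toward 2.0
```

Each iteration nudges `w` in the direction that reduces the cost, which is the "learning" the paragraph above refers to.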

Here are some more details about data-driven modeling and machine learning:

  • Data-driven modeling: Starts from data describing a system of interest without assuming many (or any) properties of an underlying model. Instead, it uses mathematically flexible models that are useful for fitting and prediction.
  • Machine learning: Leverages algorithms to analyze data, learn from it, and forecast trends.
  • Data science: Focuses on managing, processing, and interpreting big data to effectively inform decision-making.
  • AI: Requires a continuous feed of data to learn and improve decision-making.

[More to come ...]
