
AI Environments

[An AI Agent Interacting with an Environment]



- Overview

An AI environment is the physical or digital space, the external context, in which an AI agent operates, perceives, acts, and learns. The environment provides the stimuli and feedback that shape how the agent learns, makes decisions, and behaves.

The nature of the environment is important to understand when solving problems using AI. For example, a chess bot's environment is a chessboard, while a room cleaner robot's environment is a room. 

From that perspective, there are several categories we use to group AI problems based on the nature of the environment, including: 

  • Deterministic: The next state is completely determined by the current state and the agent's action, so there is no uncertainty. For example, a traffic signal is a deterministic environment.
  • Stochastic: The next state cannot be fully predicted by the agent, because randomness exists in the environment. For example, a radio station is a stochastic environment.
  • Fully observable: The agent has access to complete and accurate information about the world state at any given time.
  • Partially observable: The agent has only limited or noisy information about the world state.

Understanding the characteristics of an AI environment is one of the first tasks an AI practitioner faces when solving a specific AI problem; one simple way to record these properties is sketched after the list below.

Here are some characteristics of AI environments:

  • Data: The shape and frequency of the data
  • Problem: The nature of the problem
  • Knowledge: The amount of knowledge available at any given time
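
As a rough illustration (not a standard API), these characteristics can be captured in a small profile record before a modeling approach is chosen. The field names and example profiles below are purely illustrative, written here in Python:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    """Hypothetical record of an environment's key characteristics."""
    deterministic: bool      # is the next state fully determined by state + action?
    fully_observable: bool   # can the agent see the complete world state?
    static: bool             # does the world stay fixed while the agent deliberates?
    discrete: bool           # are percepts and actions drawn from a finite set?
    single_agent: bool       # is this agent the only actor in the environment?

# Example profiles for the two running examples in this article.
chess  = EnvironmentProfile(deterministic=True,  fully_observable=True,
                            static=True,  discrete=True,  single_agent=False)
roomba = EnvironmentProfile(deterministic=False, fully_observable=False,
                            static=False, discrete=False, single_agent=True)
```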

 

- Nature of AI Environments

Can machines think for themselves? Not yet, but AI agents and environments are the building blocks. By understanding how AI agents "see" and react to their environment, we can build truly intelligent systems: systems that can effectively sense, reason, and act in any situation.

An environment in AI is the surrounding of the agent. The agent takes input from the environment through sensors and delivers the output to the environment through actuators.

When designing AI solutions, we spend a lot of time focusing on aspects such as the nature of the learning algorithms [ex: supervised, unsupervised, semi-supervised] or the characteristics of the data [ex: labeled, unlabeled].

However, little attention is often paid to the nature of the environment in which the AI solution operates. As it turns out, the characteristics of the environment are one of the key elements in determining the right models for an AI solution.

An environment is everything in the world that surrounds the agent, but it is not part of the agent itself. An environment can be described as the situation in which an agent is present. The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon.

The agent takes input from the environment through sensors and delivers output to the environment through actuators. For example, when programming a chess bot, the environment is a chessboard; when creating a room-cleaning robot, the environment is a room.

Each environment has its own properties, and agents should be designed so that they can explore environment states using sensors and act accordingly using actuators.
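
To make the sense-act loop concrete, here is a minimal sketch in Python. The Environment class, its percept() and apply() methods, and the toy agent function are all hypothetical names chosen for illustration, not part of any particular library:

```python
# Minimal sense-act loop: the agent reads a percept through its "sensors"
# (Environment.percept) and writes an action back through its "actuators"
# (Environment.apply). All names here are illustrative.

class Environment:
    def __init__(self):
        self.state = 0          # toy world state: a single counter

    def percept(self):
        return self.state       # what the agent's sensors can read

    def apply(self, action):
        self.state += action    # how the agent's actuators change the world

def agent(percept):
    # Trivial policy: keep acting until the state reaches a target value of 10.
    return 1 if percept < 10 else 0

env = Environment()
for _ in range(15):             # run the loop for a fixed number of steps
    action = agent(env.percept())
    env.apply(action)
print(env.state)                # -> 10
```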

 

- Complete vs. Incomplete

Complete AI environments are those in which, at any given time, we have enough information to complete a branch of the problem. Chess is a classic example of a complete AI environment. Poker, on the other hand, is an incomplete environment, as AI strategies can't anticipate many moves in advance and, instead, focus on finding a good "equilibrium" at any given time.

 

- Fully Observable and Partially Observable

If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable. A fully observable AI environment gives the agent access to all the information required to complete the target task; image recognition, for example, operates in fully observable domains. Partially observable environments, such as those encountered in self-driving vehicle scenarios, require the agent to solve AI problems with only partial information.
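
A small sketch of the difference, assuming a toy driving-style world state (all names and values below are illustrative):

```python
import random

# Toy world state for a driving scenario (purely illustrative).
world = {"own_speed": 12.0, "obstacle_distance": 40.0, "pedestrian_intent": "crossing"}

def fully_observable_percept(state):
    # The agent's sensors expose the complete, exact world state.
    return dict(state)

def partially_observable_percept(state):
    # The agent only sees what its sensors cover, and readings are noisy;
    # hidden variables (like another agent's intent) are simply absent.
    return {
        "own_speed": state["own_speed"],
        "obstacle_distance": state["obstacle_distance"] + random.gauss(0, 2.0),
    }

print(fully_observable_percept(world))
print(partially_observable_percept(world))   # no 'pedestrian_intent', noisy distance
```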

 

- Deterministic vs. Stochastic

In a deterministic environment, the next state is completely determined by the current state and the action executed by the agent; in other words, deterministic environments ignore uncertainty. A stochastic environment is random in nature and cannot be completely determined from the current state and action. For example, the 8-puzzle has a deterministic environment, but a self-driving car does not. Most real-world AI environments are not deterministic; instead, they are classified as stochastic, and self-driving vehicles are a classic example of a stochastic AI process.
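
The distinction can be sketched as two toy transition functions in Python; the 20% "slip" probability below is an arbitrary illustrative choice, not a property of any real system:

```python
import random

def deterministic_step(state, action):
    # The next state follows from the current state and action alone,
    # e.g. sliding a tile in the 8-puzzle always produces the same board.
    return state + action

def stochastic_step(state, action):
    # The same state and action can lead to different outcomes: here the
    # intended move occasionally "slips", as a stand-in for real-world
    # randomness such as road conditions.
    if random.random() < 0.2:        # 20% chance the action has no effect
        return state
    return state + action

print(deterministic_step(5, 1))      # always 6
print(stochastic_step(5, 1))         # usually 6, sometimes 5
```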

 

- Static vs. Dynamic

A static environment remains unchanged while the agent is deliberating; a dynamic environment, on the other hand, does change. Backgammon has a static environment, while a Roomba operates in a dynamic one. Static AI environments rely on data and knowledge sources that don't change frequently over time; speech analysis is a problem that operates in static AI environments. In contrast, dynamic AI environments, such as the vision systems in drones, deal with data sources that change quite frequently.
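
A toy sketch of the contrast, using a clock as a stand-in for a world that keeps changing while the agent deliberates (the class names are illustrative):

```python
import time

class StaticEnvironment:
    def __init__(self):
        self.state = "unchanged"
    def percept(self):
        return self.state            # the world waits while the agent deliberates

class DynamicEnvironment:
    def __init__(self):
        self.start = time.monotonic()
    def percept(self):
        # The world keeps evolving on its own clock, whether or not
        # the agent has finished deciding what to do.
        return f"{time.monotonic() - self.start:.3f}s have elapsed"

static_env, dynamic_env = StaticEnvironment(), DynamicEnvironment()
print(static_env.percept())
time.sleep(0.1)
print(static_env.percept())          # same percept as before
print(dynamic_env.percept())
time.sleep(0.1)
print(dynamic_env.percept())         # percept has changed on its own
```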

 

- Discrete vs. Continuous

A discrete environment consists of a limited number of distinct, clearly defined percepts and actions. Discrete AI environments are those in which a finite [although arbitrarily large] set of possibilities can drive the final outcome of the task; chess is a classic discrete AI problem. Continuous AI environments, by contrast, rely on unknown and rapidly changing data sources; vision systems in drones or self-driving cars operate in continuous AI environments.
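
A brief sketch of the two kinds of action spaces in Python (the action names and the steering-angle range are illustrative assumptions):

```python
import random

# Discrete: a finite (if possibly large) set of actions, like chess moves.
DISCRETE_ACTIONS = ["up", "down", "left", "right"]

def discrete_policy():
    return random.choice(DISCRETE_ACTIONS)       # pick one of finitely many options

# Continuous: actions are real-valued, like a steering angle in degrees.
def continuous_policy():
    return random.uniform(-30.0, 30.0)           # infinitely many possible values

print(discrete_policy())
print(continuous_policy())
```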

 

- Single-agent and Multi-agent

An agent operating just by itself is in a single-agent environment. However, if other agents are involved, then it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.

 

- Episodic/Non-episodic

In an episodic environment, each episode consists of the agent perceiving and then acting, and the quality of its action depends only on that episode; subsequent episodes do not depend on actions taken in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead. In a non-episodic (sequential) environment, by contrast, the current decision can affect all future decisions.
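
A minimal sketch of an episodic setup, where each episode is perceived, acted on, and scored independently of the others (the toy "classifier" agent is purely illustrative):

```python
import random

def run_episode():
    # Each episode is self-contained: the agent perceives once, acts once,
    # and its score depends only on this episode.
    percept = random.choice(["cat", "dog"])
    action = "cat" if percept == "cat" else "dog"   # a trivial classifier
    return int(action == percept)

# Episodes are independent: nothing carries over from one to the next,
# so the agent never needs to plan ahead.
scores = [run_episode() for _ in range(5)]
print(scores)
```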

 

- Known vs Unknown

In a known environment, the outcomes of all possible actions are given. In an unknown environment, by contrast, the agent has to gain knowledge about how the environment works before it can make good decisions.


 
[More to come ...]