[Edge AI - Visio.ai]
 
 

Edge AI: The Future of AI and Edge Computing

 

- Overview

Artificial intelligence (AI) solutions, especially deep learning models for computer vision, are typically developed and trained in cloud environments that provide large amounts of computing power. 

Inference is less computationally intensive than training, but latency matters more because the model must return results immediately. Most inference is still performed in the cloud or on servers, yet as AI applications grow more diverse, the centralized training-and-inference paradigm is being questioned. 

Edge AI, or edge artificial intelligence, is the use of artificial intelligence (AI) techniques in an edge computing environment. 

Presently, common examples of edge AI include smartphones, wearable health-monitoring accessories (e.g., smartwatches), real-time traffic updates for autonomous vehicles, connected devices, and smart appliances.

 

- The Main Drivers of Moving AI to the Edge

Privacy is one of the main drivers pushing artificial intelligence to the edge, especially for consumer devices such as phones, smart speakers, home security cameras, and consumer robots. 

Network latency affects the autonomous mobility of drones, robots, and self-driving cars, device categories likely to have sub-millisecond latency requirements. Latency also matters for the consumer experience; Google's auto-suggestion feature on mobile phones, for example, targets a latency of around 200 milliseconds. 
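A back-of-the-envelope calculation shows why sub-millisecond requirements rule out a cloud round trip. The distance used below is an assumption for illustration, and the result is an ideal lower bound that ignores inference time, queuing, and routing overhead:

```python
# Hypothetical scenario: the nearest data center is 100 km away.
# Even at the speed of light, the round trip alone approaches 1 ms.

SPEED_OF_LIGHT_KM_PER_S = 299_792  # in vacuum; signals in fiber travel ~1/3 slower

def ideal_round_trip_ms(distance_km: float) -> float:
    """Lower bound on network round-trip time in milliseconds."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_PER_S * 1000

cloud_rtt_ms = ideal_round_trip_ms(100)  # assumed distance to a data center
print(f"ideal cloud round trip: {cloud_rtt_ms:.2f} ms")  # ~0.67 ms before any processing
```

Real networks add switching and protocol overhead on top of this bound, so on-device processing is the only way to stay under a sub-millisecond budget.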

Bandwidth will impact vision-based applications on head-mounted displays (HMDs), such as augmented reality (AR), virtual reality (VR), and mixed reality (MR). Bandwidth requirements are rising from 2 megabits per second (Mbps) to the current 20 Mbps, and will reach 50 Mbps as HMDs support 360° 4K video. 

Safety is an important consideration for AI use cases such as security cameras, self-driving cars, and drones. While hardened chips and secure hardware packaging are critical to preventing tampering or physical attacks, having edge devices store and process data locally often increases redundancy and reduces the number of security vulnerabilities. 

Comparing the cost of AI processing in the cloud versus at the edge must account for AI device hardware costs, bandwidth costs, and AI cloud/server processing costs. 

The ability to run large-scale deep neural network (DNN) models on edge devices depends not only on improvements in hardware, but also on improvements in software and in techniques that compress models to fit small hardware form factors with limited power and performance capabilities.
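One common compression technique of the kind described above is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. The sketch below uses random weights of an arbitrary shape purely for illustration; real toolchains (e.g., TensorFlow Lite) apply this per layer with calibration data:

```python
import numpy as np

# Illustrative weights -- not from any real model.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto int8 via one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()

# int8 storage is 4x smaller than float32; error is bounded by the scale.
print(f"size reduction: {weights.nbytes / q.nbytes:.0f}x, max error: {error:.5f}")
```

The trade-off is a small, bounded rounding error per weight in exchange for a 4x smaller memory footprint and cheaper integer arithmetic on edge hardware.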

 

- The Benefits of Moving AI to the Edge

There are several benefits to moving AI processing to the edge. In an edge AI environment, AI computations are performed at the edge of the network, usually on the device where the data is created. 

This is different from using cloud infrastructure, where AI computations are done in a centralized facility. Here are some benefits of edge AI: 

  • Reduced latency: Processing data locally can help reduce latency and bandwidth requirements
  • Faster analysis: Edge devices perform computations locally, which can lead to faster analysis
  • Real-time data processing: Edge AI enables real-time data processing and analysis without reliance on cloud infrastructure
  • Reduced reliance on external resources: Edge devices can reduce reliance on external resources
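The bandwidth benefit in particular is easy to quantify. The sketch below compares streaming raw video to the cloud against uploading only local detection results; all numbers (resolution, frame rate, message size, event rate) are assumptions for illustration:

```python
# Assumed workload: a camera that either streams raw 1080p video to the
# cloud, or runs detection on-device and uploads small event messages.

frame_bytes = 1920 * 1080 * 3     # one uncompressed 1080p RGB frame
fps = 30
raw_bps = frame_bytes * fps * 8   # bits/s to stream raw frames

event_bytes = 200                 # one small detection message (assumed)
events_per_s = 2                  # assumed detection rate
edge_bps = event_bytes * events_per_s * 8

print(f"raw stream:  {raw_bps / 1e6:.0f} Mbit/s")
print(f"edge events: {edge_bps / 1e3:.1f} kbit/s")
print(f"reduction:   ~{raw_bps // edge_bps}x")
```

Even with video compression narrowing the gap considerably, sending inferences instead of pixels cuts upstream traffic by orders of magnitude.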

 

Edge AI doesn't require connectivity and integration between systems, allowing users to process data on the device in real time.

Some examples of edge AI applications include: 

  • Facial recognition
  • Object detection
  • Pose estimation
  • Autonomous AI video surveillance software

 

- Micro-Data Centers

Edge computing plays a vital role in the efficient implementation of several embedded applications such as artificial intelligence (AI), machine learning (ML), deep learning (DL), and the Internet of Things (IoT). However, today's data centers cannot meet the requirements of these types of applications. This is where the Edge-Micro Data Center (EMDC) comes into play.

By moving intelligence closer to the embedded system (i.e., the edge), it is possible to create systems with a high degree of autonomy and decision-making capabilities. In this way, reliance on the cloud (typically centralized systems) is reduced, resulting in benefits in terms of energy savings, reduced latency, and lower costs.

Self-driving cars, robotic surgery, augmented reality in manufacturing, and drones are a few examples of early applications of edge computing. Today's data centers offering "cloud services" (hyperscale, mega, and colocation) cannot meet the requirements of these applications, which therefore need complementary edge infrastructure such as EMDCs and "edge services".

This edge infrastructure, hardware, and edge services must meet the following requirements:

  • High computational speed, requiring data to be processed as locally as possible (i.e. at the edge)
  • High elasticity
  • High efficiency

 

- Edge AI

Edge AI means that AI software algorithms are processed locally on a hardware device. The algorithms use data (sensor data or signals) created on the device. A device running Edge AI software does not need to be connected to work properly; it can process data and make decisions independently, without a connection.
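The offline-capable pattern described above can be sketched as a device loop that keeps inferring on local sensor data regardless of connectivity, buffering results to sync later. The class, the threshold "model", and the readings are all hypothetical stand-ins for a real on-device network and sensor feed:

```python
from collections import deque

class EdgeNode:
    """Toy edge device: decides locally, uploads only when a link exists."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # stand-in for a trained on-device model
        self.pending = deque()      # results buffered while offline

    def infer(self, reading: float) -> str:
        """Make a decision locally -- no network required."""
        decision = "alert" if reading > self.threshold else "ok"
        self.pending.append((reading, decision))
        return decision

    def sync(self, connected: bool) -> int:
        """Flush buffered results when connected; return how many were sent."""
        if not connected:
            return 0
        sent = len(self.pending)
        self.pending.clear()
        return sent

node = EdgeNode()
decisions = [node.infer(r) for r in (0.2, 0.9, 0.4)]  # made while offline
offline_sent = node.sync(False)  # 0: no link, nothing uploaded
online_sent = node.sync(True)    # 3: link restored, buffer flushed
```

The key design point is that inference and connectivity are decoupled: losing the link delays synchronization but never blocks decisions.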

AI relies heavily on data transmission and computation of complex machine learning algorithms. Edge computing sets up a new age computing paradigm that moves AI and ML to where the data generation and computation actually take place: the network’s edge. The amalgamation of both edge computing and AI gave birth to a new frontier: Edge AI. 

Edge AI allows faster computing and insights, better data security, and efficient control over continuous operation. As a result, it can enhance the performance of AI-enabled applications and keep the operating costs down. Edge AI can also assist AI in overcoming the technological challenges associated with it. 

Edge AI enables ML, the autonomous application of DL models, and advanced algorithms to run on Internet of Things (IoT) devices themselves, away from cloud services.

 
 
[The University of Chicago]

- Edge AI Is The Next Wave of AI

Edge AI is the next wave of AI, detaching AI from its dependence on cloud systems. Edge AI processes information closer to the users and devices that need it, rather than sending that data to central cloud locations for processing.

In the last few years, AI implementations at companies around the world have changed. As enterprise-wide efforts came to dominate, cloud computing became an essential component of AI's evolution. 

As customers spend more time on their devices, businesses increasingly realize the need to bring essential computation onto the device to serve more customers. This is the reason that the Edge Computing market will continue to accelerate in the next few years.

Today it is easier than ever to run AI and ML workloads and perform analytics at the edge, depending on the size and scale of the edge site and the specific systems used. 

Although edge-site computing systems are much smaller than those in central data centers, they have matured and can now successfully run many workloads, thanks to the tremendous growth in the processing power of today's commodity x86 servers.

  

- Distributed Edge Computing and Edge AI

Distributed edge computing and edge AI are two popular paradigms. 

Distributed edge computing delegates computational workloads to autonomous devices located at the data source. This differs from edge computing more generally, which moves computation and data storage closer to the data source. 

Edge AI uses artificial intelligence (AI) techniques to enable a data gathering device in the field to provide actionable intelligence. Edge AI chips have three main parts: 

  • Scalar engines: Run Linux-class applications and safety-critical code
  • Adaptable engines: Process data from sensors
  • Intelligence engines: Run common edge workloads such as AI
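The division of labor among the three engine types can be illustrated as a simple routing table. The task names and mapping below are hypothetical; real edge AI chips expose this partitioning through vendor toolchains, not a Python API:

```python
# Hypothetical routing of workloads to the three engine classes above.
ENGINE_FOR_TASK = {
    "linux_app":     "scalar",        # Linux-class applications
    "safety_code":   "scalar",        # safety-critical code
    "sensor_fusion": "adaptable",     # processing raw sensor data
    "ai_inference":  "intelligence",  # common edge AI workloads
}

def route(task: str) -> str:
    """Return which engine class a task runs on (default: scalar)."""
    return ENGINE_FOR_TASK.get(task, "scalar")

print(route("ai_inference"))   # handled by the intelligence engines
print(route("sensor_fusion"))  # handled by the adaptable engines
```

The point of the split is that each engine class is specialized: general-purpose cores handle control and safety code, while dedicated fabric and accelerators handle sensor streams and neural-network math.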


Edge AI has many notable examples, including: 

  • Facial recognition
  • Real-time traffic updates on semi-autonomous vehicles
  • Connected devices
  • Smartphones
  • Robots
  • Drones
  • Wearable health monitoring devices
  • Security cameras

 
 

[More to come ...]


