
Edge AI Devices


- Overview

Edge AI refers to deploying AI algorithms and AI models directly on local edge devices such as sensors or IoT devices, enabling instant data processing and analysis without continuous reliance on cloud infrastructure. 

Simply put, edge AI combines edge computing and artificial intelligence (AI) to perform machine learning (ML) tasks directly on interconnected edge devices. Edge computing keeps data close to where it is generated, and AI algorithms can process that data at the edge of the network even without a network connection. This allows data to be processed in milliseconds and instant feedback to be provided.
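The local-processing loop described above can be sketched in a few lines of plain Python. The "model" here is a hypothetical threshold rule standing in for a real on-device ML model; the point is that each sensor reading is classified on the device itself, with no network round trip.

```python
import time

# Hypothetical tiny "model": a threshold rule that could run on a
# microcontroller-class edge device (no cloud round trip required).
def classify_reading(temperature_c: float, threshold: float = 75.0) -> str:
    return "overheating" if temperature_c > threshold else "normal"

# Simulated sensor loop: each reading is processed locally, and the
# decision is available within the same iteration (millisecond scale).
readings = [62.1, 70.4, 81.3]
for r in readings:
    start = time.perf_counter()
    label = classify_reading(r)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{r:.1f} C -> {label} ({elapsed_ms:.3f} ms)")
```

In a real deployment the threshold rule would be replaced by an optimized inference runtime, but the structure (read locally, decide locally, act immediately) is the same.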

Technologies such as self-driving cars, wearables, security cameras, and smart appliances leverage edge AI capabilities to provide users with real-time information when it matters most.

 

- Edge AI Devices

Edge AI devices use embedded algorithms to collect and process device data, monitor device behavior, and make decisions. This allows the device to automatically correct problems and predict future performance. 
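The monitor-and-decide behavior just described can be illustrated with a minimal anomaly-detection sketch. This is an assumption-laden toy (a z-score check over recent readings), not any particular vendor's embedded algorithm:

```python
from statistics import mean, stdev

# Hypothetical embedded monitoring routine: flag readings that deviate
# sharply from recent history, so the device can self-correct locally.
def is_anomalous(history: list[float], reading: float, z: float = 3.0) -> bool:
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z

history = [20.0, 20.5, 19.8, 20.2, 20.1]
print(is_anomalous(history, 20.3))  # typical reading
print(is_anomalous(history, 35.0))  # sudden spike
```

A flagged reading could then trigger a local corrective action (throttling, recalibration, an alert) without waiting on the cloud.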

Edge AI devices include: 

  • Smart speakers
  • Smart phones
  • Laptops
  • Robots
  • Self-driving cars
  • Drones
  • Surveillance cameras that use video analytics
  • Wearable health-monitoring accessories (e.g., smart watches)
  • Autonomous vehicles receiving real-time traffic updates
  • Connected devices
  • Smart appliances
  • Medical devices 
  • Scientific instruments
     

- How to Choose an Edge AI Device

With all the buzz surrounding edge computing these days, perhaps you're thinking it's time to invest in intelligent edge technologies for your applications. What are the key factors in pinpointing the right platform for your system?

Edge AI devices can run on a wide range of hardware, including:

  • Existing central processing units (CPUs)
  • Microcontrollers
  • Advanced neural processing devices

Some examples of edge devices include:

  • Embedded computing platforms, such as the Intel NUC or SoC computers
  • Edge servers
  • Mobile devices
  • Desktop computing devices with regular or embedded hardware
  • IoT cameras

When choosing edge devices for AI tasks, you should consider factors like power consumption, cooling, and CPU power. Edge devices are often designed to operate in remote or resource-constrained environments.

Depending on the AI application and device category, there are a variety of hardware options for performing edge AI processing. These options include central processing units (CPUs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and system-on-chip (SoC) accelerators.
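A deployment stack often has to choose among whichever of these accelerators a given device actually provides. The sketch below illustrates the selection logic with a hypothetical static capability set and preference order; a real system would query the platform through vendor SDKs instead.

```python
# Hypothetical preference order: more specialized hardware first,
# falling back to the CPU that every edge device has.
ACCELERATOR_PREFERENCE = ["asic", "fpga", "gpu", "soc_npu", "cpu"]

def pick_accelerator(available: set[str]) -> str:
    for hw in ACCELERATOR_PREFERENCE:
        if hw in available:
            return hw
    return "cpu"  # safe default when detection fails

print(pick_accelerator({"cpu", "gpu"}))  # prefers the GPU
print(pick_accelerator({"cpu"}))         # plain CPU fallback
```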

 

- The Main Goal of AI Edge Processing

Today, the focus of AI edge processing is to move the inference part of the AI workflow to the device, keeping the data contained on the device. The main factors driving the choice of cloud or edge processing are privacy, security, cost, latency, and bandwidth.

Applications such as autonomous driving have sub-millisecond (ms) latency requirements, while other applications, such as voice and speech recognition on smart speakers, raise privacy concerns. Keeping AI processing on edge devices addresses those privacy concerns while avoiding the bandwidth, latency, and cost issues of cloud computing.
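The edge-versus-cloud trade-off above can be expressed as a simple routing rule. The thresholds and task fields here are purely illustrative assumptions; real deployments tune them per application.

```python
from dataclasses import dataclass

@dataclass
class Task:
    latency_budget_ms: float   # how quickly a result is needed
    privacy_sensitive: bool    # e.g., raw voice audio
    payload_mb: float          # upload size if sent to the cloud

# Illustrative assumption: a cloud round trip costs about 100 ms.
CLOUD_ROUND_TRIP_MS = 100.0

def route(task: Task) -> str:
    if task.privacy_sensitive:
        return "edge"   # keep raw data on the device
    if task.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"   # a cloud round trip would blow the budget
    return "cloud"      # batch work can go upstream

print(route(Task(latency_budget_ms=1, privacy_sensitive=False, payload_mb=0.1)))
print(route(Task(latency_budget_ms=5000, privacy_sensitive=False, payload_mb=50)))
```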

Model compression technologies such as Google's Learn2Compress can shrink large AI models so they fit on resource-constrained hardware, and their availability has helped drive the rise of AI edge processing.
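One core idea behind such compression tools is quantization: storing weights as small integers plus a scale factor instead of 32-bit floats, roughly a 4x size reduction. The sketch below shows the arithmetic in miniature; it is not the Learn2Compress implementation.

```python
# Minimal post-training quantization sketch: float weights -> int8 + scale.
def quantize(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.42, -1.30, 0.07, 0.91]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The reconstruction error is bounded by half the scale factor, which is why quantized models usually lose little accuracy relative to the memory they save.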

Federated learning and blockchain-based decentralized AI architectures are also part of this shift of AI processing to the edge, and some model training may move to the edge as well.
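Federated learning's central step, averaging locally trained model updates so raw data never leaves each device, can be sketched in a few lines. The client weight vectors below are hypothetical stand-ins for real local training results.

```python
# Minimal federated-averaging (FedAvg) sketch: each edge device trains
# locally and shares only its weight vector, never its raw data.
def fed_avg(client_weights: list[list[float]]) -> list[float]:
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three hypothetical edge devices report locally trained weights.
clients = [
    [0.10, 0.50],
    [0.20, 0.40],
    [0.30, 0.60],
]
print(fed_avg(clients))
```

The server only ever sees the element-wise mean of the updates, which is what keeps the training data contained on the devices.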

 

[More to come ...]


