Edge AI: The Era of Distributed AI Computing

August 13, 2021

by Innova Solutions

The exponential growth of IoT devices has generated vast amounts of data, which in turn has driven the development of edge computing. Edge computing extends cloud computing by moving part of the service-specific processing and data storage from the central cloud to edge network nodes that are physically and logically close to the data providers and end users.

Edge Computing + AI = Edge AI

Alongside edge computing, rapid advances in Artificial Intelligence (AI) and IoT will help organizations build a smart, connected network of edge devices, referred to as Edge AI, Edge AIoT (Artificial Intelligence of Things), or the Intelligent Internet of Things. Many companies already plan to adopt edge computing in combination with AI to improve efficiency and reduce the overall cost of production and services. Such a system collects data from machine-to-machine sensors or live video feeds and delivers real-time insights that foresee challenges, preventing costly errors and workplace injuries. According to Forbes, unplanned downtime of machines in industrial production costs around $50 billion per year through lost productivity, delays, unhappy customers, and lost revenue. This is where Edge AI has a critical role to play.

A case in point

In the future, applying Edge AI will help companies reduce the load on their network and other IT infrastructure and keep costs low, but its success depends mainly on two areas: hardware and software improvement. Consider a use case in which devices monitor manufacturing equipment on a factory floor, or an internet-connected video camera sends live footage from a remote office. Transmitting data across the network from a single device or source is easy. The challenge arises when the number of devices transmitting data at the same time increases: instead of one video camera sending live footage, there are now thousands of devices. Not only will quality suffer due to latency, but the cost of maintaining that transmission bandwidth will also be extremely high.

Edge AI overcomes these challenges by having devices analyze the captured data at the location where it is generated rather than sending it to the cloud or a central location. Only the results of the analysis are stored in the cloud to generate business insights, allowing companies to make more efficient use of their network resources. With Edge AI, plant devices can perform most of the analytics on site, significantly reducing the data that needs to travel across the network. The outcome is lower cloud computing costs and faster analytics, which are the key advantages of Edge AI.
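
As a minimal sketch of this pattern, the loop below analyzes frames locally on the edge device and uploads only a compact summary of the results. The `analyze_frame` helper and the cloud endpoint URL are hypothetical placeholders, not a specific product API.

```python
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/insights"   # hypothetical endpoint

def analyze_frame(frame):
    """Run the on-device model on one frame.
    Placeholder: returns detected defects with confidence scores."""
    return [{"label": "scratch", "confidence": 0.93}]

def publish_summary(summary):
    """Send only the compact analysis result, never the raw frame, to the cloud."""
    data = json.dumps(summary).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request, timeout=5)

def run_edge_loop(camera):
    """camera is any iterable of frames; raw frames stay on the device."""
    for frame in camera:
        detections = analyze_frame(frame)
        if detections:                      # upload results only when relevant
            publish_summary({"timestamp": time.time(), "detections": detections})
```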

A simple illustration of a system implementing the video analytics service scenario in a manufacturing plant

Traditionally, computer vision, a subfield of AI, was deployed as a cloud-based IT process. However, significant improvements in the capacity of processing systems led to a paradigm shift in computer vision, with image classification and object recognition in particular embracing neural networks. For vision analytics, the system required two main capabilities, real-time execution and fast matrix calculations; to deliver them, we used application-specific mathematical algorithm models.

There are two types of deep learning object detection algorithms:

  • One-stage detection (Regression-Based Object Detectors), in which detection happens in a single step, and
  • Two-stage detection (Classification-Based Object Detectors), in which detection happens in two steps: region proposals followed by classification. Computation speed is one of the main differences between the two methods; one-stage detectors are typically faster, while two-stage detectors tend to be more accurate. A brief comparison sketch follows this list.
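
As an illustrative sketch (not part of the original article), the snippet below loads one pretrained detector of each type from torchvision and runs both on the same image; it assumes PyTorch and a recent torchvision (0.13 or later), and treats torchvision's published SSD (one-stage) and Faster R-CNN (two-stage) models as convenient stand-ins for the two families.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two-stage (classification-based): proposes regions, then classifies them.
two_stage = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# One-stage (regression-based): predicts boxes and classes in a single pass.
one_stage = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")

# Example input frame; any RGB image will do.
image = to_tensor(Image.open("factory_frame.jpg").convert("RGB"))

for name, model in [("two-stage", two_stage), ("one-stage", one_stage)]:
    model.eval()
    with torch.no_grad():
        prediction = model([image])[0]     # dict with boxes, labels, scores
    keep = prediction["scores"] > 0.5      # drop low-confidence detections
    print(f"{name}: {int(keep.sum())} detections above threshold")
```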

To know more, please check out our whitepaper, A Study on Computer Vision Algorithm.

The shift toward Edge AI

Beyond the computational demands of these algorithms, a new class of intelligent devices and high-stakes applications (ranging from augmented/virtual reality (AR/VR) to drone applications and self-driving vehicles) has made cloud-only AI inadequate. These real-time applications cannot tolerate latency and must operate reliably even when network connectivity is lost. They have triggered enormous interest in distributed, low-latency, and reliable AI, resulting in a significant shift from cloud-based, centralized training and inference toward Edge AI, in which:

  • Training data is distributed unevenly across networks of edge devices, such as network base stations (BSs) and/or mobile devices, including phones, cameras, vehicles, and drones;
  • Every edge device has access to only a tiny fraction of the data, and training and inference are carried out collectively; and
  • Edge devices can communicate and exchange locally trained models (e.g., neural networks) instead of private data, as sketched below.
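
To make the last point concrete, here is a minimal federated averaging sketch, written as an assumption about one common way to aggregate locally trained models rather than a description of a specific deployment: each device trains on its own private data and shares only model weights, which a coordinating server averages into a new global model.

```python
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One round of local training on a single edge device.
    Placeholder gradient step for a linear model; the data never leaves the device."""
    X, y = local_data
    gradient = X.T @ (X @ weights - y) / len(y)
    return weights - lr * gradient

def federated_round(global_weights, devices):
    """Each device returns only its updated weights (the model state information);
    the coordinating server averages them into the new global model."""
    local_weights = [local_update(global_weights.copy(), data) for data in devices]
    return np.mean(local_weights, axis=0)

# Example: three devices, each holding a small private dataset.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(10):                         # ten communication rounds
    weights = federated_round(weights, devices)
print("global model weights:", weights)
```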

Building on these three points, we present a defect detection flow diagram that uses multiple first edge nodes.

The abbreviations used in the diagram are:

  • Tstart: Trigger point at which the object detection process starts
  • Tstop: Trigger point at which the object detection process stops
  • Difftime: Frame difference at time t
  • tblank: Length of time for which no frames are recorded

A flow diagram with multiple first edge nodes integrated with a second edge node and the cloud
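
The trigger logic behind these quantities can be sketched as follows; the thresholds and the frame-differencing rule are illustrative assumptions, since the diagram only names the variables.

```python
import time
import numpy as np

DIFF_THRESHOLD = 12.0   # assumed mean pixel difference that signals activity
BLANK_LIMIT = 5.0       # assumed seconds without activity before stopping

def frame_difference(previous, current):
    """Difftime: mean absolute pixel difference between consecutive frames."""
    return float(np.mean(np.abs(current.astype(float) - previous.astype(float))))

def run_trigger_loop(camera):
    """camera is an iterator of frames as NumPy arrays."""
    detecting = False
    blank_since = None                      # start of the no-activity window
    previous = next(camera)
    for current in camera:
        diff_time = frame_difference(previous, current)
        previous = current
        if diff_time > DIFF_THRESHOLD:
            blank_since = None
            if not detecting:
                detecting = True            # Tstart: begin object detection
                print("Tstart at", time.time())
        elif detecting:
            blank_since = blank_since or time.time()
            if time.time() - blank_since > BLANK_LIMIT:   # tblank exceeded
                detecting = False           # Tstop: stop object detection
                print("Tstop at", time.time())
```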

When the first node identifies a product defect on the conveyor belt, it sends the video frame, marked with its node ID, to the second node, which provides additional processing capability. This triggers the second node's defect detection module, which has been waiting and holding GPU resources for further processing. If no defect is found during this step, the defect detection module sends a trigger message to stop the defect tracking procedure and releases the GPU resources held in the second node. If the second node does confirm the defect, a flag is raised in the cloud server. The cloud server identifies the authorized users associated with that node ID and sends a message only to the authenticated users of the node on which the defect was detected.
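
A simplified sketch of this handoff appears below; the message format, the queue-based transport, and the GPU-release step are hypothetical stand-ins for whatever middleware an actual deployment would use.

```python
import queue
import time

to_second_node = queue.Queue()      # stand-in for the first-to-second-node link

def first_node_report(node_id, frame):
    """First node: forward the suspect frame, tagged with its node ID."""
    to_second_node.put({"node_id": node_id, "frame": frame, "ts": time.time()})

def confirm_defect(frame):
    """Second node's heavier detection model. Placeholder: always confirms."""
    return True

def notify_cloud(node_id):
    """Flag the defect in the cloud, which alerts only the users
    authorized for this node ID (placeholder for a real API call)."""
    print(f"cloud: defect flagged for node {node_id}; notifying its authorized users")

def release_gpu():
    """Stop the tracking procedure and free the GPU held by the second node."""
    print("second node: defect not confirmed; GPU resources released")

def second_node_worker():
    """Waits for frames while holding GPU resources; releases them when
    a forwarded frame turns out not to contain a defect."""
    while True:
        message = to_second_node.get()      # blocks until the first node reports
        if confirm_defect(message["frame"]):
            notify_cloud(message["node_id"])
        else:
            release_gpu()
```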

Some of the significant advantages of using Edge AI are:

  • Performing inference locally on connected devices reduces latency and the cost of sending device-generated data to the cloud for prediction.
  • Rather than sending all data to the cloud to perform AI inference, the inference is run directly on the device, and data is sent to the cloud only when additional processing is required.
  • Getting inference results with very low latency is essential to ensuring that mission-critical Internet of Things (IoT) applications respond quickly to local events.
  • Unlike cloud-based AI, Edge AI is privacy-preserving: training data is not logged in the cloud but is kept locally on every device, and the globally shared model is learned by aggregating locally computed updates, denoted as model state information (MSI), in a peer-to-peer manner or via a coordinating (federating) server; and
  • Training on a wealth of user-generated data samples, which may include privacy-sensitive information such as healthcare records, factory/network operational status, and personal location history, can help achieve high accuracy without exposing that data.

Some of the application areas of AI at the edge are as follows:

  • Computing-as-a-Service: Provides intelligent computing capabilities whenever and wherever the customer needs them, i.e., fulfilling user requirements for computing power, latency, cost, service reliability, etc.
  • Smart IoT Models: Covers IoT data models, architectures, and smart services, especially distributed services in IoT across different platforms and implementations. The main focus is to enable services that can adapt to the IoT devices and services available in the network.
  • Real-time Insights: Localized AI functions filter the data so that only the necessary information is transmitted to the cloud. Even with constrained computation resources, AI can help optimize the edge hardware’s real-time processing capabilities.
  • Collective Intelligence: Algorithms running on diverse and geographically distant platforms (which imposes latency requirements) can jointly solve an AI problem.
  • Data Intelligence: Enabled by advanced communication technology and AI, it supports Edge AI by collecting, aggregating, processing, and distributing data from different sources at the edge. Users can also learn from the data and infer and control it in both static and dynamic environments.
  • Other Applications: Autonomous and driving-assisted vehicles, autonomous drones, traffic control, smart factories, smart farms, smart roads, smart homes, smart cities, and many more.

Visual analytics (built on deep neural networks, or DNNs) in collaboration with edge computing and IoT has a significant impact on multiple industries. The integration of AI, IoT, and edge computing on the network can help industry players enter the era of the Intelligent Internet of Everything, increasing the speed of data collection, processing, and analytics at the edge of the network. This integration can provide real-time insights through video analytics, even from HD cameras in near real time, which will transform current business models. It will also act as a catalyst for significant growth in vision-based AI applications across industries. To know more about the architecture patterns for the above application areas, please feel free to contact us at [email protected]

Contributors

Arvind Ramachandra

Vice President of Technology & Cloud Services

[email protected]

Munish Singh

Lead, Research & Advisory AI/ML

[email protected]
