Date 03/12/22

The past, present and future of Edge AI

If you’ve noticed a surge of AI-powered products and services hitting the marketplace lately, you are not mistaken. Artificial Intelligence (AI) and machine learning (ML) technology have been developing rapidly in recent years, with possibilities growing in tandem with the greater availability of data and advances in computing capability and storage solutions.


In fact, if you look behind the scenes, you can spot many examples of ML technology already in practice in all kinds of industries—ranging from consumer goods and social media to financial services and manufacturing.

 
But the question remains: How did ML evolve from science fiction to reality in such a short period of time? After all, it was only in the 1950s that the computer scientist Arthur Lee Samuel developed a computer program that could teach itself how to play checkers.

To find the answer, let’s chart the course of machine learning’s development by taking a look at the past and present, and envisioning what might be coming next.

What is ML? 

Machine learning (ML) is a subset of AI in which machines, equipped with trained algorithms and neural network models, autonomously learn from data and continuously improve their performance and decision-making accuracy on a specific task.
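To make that definition concrete, here is a minimal sketch of a machine "learning from data": a single perceptron whose accuracy improves as it repeatedly sees labeled examples. The task (the logical AND function) and all values are purely illustrative.

```python
# Minimal illustration of "learning from data": a perceptron that
# improves at a toy task (the logical AND function) as it sees more
# labeled examples. All names and values here are illustrative.

def train_perceptron(samples, labels, epochs=10):
    """Learn integer weights for a 2-input perceptron from labeled data."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # prediction error drives the weight update
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy labeled data set: the AND function
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
accuracy = sum(predict(w, b, *s) == y for s, y in zip(samples, labels)) / 4
print(accuracy)  # after training, the perceptron classifies AND perfectly: 1.0
```

The same principle, error-driven weight updates scaled up to millions of parameters, underpins the neural networks behind today's AI boom.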

Machine Learning in the Past 

Can a machine exercise intelligence?

The origin of ML can be traced back to a series of profound events in the 1950s, when pioneering research established computers’ ability to learn. In 1950, the English mathematician Alan Turing devised the famous “Turing Test” to determine whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

In 1952, the computer scientist Arthur Lee Samuel managed to teach an IBM computer program not only to learn the game of checkers but to improve the more it played.

Then in 1957, the American psychologist Frank Rosenblatt designed the perceptron, the world’s first neural network for computers. From there, experimentation escalated.

In the 1960s, Bayesian methods for probabilistic inference were introduced to machine learning. And in 1986, the computer scientist Rina Dechter introduced the term “deep learning” to the machine learning community.

Adopting a data-driven approach

It wasn’t until the 1990s that ML shifted from a knowledge-driven approach to the data-driven approach we are familiar with today. Scientists started creating computer programs that could analyze large quantities of data and learn from the results.

It was during this period that support vector machines and recurrent neural networks rose in popularity. In the 2000s, kernel methods for pattern analysis, such as support vector clustering, became prominent.

Hardware for efficient processing

The next momentous development that helped enable machine learning as we know it today was the hardware advancement of the early 2000s.

Graphics processing units (GPUs) were developed that could not only speed up algorithm training significantly—from weeks to days—but could also be used in embedded systems.

In 2009, researchers demonstrated that Nvidia’s GPUs could dramatically accelerate the training of deep neural networks, and in 2012 the famous Google Brain project trained deep neural networks that learned to recognize cats in unlabeled YouTube video frames.

Once deep learning became demonstrably feasible, a promising new era of AI and machine learning for software services and applications could begin.

Machine Learning in the Present 


Big demand for GPUs

Today, the demand for GPUs continues to rise as companies from all kinds of industries seek to put their data to work and realize the benefits of AI and machine learning.

Some examples of machine learning applications we can see today are medical diagnosis, predictive machine maintenance, and targeted advertising.


However, when it comes to applying ML models in the real world, there is one stumbling block that keeps hindering progress: latency.


Edge ML

Most companies today store their data in the cloud. This means that data has to travel from edge devices to the cloud, often located thousands of miles away, to be run through a model before the resulting insight can be relayed back to the device. This delay is a critical, even dangerous, problem in cases such as fall detection, where time is of the essence.

The problem of latency is what is driving many companies to move from the cloud to the edge today. “Intelligence on the edge,” “Edge AI,” or “Edge ML” means that, instead of being processed by algorithms located in the cloud, data is processed locally by algorithms stored on a hardware device, i.e., at the edge.

This not only enables real-time operations, but it also helps to significantly reduce the power consumption and security vulnerability associated with processing data in the cloud.
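To illustrate why local processing matters, here is a toy sketch of on-device inference: a naive fall detector that flags an impact spike in accelerometer data immediately, with no cloud round trip. The threshold and sensor readings are hypothetical; a production detector would use a trained model rather than a fixed threshold.

```python
# Toy sketch of on-device inference: flag a fall from accelerometer
# readings locally, with no cloud round trip. The threshold and the
# simulated readings below are hypothetical, for illustration only.
import math

FALL_THRESHOLD_G = 2.5  # assumed impact threshold, in g

def magnitude(ax, ay, az):
    """Combined acceleration magnitude from a 3-axis accelerometer."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, threshold=FALL_THRESHOLD_G):
    """Return True if any sample's magnitude exceeds the threshold."""
    return any(magnitude(*s) >= threshold for s in samples)

# Simulated sensor window: mostly ~1 g (gravity), then an impact spike
window = [(0.0, 0.0, 1.0)] * 10 + [(1.8, 2.1, 1.5)]
print(detect_fall(window))  # True: the alert is raised on the device itself
```

Because the decision is made on the device, the alert fires in milliseconds instead of waiting on a network round trip to a distant data centre.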

Power constraint issues in edge ML

As we move towards applying AI and edge ML to smaller and smaller devices and wearables, resource constraints are presenting another major roadblock. How can we run Edge ML applications without sacrificing performance and accuracy?

 
While moving from the cloud to the edge is a vital step in solving resource constraint issues, many ML models still use too much computing power and memory to fit on the small microprocessors on the market today.

Many are approaching this challenge by creating more efficient software, algorithms, and hardware, or by combining these components in specialized ways.
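One widely used software technique is post-training quantization: mapping 32-bit float weights to 8-bit integers so a model takes roughly a quarter of the memory. The sketch below shows the core idea with made-up weights; real toolchains (e.g. TensorFlow Lite) automate this step far more carefully.

```python
# Minimal sketch of post-training quantization: float weights are
# mapped to int8 values plus one scale factor, cutting storage from
# 4 bytes to 1 byte per weight. The weights below are made up.

def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights for inference."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.05, 0.8]  # hypothetical float model weights
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)

print(quantized)  # small integers: 1 byte each instead of 4
print(max(abs(r - w) for r, w in zip(restored, weights)))  # small rounding error
```

The trade-off is a tiny rounding error per weight in exchange for a model that fits in microcontroller memory and runs on cheap integer arithmetic.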

Edge ML in the Future 

So what’s next? The future of ML is continuously evolving as new developments and milestones are achieved. While that makes it challenging to offer accurate predictions, we can identify some key trends.

Edge ML applications in the future

A number of existing platforms for Edge ML include smart speakers like Amazon’s Echo and Google’s Home. In the energy and industrial space, some companies have developed Edge ML systems with predictive sensors and algorithms that monitor the health of components and notify technicians when maintenance is required. Other Edge ML systems monitor for emergencies like machine malfunctions or meltdowns.

In the future, there is talk of developing Edge ML-based systems for healthcare and assisted living facilities to monitor things like patient heart rate, glucose levels, and falls (using radar sensors, cameras, and/or motion sensors). These technologies could be life-saving: if the data is processed locally at the edge, staff would be notified in real time when a quick response is essential.

Edge ML and sustainability

Working with Edge ML applications has opened up a new world of possibilities for developing highly sustainable solutions. Edge ML applications have resulted in portable, smarter, more energy-efficient, and more economical devices. Edge ML can help manage environmental impacts across a variety of applications, e.g., clean distributed energy grids, improved supply chains, environmental monitoring, agriculture, and improved weather and disaster prediction.

Data centres consume an estimated 200 terawatt hours (TWh) of energy each year, more than the energy consumption of some countries, and produce an estimated 2% of all global CO2 emissions. By processing data locally instead of in the cloud, Edge ML can help reduce this footprint.

Unsupervised machine learning

In the majority of AI and ML projects today, the tedious process of sorting and labelling data takes up the bulk of development time. In fact, the analyst firm Cognilytica estimated that in the average AI project, about 80% of project time is spent aggregating, cleaning, labeling, and augmenting the data to be used in ML models.

This is why the prospect of unsupervised learning is so exciting. In the future, more and more machines will be able to independently identify previously unknown patterns within a data set that has not been labelled or categorized.

Unsupervised learning is particularly useful when you do not know in advance what the outcome should be. This could serve applications such as analyzing consumer data on edge devices to determine the target market for a new product, or detecting anomalies like fraudulent transactions or malfunctioning hardware.
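As a toy illustration of unsupervised anomaly detection, the sketch below flags an outlier in a set of hypothetical transaction amounts. No labels are provided: the notion of "normal" is derived from the data itself via a z-score. The threshold and data are illustrative; real systems use more robust methods.

```python
# Toy unsupervised anomaly detection: no labels are given; "normal"
# is modeled from the data itself, and points far from it are flagged.
# The threshold and the transaction amounts below are illustrative.
import statistics

def find_anomalies(values, z_threshold=2.5):
    """Flag values more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Hypothetical transaction amounts: routine purchases plus one outlier
amounts = [12.5, 9.9, 14.2, 11.0, 13.3, 10.8, 950.0, 12.1, 9.5, 11.7]
print(find_anomalies(amounts))  # [950.0]
```

The same pattern, learning what "normal" looks like and flagging deviations, is what lets edge devices catch fraudulent transactions or failing hardware without hand-labelled training data.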

Hardware acceleration at the edge

A new generation of purpose-built accelerators is emerging as chip manufacturers and startups work to speed up and optimize the workloads involved in Edge ML projects—ranging from training to inference at the edge. Faster, cheaper, more power-efficient, and more scalable, these accelerators promise to boost edge devices and Edge ML systems to a new level of performance.

One of the ways they achieve this is by relieving edge devices’ central processing units of the complex and heavy mathematical work involved in running deep learning models. What does this mean? Get ready for faster predictions.

Companies such as Arm, Synaptics, GreenWaves, Syntiant, and many others are developing Edge ML chips optimised for performance and low power consumption at the edge.


Scaling up Edge ML

In the future, the much-talked-about Internet of Things will become increasingly tangible in our everyday lives, especially as AI and ML technology becomes increasingly affordable. However, as the number of Edge ML devices increases, we will need to ensure we have an infrastructure to match. As Drew Henry, Senior Vice President of Strategy Planning & Operations at Arm, put it in a recent article:
 
“The world of one trillion IoT devices we anticipate by 2035 will deliver infrastructural and architectural challenges on a new scale…our technology must keep evolving to cope. On the edge computing side, it means Arm will continue to invest heavily in developing the hardware, software, and tools to enable intelligent decision-making at every point in the infrastructure stack. It also means using heterogeneous [computation] at the processor level and throughout the network—from cloud to edge to endpoint device.”
 
When we look at history and where we are today, it appears that the evolution of edge ML is fast and unstoppable. As future developments continue to unfold, prepare for impact and make sure you're ready to seize the opportunities this technology brings.
