Date 12/11/19

Edge AI for techies, updated December 11, 2019

Artificial Intelligence (AI) is about to enter the traditional industry segments. When this happens, we will start to see the real gains of AI, as industry becomes radically more efficient and the environment much cleaner. Currently, however, many are struggling to bring intelligence to all the small devices that together make up industry; in other words, to bring AI to the edge.

AI keeps entering new fields at a high pace, with great promise. So far, the connected and already-digital industries such as media and advertising, finance and retail have benefited the most. There is no doubt that AI has created real value in these segments: there are plenty of convenient services and functions nowadays that make our lives a little easier and smoother. However, the big and important problems still lie ahead of us. Solving climate and environmental problems means removing old, dirty and inefficient technology and replacing it with clean energy and an efficient industry, the latter commonly referred to as Industry 4.0.

A crucial component of Industry 4.0 is the introduction of intelligence “on the edge”. The edge refers to the part of our world that is outside the range of high-speed, high-bandwidth connections, which, frankly speaking, is most of our world at the moment. Intelligence on the edge means that even the smallest devices and machines around us are able to sense their environment, learn from it and react to it. This allows, for instance, the machines in a factory to take higher-level decisions, act autonomously and feed back important flaws or improvements to the user or the cloud.

Practically, the sensing part is achieved by having a sensor of some sort (e.g. an accelerometer, data from an engine, or any other type of sensor) connected to a small microcontroller unit (MCU). The MCU is loaded with software that has been pre-trained on the typical scenarios the device will encounter, i.e. on data that has been collected beforehand and fed to the software. This is the learning part, which can also be a continuous process so that the device keeps learning as it encounters new things. The last part, the reaction of the AI (inference), can be a physical actuation on its immediate environment, or a signal to a human being or the cloud for further action and assistance.

To be able to learn from data, the software is based on some machine learning technique. However, some of the most promising machine learning models widely applied in the cloud and digital industries still use too much computing power and memory to fit the small MCUs on the market.

Read the complete story in our White Paper - Edge AI for techies


Date 12/12/21

The Future is Touchless: Radical Gesture Control P...

As the pandemic has swept over the world, gesture control an...

Date 12/02/21

Imagimob @ CES 2022

We will be present at CES 2022 in Las Vegas. Please send an ...

Date 11/25/21

Imagimob AI in Agritech

Date 11/12/21

Webinar: Build and deploy a keyword spotting AI ap...

Date 10/19/21

Deploying Edge Models - Acconeer example

Date 10/11/21

Imagimob AI used for condition monitoring of elect...

Date 09/21/21

Tips and Tricks for Better Edge AI models

Date 09/07/21

Don’t build your embedded ML/AI pipeline from scra...

Date 06/18/21

Imagimob AI integration with IAR Embedded Workbenc...

Date 06/17/21

Video recording of webinar "Gesture control using ...

Date 05/10/21

Webinar - Imagimob to present at Arm AI Virtual Te...

Date 04/23/21

Gesture Visualization in Imagimob Studio

Date 04/01/21

New team members

Date 03/15/21

Imagimob featured in Dagens Industri

Date 03/09/21

Imagimob AI 2.6

Date 02/22/21

Customer Case Study: Increasing car safety through...

Date 01/22/21

tinyML Summit 2021

Date 12/18/20

Veoneer, Imagimob and Pionate in joint research pr...

Date 11/20/20

Edge computing needs Edge AI

Date 11/12/20

Imagimob video from tinyML Talks

Date 10/28/20

Agritech: Monitoring cattle with IoT and Edge AI

Date 10/19/20

Arm Community Blog: Imagimob - The fastest way fro...

Date 10/01/20

Edge Computing World in Santa Clara, CA, USA

Date 09/21/20

Imagimob video from Redeye AI seminar

Date 09/02/20

Edge AI Summit 2020 (US)

Date 08/20/20

Edge AI & Industrial Edge (Germany)

Date 06/07/20

Case study - How to Build an Embedded AI applicati...

Date 05/20/20

Redeye - Artificial Intelligence Seminar 2020

Date 05/07/20

Webinar - Gesture control using radar and Edge AI

Date 04/08/20

Nordic Semiconductors

Date 03/11/20

CES 2020 - Las Vegas

Date 02/23/20

How to build an embedded AI application

Date 12/23/19

Hannover Messe

Date 12/11/19

Edge AI for techies, updated December 11, 2019

Date 12/05/19

Article in Dagens Industri: This is how Stockholm-...

Date 10/24/19

Imagimob AI - the video

Date 10/17/19

The past, present and future of edge machine learn...

Date 09/06/19

The New Path to Better Edge AI Applications

Date 07/01/19

Edge Computing in Modern Agriculture

Date 06/14/19

The Guide to Successful AI Projects - A Real World...

Date 04/07/19

Our Top 3 Highlights from Hannover Messe 2019

Date 03/26/19

The Way You Collect Data Can Make or Break Your Ne...

Date 04/04/18

The AI Project Process - How to succeed including ...

Date 03/23/18

AI Research and AI Safety

Date 03/11/18

What is Edge AI / tinyML ?

Date 01/30/18

Wearing Intelligence On Your Sleeve

Date 01/30/18

Imagimob and Autoliv demo at CES 2018

Date 11/23/17

Wearing Intelligence On Your Sleeve - Part 2 (vide...

Date 05/24/17

Wearing Intelligence On Your Sleeve - Part 1
