Date 04/23/21

Gesture Visualization in Imagimob Studio

By Sam Al-Attiyah and Songyi Ma

Imagimob AI can give you a head start on your project by helping you identify which classes to focus on when building your model. In this article, we bring gesture control to life using the Acconeer radar and Imagimob AI. Specifically, we explore gesture data visualization, visualising pre-processing, and gesture selection. We also show how to do all of this in Imagimob Studio, which is part of the Imagimob AI package. By starting you on the right path, Imagimob Studio helps you build and deploy great models.

Gesture Data Visualization
Every machine learning engineer should understand the data before any training. In Imagimob Studio, data visualization is built in and remarkably smooth: it happens as soon as you import the data. This gives you a clear picture of the time-series data that is fed into the model for training.

Fig1. Gestures collected by Acconeer sensors viewed in Imagimob Studio.
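For readers who want to reproduce this kind of plot outside the Studio, the snippet below is a minimal sketch of the same idea in Python, using numpy and matplotlib with a synthetic radar-like signal standing in for real Acconeer data. In Imagimob Studio itself, the plot appears automatically on import, so none of this code is needed there.

```python
# Illustrative sketch only -- Imagimob Studio does this plotting for you on
# import. We fake a multi-channel radar recording and plot it the way a
# time-series track is shown; in practice you would load your own recording.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fs = 100                                 # assumed sweep rate in Hz
t = np.arange(0, 5, 1 / fs)              # 5 seconds of data
n_channels = 4                           # e.g. one channel per radar range bin

# Synthetic "gesture": a burst of activity around 2.5 s on every channel.
burst = np.exp(-((t - 2.5) ** 2) / 0.05)
channels = [
    burst * np.sin(2 * np.pi * (2 + i) * t) + 0.05 * rng.standard_normal(t.size)
    for i in range(n_channels)
]

fig, ax = plt.subplots(figsize=(10, 4))
for i, ch in enumerate(channels):
    ax.plot(t, ch, linewidth=0.8, label=f"channel {i}")
ax.set_xlabel("time (s)")
ax.set_ylabel("amplitude")
ax.set_title("Raw gesture recording (synthetic)")
ax.legend(loc="upper right")
plt.show()
```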

Visualising pre-processing & selecting gestures
You can use Imagimob AI to visualise your data, but even more powerfully, you can use it to visualise your pre-processing with very little effort. By visualising your pre-processing, you know exactly what goes into your model. This helps you not only improve model performance but also identify which events, gestures or classes are easy to distinguish.

This is Imagimob AI’s Create Track from Preprocessing feature, and it lets you sidestep the obstacle of gesture selection. It visualises your pre-processing so you can ensure that what you pass to the model is distinct and unique for each gesture.
As can be seen in Fig2, given a list of 4 gestures, we can easily choose the 3 we want to proceed with. We can also already gauge the model’s potential from how well the classes separate.

Fig2. Gesture selection from a set of candidate gestures
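To make the idea concrete, here is a minimal Python sketch of the same workflow outside the Studio: a simple, hypothetical pre-processing chain (mean removal plus sliding-window RMS energy) applied to three synthetic gesture segments so their post-processing shapes can be compared side by side. The chain and the signals are stand-ins, not Imagimob’s actual pre-processing functions.

```python
# Illustrative sketch, not Imagimob's implementation: apply a simple
# pre-processing chain to each labelled gesture segment and plot the results
# side by side, so you can eyeball how distinct the classes look afterwards.
import numpy as np
import matplotlib.pyplot as plt

def preprocess(x, win=16):
    """Remove the DC offset, then compute sliding-window RMS energy."""
    x = x - x.mean()
    padded = np.pad(x * x, (win - 1, 0), mode="edge")
    return np.sqrt(np.convolve(padded, np.ones(win) / win, mode="valid"))

# Hypothetical labelled segments: {gesture name: 1-D signal array}
rng = np.random.default_rng(0)
n = 400
segments = {
    "swipe": np.sin(np.linspace(0, 8 * np.pi, n)) + 0.1 * rng.standard_normal(n),
    "push":  np.exp(-np.linspace(-3, 3, n) ** 2)  + 0.1 * rng.standard_normal(n),
    "wave":  np.sin(np.linspace(0, 2 * np.pi, n)) + 0.1 * rng.standard_normal(n),
}

fig, axes = plt.subplots(1, len(segments), figsize=(12, 3), sharey=True)
for ax, (name, sig) in zip(axes, segments.items()):
    ax.plot(preprocess(sig))
    ax.set_title(name)
axes[0].set_ylabel("windowed RMS")
plt.show()
```

If the three panels look clearly different from each other, the classes are likely easy for a model to separate; if two panels look nearly identical, that is a hint to drop or redesign one of those gestures.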

Visualising Pre-processing in Imagimob Studio
We’ve talked about the benefits of visualising pre-processing; in this section we show you how to do it. In Imagimob Studio, simply collect all the gestures you wish to evaluate into one file. It is a good idea to record multiple iterations of each gesture so you get a representative overview. Then do the following:

1. Add the data to your project file.

2. Add your desired pre-processing functions and hit “Create Track from Preprocessor”. A conceptual sketch of what this step produces follows below.
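Conceptually, “Create Track from Preprocessor” runs the raw track through your chain of pre-processing functions and stores the result as a new track you can plot and label. The sketch below illustrates that idea in plain Python; the function name and the example chain are hypothetical, since in Imagimob Studio you configure all of this in the GUI rather than in code.

```python
# Conceptual sketch of what "Create Track from Preprocessor" produces: the raw
# track is pushed through the configured chain of pre-processing functions and
# the result becomes a new, plottable track. Names here are hypothetical.
import numpy as np

def create_track_from_preprocessor(raw_track, preprocessors):
    """Apply each pre-processing step in order and return the derived track."""
    track = raw_track
    for step in preprocessors:
        track = step(track)
    return track

# Example chain: absolute value -> sliding mean -> min-max normalisation
chain = [
    np.abs,
    lambda x: np.convolve(x, np.ones(8) / 8, mode="same"),
    lambda x: (x - x.min()) / (x.max() - x.min() + 1e-9),
]

raw = np.random.default_rng(1).standard_normal(256)  # stand-in for a radar sweep
derived = create_track_from_preprocessor(raw, chain)
print(derived.shape, float(derived.min()), float(derived.max()))
```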
