Date 04/23/21

Gesture Visualization in Imagimob Studio

By Sam Al-Attiyah and Songyi Ma

Imagimob AI can give you a head start on your project by helping you identify which classes to focus on when building your model. In this article, we are going to bring gesture control to life using the Acconeer radar and Imagimob AI. Specifically, we will explore gesture data visualization, pre-processing visualization and gesture selection, and show how to do all of this in Imagimob Studio, which is part of the Imagimob AI package. By starting you on the right path, Imagimob Studio helps you build and deploy great models.

Gesture Data Visualization
Every machine learning engineer should understand the data before any training. In Imagimob Studio, data visualization is a built-in feature: as soon as you import your data, it is plotted, giving you a clear picture of the time-series data that will be fed into the model for training.

Fig 1. Gestures collected by Acconeer sensors, viewed in Imagimob Studio.
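To make this concrete, here is a minimal sketch of the kind of plot the built-in visualization gives you, assuming the recording has been exported as a CSV file with a timestamp column and one column per radar channel (the file name and layout here are illustrative assumptions, not an Imagimob Studio format):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load a recording; columns are assumed to be one timestamp plus one
# amplitude value per radar distance bin (hypothetical file layout).
data = pd.read_csv("gesture_recording.csv", index_col="timestamp")

# One subplot per channel, shared time axis, so the gesture envelopes
# can be compared at a glance.
axes = data.plot(subplots=True, sharex=True, legend=False, figsize=(8, 6))
axes[-1].set_xlabel("time (s)")
plt.tight_layout()
plt.show()
```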

Visualizing Pre-processing & Selecting Gestures
You can use Imagimob AI to visualize your data, but more powerful still, you can use it to visualize your pre-processing with very little effort. By visualizing your pre-processing, you know exactly what goes into your model. This not only helps you improve model performance but also lets you identify which events, gestures or classes are easy to distinguish.

This is Imagimob AI’s Create Track from Preprocessor feature, and it takes the guesswork out of gesture selection. It lets you visualize your pre-processing so you can ensure that what you pass to the model is distinct and unique for each gesture.
As can be seen in Fig 2, given a list of four gestures, we can easily choose the three we want to proceed with. We can also get an early sense of the model’s potential from how well the classes separate.

Fig 2. Gesture selection from a set of candidate gestures.
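The same idea can be sketched outside the Studio: apply one pre-processing function to each candidate gesture and plot the resulting feature tracks side by side. The window-energy feature, gesture labels and file layout below are illustrative assumptions, not Imagimob Studio’s actual pre-processing functions:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def window_energy(signal: np.ndarray, win: int = 16) -> np.ndarray:
    """Mean energy per non-overlapping window: one simple feature track."""
    trimmed = signal[: len(signal) // win * win]
    return (trimmed.reshape(-1, win) ** 2).mean(axis=1)

gestures = ["swipe_left", "swipe_right", "push"]  # hypothetical labels/files
fig, axes = plt.subplots(len(gestures), 1, sharex=True, figsize=(8, 6))

for ax, name in zip(axes, gestures):
    raw = pd.read_csv(f"{name}.csv", index_col="timestamp")
    # Collapse the radar channels to one signal before feature extraction.
    track = window_energy(raw.mean(axis=1).to_numpy())
    ax.plot(track)
    ax.set_title(name)

plt.tight_layout()
plt.show()
```

If the three tracks look clearly different from one another, the classes are likely easy for a model to separate; tracks that look alike are a warning sign before any training has happened.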

Visualizing Pre-processing in Imagimob Studio
We have talked about the benefits of visualizing pre-processing; in this section we show you how to do it. In Imagimob Studio, simply collect all the gestures you wish to evaluate into one file. It is a good idea to record multiple iterations of each gesture so that you get a representative overview. Then do the following:

1. Add the data to your project file.

2. Add your desired pre-processing functions and hit “Create Track from Preprocessor”.
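For intuition about what such a track contains, here is a small sketch of a typical pre-processing chain (DC removal followed by sliding-window FFT magnitudes) applied to a combined recording. The chain is a common illustrative example, not a list of Imagimob Studio’s built-in functions:

```python
import numpy as np

def preprocess(signal: np.ndarray, win: int = 32, hop: int = 16) -> np.ndarray:
    """Return a (frames x bins) spectrogram-like track from a 1-D signal."""
    signal = signal - signal.mean()                 # remove DC offset
    frames = [
        np.abs(np.fft.rfft(signal[i : i + win]))    # magnitude spectrum per window
        for i in range(0, len(signal) - win + 1, hop)
    ]
    return np.stack(frames)

# Example: a short fake recording standing in for the combined gesture file.
recording = np.random.randn(100)
track = preprocess(recording)
print(track.shape)  # (frames, win // 2 + 1)
```

Each row of the resulting track is one frame of features, which is exactly the kind of time-aligned view that makes it easy to judge whether your gestures remain distinct after pre-processing.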
