Imagimob Studio - Development Platform for AI/Machine Learning on Edge Devices - now in a game-changing new UX format!

Imagimob Studio is now in the new Graph UX format. What does this mean for you as a user? The platform is easier to use, you get better model output, and you gain insights into your models that were never possible before.

Here's a preview of what it looks like, and you can read more about it below!


Imagimob's Graph UX makes it easier to create better AI models

Graph UX is an intuitive interface that visualizes the end-to-end ML workflow as a graph, designed to give you a clear understanding of the entire modeling workflow, from building to evaluating machine learning models.

Modern-day ML is all about neural networks, which are easily represented as graphs. This analogy works throughout the machine learning development process, from data collection through model building and training, all the way to model evaluation and deployment on device. Putting these parts together into one coherent view makes it easier to design your own Edge AI models.

Read more about Graph UX and learn how to use it.



Covering the entire machine learning workflow, optimized for embedded devices

1. Collect and annotate high quality data
2. Manage, analyze and process your data
3. Build great models without being an ML expert
4. Evaluate, verify and select the best models
5. Quickly deploy your models on your target hardware 

Build production-grade ML applications for...

Predictive maintenance - Recognize machine state, detect machine anomalies and act in milliseconds, on device.
Audio applications - Classify sound events, spot keywords, and recognize your sound environment.
Gesture recognition - Detect hand gestures using low-power radars, capacitive touch sensors or accelerometers.
Signal classification - Recognize repeatable signal patterns from any sensor.
Fall detection - Detect falls using IMUs or a single accelerometer.
Material detection - Detect materials in real time using low-power radars.

And no data ever leaves the device without your permission.

Start training your first models within 5 minutes using our starter projects.

Collect and annotate high-quality data

Are you tired of the mundane process of collecting data from your embedded devices and sensors, spending days fiddling with how to capture the data with good quality, and then spending even more time annotating and cleaning all of that data?

With Imagimob Studio you get several tools for collecting high quality data straight from any hardware or sensor, either wirelessly or by cable. Capture data with your phone out in the field or straight to any computer or platform running Python. Once the data is collected, Imagimob Studio verifies that all your data is consistent and error free. 
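As an illustration of the kind of capture script this makes possible (a minimal sketch only, not Imagimob's capture protocol - the port name, baud rate and one-sample-per-line frame format are assumptions), streaming accelerometer samples from a serial port into a timestamped CSV can look like this:

    # Minimal sketch: read comma-separated sensor frames from a serial port and
    # store them in a timestamped CSV. Port, baud rate and frame format are
    # assumptions - adapt them to your own hardware.
    import csv
    import time

    import serial  # pip install pyserial

    PORT = "/dev/ttyUSB0"   # adjust to your board
    BAUD = 115200

    with serial.Serial(PORT, BAUD, timeout=1) as link, \
         open("capture.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "ax", "ay", "az"])   # example: 3-axis accelerometer
        for _ in range(10_000):                            # capture a fixed number of frames
            line = link.readline().decode("ascii", errors="ignore").strip()
            values = line.split(",")
            if len(values) != 3:                           # skip empty or malformed frames
                continue
            writer.writerow([time.time(), *values])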

Annotate at a glance

Our experience shows that roughly 80% of the time in a successful AI project goes into collecting, annotating, cleaning, processing and experimenting with different data sets. And many engineers find this to be the most frustrating part of the whole process.

With Imagimob Studio, once your data is collected it can easily and efficiently be annotated by dragging out labels on top of it. Labels can then be copied, resized and modified in seconds. To further automate the process, we also provide scripts that run through your data and annotate it for you according to criteria you set.
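To give a feel for what such a script can do (this is a generic sketch, not one of Imagimob's provided scripts - the column names, the "movement" label and the threshold criterion are examples), here is threshold-based auto-labelling of an accelerometer recording:

    # Generic sketch of script-based auto-annotation: label every region where the
    # accelerometer magnitude exceeds a threshold. Column names, label name and
    # threshold are examples only.
    import numpy as np
    import pandas as pd

    data = pd.read_csv("capture.csv")                       # columns: timestamp, ax, ay, az
    magnitude = np.sqrt(data.ax**2 + data.ay**2 + data.az**2)
    active = magnitude > 15.0                               # example criterion

    labels, start = [], None
    for t, flag in zip(data.timestamp, active):
        if flag and start is None:
            start = t                                       # a labelled region begins
        elif not flag and start is not None:
            labels.append({"label": "movement", "start": start, "end": t})
            start = None
    if start is not None:                                   # close a region running to the end
        labels.append({"label": "movement", "start": start, "end": data.timestamp.iloc[-1]})

    pd.DataFrame(labels).to_csv("labels.csv", index=False)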

Functionality includes

  • Quick integration with any sensor/hardware over Serial/UART
  • Collect data from your sensor/hardware directly to any platform running Python (PC/Laptop/Raspberry Pi etc)
  • Import the collected data straight into the Imagimob Studio desktop client for verification
  • Import data collected using other tools, supporting timeseries CSV and most WAVE audio formats
  • Support for collecting and viewing high-dimensional data at high frequencies (>50 kHz)
  • Auto-scan through your data to find dimensionality, frequency or other data inconsistencies 
  • Visualize all data on a timeline for easy verification, analysis and annotation
  • Annotate any timeseries data easily by creating, dragging, copying, pasting and resizing labels
  • Auto-annotate data using our provided annotation scripts
  • Top to bottom workflow verifying each part of the data collection and annotation process

Manage, analyze and process your data

Once data is collected, it must be presented to the model in the right way for the model to "learn", i.e. tune its trainable parameters to match the training data - without overfitting, so that it also performs well on data it has never encountered before.

Imagimob Studio has a well-designed, easy-to-use workflow for sorting all collected data into different datasets, getting you lined up for model building and training. Are you an experienced user? Then you might wonder whether you can control exactly which sample goes to which dataset. Well, of course you can!
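As a rough illustration of what this means in practice (a generic sketch - the session names, split ratios and override mapping are examples, not Studio's internal format), a shuffled, reproducible train/validation/test split with per-sample overrides can be expressed like this:

    # Generic sketch of dataset management: a reproducible shuffled split into
    # train/validation/test sets, with specific samples pinned to a chosen set.
    import random

    sessions = [f"session_{i:03d}.csv" for i in range(100)]   # collected recordings (example names)
    overrides = {"session_007.csv": "test"}                   # pin individual samples to a set

    random.seed(42)                                           # make the shuffle reproducible
    random.shuffle(sessions)

    split = {"train": [], "validation": [], "test": []}
    for i, name in enumerate(sessions):
        if name in overrides:
            split[overrides[name]].append(name)
        elif i < 0.7 * len(sessions):
            split["train"].append(name)
        elif i < 0.85 * len(sessions):
            split["validation"].append(name)
        else:
            split["test"].append(name)

    for set_name, files in split.items():
        print(f"{set_name}: {len(files)} sessions")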

Avoid costly data mistakes

Even the most experienced machine learning engineers make costly mistakes when preparing data for model building. These errors include mislabeled data, inconsistent data frequency, or mixing data of different dimensionality - such as stereo and mono audio files when building a sound event detector. These mistakes can cause really poor model performance and take days to detect and fix. In our workflow these errors are instantly located and easily tracked down.
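The kind of scan involved is easy to picture (a generic sketch - the folder layout is an example, and Studio's own checks cover more than audio): flag any file whose sample rate or channel count differs from the rest of the dataset.

    # Generic sketch of a consistency scan: find WAV files whose sample rate or
    # channel count differs from the majority (e.g. a stereo file in a mono dataset).
    import wave
    from collections import Counter
    from pathlib import Path

    properties = {}
    for path in Path("dataset").glob("*.wav"):               # example folder layout
        with wave.open(str(path), "rb") as wav:
            properties[path.name] = (wav.getframerate(), wav.getnchannels())

    if properties:
        majority = Counter(properties.values()).most_common(1)[0][0]
        for name, (rate, channels) in properties.items():
            if (rate, channels) != majority:
                print(f"Inconsistent file {name}: {rate} Hz, {channels} channel(s); "
                      f"expected {majority[0]} Hz, {majority[1]} channel(s)")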

Functionality includes

  • Get your data ready for model building without needing prior ML knowledge
  • Manage your training, validation and test sets in an easy-to-use UI
  • Automatically shuffle your data according to best practice using predefined metrics and split settings
  • Track the data distribution of the symbols/events you want your model to learn
  • Set weights to prioritize learning important symbols/events
  • Assign individual samples into any set if needed (advanced users)
  • No limit on dataset size (limits are only imposed during training, depending on your subscription tier)

“The visualization of the results is great to understand what performance/accuracy we have got for individual labels.”

Anurkash, Software Engineer

“I really like this, now I have much more information on what is going on with my models”

Timotej, Software Engineer at Inovasense

“Training of AI models is faster than all other tools we have tried.”

User of Imagimob Studio, Software Engineer

Build and train great models

A great machine learning engineer knows the importance of running quick experiments: training different models, evaluating the results, adjusting accordingly and doing it again. This used to be a tedious process involving a lot of intuition gained from thousands of previous experiments, plus the privilege of having access to massive, raw compute power. The winners are those with access to the best engineers and the most compute power.

With the AutoML functionality of Imagimob Studio you get the intuition of our experienced ML engineers built in. It automatically generates high-performance AI models tailored to your data, and already at this stage, before deployment, these models are optimized for speed and a low footprint. And by the way, the generated models are fully transparent - you can view, edit and delete them however you like.

The importance of insane training speeds…

Once your models are generated, training is started in our cloud at the click of a button. At this stage you want the results back as quickly as possible, so that you can evaluate them and adjust accordingly.

This is achieved by training your models on high-performance training hardware, but there is one thing that really sets us apart - Imagimob Studio always trains several of your models in parallel, at least four at once, giving you up to four times the throughput of training one model at a time.

What you get

  • AutoML generates high-performance models according to your data
  • Fully transparent deep learning models which you can edit, copy, delete and export
  • No setup - just log in with your account and start training models in our Cloud
  • Very high training speeds - train up to four models in parallel
  • Security - your training jobs run in containers protected from other users and your data is automatically deleted after 14 days
  • Import your own AI models from TensorFlow if you want

Evaluate and find the best model, before deploying

Normally, when building AI models for embedded devices you would have to deploy them on the device so that live testing can be performed. This is time-consuming and frustrating, especially early in a project when new data is collected and new models are built at a rapid pace.

Imagimob Studio solves this problem by outputting the model predictions on the same timeline as your data. This lets you see exactly how the AI model interprets all your data, in real time - you can even play it back. This gives you a clear picture of how the AI model will perform live in the field before you even deploy it, which means you can postpone deployment and live tests until you have a good model, saving a lot of time.
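Once predictions are lined up with your annotations, the standard metrics follow directly. As a minimal illustration (the label arrays below are made up), accuracy, false-positive rate and F1 for a single event class can be computed like this:

    # Minimal sketch of offline evaluation: compare per-window predictions against
    # annotated ground truth for one event class. The arrays are illustrative.
    y_true = ["idle", "fall", "idle", "idle", "fall", "idle", "idle", "fall"]
    y_pred = ["idle", "fall", "idle", "fall", "fall", "idle", "idle", "idle"]

    positive = "fall"
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn

    accuracy = (tp + tn) / len(y_true)
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

    print(f"accuracy={accuracy:.2f}  FPR={false_positive_rate:.2f}  F1={f1:.2f}")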

Evaluation functionality includes

  • Understand model performance without any required AI/ML knowledge
  • Track model predictions in real time on all your data
  • Play back your data and model predictions to see how the model will perform live
  • Visualize model performance metrics (model size, accuracy, false-positive-rate, F1 score and more)
  • Measure the delay between input and prediction

Package your AI models at the click of a button

Packaging an AI model for an embedded device can take months. A firmware engineer needs to port the model to an Edge AI framework, and most of the time only the model - not the data preprocessing - is translated automatically. Once the model is translated it still requires testing and optimization, which calls for expert knowledge.

With Imagimob Studio and Imagimob Edge this step is done at the click of a button - in a matter of seconds the AI model is optimized, verified and packaged.

A simple API

When you click "Build Edge", a simple API is generated and the model is ready to be deployed on the embedded device. Imagimob models are self-contained, highly optimized C code and can therefore be deployed on almost any platform in the world.
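To picture the streaming pattern behind such an API (a Python sketch for desktop testing via ctypes - the library name, the init/enqueue/dequeue function names and the return convention are placeholders and assumptions, not the names in the generated header), the calls fit together like this:

    # Hypothetical sketch of an init/enqueue/dequeue streaming-model API, called
    # from Python through ctypes for desktop testing. "model.so" and the function
    # names are placeholders - use whatever your generated C header declares.
    import ctypes

    lib = ctypes.CDLL("./model.so")                  # generated C code built as a shared library
    lib.model_enqueue.argtypes = [ctypes.POINTER(ctypes.c_float)]
    lib.model_dequeue.argtypes = [ctypes.POINTER(ctypes.c_float)]

    lib.model_init()                                 # 1. initialize the model once

    sample = (ctypes.c_float * 3)(0.1, -0.2, 9.8)    # one accelerometer sample (x, y, z)
    lib.model_enqueue(sample)                        # 2. feed every new sensor sample

    scores = (ctypes.c_float * 4)()                  # one score per output class (example size)
    if lib.model_dequeue(scores) == 0:               # 3. poll for a prediction (assumed return code)
        print([round(s, 3) for s in scores])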

The optimization and packaging functionality includes

  • Convert trained AI models into C code at the click of a button
  • Deploy models on any platform which can run C code (embedded platforms, PCs, Android, iOS, Raspberry Pi, etc…)
  • Easy to use API (literally just three function calls needed)
  • Very low memory footprint
  • No dynamic memory allocations


We designed Imagimob Studio to fit your Edge product

Imagimob Studio is built by a team of engineers, creators and researchers with one goal in mind: helping you create the best possible AI applications for small devices. That is why we have built an end-to-end solution, optimized for edge devices all the way from data collection, through model building and verification, to final deployment.

Go from data to deployed model in days

We clocked it. Without Imagimob Studio it took 10 weeks to collect a dataset and build and deploy an AI model on an edge device. With Imagimob Studio it took one week to reach the same level of performance.

Learn the details about Imagimob Studio in this video.