We live in a world of data. Virtually everyone talks about data and the potential value we can extract from it. Massive amounts of raw data are complex and hard to interpret, and over the past few years machine learning techniques have made it possible to better understand this data and leverage it to our benefit. So far, most of that value has been realized by online businesses, but it is now starting to spread to the physical world, where the data is generated by sensors. For many, however, the path from sensor data to an embedded AI model seems almost insurmountable.
Writing embedded software is notoriously time-consuming, and is known to take at least 10-20 times longer than desktop software development. It doesn't have to be that way. Here, we'll walk you through a real AI project—from data collection to embedded application—using our efficient, time-saving method.
Machine Learning on the Edge
Today, the vast majority of software for processing and interpreting sensor data is based on traditional methods: transformation, filtering, statistical analysis, etc. These methods are designed by a human who, drawing on their personal domain knowledge, is looking for some kind of “fingerprint” in the data. Quite often, this fingerprint is a complex combination of events in the data, and machine learning is needed to detect it reliably.
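To make the contrast concrete, here is a minimal sketch (using NumPy; this is an illustration, not code from the project described below) of what such a hand-designed "fingerprint" typically looks like: a filter followed by a few summary statistics that a domain expert has chosen by hand.

```python
import numpy as np

def handcrafted_features(signal, window=50):
    """Traditional sensor-data processing: filtering plus simple
    statistics, the kind of 'fingerprint' a human expert designs."""
    # Smooth the signal with a moving-average (low-pass) filter
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="valid")
    # Summarize the filtered signal with hand-picked statistics
    return {
        "mean": float(np.mean(smoothed)),
        "std": float(np.std(smoothed)),
        "peak_to_peak": float(np.ptp(smoothed)),
        "zero_crossings": int(np.sum(np.diff(np.sign(smoothed)) != 0)),
    }

# Example: a noisy sine wave standing in for a raw sensor stream
t = np.linspace(0, 2 * np.pi, 500)
sig = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
features = handcrafted_features(sig)
```

Rules written over features like these work well for simple patterns, but when the relevant "event" is a complex combination of such signals over time, hand-tuning thresholds becomes impractical, and a learned model takes over that role.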
To be able to process sensor data in real time, the machine learning model needs to run locally on the chip, close to the sensor itself—usually called “the edge.” Here, we will explain how a machine learning application can be created, from the initial data collection phase to the final embedded application. As an example, we will look at a project we at Imagimob carried out together with the radar manufacturer Acconeer.
Embedded AI project: Gesture Recognition