Benchmarking Imagimob AI Against Deep Learning Reveals Impressive Results

29 January 2019

Press Release

Imagimob AI delivers the same high level of accuracy as a leading deep learning model while requiring significantly less memory and far fewer instructions per time series classification.

Stockholm, January 29, 2019—Imagimob announces today the completion of a benchmark study comparing Imagimob AI technology against a state-of-the-art deep learning network for time series classification. While the results revealed a similar level of accuracy between the two systems, Imagimob AI proved to be significantly more resource-efficient in terms of RAM usage and CPU operations.
 
The aim of the study was to benchmark Imagimob’s machine learning software, Imagimob AI, against one leading deep learning model for time series classification. To find the optimal deep learning model for the study, a wide range of architectures and hyperparameters were tested. The case chosen for the study was sourced from a real customer project, where the aim was to detect two specific activities hidden inside a long sequence of motion data. The dataset consisted of accelerometer and gyro data from a microelectromechanical systems (MEMS) inertial measurement unit (IMU).
 
The results showed that the Imagimob AI system was 2% more accurate than the deep learning model, a difference that, while minor, is impressive considering the drastically different amounts of computational resources used by the two approaches. Compared to Imagimob AI, the deep learning model used 33 times more RAM and required 800 times more instructions to make a single classification.
 
Given the memory and CPU restrictions on an ARM M0 MCU, with 32 kB of RAM and 45 million instructions per second (MIPS), the deep learning model uses approximately 10 times too much memory to fit on the smallest microprocessors on the market today. Even if the model did fit, it would take approximately 8 seconds to complete a single classification through the network.
 
The Imagimob AI system uses around 10% of the available memory on an ARM M0 and requires only 10 milliseconds per classification. This efficiency is what enables Imagimob AI to run on small devices while still delivering a high standard of performance and accuracy.
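
For readers who want to check the arithmetic behind these figures, the short Python sketch below works through the numbers quoted in this release (32 kB of RAM, 45 MIPS, 10% memory usage, 10 milliseconds, and the 33x and 800x ratios). The constants come from the text above; the variable names and the assumption that both models run on the same 45 MIPS budget are illustrative and not taken from the white paper.

# Back-of-envelope check of the figures quoted above. Assumes an ARM M0-class
# MCU with 32 kB of RAM and a throughput of 45 million instructions per second.
MCU_RAM_BYTES = 32 * 1024      # 32 kB of RAM on the target MCU
MCU_MIPS = 45_000_000          # 45 million instructions per second

# Imagimob AI: roughly 10% of available RAM and 10 ms per classification.
imagimob_ram_bytes = 0.10 * MCU_RAM_BYTES           # ~3.2 kB
imagimob_time_s = 0.010                             # 10 milliseconds
imagimob_instructions = imagimob_time_s * MCU_MIPS  # ~450,000 instructions

# Deep learning model: 33x the RAM and 800x the instructions per classification.
dl_ram_bytes = 33 * imagimob_ram_bytes              # ~105 kB, far above the 32 kB budget
dl_instructions = 800 * imagimob_instructions       # ~360 million instructions
dl_time_s = dl_instructions / MCU_MIPS              # ~8 seconds per classification

print(f"Imagimob AI:   {imagimob_ram_bytes / 1024:.1f} kB RAM, {imagimob_time_s * 1000:.0f} ms per classification")
print(f"Deep learning: {dl_ram_bytes / 1024:.1f} kB RAM, {dl_time_s:.1f} s per classification")

Run as-is, the sketch reproduces the roughly 8 seconds per classification stated above for the deep learning model.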
 
Details on Imagimob's benchmarking study can be found in a newly released white paper, which is now available for download here.

About Imagimob
Imagimob is a global leader in artificial intelligence products for edge devices. Based in Stockholm, Sweden, the company has been serving customers within the automotive, manufacturing, healthcare and lifestyle industries since 2013. The experienced and visionary team that makes up Imagimob is tirelessly dedicated to staying on top of the latest research, thinking new, and thinking big.

Contact
Anders Hardebring
CEO and Co-Founder
Mobile: +46 70 591 0614
email: anders@imagimob.com