Imagimob tinyML platform supports quantization of LSTM and other TensorFlow layers

Imagimob today announced that its tinyML platform Imagimob AI supports quantization of Long Short-Term Memory (LSTM) layers and a number of other TensorFlow layers. LSTM layers are well suited to classifying, processing and making predictions based on time-series data, and are therefore of great value when building tinyML applications. The Imagimob AI software with quantization was first shipped to a Fortune Global 500 customer in November and has been in production since then. Currently, few other machine learning frameworks or platforms support quantization of LSTM layers.

Imagimob AI takes a TensorFlow/Keras H5 file and, at the click of a button, converts it to a single quantized, self-contained C source file and its accompanying header file. No external runtime library is needed.
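The conversion itself is proprietary, but the general idea of emitting quantized weights as self-contained C source can be sketched in a few lines of Python (all names here are hypothetical and for illustration only):

```python
# Illustrative sketch only: quantizes a float weight array to signed
# integers and renders it as a C array definition. The real Imagimob AI
# converter is proprietary; this just shows the shape of the output.

def weights_to_c_array(name, weights, bits=8):
    """Render float weights as a quantized C array plus the scale
    needed to dequantize them at inference time."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero case
    q = [round(w / scale) for w in weights]
    body = ", ".join(str(v) for v in q)
    return (f"static const int{bits}_t {name}[{len(q)}] = {{{body}}};\n"
            f"static const float {name}_scale = {scale!r}f;")

print(weights_to_c_array("dense_w", [0.5, -1.27, 0.0, 1.0]))
```

Because the weights and scale are baked into the source, the generated file compiles with no external runtime dependency, matching the "self-contained C code" property described above.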

In tinyML applications, the main reason for quantization is that it reduces the memory footprint and lowers the performance requirements on the MCU. It also allows tinyML applications to run on MCUs without an FPU (floating-point unit), which means that customers can lower their device hardware costs.

Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. A quantized model executes some or all of its operations on tensors with integers rather than floating-point values. This allows for a more compact model representation and the use of high-performance vectorized operations on many hardware platforms. The technique is particularly useful at inference time, since it saves a great deal of computation without sacrificing much accuracy. In essence, it is the process of converting floating-point models into integer ones, reducing the numerical precision from 32 bits to 16 or 8 bits.
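As a minimal illustration of the idea (an assumed affine scheme, not necessarily the exact one Imagimob uses), 8-bit quantization maps a float range onto the integers 0–255 via a scale and a zero point:

```python
# Minimal sketch of 8-bit affine quantization: floats in [lo, hi] map to
# integers 0..255 via a scale and zero point, and back with small error.

def quant_params(lo, hi, bits=8):
    """Compute the scale and zero point covering the range [lo, hi]."""
    scale = (hi - lo) / (2 ** bits - 1)
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(x, scale, zp):
    return max(0, min(255, round(x / scale) + zp))

def dequantize(q, scale, zp):
    return (q - zp) * scale

scale, zp = quant_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)   # close to 0.5, up to a small rounding error
```

The rounding error per value is at most half the scale step, which is why a well-chosen range keeps accuracy loss small.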

Initial benchmarking of an AI model including LSTM layers, comparing a non-quantized and a quantized model running on an MCU without an FPU, shows that inference is around 6 times faster for the quantized model, and that RAM requirements are reduced by 50% when using a 16-bit integer representation.
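The 50% RAM figure follows directly from the storage format: a 16-bit integer takes half the bytes of a 32-bit float. A quick back-of-the-envelope check (toy scaling, for illustration only):

```python
# The same tensor stored as 16-bit integers takes half the bytes of
# 32-bit floats, which is where the 50% RAM reduction comes from.
from array import array

values = [0.1 * i for i in range(1000)]
as_f32 = array('f', values)                          # 32-bit floats
as_i16 = array('h', [int(v * 100) for v in values])  # 16-bit ints (toy scale)

f32_bytes = as_f32.itemsize * len(as_f32)   # 4 bytes per element
i16_bytes = as_i16.itemsize * len(as_i16)   # 2 bytes per element
```

The ~6x inference speedup is a separate effect: on an MCU without an FPU, floating-point arithmetic is emulated in software, so replacing it with native integer operations removes that emulation overhead.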

Further, the quantization algorithm is implemented with great care so that the error between the quantized and non-quantized neural network is kept to a minimum, meaning that argmax errors (misclassifications caused by the quantization) rarely happen. This involves solving a difficult optimization problem.
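An argmax error can be reproduced in miniature: rounding output logits to a coarse grid occasionally flips which class scores highest, typically on near-ties. A hedged sketch with toy values (not Imagimob's algorithm):

```python
# Toy demonstration of an argmax error: coarsely rounding logits can
# flip the predicted class, but only when two classes are nearly tied.

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

def quantize_round_trip(xs, scale=0.05):
    """Round each logit to the nearest multiple of `scale`."""
    return [round(x / scale) * scale for x in xs]

float_logits = [
    [0.10, 0.80, 0.10],   # clear winner: quantization cannot flip it
    [0.40, 0.35, 0.25],   # moderate margin: still safe at this scale
    [0.33, 0.34, 0.33],   # near-tie: the risky case
]
flips = sum(
    argmax(row) != argmax(quantize_round_trip(row))
    for row in float_logits
)
```

Only the near-tie row flips, which illustrates why minimizing the quantization error per layer keeps misclassifications rare in practice.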

Imagimob AI supported TensorFlow layers for quantization

- Batch Normalization (TensorFlow Class BatchNormalization)

- Convolution 1D (TensorFlow Class Conv1D)

- Dense (TensorFlow Class Dense)

- Dropout (TensorFlow Class Dropout)

- Flatten (TensorFlow Class Flatten)

- Stateless Long Short-Term Memory (TensorFlow Class LSTM)

- Max Pooling 1D (TensorFlow Class MaxPool1D)

- Reshape (TensorFlow Class Reshape)

- Time Distributed (TensorFlow Class TimeDistributed)

Imagimob AI supported TensorFlow activation functions (lookup tables)

- ReLU (TensorFlow Class ReLU)

- Tanh

More layers and activation functions are added continuously.
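A lookup-table activation, as listed above, precomputes the function over the entire quantized input range so the runtime only performs an array index instead of evaluating the function. A minimal sketch (the table size and fixed-point format are assumptions, not Imagimob's actual layout):

```python
# Sketch of a fixed-point tanh lookup table of the kind a quantized
# runtime might use. Table size and Q-format are illustrative choices.
import math

BITS = 8
SCALE = 2 ** (BITS - 1) - 1     # 127 represents ~1.0 in this format

# Precompute tanh over the full int8 input range, with the int8 input
# interpreted as a value in [-4, 4): by x = 3 tanh is nearly saturated.
TANH_LUT = [
    round(math.tanh(4.0 * i / 128) * SCALE)
    for i in range(-128, 128)
]

def tanh_q8(x):
    """Approximate tanh for an int8 input x, via a single table lookup."""
    return TANH_LUT[x + 128]
```

At 256 entries of one byte each, the table costs 256 bytes of flash and turns an expensive transcendental function into a constant-time lookup, which is well suited to FPU-less MCUs.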

Imagimob AI software with quantization is available for evaluation. Please send an email to and we will get back to you.

