Are there fashion trends in AI?

1 March 2018

Researchers and engineers specialised in AI and machine learning tend to see themselves as above the influence of fashion trends. We like to consider ourselves smart and immune to the coolness and magic of the methods we use. Yet the most popular, “trendy” approaches seem to be the algorithms that resemble how our brains function. Artificial neural networks (ANNs) are kind of magic.

The hype for these types of algorithms was recently manifested by the success of AlphaGo, which beat the world champion in the board game Go using deep neural networks combined with Monte Carlo tree search.

Very few people understand what is happening inside these networks, which of course boosts the egos of the scientists who do grasp the complex networks they build. Analysing these networks is hard, and understanding how and why a given classification happens is difficult. Architecting the networks and tuning their hyperparameters becomes, to some degree, a random walk of trial and error.
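
To make that “random walk” concrete, here is a minimal sketch of random hyperparameter search in Python. The search space and the placeholder evaluate function (standing in for an expensive training run) are purely illustrative assumptions, not part of any real system:

```python
import random

random.seed(0)

# Illustrative search space; the knobs and ranges are assumptions.
search_space = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.5],
}

def evaluate(config):
    # Placeholder for an expensive training run that returns a
    # validation score; a random number stands in here.
    return random.random()

best_score, best_config = -1.0, None
for _ in range(20):  # 20 random trials, each a step of the "walk"
    config = {key: random.choice(values) for key, values in search_space.items()}
    score = evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```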

In recent years, decision-tree-based systems (e.g. random forest and the deep-forest method gcForest) have gained momentum. They are not as magical as neural networks, but they are much easier to architect, tune, and grasp, and they are less computationally heavy. Current benchmarks across different classification tasks suggest that decision-tree-based solutions still lag slightly behind ANNs. However, if/when the “fashion trend” tips over and more research is put into decision-tree-based models, what will happen?
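
As a hedged illustration of the “easier to tune” point, here is a minimal sketch using scikit-learn (a library chosen for this example, not mentioned in the post): a random forest reaches reasonable accuracy with near-default settings, whereas a neural network would typically require choices of architecture, learning rate, and regularisation.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Small built-in dataset of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Essentially one knob to reason about: the number of trees.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```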

The power consumed by AlphaGo is approximately 1 MW, which corresponds to 63 Tesla Model X cars roaring down the highway simultaneously. The battery packs powering those Teslas weigh almost 40 000 kg in total, not something you are likely to carry around in a backpack. The cost of electricity alone makes such a system unreasonable for most applications. Extrapolating the exponential growth of computational efficiency using Moore’s law, we will end up with a system that can beat the best humans at board games, as AlphaGo does, running on a normal laptop around the year 2046.
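
The 2046 figure can be checked with a back-of-envelope calculation. The inputs below are assumptions (roughly 1 MW for AlphaGo in 2016, roughly 30 W for a laptop, and a doubling of compute efficiency every two years), not exact measurements:

```python
import math

alphago_watts = 1_000_000    # assumed ~1 MW power draw (2016)
laptop_watts = 30            # assumed typical laptop power budget
doubling_period_years = 2    # assumed Moore's-law doubling cadence

ratio = alphago_watts / laptop_watts       # ~33,000x efficiency gap
doublings = math.log2(ratio)               # ~15 doublings needed
years = doublings * doubling_period_years  # ~30 years
print(f"{doublings:.1f} doublings -> {years:.0f} years -> ~{2016 + years:.0f}")
# 15.0 doublings -> 30 years -> ~2046
```

Under those assumptions the efficiency gap closes after about 15 doublings, which is what lands the extrapolation around 2046.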

According to this blog post by Tim Urban, the median estimate (from futurists) of when the singularity will occur is around the year 2050. AlphaGo is of course still very far from AGI (Artificial General Intelligence). The conclusion to draw is that it is not enough to come up with AI models that are better at solving problems; the computational complexity of AI also needs to follow Moore’s law. Research on making AI computationally efficient will need to receive more attention. If it does not, the power of AI will partly remain an exotic exploratory technology showing off the muscles of big companies like Google and DeepMind. Getting computational complexity down will be the key enabler in the coming years.

At Imagimob we are leading the way in Edge AI, enabling AI to run at the edge on low-end hardware (e.g. Arm Cortex-M0 architectures in embedded devices). It is our firm belief that we will see huge growth in commercial AI systems deployed in embedded systems, hand in hand with the emerging IoT market.

If you want to find out more about Edge AI and our SensorBeat AI solutions, please don’t hesitate to get in touch with us.