Researchers and engineers specialising in AI and machine learning tend to see themselves as above the influence of fashion trends. We like to consider ourselves smart and immune to the coolness and magic of the methods we use. Yet the most popular "trendy" approaches seem to be the algorithms that resemble how our brain functions. Artificial neural networks (ANNs) are kind of magic.
The hype for these types of algorithms was recently manifested by the success of AlphaGo when it beat the world champion in the board game Go using deep neural networks combined with Monte Carlo tree search.
Very few people understand what is happening inside these networks, which of course boosts the egos of the scientists who grasp the complex networks they build. Analysing these complex networks is hard. Understanding how and why a classification happens is difficult. Architecting the networks and tuning their hyperparameters becomes, to some degree, a random walk of trial and error.
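That trial-and-error process is often formalised as random search over the hyperparameter space. The sketch below illustrates the idea with a dummy scoring function standing in for actual model training; the `train_and_evaluate` function, the parameter names, and their ranges are all hypothetical choices for illustration, not a reference to any particular system.

```python
import random

def train_and_evaluate(learning_rate, num_layers, hidden_units):
    # Hypothetical stand-in for training a network and returning a
    # validation score; a real version would train an actual model.
    # This dummy score merely rewards mid-range values, for illustration.
    return 1.0 - abs(learning_rate - 0.01) - abs(num_layers - 3) * 0.01

random.seed(0)  # make the "random walk" reproducible
best_score, best_config = float("-inf"), None

for _ in range(20):  # 20 random trials
    config = {
        "learning_rate": 10 ** random.uniform(-4, -1),  # log-uniform sample
        "num_layers": random.randint(1, 6),
        "hidden_units": random.choice([64, 128, 256, 512]),
    }
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config)
```

Even this crude loop makes the point: without insight into why a configuration works, the search is driven by sampling and luck rather than understanding.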