AI Research and AI Safety

23 March 2018

The term AI research is synonymous with advanced technology and carries the promise of making our future a better place. This is true, and the potential we see is enormous. However, AI may be the most powerful technology mankind has ever embarked upon developing, so producing cutting-edge AI guided by long-term goals and an ethical framework is crucial.

My name is Johan Malm and I have just joined the talented AI team at Imagimob, where I am fortunate to take part in advanced AI research. I have always been fascinated by the way nature works, so becoming a physicist was a natural choice. I loved the beauty of exact mathematical solutions to physical problems. At the end of my studies, however, reality hit me: due to nonlinear effects, multiphysics and complex geometrical constraints, very few real-world physical problems can be solved without the aid of computers. Solving these problems using numerical methods and computing power became my new mission, and I received my Ph.D. in numerical turbulence physics in 2011.

Still, the problems that can be formulated as partial differential equations are relatively few in a complex world where psychology, economics and biology interact with known deterministic laws on small and large scales. In these cases, a model cannot be created from first principles; it must instead be given the ability to learn from experience. This has opened up a new way of programming and solving real-world problems, with mind-blowing results that have often arrived many years earlier than anticipated. Over the past years I have been combining this data-driven approach with a more traditional dynamical-systems way of formulating problems, and I see this as a powerful combination for future AI systems.
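As a toy illustration of what combining a dynamical-systems formulation with learning from data can look like, consider recovering the unknown coefficients of a damped oscillator from a sampled trajectory. This is a minimal sketch of my own; the system, parameters and regression setup are purely illustrative, not Imagimob code or methods:

```python
import numpy as np

# Illustrative toy problem: recover the dynamics of a damped oscillator
#   x'' = -k*x - c*x'
# from sampled data, by least-squares regression on finite-difference
# derivatives. The "true" system generates the data; the fit only sees
# the samples, i.e. it learns the model from experience.

k_true, c_true = 2.0, 0.3
dt, n = 0.001, 20000

# Simulate the data (explicit Euler integration of the true system).
x, v = 1.0, 0.0
xs, vs = [], []
for _ in range(n):
    xs.append(x)
    vs.append(v)
    a = -k_true * x - c_true * v
    x, v = x + dt * v, v + dt * a
xs, vs = np.array(xs), np.array(vs)

# Estimate accelerations by finite differences, then fit a = -k*x - c*v.
acc = np.diff(vs) / dt
features = np.stack([-xs[:-1], -vs[:-1]], axis=1)  # columns: -x, -x'
(k_est, c_est), *_ = np.linalg.lstsq(features, acc, rcond=None)
print(round(float(k_est), 2), round(float(c_est), 2))  # ≈ 2.0 0.3
```

The dynamical-systems side supplies the structure of the model (a second-order linear ODE); the data-driven side supplies the coefficients. In real applications the candidate terms are richer and the data noisier, but the division of labor is the same.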

AI Safety is not Science Fiction anymore

As philosophers, researchers and inventors such as Ray Kurzweil, Max Tegmark, Nick Bostrom and Elon Musk have claimed, AI has great potential but is far from risk-free. Through the creation of the Future of Life Institute (FLI), the AI safety conference in Puerto Rico in 2015 and the follow-up conference Beneficial AI 2017, Max Tegmark and an amazing group of the world's leading AI researchers have made great efforts to raise awareness of the challenges we will face if/when we reach human-level Artificial General Intelligence. As Tegmark notes in his book Life 3.0, it is now harder to claim that people worried about AI safety don't know what they are talking about, since that would imply that the vast majority of the world's leading AI researchers don't know what they are talking about either.

Creating Beneficial Intelligence

Here at Imagimob we are developing advanced new algorithms that enable machines to make decisions on their own, without being connected to any network. We have high ambitions, and we don't yet know where this technology will take us. All we know is that we have seen astonishing results so far. Being part of the AI research community gives me the opportunity to develop fantastic tools, create amazing opportunities and work for a bright future. To do so, it is crucial that our goal with AI is not to create undirected intelligence, but beneficial intelligence. This is an active choice and does not happen by itself. Therefore, I have chosen to sign the 23 Asilomar AI Principles.

Johan Malm, Ph.D., specialist in numerical analysis, computational physics and algorithm development