Without speculating on how and why the AI systems in the Uber self-driving car failed, it is no understatement to say that the self-driving system did not do what was intended, considering the tragic outcome. Some commentators have argued, referring to the released dashcam video, that the crash could not have been avoided even by an attentive human driver. Those commentators are misled by the fact that the dashcam has a very narrow dynamic range, which makes it appear that the cyclist comes out of the shadows only metres before the impact. In reality the human eye has a high dynamic range, especially in low-light conditions, and an attentive driver would have seen the cyclist several seconds before the impact.
Now the investigators and Uber face the tedious work of understanding how and why the AI failed to detect the cyclist. Was it because the AI system learned to detect pedestrians from the way humans walk, and could not recognize a human walking with a bike? Many speculations like this have appeared on the internet, and hopefully the underlying reasons can be clearly understood. The risk is of course that the real reasons are hidden inside the AI black box, and that the inner workings of decision-making neural networks cannot be easily understood by humans.
How can another fatal crash like Uber's be avoided?
Validating systems that make decisions which are not easily understood is a huge disadvantage, especially in safety-critical applications. Validating software is of course not new; measures to secure the execution and decision making of software have long been practised. In classic programming, unit tests are used to check that code blocks work as intended, and metrics such as code coverage reports show how much of the code is covered by these tests.
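To make the classic approach concrete, here is a minimal unit-test sketch in Python. The braking_distance function and its numbers are hypothetical, chosen only to illustrate the pattern of testing a small, well-specified code block:

```python
import unittest


def braking_distance(speed_mps: float, deceleration_mps2: float = 7.0) -> float:
    """Distance (m) needed to stop from speed_mps at constant deceleration.

    Hypothetical helper used only to illustrate unit testing.
    """
    if speed_mps < 0 or deceleration_mps2 <= 0:
        raise ValueError("speed must be non-negative and deceleration positive")
    # Standard kinematics: d = v^2 / (2a)
    return speed_mps ** 2 / (2 * deceleration_mps2)


class BrakingDistanceTest(unittest.TestCase):
    def test_stationary_car_needs_no_distance(self):
        self.assertEqual(braking_distance(0.0), 0.0)

    def test_city_speed(self):
        # 14 m/s (roughly 50 km/h) at 7 m/s^2 stops in 14 m
        self.assertAlmostEqual(braking_distance(14.0), 14.0)

    def test_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            braking_distance(-1.0)
```

Running the file with python -m unittest executes the tests, and a tool such as coverage.py can then report how much of the code the tests actually exercise. The point is that each behaviour is pinned down by an explicit, human-readable check, which is exactly what is hard to do for an opaque learned model.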
How- and why-questions in AI are hard to answer, and this has opened up AI fields such as eXplainable AI (XAI) and AI unlearning. In XAI, decisions made by the AI system are regressable, and the reasons for a decision can be understood by the developers. In this post about fashion trends in AI we suggest that focus may soon shift from deep learning to methods that are regressable and easier to understand. The application of AI in safety-critical systems would greatly benefit from such a shift in focus.
At Imagimob we favour regressable algorithms (XAI), and not only because they are easier to debug when something goes wrong. Our main reason is that when training XAI models we have a chance to understand which information in the training data our systems actually use in decision making. This matters for many aspects of understanding the AI models, but our main use is that we can highly optimize the models and run them on really low-power hardware in different IoT projects.
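A hedged sketch of what "regressable" means in practice (the toy data and model here are invented for illustration, not Imagimob's actual method): a decision stump is a trivially interpretable model, because after training you can read off exactly which input feature it uses and at what threshold:

```python
def train_stump(X, y):
    """Pick the (feature, threshold) split that best separates the labels.

    Returns (errors, feature_index, threshold) -- the whole "model" is
    inspectable, so we can see which feature it actually relies on.
    """
    best = None
    n_features = len(X[0])
    for f in range(n_features):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            err = sum(p != label for p, label in zip(preds, y))
            err = min(err, len(y) - err)  # allow flipped polarity
            if best is None or err < best[0]:
                best = (err, f, t)
    return best


# Toy data: feature 0 is noise, feature 1 actually predicts the label.
X = [[0.9, 0.1], [0.2, 0.2], [0.7, 0.8], [0.1, 0.9]]
y = [0, 0, 1, 1]

errors, feature, threshold = train_stump(X, y)
print(f"model uses feature {feature} with threshold {threshold} ({errors} errors)")
```

Here the trained model can tell us directly that it ignores the noise feature and splits on the informative one. A deep network fitted to the same data would make the same predictions but offer no comparably simple account of which input it relied on, which is the debugging and optimization advantage the post argues for.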
If you are unfamiliar with the term XAI, Wikipedia has a great article on XAI that explains the problems of not understanding what a model has learned, using a movie-scoring example. Replace the movie-scoring scenario with a safety-critical application and it becomes evident that XAI should be favoured wherever wrong decisions can have serious consequences.