While it is true that our vehicles are getting smarter and autonomous vehicles are becoming a reality, human drivers retain a significant advantage: they can more easily adapt to new situations. For example, after driving on a road riddled with new potholes, a human driver will take steps to avoid that road the next time. A human driver can also learn to avoid behavior that causes unnecessary stress for other drivers, who would surely communicate their displeasure.
Theoretically, an autonomous vehicle could be programmed to react to and learn from such situations, just like a human. To do this, we could employ one of the many machine learning methods currently used in practice, such as artificial neural networks or support vector machines, which work by learning a model from training examples. Such methods are successfully applied to many tasks, like detecting objects in images or filtering spam. However, a major concern of vehicle manufacturers and government regulatory agencies may prevent their general use: they want to guarantee that vehicles behave in a predictable and safe manner. Yet some of the most effective machine learning methods currently at our disposal, ones that can learn very complex patterns, produce models we cannot fully understand. A vehicle's behavior would be based on such a model, so if we cannot understand the model, we also cannot fully predict whether the vehicle would avoid crashing into a wall.
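To make the idea of "learning a model from training examples" concrete, here is a minimal sketch of one of the simplest such learners, a perceptron, trained on a toy spam-detection task. The feature set and the data are entirely hypothetical and chosen only for illustration; real spam filters and real driving systems use far richer features and models.

```python
# Toy illustration of supervised learning: a perceptron classifier.
# All feature names and data below are hypothetical, for illustration only.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights from (feature vector, label) pairs; labels are +1/-1."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the model toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical features: [contains "free", number of links, known sender]
train_x = [[1, 3, 0], [1, 5, 0], [0, 0, 1], [0, 1, 1]]
train_y = [1, 1, -1, -1]  # +1 = spam, -1 = not spam
model = train_perceptron(train_x, train_y)
print(predict(model, [1, 4, 0]))  # classify an unseen message -> 1 (spam)
```

A perceptron's learned weights are easy to inspect, which is exactly the kind of transparency the text contrasts with complex models: the behavior of a large neural network cannot be read off its parameters in this way.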
The solution is either to use methods that produce more understandable models or to better understand the models produced by the more complex methods. Even this may not be the full solution, however. Due to liability concerns, manufacturers would most likely be unwilling to produce vehicles whose behavior changes over time. Yet without this ability to adapt, autonomous vehicles will never truly match human drivers.