With traditional automobiles known to be one of the most dangerous modes of transport, and their driverless equivalents inevitably carrying a degree of risk, the algorithms which determine their emergency-response behaviour should be of great interest to potential buyers. Naturally, a question arises: in life-versus-life scenarios, whom should driverless cars prioritise, and how is this decided? According to some, Bentham’s act utilitarianism may have the answer.
If driverless cars were programmed according to the utilitarian doctrine of ‘the greatest good for the greatest number’, the result would be the most lives saved, a prospect few would argue against. When faced with a choice between killing five pedestrians or diverting and killing one, a driverless car would therefore be programmed to divert. Yet this approach is itself problematic, especially for those who argue that the car’s passengers should take priority in critical circumstances. There is merit in the passenger-first stance: it would best replicate the instinct of a human driver (though replicating human instinct sits oddly with the very concept of a driverless car), and it is unlikely that consumers would be attracted to a product that did not prioritise their personal well-being.
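In purely algorithmic terms, the act-utilitarian rule described above is strikingly simple. The following is a minimal, purely illustrative Python sketch, assuming each candidate manoeuvre can be summarised by its expected fatalities; the names and data here are hypothetical and do not correspond to any real vehicle software.

```python
# Hypothetical act-utilitarian decision rule: pick the manoeuvre
# expected to cause the fewest deaths, ignoring all other factors.

def choose_manoeuvre(options):
    """Return the option with the lowest expected death toll."""
    return min(options, key=lambda option: option["expected_deaths"])

options = [
    {"name": "continue ahead", "expected_deaths": 5},
    {"name": "swerve", "expected_deaths": 1},
]

print(choose_manoeuvre(options)["name"])  # → swerve
```

The brevity of this sketch is precisely the point: a rule this crude takes no account of who the potential victims are, which is the difficulty the rest of this piece explores.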
A helpful way of understanding the ethical issues that could arise from driverless cars is MIT's Moral Machine platform. Users are presented with pairs of scenarios and must decide which outcome they find more "moral". For example, one must choose between the car running over two individuals or crashing into a construction site, killing both passengers. In another, the user must decide whether the car should divert and kill two elderly individuals crossing at a green pedestrian light, or continue ahead and kill two youths crossing at a red light.
As these examples demonstrate, not every scenario a vehicle would face is a clear-cut trolley problem (choosing between running over five people or diverting and running over one). There are therefore grounds to argue that pure act utilitarianism offers far too simplistic an approach. Even if driverless cars could distinguish between age groups in deciding whom to spare (and much of the public would rightly have reservations about that), a strict act-utilitarian calculus would still struggle to justify the outcome where, say, a delinquent youth is deliberately spared over a distinguished elderly doctor.
Who will decide how these vehicles are programmed to react? This decision is itself highly contentious, as ultimately ‘algorithms become policy’ (Johannes Himmelreich) and will determine who lives or dies. If the decision is left solely to government, consumers and their individual preferences would be wholly alienated from the behaviour of their vehicles, essentially leaving them with less control over their fate. Perhaps a better alternative would be for driverless-car manufacturers to offer different reaction programmes from which buyers can choose, allowing each consumer to decide which best suits them. Consumers would then be free to adopt the product that matches their views on how these scenarios should be handled, preserving individual responsibility. Nonetheless, this ‘choose your own setting’ approach could lead to mass confusion, since legislation would have to accommodate a multitude of vehicle settings, and the resulting unpredictability could make the general public hostile to driverless cars altogether.
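To make the ‘choose your own setting’ idea concrete, one can imagine a buyer selecting a named policy that the car then uses to score manoeuvres. The sketch below is entirely hypothetical, assuming just two invented policies and a toy scoring scheme; it is meant only to show how consumer choice could be expressed in software, not how any manufacturer actually implements it.

```python
# Hypothetical consumer-selectable reaction policies. Lower score wins.

POLICIES = {
    # Pure act utilitarianism: minimise total expected deaths.
    "utilitarian": lambda o: o["expected_deaths"],
    # Passenger-first: weight passenger deaths far more heavily.
    "passenger_first": lambda o: o["passenger_deaths"] * 100 + o["expected_deaths"],
}

def choose(options, policy):
    """Return the manoeuvre preferred under the buyer's chosen policy."""
    return min(options, key=POLICIES[policy])

options = [
    {"name": "swerve into barrier", "expected_deaths": 2, "passenger_deaths": 2},
    {"name": "continue ahead", "expected_deaths": 3, "passenger_deaths": 0},
]

print(choose(options, "utilitarian")["name"])      # → swerve into barrier
print(choose(options, "passenger_first")["name"])  # → continue ahead
```

The same crash yields different outcomes under different settings, which is exactly why regulators would face the patchwork of behaviours the paragraph above warns about.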
Overall, there is no definitive solution that will settle every question surrounding driverless vehicles. What is clear, however, is that the matter grows more pertinent by the day, and while pure utilitarianism does offer a blanket response, it does little to resolve the issues that arise in scenarios where numbers alone cannot decide.
Chiara Buttiglione is a research intern at the Adam Smith Institute.