Though not yet a common sight, autonomous vehicles are on the streets, bringing the advantages of artificial intelligence (AI) to driving. The goal is to make driving decisions free of the human tendencies toward distraction and impairment. These vehicles must also make life-or-death decisions when an unexpected event occurs. On what basis are those critical decisions made? Can they be programmed into a software-driven car or truck?
Autonomous Vehicles Making Life-or-Death Decisions
Can moral decisions be programmed? If a self-driving car judges that a potentially fatal accident is imminent, does it choose to sacrifice its passengers or the pedestrians? Does the number in each group matter: two passengers or five pedestrians? What if the passengers include a child and the pedestrians are a group of elderly citizens? Should the car endanger its passengers to avoid hitting an animal? These and millions of other difficult scenarios have been discussed for years. There is even a website, the Moral Machine, created by the MIT Media Lab, where anyone is invited to make judgments in such situations. So far, people in over 200 countries and territories have contributed.
What does the data show?
The Moral Machine has yielded interesting data about the decisions people say they would make. In general, there is a consensus to save children over adults, yet in Far Eastern countries respondents tend to save the elderly first. Different regions of the world, Western, Eastern, and Southern, reach different conclusions, especially in complex situations.
Nicholas Evans, philosophy professor at the University of Massachusetts, writes, “You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counter-intuitive happens. When there’s a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road. People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose and be programmed to do so.” What people say in surveys and what they would want to happen if they, or their loved ones, were actually involved can differ greatly.
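To make Evans’s point concrete, here is a minimal sketch of a purely utilitarian decision rule, assuming the planner simply minimizes expected deaths. Every class, name, and number below is a hypothetical illustration, not any real vehicle’s decision logic:

```python
# Hypothetical sketch of a purely utilitarian collision-choice rule.
# All classes, numbers, and scenarios are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_deaths: float  # estimated fatalities if this option is chosen

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the option with the fewest expected deaths."""
    return min(outcomes, key=lambda o: o.expected_deaths)

# Evans's example: a two-person car versus you alone in your own car.
options = [
    Outcome("swerve into the two-person car", expected_deaths=2.0),
    Outcome("run your own single-occupant car off the road", expected_deaths=1.0),
]

print(choose(options).description)
# -> "run your own single-occupant car off the road"
# The purely utilitarian rule sacrifices the vehicle's own occupant.
```

The counter-intuitive result falls straight out of the arithmetic: one expected death is fewer than two, so the rule runs its own passenger off the road, which is exactly why buyers balk.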
Though it is extremely difficult to program for so many scenarios, the AI must include some form of moral programming for autonomous vehicles to be accepted, especially since there is no global consensus on the morality of any given situation. And life-or-death choices are only part of the workload: the vehicle is simultaneously computing routes, traffic, obstacles, speed, vehicle condition, and countless other parameters, as the sketch below illustrates.
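In practice, any moral rule would likely be one heavily weighted term in a larger trajectory-scoring function alongside route, traffic, and vehicle-condition terms. The sketch below is purely illustrative; the terms, field names, and weights are assumptions, not a real planner:

```python
# Illustrative only: ethical cost as one weighted term among many
# in a trajectory score. All terms and weights are hypothetical.
def trajectory_cost(traj):
    return (
        100.0 * traj["expected_harm"]     # moral/safety term dominates
        + 1.0 * traj["route_deviation"]   # staying on the planned route
        + 0.5 * traj["traffic_delay"]     # congestion along the path
        + 0.2 * traj["wear_on_vehicle"]   # vehicle condition
    )

candidates = [
    {"expected_harm": 0.0, "route_deviation": 3.0,
     "traffic_delay": 2.0, "wear_on_vehicle": 1.0},
    {"expected_harm": 0.4, "route_deviation": 0.0,
     "traffic_delay": 0.0, "wear_on_vehicle": 0.0},
]
best = min(candidates, key=trajectory_cost)
print(best)  # the harm-free detour wins despite its extra delay and wear
```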
What is next?
The ultimate goal is to reduce accidents dramatically. That will not happen until almost all vehicles are controlled by AI and can interact with one another. Even then, anyone who owns anything electronic knows there can be bugs and glitches, service outages, hackers, and the unknown.
Once an accident occurs, liability will have to be decided. These computer-driven vehicles will be equipped with what are essentially “black boxes” that record roughly the previous 30 seconds of data. That information will make it easier to reconstruct what occurred, but who is to blame? Which stage of the vehicle’s development created the accident: the software developer, the vehicle manufacturer, the communication provider, or one of the many other vendors supplying parts?
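As a rough sketch of how such a recorder might work, the snippet below keeps only the most recent 30 seconds of telemetry in a fixed-size ring buffer. The sample rate, field names, and functions are assumptions for illustration, not a real event-data-recorder specification:

```python
# Hypothetical event-data-recorder sketch: retain only the most recent
# ~30 seconds of samples in a fixed-size ring buffer.
from collections import deque

SAMPLE_RATE_HZ = 10   # assumed logging rate
WINDOW_SECONDS = 30   # retention window mentioned in the article
BUFFER = deque(maxlen=SAMPLE_RATE_HZ * WINDOW_SECONDS)

def record_sample(timestamp, speed_mps, steering_deg, brake_pct):
    """Append one telemetry sample; the oldest falls off automatically."""
    BUFFER.append({
        "t": timestamp,
        "speed_mps": speed_mps,
        "steering_deg": steering_deg,
        "brake_pct": brake_pct,
    })

def dump_on_crash():
    """On impact, freeze and return the last ~30 seconds for investigators."""
    return list(BUFFER)

record_sample(0.0, speed_mps=27.0, steering_deg=-2.0, brake_pct=0.0)
print(len(dump_on_crash()))  # 1 sample so far; the buffer caps at 300
```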
Much is still unknown, as this is a huge change in transportation globally. As more of these vehicles are introduced onto our streets and highways, the data collected will lead to more answers, and probably to more questions as well.