One of my personal favorite movies of recent times is ‘The Imitation Game.’ Among the many questions it raises, there is one that can in a way be mapped onto our current discussion of the ethical dilemmas autonomous systems face.
After the historic breaking of the Enigma, Alan Turing and his team decide to design an algorithm that determines which vessels to save and which not to, based on a preset priority list. This was to ensure that the Nazis did not learn that the code had been broken and therefore did not try to change it. Whatever the historical discrepancies in this retelling of the event that changed the course of the war, the question remains: who, if anyone at all, decides whom to save and whom not to?
Maybe we’ve rushed through the points a tad too quickly. Let’s go at a more measured pace. Messages encrypted with the Enigma contained, among other things, information about which ships would be attacked during the war. Believing the cipher to be unbreakable, the Nazis used it openly for all military communication. Once the code was broken, the military chose to save only a select few ships from potential harm after intercepting and decoding Enigma traffic, in order to keep the decoding a secret. One can justify the choice here by saying that wartime requires sacrifices.
But the same question poses one of the major hurdles autonomous cars must clear before becoming a common sight on roads. A TED-Ed video fast-forwards you a few decades to a thought experiment designed to illustrate exactly this.
Suppose you are cruising down the highway in a self-driving car when suddenly the contents of a truck in front of you spill onto the road, leaving the car too little time to come to a halt. On one side of the car there’s a motorcyclist and on the other an SUV. Now what does the car do? Swerve into the motorcyclist to ensure your safety? Stay the course and hit the truck, minimizing damage to others at the cost of your life? Or take the middle ground by hitting the SUV, which carries a lower probability of loss of life?
Now, if such a situation arose on today’s roads, any reaction would be considered just that: an impulsive response to a situation. A reaction. But in the case of an autonomous car, it no longer remains a reaction; it turns into a decision. A programmer who gives the car instructions to follow should such a condition arise is in a way dictating the response, based on his own reasoning or on instructions given to him. Therein lies the problem. Who decides for the car? Governments? Car companies? Programmers?
Granted, there are numerous advantages to self-driving vehicles, the most important of which is the removal of human error from the equation, minimizing accidents, traffic jams and road congestion, among many other things. But that doesn’t mean accidents won’t happen, and when (if) they do, the outcome is not instinctive. The response is pre-decided when the emergency-handling algorithm is coded in, in all probability months before the accident actually happens.
Let’s go further and suppose that the top priority in such a situation is to ‘minimize harm.’ Even then you might be faced with a new set of problems. To illustrate this, consider a similar setup with two motorcyclists: one wearing a helmet and one without. If the car decides to ‘minimize harm’ and crashes into the one with a helmet, it is in a way penalizing the responsible, law-abiding citizen. If it chooses to crash into the other one, deeming him irresponsible, it is dispensing its own brand of justice and goes against the very principle of ‘minimize harm’ it is built on.
Consumers and manufacturers face further ethical dilemmas with such cars. On one hand you have a car that will minimize harm, even if that means getting you killed. On the other, you have one that will save you no matter what, even if that means getting others killed. Which would you choose?
As previously stated, this is to be governed by standard protocols, and drivers of manual vehicles, who are in no way party to these protocols, sometimes turn out to be the victims. Is it somehow better to have a random, instinctive reaction rather than a predetermined one?
This is just one of the ethical dilemmas we face as this innovation approaches everyday use. Since the car is basically a computer on wheels, a hacker who changes some part of the code could cause a disaster. Who is to be held accountable if such a thing happens? The company? The coders? The passengers? The government?
Even if we take ‘minimize harm’ as the ultimate deciding protocol, the option that minimizes harm may be NOT to avoid the impending accident and to let the passenger die. In that case, who’ll buy these driverless cars? And what is minimal harm exactly? Three old men or one kid? If the car HAS to crash into one of the two, what is the factor that decides ‘minimal harm?’
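To make the point concrete, here is a deliberately naive sketch (with entirely made-up names and harm scores, purely for illustration) of what a ‘minimize harm’ rule might look like in code. Notice that the algorithm itself contains no ethics at all: every number in the harm table is a value judgment a human had to make in advance, and changing a single number changes who gets hit.

```python
# A deliberately naive "minimize harm" rule. The scores below are
# invented for illustration: each one encodes an ethical judgment
# that some human made long before any accident occurs.

HARM_SCORES = {
    "truck": 10,        # near-certain death of the passenger
    "motorcyclist": 8,  # high chance of killing a third party
    "suv": 3,           # lower probability of loss of life
}

def choose_target(options):
    """Pick the option with the lowest pre-assigned harm score.

    The real 'decision' happens at design time, in HARM_SCORES,
    not at the moment of the crash.
    """
    return min(options, key=lambda option: HARM_SCORES[option])

print(choose_target(["truck", "suv", "motorcyclist"]))  # -> suv
```

The machinery is trivial; the hard part is the table. Whether a helmeted rider should score higher or lower than an unhelmeted one, or three elderly men higher or lower than one child, is exactly the question the code cannot answer for us.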
In reality we may never face exactly such a problem. Nevertheless, as in every experiment, we test the limits of our theory, and only when it is completely foolproof do we use it to solve our problems and thereby improve the quality of life.