Coding Ethics: How to Teach Robots to Be Good

Author: Charles Q Choi


Science fiction grandmaster Isaac Asimov's most famous legacy to the world may be his Three Laws of Robotics, rules designed to ensure robots protect and serve humans. These laws have served as the foundation of ethical rules for robots since they first appeared in the 1942 short story "Runaround," which was set in 2015.

Now it is 2015 and the world actually has robots. Compared with the walking, talking metal creatures of our imagination, driverless cars and killer drones may seem rudimentary. Still, from this starting point, many more robotic achievements are to come, and scientists are already working on how to make those future robots ethical beings. Some researchers even argue that one day robots could actually behave more ethically than humans.

Military robots that can kill or spare targets are probably the most obvious concern facing the growing field of robot ethics. However, robots might one day find their way into our homes and face subtler situations.

"Every interaction between a robot and a human being has ethical import," said Susan Anderson, an applied ethicist at the University of Connecticut.

For instance, take a toddler-sized commercial toy robot called Nao that Anderson and her husband Michael Anderson of the University of Hartford have programmed to remind people to take their medicine. Although such a task seems trivial, the robot may have to decide between concern over a patient's health and respect for a patient's wishes.

"If a patient is getting to a point where he is going to be seriously harmed if he doesn't take medication and he is saying no, that's when you contact the doctor and let him know what's going on, but many times short of that, if the patient is in the middle of something and just wants to take it later and it isn't critical yet, it would make sense to allow this, to respect the patient's autonomy," Susan Anderson said.

Video: Nao, the robot programmed with an ethical principle.

Facing difficult choices

One of the greatest challenges researchers face in developing ethical robots is dilemmas: problems where difficult choices must be made and there is no clear, agreed-upon answer. A famous dilemma is the so-called Trolley Problem: You are the conductor of a runaway trolley heading fast toward five people on the railway tracks. You can divert the trolley to a different set of tracks, where one person is standing.

Even humans have no consensus on the right course of action in the Trolley Problem and its many variations, so what is a robotic car to do? Google and others working on developing driverless cars may soon face countless versions of the Trolley Problem. For example, if a driverless car expects an unavoidable crash, does it save its passengers or bystanders? What if the passengers are children? What if the bystanders are pregnant?

A human would make those decisions in the heat of the moment. But clear directions must be programmed into a robot beforehand—something incredibly hard to do when you don't have clear answers.

Dilemmas also may plague robots on search-and-rescue missions. Although they might not kill anyone directly, they could save a person and leave someone else to die.

Roboticist Alan Winfield of Bristol Robotics Laboratory in the United Kingdom found that robots could become paralyzed when choosing between lesser evils in such no-win situations. In an experiment, he first programmed one robot, dubbed the A-robot after Asimov, to intercept another machine, dubbed the H-robot for human, and save it from falling into a hole. The A-robot did its job flawlessly, pushing the H-robot away from danger. However, when a second H-robot was added, in 14 out of 33 trials the A-robot wasted so much time mulling over its decision that both H-robots fell into the hole.

Starting with a compass

Although the scientists designing robots face challenging questions, the hope is that these machines will end up doing more good than bad.

"Of course there is a potential for harm in these systems, but people are exploring them because the hope is that their benefits will significantly outweigh their risks," said Ronald Arkin, a roboticist and professor at the Georgia Institute of Technology. "No system is perfect, and people will get hurt and die, but hopefully less than what we see now."

One strategy for developing ethical robots is to program them with sets of rules that serve as moral compasses — for instance, military robots may be equipped with military rules of engagement and international humanitarian laws, "with permissions, prohibitions and obligations that constrain their behavior," Arkin said.

One difference between military robots and human soldiers is that robots could follow the laws of war to the letter, whereas humans can become clouded by emotion or fear, Arkin said.
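As a rough illustration of what such to-the-letter rule-following could look like, here is a minimal sketch, in Python, of a constraint filter that checks each proposed action against prohibitions and obligations before allowing it. It is not Arkin's actual architecture; the rule names, action fields, and thresholds are invented for illustration.

```python
# A minimal sketch (not Arkin's actual architecture) of a rule-based
# constraint filter: a proposed action is checked against prohibitions
# and obligations before it is allowed to execute. The rule names and
# action fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    target_is_combatant: bool
    expected_civilian_harm: float   # hypothetical 0-1 estimate
    warning_given: bool

# Prohibitions: if any of these predicates is true, the action is forbidden.
PROHIBITIONS = [
    ("never engage noncombatants", lambda a: not a.target_is_combatant),
    ("civilian harm above limit",  lambda a: a.expected_civilian_harm > 0.1),
]

# Obligations: all of these must hold before the action may proceed.
OBLIGATIONS = [
    ("warning must be issued first", lambda a: a.warning_given),
]

def permitted(action: Action) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) for a proposed action."""
    reasons = [name for name, rule in PROHIBITIONS if rule(action)]
    reasons += [name for name, rule in OBLIGATIONS if not rule(action)]
    return (not reasons, reasons)

proposal = Action("engage", target_is_combatant=True,
                  expected_civilian_harm=0.3, warning_given=False)
print(permitted(proposal))
# (False, ['civilian harm above limit', 'warning must be issued first'])
```

The point of such a filter is that every proposed action is checked against the same explicit constraints, which is what would let a machine apply the rules consistently rather than in the heat of the moment.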

"I do think we can teach robots to behave ethically. In certain narrowly bounded circumstances, I think robots may do as well or better than human beings," Arkin said. "Will they be capable of moral reasoning at the level of a human being? No. But within very narrow sets of boundaries, I do think following instructions and rules, following human morals and norms, is within reach for robots."

Uncertain situations

The concept of programming robots with responses to ethical challenges is what ethicist Wendell Wallach at Yale University and philosopher Colin Allen at Indiana University call "operational morality." This scenario can succeed if the robots are given all the facts about a situation that they need for an ethical decision, but that's not always possible, Wallach said.

"The question that usually comes has to do with when designers can't predict the situations a robot will encounter," Wallach said. It can be difficult to identify all the factors in every situation — for instance, "how often is it nowadays that we'll have a battlefield where it's known who all the combatants and noncombatants are?" Wallach said. "Humans aren't that great at that either."

A strategy of relying on established rules may also not work as well outside the military context. With driverless cars, for example, there is currently no international body of law regulating their behavior. Instead, "the responsibility is ultimately vested in human beings such as lawyers, who will come up with the guidelines as to what decisions the machines must make, and who will bear responsibility for those actions if those guidelines are enacted properly and appropriately," Arkin said.

For example, "if a driver is sitting in a car, as a passenger, are you responsible for what the driver does? No, you blame the driver," Arkin said. "Likewise, if the driver is a corporate entity, that entity will face responsibility for any damages that might occur. Either insurance groups or the federal government will probably try to provide guidelines for appropriate actions."

Learning to think for themselves

One step past operational morality is functional morality, where robots move into situations their designers did not predict and the machines have to apply ethical reasoning. Beyond functional morality is full moral agency, which involves "the moral discernment capabilities of a wise human being," Wallach said. "Whether robots with full moral agency will ever be realized is like asking whether we'll ever have robots with human-level intelligence."

Image from Moral Machines: Teaching Robots Right From Wrong (Oxford University Press 2009).

One strategy for developing a robot that can act ethically in unforeseen circumstances might be not to program it with rules, but to let it learn how to do the right thing. The Andersons suggest something akin to how humans learn: give a robot multiple duties — anything from minimizing harm, to bringing medication to patients, to keeping itself charged — and then present it with a variety of scenarios.

"Ethicists will then tell the system what the correct answer is for each case, and a lot can be inferred from that," Susan Anderson said.

"The idea is to train this system to balance its duties, to learn what duty might apply the most in a given situation," Michael Anderson said. "The aim is for the system to generalize principles out of these cases that it can apply to new cases that it has not seen before. The robots can act, and justify their actions."

Such a robot might be able to avoid the kind of paralysis that Winfield's machine experienced. "Every single action of the robot should be driven by ethical principles, even when it spends a moment just standing still," Michael Anderson said. Instead of deliberating over which of its charges to save until it was too late to save either, Winfield's robot might have saved at least one of them if it had been constantly weighing what the right action was.

Robots to look up to

Robots may have rudimentary ethics now, but once their design gets past the fundamental hurdles, perhaps they would become even more moral than humans, who have many flaws of their own.

"What I would like to see is us creating entities that are more ethical than human beings, that can actually serve as models for how we ought to behave," Susan Anderson said.

Whatever ethical robots end up being like, researchers find it unlikely that Asimov's Three Laws of Robotics will govern their behavior.

"We're not talking about an immutable hierarchy of duties — knowing which one is going to win out in a situation is interesting and difficult," Susan Anderson said. "We hope to make a lot of headway into ethical theory for humans with this work."

"The Three Laws of Robotics were literary devices created by Asimov to show ethical dilemmas that can arise, not to resolve any of them," Arkin said. "Mistakes that will be made with robots, but with appropriate design, hopefully those mistakes can be constrained to a level better than what you might get with human performance in certain narrow cases."
