Download the audio version.
Get my science column weekly as a podcast.


A couple of weeks ago I wrote about research aimed at making robot-human interactions more comfortable for humans. With more and more robots finding more and more uses in society, that kind of research is important.

But there’s something else we’re going to have to consider as robots become ubiquitous: ethics. How do we ensure that robots don’t pose a threat to the much frailer humans they interact with (especially with robot caregivers being developed for use in places like Japan, where the elderly already make up 20 percent of the population and are swelling in number)?

The risk to humans from robots isn’t just hypothetical, especially with more and more robots being used militarily. Nor is the threat just to the intended targets of such robots: last month in South Africa a robotic antiaircraft cannon malfunctioned and began firing randomly around the range, killing nine soldiers and injuring another 11.

But it’s not just how robots will treat us that we need to consider. We also need to consider how we will treat robots, if and when artificial intelligence advances to the point that they become independently thinking and functioning beings.

Robert J. Sawyer is probably Canada’s best-known science fiction writer. Much of his science fiction focuses on the possible effects on near-future society of current technological trends. In an editorial in the November 16 issue of the journal Science, he wrote about the growing interest in roboethics.

As Sawyer points out, the idea of “killer robots” has a long history in science fiction—but so does the notion that steps can be taken to make killer robots less likely. In a 1942 story called “Runaround,” Isaac Asimov first presented his Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey any orders given to it by a human unless those orders conflict with the First Law; and a robot must protect its own existence, provided that doing so does not conflict with the First or Second Laws.

Direct, simple, unambiguous. But Asimov then wrote a series of memorable stories exploring how those laws could have unintended consequences.
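As a toy illustration only (none of this is from the column, and the names are invented for the sketch), the strict priority ordering of the Three Laws can be thought of as a cascade of checks, where each lower law yields to the ones above it:

```python
# A toy sketch of Asimov's Three Laws as a priority-ordered rule check.
# The action flags below are illustrative assumptions, not a real robotics API.

def permitted(action):
    """Return True if a proposed action passes the Three Laws, checked in order."""
    # First Law: no injuring a human, and no allowing harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless obeying conflicts with the First Law.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, unless it conflicts with Laws One or Two.
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True

print(permitted({"harms_human": True}))        # blocked by the First Law
print(permitted({"disobeys_order": True}))     # blocked by the Second Law
print(permitted({"endangers_self": True,
                 "required_by_higher_law": True}))  # allowed: a higher law overrides
```

Even in a sketch this simple, the ambiguity Asimov mined is visible: everything depends on who decides what counts as "harm" or "inaction," which is exactly where his stories found their unintended consequences.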

Would an even simpler “prime directive” for robots be better? Not necessarily: as Sawyer notes, Jack Williamson, in his 1947 short story “With Folded Hands,” wrote about robots whose programming instructed them simply “To serve and obey, and guard men from harm.” The result was a robot-ruled society in which humans were prohibited from doing pretty much anything because they might be injured: the ultimate nanny state.

“Unintended consequences” give rise to lots of science fiction stories. They also give rise to lots of real-world grief. As Sawyer writes, “all attempts to govern complex behavior with coded strictures may be misguided….And yet, we seem unable to resist trying.”

So, South Korea’s Ministry of Commerce, Industry, and Energy has established a Robot Ethics Charter. The European Robotics Research Network plans to develop guidelines for robots in the areas of safety, security, privacy, traceability and identifiability. Japan’s Ministry of Economy, Trade and Industry is also working on guidelines.

We haven’t seen much concern with roboethics in North America yet, but, as Sawyer points out, it’s likely that some of the most interesting debates over the issue will eventually surface in the United States legal system. A small precursor, perhaps: a Michigan jury awarded the family of the first human killed by a robot (accidentally, in 1979) $10 million.

With robots becoming more and more integrated into society, and getting smarter and more human-like all the time, there are probably a lot of unintended consequences down the road. Whether current efforts at roboethical guidelines will keep the worst of those at bay, only time will tell.

But it’s worth noting that unintended consequences can also be positive. As Sawyer writes, “Isaac Asimov’s 1954 novel The Caves of Steel describes a fully equal robotic partner of a police officer. Lester del Rey’s 1938 story ‘Helen O’Loy’ portrays…a man marrying a robot woman, and living, as one day all humans and robots might, happily ever after.

“I, for one, look forward to that time.”

Me, too.
