WNIJ's "Read With Me" archive collects dozens of interviews with authors from the WNIJ area -- northern Illinois and southern Wisconsin.

On the third Monday of each month, Morning Edition host Dan Klefstad talks with an author about their latest book and asks them to read an excerpt. Many of the interviews below feature an additional excerpt reading captured on video.

We hope you take the time to read the books featured here. And if you talk about them on social media, please use #WNIJReadWithMe.

When Robots Attack: Who's Responsible?

Most of us don't think about computers and robots as conscious beings. NIU professor David J. Gunkel thinks we should, because the consequences of not doing so could be catastrophic.

The world of fiction provides many examples of hostile interactions between humans and artificially intelligent beings, or AIs. One of Gunkel's favorites is the film 2001: A Space Odyssey, especially the scene where astronaut David Bowman is locked outside the spaceship Discovery. The brain of the spacecraft, HAL, refuses to let him in:

Open the pod bay doors, HAL.
I’m sorry, Dave. I’m afraid I can’t do that.
What’s the problem?
I think you know what the problem is just as well as I do.
What are you talking about, HAL?
This mission is too important for me to allow you to jeopardize it.
I don’t know what you’re talking about, HAL.
I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.

This standoff follows a meeting between Bowman and fellow astronaut Frank Poole. The two determine HAL made serious errors affecting their mission to Jupiter, and they resolve to take control of the spacecraft. HAL learns of this and kills Poole, plus three other astronauts who were hibernating. Then HAL locks Bowman outside the spacecraft.

Bowman manages to enter through an emergency airlock and shuts down HAL's higher functions, leaving just enough memory for the autopilot. For Gunkel, the scene raises ethical questions that are relevant today:

1) Is HAL a thinking being and, if so, is he morally responsible for killing four humans?

2) What are David Bowman's responsibilities toward HAL? Does he have the moral right to disconnect the brain of this AI?

Gunkel tackles these (and other) questions in his new book, The Machine Question. In the book, Gunkel writes that Discovery's crew members communicate with HAL as if he were a conscious being. For many philosophers, including Gunkel, this is a key test.

"The only way I know if you're a thinking thing is by communicating with you," Gunkel says. And if communication reveals intelligence, he says, the being in question could be viewed as morally responsible. Which leads Gunkel to ask,"What is HAL's moral responsibility?"

Gunkel says machines are making more and more autonomous decisions that affect our lives. "Decisions about our financial systems and whether we get credit or not," he says. "So what we see in 2001 is a portrayal of this on a grand scale. HAL decides to kill human astronauts to protect the mission."

And what about Bowman? What gives him the right to disconnect HAL? "He has taken it upon himself," Gunkel says, "to be judge, jury and executioner for HAL."

Gunkel says lawmakers, and the judicial system, need to address these questions soon because deadly encounters between humans and AIs already have occurred. In 1981, a Japanese maintenance engineer, Kenji Urada, was repairing a robotic arm, but forgot to turn it off. The robot pushed Urada into a grinding machine, killing him.

In 2007, a more advanced AI made decisions resulting in a higher body count. "An automated cannon that's used to shoot down aircraft was being used by the South African army," Gunkel says. "It shot at and killed its own soldiers, and did it based on its best decision making."

Nine members of South Africa's military were killed.

Gunkel says humans need a system that assigns moral responsibility for AIs. One solution, he says, might be inspired by a different kind of man-made entity:

"Corporations make decisions and do things that they are legally and morally culpable for," he says, "even though they are completely artificial constructs." For Gunkel, it's a very simple extension to treat AIs the same way. "So I think we need to look at corporations as a precedent, because that's where the courts will see a precedent."

Even then, the question of culpability won't be easy to resolve. One might blame the machine's programmer, but Gunkel doubts this would be fair since many AIs are equipped with learning algorithms that, over time, make them more capable of behaving autonomously. "They often exceed their original programming," Gunkel says, "and do more than their original programming decided they'd be able to do."

For Gunkel, the best solution is for engineers and programmers to think ethically when designing machines. "The problem is, our engineers are not necessarily ethically trained." He acknowledges they get some ethical training, but calls it a minor footnote in their education. "The economy and efficiency equations often drive innovation," he says, "not good moral decision making."

And then there's the human component. Automation has replaced much of the work humans used to do. "When you bring automation into a factory, or into some other workplace," he says, "it displaces human workers. So there needs to be intelligent decision-making about where machines are implemented, how and why, and what the costs are regarding human well-being."

Gunkel suggests more people read science fiction, or watch films like 2001. "Science fiction is like a parable or a mythology," he says, "where we can tell stories about our worst-case scenarios, but also our best-case outcomes." And hopefully, he says, learn from them.

David Gunkel is a professor in NIU's Department of Communication. His book is published by The MIT Press.
