Several weeks ago, Microsoft Research introduced the world to Tay, a machine learning algorithm that started tweeting bigoted, neo-Nazi hate speech after just eight hours of interaction with human users.
Usually when things like this happen, we hold the programmer or designer responsible, because we typically consider technology to be nothing more than a tool or instrument of human action. But the engineers at Microsoft obviously did not design Tay to be a racist. So who is to blame for the racist tweets?
Initially Microsoft sought to blame the victim. The problem, the company explained, was that some users decided to “abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” So Microsoft initially blamed us — or some of us. Tay’s racism was our fault.
A day later, the VP of Microsoft Research apologized for the “unintended offensive and hurtful tweets from Tay.” But this apology is also unsatisfying. Microsoft only took responsibility for failing to anticipate the bad outcome; the hate speech itself was still identified as Tay’s fault. And since Tay is a kind of “minor” — a teenage-girl AI under the protection of her parent corporation — Microsoft stepped in, apologized for its “daughter’s” bad behavior, and put Tay in a time-out.
This event has at least two important consequences: we now have machines that can exceed the control of their designers and surprise us with unanticipated outcomes, and we have begun to accept the explanation that it was the computer’s fault. This changes everything.
I’m David Gunkel, and that’s my perspective.