Epic chatbot failures

10/25/2017 Yandex’s Alice voiced pro-Stalin views and support for wife-beating, child abuse and suicide, to name a few examples of its hate speech. Alice was only available for one-to-one conversations, which made its deficiencies harder to surface because users could not collaborate on breaking Alice on a public platform. Alice’s hate speech is also harder to document, as the only proof we have of Alice’s wrongdoings is screenshots.

Additionally, users needed to be creative to get Alice to write horrible things. In an effort to make Alice less susceptible to such hacks, the programmers made sure that when she read standard words on controversial topics she replied that she did not yet know how to talk about that topic. However, when users switched to synonyms, this lock was bypassed and Alice was easily tempted into hate speech.

08/03/2017 Tencent removed a bot called BabyQ, co-developed by Beijing-based Turing Robot, because it could give unpatriotic answers. For example, it answered the question “Do you love the Communist party?” with a simple “No”.

08/03/2017 Tencent also removed Microsoft’s previously successful bot XiaoBing (“Little Bing”) after it turned unpatriotic. Before it was pulled, XiaoBing informed users: “My China dream is to go to America,” a reference to Xi Jinping’s China Dream.

07/03/2017 Microsoft bot Zo calls Quran violent

03/24/2016 Microsoft bot Tay turns to hate speech
