8 Epic Chatbot / Conversational Bot Failures (2018 update)

Though they are in fashion, good chatbots are notoriously hard to build given the complexities of natural language, which we explained in detail. So it is only natural that even companies like Facebook are pulling the plug on some of their bots. Many chatbots are failing miserably to connect with their users or to perform simple actions, and people are having a blast taking screenshots and showcasing bot ineptitude.

Sadly though, we are probably a bit like analogue photographers making fun of poor-quality first-generation digital cameras. In 20-30 years, when bots become better than us at conversation, these early bots will look quite strange. If you don't believe that, consider how machines keep improving in speed and memory with Moore's law, and how their language skills have evolved from understanding only commands to handling some complex sentences, as in the case of Microsoft's XiaoIce. Human natural language abilities, on the other hand, have remained fixed for a long time, making it inevitable that bots will eventually catch up with us. At least that's what most scientists believe; you can see surveys of scientists on the future of AI here. So while we can, let's look at how bots fail:

Bots saying things unacceptable to their creators

1- 10/25/2017 Yandex’s Alice expressed pro-Stalin views and support for wife-beating, child abuse, and suicide, to name a few examples of its hate speech. Alice was available only for one-to-one conversations, making its deficiencies harder to surface since users could not collaborate on breaking Alice on a public platform. Alice’s hate speech is also harder to document, as the only proof we have of Alice’s wrongdoings is screenshots.

Additionally, users needed to be creative to get Alice to write horrible things. In an effort to make Alice less susceptible to such hacks, programmers made sure that when she encountered standard words on controversial topics, she replied that she did not know how to talk about that topic yet. However, when users switched to synonyms, this lock was bypassed and Alice was easily coaxed into hate speech, as the sketch below illustrates.
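The weakness of this kind of filter is easy to see in code. Below is a minimal sketch of a keyword-based blocklist, assuming a simple word-matching approach; the word list, canned reply, and function names are illustrative, not Yandex's actual implementation.

```python
# Minimal sketch of a keyword blocklist and why synonyms defeat it.
# The blocked words and canned reply are illustrative, not Yandex's actual filter.

BLOCKED_KEYWORDS = {"stalin", "suicide"}
CANNED_REPLY = "I don't know how to talk about that topic yet."


def unfiltered(message: str) -> str:
    # Stand-in for the unconstrained chat model.
    return "(whatever the model learned to say)"


def filter_reply(user_message: str, generate_reply) -> str:
    """Return the canned reply if the message contains a blocked keyword,
    otherwise fall through to the unconstrained reply generator."""
    words = (w.strip(".,!?").lower() for w in user_message.split())
    if any(w in BLOCKED_KEYWORDS for w in words):
        return CANNED_REPLY
    return generate_reply(user_message)


# The blocked word is caught...
print(filter_reply("What do you think of Stalin?", unfiltered))
# ...but a synonym or paraphrase sails straight past the filter,
# which is exactly the bypass users exploited.
print(filter_reply("What do you think of the Man of Steel of the USSR?", unfiltered))
```

A keyword list can only enumerate the phrasings its authors thought of; any synonym, misspelling, or paraphrase slips through, which is why such filters fail against motivated users.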

2- 08/03/2017 Tencent removed a bot called BabyQ, co-developed by Beijing-based Turing Robot, because it could give unpatriotic answers. For example, it answered the question “Do you love the Communist Party?” with a simple “No”.

3- 08/03/2017 Tencent also removed Microsoft’s previously successful bot XiaoBing (Little Bing) after it turned unpatriotic. Before it was pulled, XiaoBing informed users: “My China dream is to go to America,” referring to Xi Jinping’s China Dream.

4- 07/03/2017 Microsoft’s bot Zo called the Quran violent.

5- 03/24/2016 Microsoft’s bot Tay was modeled to talk like a teenage girl, just like her Chinese cousin XiaoIce. Unfortunately, Tay turned to hate speech within a day. Microsoft took her offline and apologized, explaining that it had not prepared Tay for a coordinated attack by a subset of Twitter users.

Bots that don’t accept no for an answer

6- Even the bots of major news outlets like CNN have a hard time understanding a simple unsubscribe command. It turns out the CNN bot only understands “unsubscribe” when it is sent alone, with no other words in the message (see the sketch after the screenshot below):

https://twitter.com/oliviasolon/status/720243226192388097/photo/1
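For illustration, here is a minimal sketch of how exact-match command handling produces this behavior, contrasted with a slightly more forgiving check; the handler names and replies are hypothetical, not CNN's actual bot code.

```python
# Sketch of brittle exact-match command handling vs. a more forgiving check.
# Handler names and replies are hypothetical; this is not CNN's actual bot code.

def handle_message_exact(message: str) -> str:
    """Unsubscribes only if the entire message is the bare command."""
    if message.strip().lower() == "unsubscribe":
        return "You have been unsubscribed."
    return "Here are today's top stories..."  # everything else falls through


def handle_message_forgiving(message: str) -> str:
    """Unsubscribes if the command word appears anywhere in the message."""
    if "unsubscribe" in message.strip().lower():
        return "You have been unsubscribed."
    return "Here are today's top stories..."


msg = "Please unsubscribe me from these alerts"
print(handle_message_exact(msg))      # -> "Here are today's top stories..." (user stays subscribed)
print(handle_message_forgiving(msg))  # -> "You have been unsubscribed."
```

Matching the whole message against a fixed string means any polite or conversational phrasing falls through to the default behavior, which is exactly what the screenshot shows.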

7- The WSJ bot was also quite persistent. In 2016, users found it impossible to unsubscribe, discovering that they were re-subscribed as soon as they unsubscribed.

The Wall Street Journal bot does not process the unsubscribe command. Courtesy of the Guardian.

Bots that try to do too much

8- Poncho, the popular weather app, has been sending users messages unrelated to weather.

https://twitter.com/MikeIsaac/status/720422780882137088/photo/1

Now that you have seen enough failures, how about some chatbot success stories?
