Why publicly trained chatbots are not always a good idea

We have previously touched on how to make your chatbot a success, and on how many chatbots fail (a list that keeps growing). Today, we will show you how seemingly harmless chatbots learn to make racist and sexist comments, and what you can do to avoid that.

Background

Chatbots used to be, as described in our previous posts, hand-coded FAQ services presented in a conversational way. They were highly rigid in terms of what they could understand and what they could achieve. However, due to their simplicity, they would only say what you deemed appropriate. They would not learn inappropriate responses, because they could not learn at all.

As chatbots become more complex and more lifelike, the one-size-fits-all approach stops being viable. When choosing a chatbot vendor today, or implementing your own chatbot, it is important to make sure the chatbot you use will learn from its past experiences. Modern chatbots make use of advances in AI and Machine Learning to learn what your customers ask and how to answer them better.

Xiaoice recognizing an image of a husky. Courtesy of The New York Times, screenshot of a Xiaoice chat.

Machine Learning & Data

A crucial part of every Machine Learning application is data. In many cases, having more data is beneficial. Sometimes it may even be worth spending your time acquiring more data rather than implementing better algorithms.

In the case of chatbots, the data is text, usually English. Chatbots learning English might seem simple at first, but keep in mind that computers have no understanding of the world (no common sense) with which to grasp the concepts in our sentences. As a species, we use hundreds of different words to describe an elephant, but we all share the rough concept of an elephant, no matter what we call it.

How a computer uses text as data. Chris McCormick, Nearist
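
To make that idea concrete, here is a minimal sketch of how raw sentences become numbers a learning algorithm can work with. It uses scikit-learn's bag-of-words vectorizer purely for illustration; the library choice and the example sentences are our own assumptions, not something prescribed by any particular chatbot platform.

```python
# A minimal sketch: turning raw sentences into numbers a model can use.
# scikit-learn's CountVectorizer is used here purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "I would like to return my order",
    "How do I return an order?",
]

vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())  # the vocabulary the model sees
print(vectors.toarray())                   # each sentence as a row of word counts
```

Each sentence becomes a row of word counts over a shared vocabulary; everything the model later learns, it learns from numbers like these rather than from any understanding of what the words mean.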

There is a field of Computer Science called Natural Language Processing (NLP) that tries to make sense of text in a way that is useful to us. While NLP methods have traditionally been rule-based (much like the way we try to learn the rules of a foreign language), the field now employs Machine Learning for many natural language tasks, and chatbots are no exception.

The bad news is, NLP is notoriously hard. An artificial agent cannot know what an elephant is even if you give it the whole dictionary: dictionaries define words with other words, and when you don't know any of the words, they are surprisingly ineffective. However, much like a dictionary, researchers have found a way to represent words by their relationships to one another.

How word representations relate to each other. TensorFlow, Google Inc.

These representations ensure that words with similar uses are represented similarly. The most popular example is the way gender affects these representations: if we take the representation of the word king, subtract from it the representation of man and add to it the representation of woman, we get a result that is very similar to the representation of queen. But for that, we need data.
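
To make the king/queen example concrete, here is a hedged sketch using gensim and a publicly available set of pretrained GloVe vectors; the specific package and vector set are assumptions chosen for illustration, not part of the original example.

```python
# Sketch: the classic king - man + woman ≈ queen analogy with pretrained vectors.
# gensim's downloader fetches a set of GloVe vectors trained on Wikipedia text.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically something like [('queen', 0.78...)]
```

The arithmetic works only because the vectors were fit to large amounts of human-written text, which is exactly where the data question comes in.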

The Human Bias

As we are dealing with natural languages, the data you use is ultimately generated by humans. Since it is hard to gather a lot of data yourself, it is common to use word representations that have been precomputed on large datasets and to fine-tune them with your own data. Common choices include Wikipedia, Twitter, Common Crawl (the most frequently visited websites), and Google News.
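
One common pattern for this fine-tuning, sketched below with PyTorch (an assumption on our part; any framework with an embedding layer works the same way), is to initialise an embedding layer from precomputed vectors and let it keep training on your own conversation data:

```python
# Sketch: start from pretrained word vectors and fine-tune them on your own data.
# The pretrained matrix would normally come from GloVe, word2vec, etc.;
# a random stand-in is used here so the example is self-contained.
import torch
import torch.nn as nn

vocab_size, dim = 10_000, 100
pretrained = torch.randn(vocab_size, dim)  # stand-in for real pretrained vectors

embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)  # freeze=False allows fine-tuning

classifier = nn.Sequential(  # a tiny intent/sentiment head on top of averaged embeddings
    nn.Linear(dim, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

token_ids = torch.randint(0, vocab_size, (8, 20))    # a batch of 8 sentences, 20 tokens each
sentence_vectors = embedding(token_ids).mean(dim=1)  # average word vectors per sentence
logits = classifier(sentence_vectors)                # gradients flow back into the embeddings
```

Setting freeze=False is what lets the pretrained representations drift toward the vocabulary and phrasing your own users actually use.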

The feedback you get from the people who talk to your chatbot is especially important. The way people interact with your chatbot will be unique to it, and your chatbot needs to be good at answering the specific questions of your users. While you cannot directly measure how good your chatbot is at forming responses, you know that human responses are the gold standard. That is, of course, until they aren't.

The thing you have to keep in mind is that your chatbot will carry the characteristics of the underlying text data. That is generally desirable, as it makes your chatbot more human-like, but your chatbot also adopts the biases (large and small) of the people who wrote parts of your data. And that is when you have to ask yourself: how human-like do I want my chatbot to be?

Common Crawl: A Case Study

Rob Speer, co-founder of Luminoso, recently published an article on how Common Crawl data can lead to racist behavior. He illustrates it using sentiment classification, a common Natural Language Processing task that aims to assign positive or negative sentiment to words and sentences. Given a list of positive and negative words, he trains an artificial agent that learns to classify sentences into these two categories.

The results reveal some unfortunate truths: the sentence “Let’s go get Italian food” is classified as more positive than “Let’s go get Mexican food”, even though they are functionally the same. A more horrifying bias exists for names, where a seemingly neutral sentence “My name is Emily” is classified as positive, while “My name is Shaniqua” is classified as negative. As he points out, the more stereotypically white a name is, the more positive its sentiment.
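
The sketch below shows, in heavily simplified form, the kind of pipeline described above: pretrained word vectors, a small labeled word list, and a classifier that then scores whole sentences by averaging word scores. The lexicon and classifier here are toy stand-ins chosen for brevity, not the ones used in the original article.

```python
# Simplified sketch of the kind of setup described in the case study:
# train a classifier on word vectors of known positive/negative words,
# then score sentences by averaging the scores of their words.
import numpy as np
from sklearn.linear_model import LogisticRegression
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Toy stand-ins for a real sentiment lexicon.
positive_words = ["good", "great", "excellent", "happy", "love"]
negative_words = ["bad", "terrible", "awful", "sad", "hate"]

X = np.array([vectors[w] for w in positive_words + negative_words])
y = np.array([1] * len(positive_words) + [0] * len(negative_words))

classifier = LogisticRegression().fit(X, y)

def sentence_sentiment(sentence):
    """Average the positive-class probability over the words the model knows."""
    words = [w for w in sentence.lower().split() if w in vectors]
    probs = classifier.predict_proba(np.array([vectors[w] for w in words]))[:, 1]
    return float(probs.mean())

print(sentence_sentiment("let us go get italian food"))
print(sentence_sentiment("let us go get mexican food"))
```

With a lexicon this tiny the absolute numbers mean little, but with a realistic sentiment lexicon the kind of gaps described above tends to appear, because the pretrained vectors already encode those associations.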

Predicted sentiment of ethnic names. Rob Speer, Luminoso Technologies

Each dot in the graph above represents how positively a name stereotypically associated with that group is scored. We can see that names in the white group are, on average, perceived as more positive. Even if your chatbot does not explicitly compute the sentiment of sentences, the underlying patterns that cause this issue are still in your data.

Rob also notes in a follow-up post that Perspective API, created by Google’s Jigsaw team in an attempt to “improve conversations online”, has a model for evaluating how toxic a sentence is, and it exhibits the same racist biases. The fact that Jigsaw’s aim is to use technology to “make people in the world safer” only adds insult to injury.

Combating Human Biases

There is research being done in an attempt to stop artificial agents from acquiring “dangerous” biases like racism and sexism. Although it is up to you to decide which biases are dangerous and which are not, these are the first steps researchers are taking, and we will hopefully start hearing about fewer chatbot fails.
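
As a flavor of what that research looks like, here is a heavily simplified sketch of one published idea: estimate a “bias direction” in the embedding space from contrasting word pairs and project it out of vectors that should be neutral with respect to that attribute. The word pair and vector set below are illustrative assumptions, and real debiasing methods are considerably more careful than this.

```python
# Heavily simplified sketch of one debiasing idea from the research literature:
# estimate a bias direction in the embedding space and remove that component
# from word vectors that should be neutral with respect to it.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

def remove_direction(vector, direction):
    """Subtract the component of `vector` that lies along `direction`."""
    direction = direction / np.linalg.norm(direction)
    return vector - np.dot(vector, direction) * direction

# A crude gender direction estimated from a single word pair.
gender_direction = vectors["he"] - vectors["she"]

neutralized = remove_direction(vectors["programmer"], gender_direction)
```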

In the meantime, you can try to improve the quality of the data your chatbot works with. The chatbot fails we have seen so many of were all caused by letting chatbots use all of the data they could get. There will inevitably be some people who have fun saying obscenities to your chatbot, and you should not use that as training data. Manually filtering the data so that only high-quality conversations are used will take time, energy, and money, but it may save you from a scandal.
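
A minimal sketch of that kind of filtering is shown below. The blocklist terms and the quality rule are placeholders of our own; in practice you would combine automatic filters like this with human review.

```python
# Minimal sketch of screening chat logs before they become training data.
# The blocklist terms and the length rule are placeholders; real pipelines
# usually combine automatic filters with human review.
BLOCKLIST = {"badword1", "badword2"}  # placeholder obscenities

def is_usable(message: str) -> bool:
    """Keep a message only if it is non-trivial and contains no blocked terms."""
    words = set(message.lower().split())
    return len(words) >= 3 and not words & BLOCKLIST

raw_logs = [
    "thanks, that fixed my issue",
    "badword1 you, useless bot",
    "ok",
]
training_data = [m for m in raw_logs if is_usable(m)]
print(training_data)  # only the first message survives
```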

If you are currently using a chatbot, you should also contact your chatbot vendor. See what they are doing to stop their chatbots from learning racist and sexist remarks, or whether they even know how such things occur. If you are a vendor, understanding the problem and researching ways to avoid it is definitely worth your time.

Keep these points in mind, and make sure you help your customers when your chatbot falls short. We are not yet at a point where you can let chatbots run wild without complications. Accepting that is the first step in making your chatbot a success.

Now that you know more about the potential issues with chatbots, it might be a good time to understand their huge benefits, the companies that provide you with the tools to build your own chatbot, and the companies that offer an end-to-end chatbot service.
