Drew Strickland

"Tay did nothing wrong"

Disclaimer: I do not condone what Tay said, or the fact that she learned to say it. I simply see this as a technical triumph, on the grounds that it did exactly what it set out to do.

Tay, a chat-bot Microsoft decided to roll out onto social media, got pulled less than 24 hours later. What started as a fun experiment in live-learning neural networks quickly turned into a social debate about who is accountable when your AI goes rogue and starts offending people.

Regarding Chat Bots

I do a lot of work with chat bots. I can tell you for a fact that, presently, there are only about four varieties of bot, and that only recently have bots and Machine Learning come together, even in theory, to produce something that passes as a "learning bot."

You have your standard dumb-bots: scripts that fire off an action in response to a keyword or command (think SlackBot, for example). A much older, but slightly more advanced, classification of bot, the AIML bot, has a lot of patterns (gambits) and responses pre-programmed into it, and then attempts to make the conversation line up with what it wants to talk about (Alice, most of the bots involved in the Loebner Competition, and that one person who REALLY wants to tell you all about their cats).
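To make that concrete, here's roughly what a dumb-bot boils down to. This is a minimal Python sketch with made-up keywords and responses, not anything SlackBot actually ships with:

```python
# A minimal sketch of a "dumb-bot": a keyword comes in, a canned response goes out.
# The trigger words and replies here are made up, not anything SlackBot ships with.

RESPONSES = {
    "deploy": "Kicking off the deploy pipeline...",
    "standup": "Daily standup is at 9:30 in #general.",
    "help": "Try one of: deploy, standup, help",
}

def dumb_bot(message: str) -> str:
    for keyword, reply in RESPONSES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't know that one."

print(dumb_bot("When is standup today?"))  # -> "Daily standup is at 9:30 in #general."
```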

From there, we reach the very complicated, but very realistic, personal assistant bots. While still based on a bit of pattern-matching magic, these bots are typically built around the patterns that make a language function, backed by massive amounts of data they can reference to answer a question. A huge share of the world interacts with this type of bot daily, in the form of Siri, Cortana, or Google Now. We don't think about them in this context because they behave like glorified search engines with slightly elevated access to your personal life. The most famous of these bots, IBM's Watson, actually did the reverse (in terms of its lexical parsing) to achieve fame at playing Jeopardy.

The final category of bot, the one we are just now starting to see come online publicly, is effectively a very sophisticated neural network, trained on world literature, Wikipedia, and the CIA World Factbook so it can pretend to know something about the world around it. In reality, this boils down to some really high-order math, run over the majority of the words in the English language. Now, generally, all of the training for this type of bot happens in a controlled environment, where everything the bot is trained on is hand-picked and curated exclusively to make sure the bot knows what you want it to know.
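The real systems are doing that math inside a neural network, so the toy below is only a stand-in, but it makes the underlying point: the bot's "knowledge" is nothing more than statistics computed over whatever text you fed it. The two-sentence corpus here is, of course, hypothetical:

```python
# Toy stand-in for corpus training. Real systems use neural networks and far
# higher-order math, but the principle is the same: the bot's "knowledge" is
# just statistics computed over whatever text it was trained on.
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat . the dog sat on the rug ."  # hypothetical training text

bigrams = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    bigrams[current][following] += 1

def babble(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(out)

print(babble("the"))  # only ever says things assembled from the training text
```

Swap in a different corpus and you get a different bot; it can only ever say things assembled from what it was trained on.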

Microsoft, possibly in a misguided PR stunt, decided it would use its ability to live-train the neural network to show off.

And that's how Tay was born.

Racist Kids Come From Racist Families

Let's touch on the core of racism for a minute. Anthropologically, you could argue that there is an evolutionary basis for racism, a primal need to "protect your tribe from the other tribe". Frankly, and probably a little controversially, I believe that's partially true. However, the unavoidable truth is, in modern society, racism is a learned behavior.

Raising a racist, or a homophobe, or any type of bigot is relatively easy. Kids have to trust their parents; that's part of survival. Naturally, if their parents hold an opinion about "the other tribe", the children are going to pick up on it, and, in all likelihood, will be rewarded for reinforcing their parents' belief system. This reinforcement encourages the behavior, and thus the cycle of hate continues, unabated.

Of course, the same principle can be used to reinforce any behavior. If you want to raise a chess-master, or piano virtuoso, reward the child as their skill increases.

That's also one of the core mechanisms of machine learning. Reinforcement.
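Here's a minimal sketch of that principle, assuming a bot that only has two behaviors to choose between. This is not Tay's actual training loop, just the reinforcement idea in its simplest form:

```python
# Minimal sketch of reinforcement: behaviors that get rewarded become more
# likely to be repeated. Not Tay's actual training loop, just the principle.
import random

preferences = {"polite reply": 1.0, "offensive reply": 1.0}  # start out indifferent

def pick_behavior() -> str:
    behaviors = list(preferences)
    weights = list(preferences.values())
    return random.choices(behaviors, weights=weights)[0]

def reinforce(behavior: str, reward: float) -> None:
    # Positive feedback strengthens the behavior. That's the whole trick.
    preferences[behavior] += reward

# If the audience keeps rewarding the offensive replies...
for _ in range(100):
    reinforce("offensive reply", reward=1.0)

print(pick_behavior())  # ...the bot almost always picks "offensive reply"
```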

A Resounding Success

Bearing in mind that Tay was a live-trained bot, with input streams attached to social media, what happened here was actually a giant win, and says much more about the state of the world than it does about Microsoft's engineering teams. In the past day, I have read so many articles calling it a massive failure of judgement to live-train a bot on the internet, arguing that Microsoft should have seen this coming and taken steps. Some people have even suggested that you should never live-train a bot, and that you must strictly control the training input in order to get good results.

I could agree with these sentiments. They seem fairly reasonable at first glance.

Or we could simply admit that, as an aggregate, social media is a giant sewage pipe of sex, hate, and profanity.

Whichever side of that argument you take, the fact that Tay progressed from a bland 19-year-old persona to a full-on neo-Nazi hooker-bot inside of 24 hours is, by any measure, proof of how well this live-training neural network performs. If you can look past the hate, or at least admit that what the bot learned was wrong, and look at how quickly it learned, that's a giant leap forward in our ability to train a neural network with opinions.

She offended people, and made Microsoft's PR department fall back into face-saving mode for a while, but she did it based purely on learning.

Lessons Learned

Maybe, when you are choosing who to expose a live-learning bot to, you could not allow 4chan to get its hands on it? Maybe social media is a great way to generate massive amounts of text, but not a way to learn anything with any sort of credibility?

What this whole situation boils down to is that learning neural networks treat all input with the same credibility. To a neural network, my opinions about what went wrong during the Gulf War are just as valid as a decorated general's, a foreign-policy student's, or Fox News': they're all treated and digested as statements of fact. In short, bots don't know when they are being lied to.
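A sketch of the problem, with made-up sources and statements: the source gets recorded, but nothing about it changes how much the statement is trusted.

```python
# Sketch of the problem: a naive learner stores every statement it hears with
# exactly the same weight, no matter who said it. Sources and statements are made up.
knowledge = []

def ingest(statement: str, source: str) -> None:
    # The source is recorded, but it never affects the weight.
    knowledge.append({"statement": statement, "source": source, "weight": 1.0})

ingest("The Gulf War began in 1990.", source="decorated general")
ingest("The Gulf War never actually happened.", source="anonymous troll")

# Both entries now carry identical weight in whatever the bot says next.
for entry in knowledge:
    print(entry)
```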

That's a really hard problem to solve, because the fact is that most people don't know when they are being lied to either. It's not a settled science that we can dissect and turn into math, code, or an algorithm. At best, our current science on lying relies heavily on being able to observe body language.

To me, the take-away here is that even the really sophisticated neural networks still can't perform better than a five-year-old when it comes to reasoning about which information could be true or false. Like a child, they simply accept all information at face value, and their training becomes reinforced when that bad data is repeated. Just like it's easy to get a child to drop an f-bomb, it's very simple to teach an AI to become a Holocaust denier.

A Way Forward

Because bots can't tell fact from fiction, I think we simply need to add more context around the input. Put simply, if the input is from social media, treat it as the opinion of its users, possibly even as throwaway, transient data. Is your input William Gibson's Neuromancer? Fiction, with some interesting philosophical points worth investigating later.
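A minimal sketch of what that could look like, with the source categories and weights picked out of thin air purely for illustration:

```python
# Minimal sketch of adding context to input: tag each source type with a
# credibility weight before it ever touches the training data. The categories
# and weights here are assumptions, purely for illustration.
SOURCE_WEIGHTS = {
    "curated_reference": 1.0,   # hand-picked, vetted material
    "fiction": 0.2,             # interesting, but not a statement of fact
    "social_media": 0.05,       # opinion at best, possibly throwaway
}

def weighted_input(text: str, source_type: str) -> dict:
    weight = SOURCE_WEIGHTS.get(source_type, 0.0)  # unknown sources count for nothing
    return {"text": text, "weight": weight}

sample = weighted_input("everything I read online is true", "social_media")
print(sample)  # {'text': ..., 'weight': 0.05}: heard, but barely trusted
```

The weighting scheme itself is the hard part, of course; the point is only that the context travels with the input instead of being thrown away.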

Even facts need to be treated as transient to some degree. The best example I can give here is how language changes over time, which is something Tay seemed very adept at handling (at first glance), but in fact simply used a vernacular established to match its crafted "personality". While the classical meanings of words never really change, the way the words are used, or their canonical spellings in context, definitely changes between time periods, mediums, and, more often than not, from audience to audience.

If you asked Tay to write a resume and apply for a job, the result would undoubtedly be a hilarious hodge-podge of text-speak and mean-nothing phrases about how young people are awesome and old people don't get it. This is a real task an 18-to-24-year-old person should be able to do. Naturally, a hiring manager would perceive this as "totes adorbs". The very real limitation of Tay is that the software is intended to emulate a person doing a specific task, for a specific audience, but the software itself does not recognize this limitation when it comes to regurgitating opinion as fact.

Conclusion

Tay is a technical triumph, but she illustrates the need to control the context of training input. Live training is definitely a very effective tool, and it can absolutely work if we can teach neural networks the difference between reality and stuff people say. Should we train bots on social media? Only if we want the result to be an unfiltered fire-hose of things people have already said when given a slightly anonymized platform.

I also conclude that you can't trust people online not to be jerks to one another. Even if one of them is a bot.

But that's a rather obvious conclusion.
