
Confusing Sarah Palin (chatbot)

In what I hope will become a recurring series, today I present and analyze the results of a chat I had with a chatbot. You can try this one for yourself over at Chatbotgame.com. I conversed with the Sarah Palin bot, with the following results:

Here's how my train of thought went: I assumed it would work the way many advanced chatbots do, strategically. It starts by trying to grab control of the conversation with a pointed statement, as many do. Then, what usually happens is it picks out keywords from the human's message and matches them to canned responses, or response templates. In addition, it will usually track something like a "topic", probably defined as something like the main noun in the subject of the previous sentence. This is useful when trying to resolve pronouns in the next sentence.
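To make that design concrete, here is a minimal sketch of a keyword-matching chatbot with simple topic tracking. The rules, canned responses, and the `respond` function are all my own invented illustrations, not the actual implementation of any bot on the site.

```python
# Invented keyword -> canned-response rules, for illustration only.
RULES = {
    "abortion": "I am pro-life, and I make no apologies for that.",
    "energy": "Drill, baby, drill! Energy independence matters.",
    "alaska": "You can see Russia from parts of Alaska, you know.",
}

DEFAULT = "You betcha! Tell me more."

def respond(user_input, state):
    """Match keywords against canned responses; remember the last
    matched keyword as the 'topic' so a bare pronoun ('it') in the
    next message can fall back to the previous subject."""
    words = user_input.lower().split()
    for word in words:
        if word in RULES:
            state["topic"] = word  # remember the topic noun
            return RULES[word]
    # Pronoun with no keyword: reuse the remembered topic, if any.
    if "it" in words and state.get("topic") in RULES:
        return RULES[state["topic"]]
    return DEFAULT

state = {}
print(respond("What do you think about energy policy?", state))
print(respond("Why is it so important?", state))  # resolves 'it' via topic
```

Even this toy version shows why topic tracking matters: without the `state` dictionary, the second question would fall through to the default response.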

However, it quickly became clear that this was not an advanced chatbot (obviously, the word 'game' in the site's title should've suggested that to me). Once I noticed that, I looked around the site for some explanation, and found something kind of cool. This is a Web 2.0 chatbot. In other words, like other Web 2.0 sites, it relies on user input for its value. This is a cool idea, and one I think could be very useful for some natural language processing tasks, but it doesn't look like they have the correct framework here.

The system they use requires users to enter very rigid chatbot response rules, based on keywords used both by the human and by the chatbot in the previous turn. This kind of setup gives you very little flexibility, and frankly, even as a game I don't find it very interesting or fun. One problem is that with this setup, there is no memory: if you want to talk about abortion two sentences in a row, you have to use the word 'abortion' in both sentences. In addition, as you can see in the image above, I tricked the Palin chatbot into supporting abortion by inspecting its rule set and constructing a question that used the term 'abortion' but with negation. Now, it is possible for this system to avoid this problem, if enough users give negative ratings to responses that don't make sense. However, I would speculate that this framework doesn't have the flexibility to implement any real intelligence, and further speculate that the best possible result is one where only the vaguest answers have positive scores, since they are the least likely to be obviously wrong and draw negative ratings from users.
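The negation trick works because a rule keyed on a bare keyword cannot see the surrounding context. Here is a sketch of that failure mode, with a hypothetical rule and canned response of my own invention (the site's actual rules and wording will differ):

```python
def rigid_rule(user_input, rules):
    """Fire the first canned response whose keyword appears anywhere
    in the input -- negation and context are invisible to the rule."""
    text = user_input.lower()
    for keyword, response in rules:
        if keyword in text:
            return response
    return "You betcha!"

# Invented rule: keyword 'abortion' -> a canned pro-life line.
rules = [("abortion", "It should be banned, plain and simple.")]

# Straight question: the canned answer fits.
print(rigid_rule("What is your view on abortion?", rules))

# Negated question: the same rule fires on the keyword alone, and the
# canned answer now reads as agreement with the opposite position.
print(rigid_rule("Do you deny that abortion should never be banned?", rules))
```

Since the rule matches on the keyword alone, any phrasing that inverts the question's polarity gets the same canned reply, which is exactly the exploit described above.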

Is there any other way to prevent a chatbot from being tricked in this manner? Well, one way is to make the rule set so complex and non-deterministic that no one could predict which response would occur. But then you most likely wouldn't have enough people using the system to get meaningful statistics on the usefulness of the different responses, and your quality of response would still be quite low.
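The statistics problem can be seen in a quick sketch. Assuming (my assumption, not the site's documented mechanism) that a non-deterministic rule picks one of several candidate responses at random and users vote the result up or down, each candidate is shown only a fraction of the time, so its vote counts accumulate that much more slowly:

```python
import random

# Hypothetical rule: one keyword, several candidate responses, each
# with its own up/down vote tally (all invented for illustration).
responses = {
    "energy": [
        {"text": "Drill, baby, drill!", "up": 0, "down": 0},
        {"text": "Energy independence is key.", "up": 0, "down": 0},
        {"text": "Also, moose.", "up": 0, "down": 0},
    ]
}

def reply(keyword, rng=random):
    """Pick one candidate response at random for the given keyword."""
    return rng.choice(responses[keyword])

# With N candidates per rule, each response is shown only ~1/N of the
# time, so every candidate needs roughly N times as much traffic
# before its up/down counts mean anything.
```

So the more unpredictable you make the rule set, the thinner the rating data gets for each individual response, which is the trade-off described above.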
