You're Asking Too Much of Chat Bots. Just Let Them Grow Up

For now, talking to a bot is like, well, talking to a robot. And maybe that's good enough.

My editor doesn't like email. And that's probably the reason he's into the Google service that automatically generates replies to incoming messages.

Smart Reply, as my editor will tell you, is pretty smart (It is!—Ed.). Having analyzed millions of messages from across Google's Gmail service, it can guess how you might respond to a particular missive. That may sound impersonal, but it's useful. It lets you instantly reply to someone when you don't have time to open a laptop or even tap out a message on your smartphone. Some of these auto-replies, my editor swears, even sound like him.

But one reason this works so well is that Google limits the scope of its tool. For each message, the service offers not just one reply but three, letting you choose the reply that best suits what you want to say, and these replies are typically just a few words long. Google's tool gives itself a margin for error. It works because it doesn't try to do too much.
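Google hasn't published Smart Reply's internals, but the limited-scope pattern described above can be sketched in a few lines: instead of generating free-form text, rank a small set of short canned replies and surface the top three, leaving the final choice to the human. The replies and keyword cues below are invented for illustration; the real system learns its suggestions from millions of Gmail messages.

```python
# Toy sketch of the "limited scope" pattern: rank a short list of
# canned replies and offer the top three. The canned replies and
# keyword cues are illustrative stand-ins for a trained model.

CANNED_REPLIES = [
    "Sounds good!",
    "Thanks, I'll take a look.",
    "Yes, that works for me.",
    "Sorry, I can't make it.",
    "No, let's reschedule.",
]

# Keyword cues standing in for a model's learned associations.
CUES = {
    "meeting": ["Yes, that works for me.", "Sorry, I can't make it.",
                "No, let's reschedule."],
    "attached": ["Thanks, I'll take a look.", "Sounds good!",
                 "Yes, that works for me."],
}

def suggest_replies(message, k=3):
    """Return the k most plausible short replies for a message."""
    text = message.lower()
    for keyword, ranked in CUES.items():
        if keyword in text:
            return ranked[:k]
    return CANNED_REPLIES[:k]  # fall back to generic suggestions

print(suggest_replies("Can you make the meeting at 3pm?"))
```

Offering three candidates rather than one is the margin for error: the system never has to be exactly right, only right somewhere in a short list.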

All this is worth remembering as we contemplate Silicon Valley's latest buzzword: Bots.

"Bots are the new apps," Microsoft CEO Satya Nadella announced at the end of March, during the company's big coder conference in San Francisco, and he was just saying what so many others are saying across the tech universe. Microsoft, Facebook, a host of startups, and an even larger gaggle of tech pundits are trumpeting the arrival of autonomous bots that can carry on conversations inside services like Slack and Skype and Facebook Messenger.

The idea is that these bots will let you interact with businesses much like you trade texts with friends and family, letting you do stuff much more quickly than you could using dozens of disparate smartphone apps. Some people call this "conversational commerce." But there are limits to the conversation.

Chatbots, you see, don't chat very well. Even those built atop the latest tech are limited in what they can understand and how well they can respond. For now, talking to a bot is like talking to, well, a machine. That makes conversational commerce feel like a false promise. But maybe the problem isn't the tech. Maybe it's the promise. "I think we're going through a temporary hype era of 'bot BS' right now," says Navid Hadzaad. And he runs a bot company.

Limiting the Conversation

In recent years, deep neural networks have helped automate so many online tasks. They can recognize faces and objects in photos. They can recognize commands spoken into smartphones. They can improve Internet search results. And they've made significant progress in the area of natural language understanding, where machines work to understand the natural way we humans talk. This is what powers Google's Smart Reply service. And it works.

But only up to a point. And that's telling. When it comes to automated conversation, deep neural networking is the best tech going, and even it only goes so far. In other words, we're nowhere near the point where we can carry on a completely real conversation with a bot.

That's pretty much the message delivered by David Marcus, who oversees Facebook Messenger and its bot engine, a way for coders to build bots that can, in theory, do all the stuff that's now handled by smartphone apps. "Everybody wanted websites when the web was launched. And then everybody wanted apps. This is the start of a new era," Marcus says, before pointing out that the first apps were "kind of crappy." The implication is that bots will experience similar growing pains of their own.

Indeed, the Facebook bot engine doesn't even use deep learning. It uses less advanced technology provided by Wit.ai, an artificial intelligence platform Facebook acquired early last year. The hope may be, however, that this technology can help generate the kind of conversational data needed to train deep neural networks and push the state of the art much further.

A Whole Lotta Chatter

Deep neural networks learn by analyzing enormous amounts of digital data. They can learn to recognize a cat by analyzing millions of cat photos. They can learn to understand the contents of an email by analyzing millions of email messages. And they can learn to chat by analyzing chats. But the data needed to drive "conversational commerce" is much harder to come by than cat photos. People don't typically interact with machines in this way. So, companies like Facebook must find other sources of data---or generate data on their own.
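The learning-from-data idea can be made concrete with a toy example, far simpler than a deep neural network but built on the same principle: no hand-written rules, just patterns tallied from labeled examples. The tiny training set below is invented for illustration.

```python
# A toy illustration of learning from examples rather than rules:
# tally which words appear under which label in a tiny hand-made
# training set, then label new text by those tallies. Real deep
# networks learn vastly richer statistics from millions of examples,
# but the principle is the same: the behavior comes from the data.

from collections import Counter, defaultdict

training_data = [
    ("book me a flight to paris", "travel"),
    ("any cheap flights to rome", "travel"),
    ("what is the weather today", "weather"),
    ("will it rain this weekend", "weather"),
]

word_counts = defaultdict(Counter)
for text, label in training_data:
    for word in text.split():
        word_counts[label][word] += 1

def classify(text):
    """Pick the label whose training words best overlap the text."""
    scores = {label: sum(counts[word] for word in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)
```

Four labeled sentences are enough for this toy; a conversational bot needs millions of real exchanges, which is exactly the data that's hard to come by.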

Marcus and company are already doing this with Facebook M, an experimental digital assistant, and they may hope to do so with the Messenger bot engine as well. But Facebook M employs more than just bots. It employs human assistants that work alongside the bots, and most of the data the system generates is related to how these humans respond to requests. It's unclear how much serious data you can generate with a chatbot that's kinda crappy. After all, how often will people use it if it doesn't really work?

"What kind of data are they really going to collect?" says Eugenia Kuyda, the founder of Luka.ai, which builds chatbots using deep neural networks. "People clicking on buttons. This is not really a dataset you can put into a neural network and train anything."

Keep It Simple

The best anyone can hope for now are bots that excel at one specialized kind of conversation. A good example is Hadzaad's service, GoButler, built by a startup he runs in New York. GoButler uses deep neural nets, but only to tackle a relatively small problem. Through a chat interface, the service provides a way of booking airplane flights, which limits the chatter to very specific requests and responses. "The technology is there---it works---if you restrain the use-case," Hadzaad says.
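Hadzaad's point about restraining the use-case can be sketched as code. The bot below is a hypothetical stand-in, not GoButler's actual system: it handles exactly one kind of request, extracting the details of a flight booking, and politely declines everything else. That refusal is what keeps the "conversation" tractable.

```python
import re

# Minimal sketch of a restrained use-case bot: it only understands
# flight requests. The pattern and slot names (origin, destination,
# date) are illustrative inventions, not GoButler's implementation.
FLIGHT_PATTERN = re.compile(
    r"flight from (?P<origin>\w+) to (?P<destination>\w+)"
    r"(?: on (?P<date>.+))?")

def reply(message):
    """Answer flight requests; decline anything outside the domain."""
    match = FLIGHT_PATTERN.search(message.lower())
    if not match:
        return "Sorry, I can only help with booking flights."
    slots = match.groupdict()
    if slots["date"] is None:
        # A missing slot triggers one narrow follow-up question.
        return ("When would you like to fly from "
                f"{slots['origin']} to {slots['destination']}?")
    return (f"Searching flights from {slots['origin']} "
            f"to {slots['destination']} on {slots['date']}...")
```

Because every request either fits the one template or gets refused, the bot never has to fake open-ended understanding it doesn't have.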

Hadzaad can't stand the term "conversational commerce." He doesn't even like "chatbot." If his employees utter these words, he says, they're required to drop some cash into an anti-buzzword jar. The chatbot movement driven by Microsoft and Facebook and so many others, he argues, should be less about conversing with bots atop our messaging services and more about just finding the best way---any way---to complete the task at hand without leaving these services.

That's also what we heard last week from Dan Grover, a product manager for WeChat, the Chinese messaging service that's already facilitating various commerce tasks, including hailing cars, checking the weather, and browsing subway schedules. The success of WeChat's commerce engine, he says, isn't really about conversation. It's about finding a much simpler way for people to interact with businesses. "Those that actually succeeded in bringing value to users were the ones that peeled back conventions of 'conversational,'" he says.

The Long Wait

Yes, truly conversational bots will eventually arrive. Using deep neural nets, Google recently built a bot that discusses the meaning of life, and judging from the company's transcripts, it seems to work pretty well. But it's hard to know how long we'll have to wait for something like this to break out of the lab. As good as Google's chatbot seems, the company hasn't let anyone outside play with it. And training such bots relies on data that's harder to come by than you might think. Google used old movie dialogue.

But maybe we can afford to wait. Maybe we just want to get things done without too much talking. "Even if the technology is real, it's not the best consumer experience," Hadzaad says of conversations with business bots. "Back-and-forth conversations are just inefficient and not natural. People want things as efficient as possible." In other words, maybe the world doesn't need conversational commerce. It definitely doesn't need the hype.