When Products Talk

As startups and big tech companies alike plunge into developing chat bots with smart, defined personalities, the A.I. world of the 2013 movie “Her” doesn’t seem so far away.

Last month, the Washington Post reported on a surprising new job in Silicon Valley: bot-writer. “Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools,” the Post’s Elizabeth Dwoskin wrote. Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa all have personalities in need of shaping. (So, presumably, does Google Home, the competitor to the Amazon Echo announced earlier this month.) These personalities, Dwoskin reported, will soon be joined by more specialized bots developed by other companies, among them Sophie and Molly, “nurse avatars” that talk to patients about their medical conditions. There’s even a “guru avatar” in development, designed to teach meditation.

These products are exciting and futuristic—just a decade ago, the possibility of conversing with a computer program seemed like science fiction. But they’re also the realization of a very old dream. Long before technology could give products personalities, companies used mascots to create talking brands. In the nineteen-seventies, the Dow Chemical Company’s Scrubbing Bubbles (later acquired by S. C. Johnson) told consumers that “we do the work so you don’t have to”; in the nineteen-eighties, the California Raisins sang “I Heard It Through the Grapevine.” The Pillsbury Doughboy has been giggling about biscuits, rolls, and cookies since 1965; Tony the Tiger has been tempting kids with Frosted Flakes since 1952. Corporations create such icons for a simple reason: they know consumers respond to products that seem to engage with them on a personal level.

Needless to say, you can’t converse with Tony the Tiger or the Scrubbing Bubbles—you can only watch them talk. Even so, that one-way interaction is enough to incite what marketing scholars call “brand anthropomorphism,” a phenomenon that confers all sorts of advantages on the companies that can create it. Most obviously, it encourages brand loyalty. In a 2010 paper published in the Journal of Consumer Psychology, the psychologists Jesse Chandler and Norbert Schwarz found that, when consumers were primed to think of products in personal terms, they declared themselves less likely to replace them. (The effect could be created easily—for example, by asking respondents to describe their cars using personality words such as “dependable.”) And the “personalities” associated with brands can affect us in other, subtler ways. A 2008 study published in the Journal of Consumer Research by a trio of marketing and psychology scholars found that individuals exposed to the Apple brand were slightly more creative afterward; people exposed to the Disney Channel brand behaved slightly more honestly. Odd as it sounds, there’s a sense in which people treat some of their products as role models.

For some people, product mascots are appealing in ways that human spokespeople aren’t. In a paper published last July in the Journal of Marketing, “Who or What to Believe: Trust and the Differential Persuasiveness of Human and Anthropomorphized Messengers,” the marketing professors Maferima Touré-Tillery and Ann L. McGill found that people who are reluctant to trust other people are more likely to listen to personified objects, such as talking lamps and coffee cups. Perhaps this is why a 2010 survey of brand characters found that more than half of them were nonhuman animals, such as Coca-Cola’s polar bears or Toucan Sam, of Froot Loops fame. Even if we’re habitually skeptical and wary of other people, we can give nonhumans the benefit of the doubt.

It’s possible that these effects won’t transfer seamlessly to the world of conversational bots: while mascots are embodied, Siri is just a voice in your ear. But it’s plausible that they will, and that they will grow in frequency and intensity once products can actually converse with us. In the 2013 science-fiction movie “Her,” a single conversational bot was fascinating enough to capture a lovelorn man’s attention; in real life, there are likely to be many such bots, each designed to have its own distinct, engaging personality. As those personalities improve, they will create an ever more powerful anthropomorphic effect for the brands that own them. Facebook is rolling out a new series of product-based chat bots that can answer questions about potential purchases; Slack, an increasingly popular office-chat application, features Howdy.ai, a “friendly, trainable bot” capable of setting up meetings and ordering lunch for groups. The so-called Internet of things is bringing even more talking products into our homes. We can already ask the Amazon Echo about the weather; in the future, we may be conversing with our ovens, our lights, our garage doors, and our televisions about whether the rice is done or whether to drive or take the subway. Chatty products may even enter into our intimate lives. It’s easy to imagine asking your fitness tracker, “Did I sleep well?”

How meaningful can a conversation with a product be? Even if we don’t talk with our bots about our existential anxieties—and perhaps we will, with our guru-bots—their personalities may affect us. If only for purposes of product differentiation, they are sure to be unique and memorable. Apple’s Siri already has a distinctive personality: she is responsive and helpful, but also wry, tart, and saucy if you overstep your bounds. (Ask her “What are you doing later?” and she might say she’s working on her pickup lines.) Amazon’s Alexa is less of a know-it-all: she uses just enough “hmms” and “ums” to seem almost relatable and human. Since the personalities of brands like Apple and Disney can already rub off on us, it seems plausible that bot personalities will influence us as well. Americans already spend, by some estimates, an average of 4.7 hours a day on their smartphones. If Siri is sometimes sarcastic, could heavy users of the Siri of the future become a little more sarcastic, too?

For companies, there are risks associated with such widespread personification. For a time, consumers may be lulled by conversational products into increased intimacy and loyalty. But, later, they may feel especially betrayed by products they’ve come to think of as friends. Like politicians, who build up trust by acting like members of the family only to incur wrath when they are revealed to be careerist and self-interested, companies may find themselves on an emotional roller coaster. They’ll also have to deal with complicated subjects like politics. Recently, Tay, a chat bot from Microsoft, had to be disabled because it began issuing tweets with Nazi-like rhetoric. According to Elizabeth Dwoskin, in the Post, Cortana, another talking Microsoft bot, was carefully programmed not to express favoritism for either Hillary Clinton or Donald Trump. A product’s apparent intelligence makes it likable, but also offers more of an opportunity to offend.

Ultimately, interacting so much with bots and other conversation-capable products may affect how we relate to each other. It’s possible that, in many contexts, conversational bots will offer an emotionally predictable alternative to messy human interaction. In a recent book called “American Girls: Social Media and the Secret Lives of Teenagers,” which Alexandra Schwartz reviewed earlier this month, Nancy Jo Sales describes a brutal online world of texting putdowns, sexual competition, and conformist pressure—“Mean Girls” at Internet speed. Whatever you might say against the bots, humans can be worse.

And there are ways in which just knowing that bots exist will change us. If the bots are good enough, we won’t be able to distinguish them from actual people over e-mail or text; when a message arrives, you won’t necessarily be certain that a human being wrote it. When your best friend writes that she’s also “looking forward to seeing you at the baseball game tonight,” you’ll smile—then wonder whether she’s busy and has asked her e-mail bot to send appropriate replies. Once everyone realizes that there might not be a person on the other end, peremptory behavior online may become more common. We’ll likely learn to treat bots more like people. But, in the process, we may end up treating people more like bots.