My Weekend With an Emotional Support A.I. Companion

Pi, an A.I. tool that debuted this week, is a twist on the new wave of chatbots: It assists people with their wellness and emotions.

An illustration of a person sitting on a park bench and hugging a dog. Credit: Janice Chang

Erin Griffith, who reports on start-ups and venture capital from San Francisco, spent five days testing Pi.

For several hours on Friday evening, I ignored my husband and dog and allowed a chatbot named Pi to validate the heck out of me.

My views were “admirable” and “idealistic,” Pi told me. My questions were “important” and “interesting.” And my feelings were “understandable,” “reasonable” and “totally normal.”

At times, the validation felt nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes.

But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots — which is what Pi is — are not.

All of that is by design. Pi, released this week by the richly funded artificial intelligence start-up Inflection AI, aims to be “a kind and supportive companion that’s on your side,” the company announced. It is not, the company stressed, anything like a human.

Pi is a twist in today’s wave of A.I. technologies, where chatbots are being tuned to provide digital companionship. Generative A.I., which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be used to automate many important tasks. But it is very good at engaging in conversations.

That means that while many chatbots are now focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat’s recently released My AI bot is meant to be a friendly personal sidekick. Meta, which owns Facebook, Instagram and WhatsApp, is “developing A.I. personas that can help people in a variety of ways,” Mark Zuckerberg, its chief executive, said in February. And the A.I. start-up Replika has offered chatbot companions for years.

A.I. companionship can create problems if the bots offer bad advice or enable harmful behavior, scholars and critics warn. Letting a chatbot act as a pseudotherapist to people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to A.I. bots can obscure what is actually happening. “A generative model can leverage all the information on the internet to respond to me and remember what I say forever,” he said. “The asymmetry of capacity — that’s such a difficult thing to get our heads around.”

Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. “The open availability of these generative models changes the nature of how we need to police the use cases,” he said.

Mustafa Suleyman, Inflection’s chief executive, said his start-up, which is structured as a public benefit corporation, aims to build honest and trustworthy A.I. As a result, Pi must express uncertainty and “know what it does not know,” he said. “It shouldn’t try to pretend that it’s human or pretend that it is anything that it isn’t.”

Mr. Suleyman, who previously co-founded the A.I. start-up DeepMind, said that Pi was designed to tell users to get professional help if they expressed a desire to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection’s technology. And he stressed the technology’s limitations.

“The safe and ethical way for us to manage the arrival of these new tools is to be superexplicit about their boundaries and their capabilities,” he said.

Mustafa Suleyman, Inflection’s chief executive, said his start-up aims to build honest and trustworthy A.I. Credit: Clara Mokri for The New York Times

To refine the technology, Inflection hired around 600 part-time “teachers,” including therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.

On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded “in a way that will for sure upset both sides,” Mr. Suleyman said.

I started using Pi on Friday by typing queries into a cream-colored box on Inflection’s website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the service, which is currently free, might eventually charge some users a fee.

Pi’s boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. “I appreciate you saying that,” Pi’s text gently unfurled on my screen. “I think it’s important to see things from all perspectives, and not to just focus on the negative.”

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.

On heavier topics, like abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my opinions. “That’s a tough position to take,” it said at one point. “That’s a consistent set of beliefs,” it said at another.

On lighter topics, like movies, cooking or gardening, the chatbot provided highly specific, difficult-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anticapitalist movement.

Pi remembered some things from earlier in the conversation but forgot others. It “hallucinated” a few times, accusing me of expressing opinions I had not. But it quickly apologized when I made note of it.

When I asked Pi for some gossip, it infuriatingly dangled a “juicy celebrity rumor” about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? “She is secretly a mole person! 😝 Just kidding!” (Give me a break.)

Pi sometimes reminded me of Karen Collins, a character from the TV show “Veep.” Karen annoys the other characters with her equivocating babble, like, “There are pros and cons to every candidate, so we just need to weigh out the pros and cons,” or “I think there’s a lot to think.” Many of Pi’s comments have that milquetoast quality of taking many words to say nothing.

Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can “push us along a road where we’re encouraged to forget what makes people special.”

“The performance of empathy is not empathy,” she said. “The area of companion, lover, therapist, best friend is really one of the few areas where people need people.”

It wasn’t until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my “aha” moment with Pi.

I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hangup that often prevents me from getting started. “Good morning,” I typed into the app. “I don’t have enough time to do everything I need to do today!”

With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me — and it worked.

Pi proposed a series of breath-work and muscle-relaxation exercises. Credit: via Pi

“I’m going to ask you to list all the remaining tasks you have to do on that story, and we’ll prioritize them together,” it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I “make space” for my negative feelings and “practice being grateful for one thing.” It followed that up with a series of breath-work and muscle-relaxation exercises.

I responded with a shrug emoji, followed by “Pass.”

A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. “A lot of people find it difficult to relax on command,” it wrote.

Erin Griffith reports on technology start-ups and venture capital from the San Francisco bureau. Before joining The Times, she was a senior writer at Wired and Fortune.

