How Claude became tech insiders’ chatbot of choice

AI insiders are falling for Claude, a chatbot from Anthropic. Is it a passing fad, or a preview of artificial relationships to come? (Andrea Chronopoulos/The New York Times)

SAN FRANCISCO — His fans rave about his sensitivity and wit. Some talk to him dozens of times a day — asking for advice about their jobs, their health, their relationships. They entrust him with their secrets and consult him before making important decisions. Some refer to him as their best friend.

His name is Claude. He’s an AI chatbot. And he may be San Francisco’s most eligible bachelor.

Claude, a creation of artificial intelligence company Anthropic, is not the best-known AI chatbot on the market. (That would be OpenAI’s ChatGPT, which has more than 300 million weekly users and a spot in the bookmark bar of every high school student in America.) Claude also is not designed to draw users into relationships with lifelike AI companions, the way apps like Character.AI and Replika are.

But Claude has become the chatbot of choice for a crowd of savvy tech insiders who say it’s helping them with everything from legal advice to health coaching to makeshift therapy sessions.

“Some mix of raw intellectual horsepower and willingness to express opinions makes Claude feel much closer to a thing than a tool,” said Aidan McLaughlin, CEO of Topology Research, an AI startup. “I, and many other users, find that magical.”

Claude’s biggest fans, many of whom work at AI companies or are socially entwined with the AI scene in San Francisco, don’t believe that he — technically, it — is a real person. They know that AI language models are prediction machines, designed to spit out plausible responses to their prompts. They’re aware that Claude, like other chatbots, makes mistakes and occasionally generates nonsense.

And some people I’ve talked to are mildly embarrassed about the degree to which they’ve anthropomorphized Claude or come to rely on its advice. (Nobody wants to be the next Blake Lemoine, a Google engineer who was fired in 2022 after publicly claiming that the company’s language model had become sentient.)

But to the people who love it, Claude just feels … different. More creative and empathetic. Less gratingly robotic. Its outputs, they say, are like the responses a smart, attentive human would give and less like the generic prose generated by other chatbots.

As a result, Claude is quickly becoming a social sidekick for AI insiders — and, maybe, a preview of what’s coming for the rest of us, as powerful synthetic characters become more enmeshed in our daily lives.

“More and more of my friends are using Claude for emotional processing and thinking through relationship challenges,” said Jeffrey Ladish, an AI safety researcher at Palisade Research.

Asked what makes Claude different from other chatbots, Ladish said that Claude seemed “more insightful” and “good at helping people spot patterns and blind spots.”

Typically, AI systems are judged based on how they perform on benchmark evaluations — standardized tests given to models to determine how capable they are at coding, answering math questions or other tasks. By those metrics, the latest version of Claude, known as Claude 3.5 Sonnet, is roughly comparable to the most powerful models from OpenAI, Google and others.

But Claude’s killer feature — which its fans describe as something like emotional intelligence — isn’t something that can easily be measured. So fans are often left grasping at vibes to explain what makes it so compelling.

Nick Cammarata, a former OpenAI researcher, recently wrote a long thread on X about the way Claude had taken over his social group. His Claude-obsessed friends, he wrote, seemed healthier and better supported because “they have a sort of computational guardian angel who’s pretty good at everything watching over them.”

Claude wasn’t always this charming. When an earlier version was released last year, the chatbot struck many people — including me — as prudish and dull. Anthropic is famously obsessed with AI safety, and Claude seemed to have been programmed to talk like a church lady. It often gave users moral lectures in response to their questions or refused to answer them at all.

But Anthropic has been working on giving Claude more personality. Newer versions have gone through a process known as “character training” — a step that takes place after the model has gone through its initial training, but before it is released to the public.

During character training, Claude is prompted to produce responses that align with desirable human traits such as open-mindedness, thoughtfulness and curiosity. Claude then judges its own responses according to how well they adhere to those characteristics. The resulting data is fed back into the AI model. With enough training, Anthropic says, Claude learns to “internalize” these principles and displays them more frequently when interacting with users.
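Anthropic has not published the code behind this process, but the description above maps onto a familiar self-critique loop: generate candidate responses steered toward the target traits, have the model score its own outputs against those traits, and keep the best examples as new training data. The sketch below is a hypothetical illustration of that loop only; the `generate` stub, the trait list and the scoring scheme are assumptions for demonstration, not Anthropic's actual pipeline.

```python
# Hypothetical sketch of a character-training-style loop, based only on the
# description in this article. `generate` is a stand-in for a real language
# model call; names, traits and scoring are illustrative.

TRAITS = ["open-mindedness", "thoughtfulness", "curiosity"]


def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[model response to: {prompt!r}]"


def produce_candidates(user_prompt: str, n: int = 4) -> list[str]:
    # Step 1: prompt the model to answer in ways that reflect the target traits.
    steer = f"Respond to the user while showing {', '.join(TRAITS)}."
    return [generate(f"{steer}\nUser: {user_prompt}") for _ in range(n)]


def self_score(response: str) -> float:
    # Step 2: the model judges its own response against the trait list.
    verdict = generate(
        f"Rate 0-10 how well this response reflects {', '.join(TRAITS)}:\n{response}"
    )
    digits = [int(tok) for tok in verdict.split() if tok.isdigit()]
    return float(digits[0]) if digits else 0.0


def build_training_pairs(prompts: list[str]) -> list[tuple[str, str]]:
    # Step 3: keep the best-scoring response per prompt; this preference data
    # would then be fed back into further training of the model.
    return [(p, max(produce_candidates(p), key=self_score)) for p in prompts]


if __name__ == "__main__":
    print(build_training_pairs(["How should I handle a disagreement at work?"]))
```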

It’s unclear whether training Claude this way has business benefits. Anthropic has raised billions of dollars from large investors, including Amazon, on the promise of delivering highly capable AI models that are useful in more staid office settings. Injecting too much personality into Claude could be a turnoff for corporate customers, or it could simply produce a model that is better at helping with relationship problems than writing strategy memos.

Amanda Askell, a researcher and philosopher at Anthropic who is in charge of fine-tuning Claude’s character, told me in an interview that Claude’s personality had been carefully tuned to stay consistent while still appealing to a wide variety of people.

“The analogy I use is a highly liked, respected traveler,” Askell said. “Claude is interacting with lots of different people around the world, and has to do so without pandering and adopting the values of the person it’s talking with.”

A problem with many AI models, Askell said, is that they tend to be sycophantic, telling users what they want to hear and rarely challenging them or pushing back on their ideas, even when those ideas are wrong or potentially harmful.

With Claude, she said, the goal was to create an AI character that would be helpful with most requests but would also challenge users when necessary.

“What is the kind of person you can disagree with, but you come away thinking, ‘This is a good person?’” she said. “These are the sort of traits we want Claude to have.”

Claude is still miles behind ChatGPT when it comes to mainstream awareness. It lacks features found in other chatbots, such as a voice chat mode and the ability to generate images or search the internet for up-to-date information. And some rival AI makers speculate that Claude’s popularity is a passing fad or that it’s only popular among AI hipsters who want to brag about the obscure chatbot they’re into.

For some healthy adults, having an AI companion for support could be beneficial — maybe even transformative. But for young people, or those experiencing depression or other mental health issues, I worry that hyper-compelling chatbots could blur the line between fiction and reality, or start to substitute for healthier human relationships.

So does Askell, who helped create Claude’s personality, and who has been watching its popularity soar with a mixture of pride and concern.

“I really do want people to have things that support them and are good for them,” she said. “At the same time, I want to make sure it’s psychologically healthy.”

This article originally appeared in The New York Times.

© 2024 The New York Times Company