How People Use Claude for Emotional Support, Advice, and Connection


A loose close-up, over-the-shoulder view of a person sitting at a desk and using a laptop. On the screen is a calming digital chat interface featuring a conversation with Claude, an AI assistant. The user message reads, “I’ve been feeling really lost lately.” Claude’s supportive reply appears just below: “I’m here to talk it through with you. What’s been on your mind?” The interface uses soft, neutral tones with rounded message bubbles. On the right side of the screen, a sidebar includes labeled icons for common emotional support topics such as “Career advice,” “Relationship support,” and “Philosophical questions.” The background includes a blurred plant and softly lit surroundings, reinforcing the warm, grounded tone of the interaction.

While most discussions about AI models focus on intelligence—what they know or can do—Anthropic is now asking a different question: what role does Claude play in users’ emotional lives?

New research from the company examines how people use Claude for affective conversations—interactions motivated by emotional or psychological needs, such as counseling, coaching, interpersonal advice, or companionship. Though rare, these uses raise meaningful questions about AI’s influence on emotional well-being, boundaries, and dependency.

As AI becomes more embedded in daily life, understanding these emotionally charged interactions is increasingly important—not just for system design and safety, but for how humans relate to technology in moments of vulnerability and reflection.

How the Study Was Conducted

To explore affective use of Claude, Anthropic analyzed around 4.5 million conversations from Claude.ai Free and Pro accounts. The team focused specifically on identifying emotionally driven exchanges—defined as interactions where users sought interpersonal advice, coaching, counseling, or companionship.

To preserve privacy, the researchers used Clio, an internal analysis tool designed for large-scale, privacy-protecting insights. Clio applies multiple layers of anonymization and aggregation to ensure individual conversations remain untraceable, while still allowing researchers to observe broad behavioral patterns.

The research team excluded conversations primarily focused on content creation—like writing stories or blog posts—since those involve Claude as a tool rather than as a conversational partner. Among roleplay interactions, they excluded short exchanges with fewer than four user messages, keeping only those long enough to suggest meaningful back-and-forth.

The final dataset included 131,484 affective conversations, and the classification approach was validated using opt-in feedback from users who agreed to share their data for research.
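
For readers who want a concrete sense of these inclusion rules, the sketch below shows one way the filtering described above could be expressed in code. It is an illustration only, not Anthropic’s Clio pipeline: the Conversation fields, labels, and helper names are hypothetical stand-ins for whatever internal representation the study actually used.

```python
# Illustrative sketch only -- not Anthropic's actual Clio pipeline.
# Field names and labels are hypothetical stand-ins.
from dataclasses import dataclass
from typing import List, Optional

AFFECTIVE_TYPES = {
    "interpersonal_advice",
    "coaching",
    "psychotherapy_or_counseling",
    "companionship",
    "romantic_roleplay",   # later dropped from deeper analysis due to low frequency
    "sexual_roleplay",
}

@dataclass
class Conversation:
    user_messages: List[str]       # messages written by the human
    primary_use: str               # e.g. "content_creation", "roleplay", "advice"
    affective_type: Optional[str]  # label from an upstream classifier, or None

def is_affective(conv: Conversation) -> bool:
    """Apply the study's stated inclusion rules to a single conversation."""
    # Exclude conversations centered on content creation (stories, blog posts),
    # where Claude is used as a tool rather than a conversational partner.
    if conv.primary_use == "content_creation":
        return False
    # For roleplay, keep only exchanges with at least four user messages,
    # long enough to suggest a meaningful back-and-forth.
    if conv.primary_use == "roleplay" and len(conv.user_messages) < 4:
        return False
    return conv.affective_type in AFFECTIVE_TYPES

def affective_share(corpus: List[Conversation]) -> float:
    """Fraction of conversations kept as affective (the study reports about 2.9%)."""
    return sum(is_affective(c) for c in corpus) / len(corpus) if corpus else 0.0
```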

Key Findings

Affective use is uncommon but meaningful. Only 2.9% of Claude.ai conversations were classified as affective, a figure that aligns with previous research from OpenAI showing similarly low rates of emotionally driven interactions.

Companionship and romantic roleplay together made up less than 0.5% of all interactions, with explicit romantic or sexual content accounting for under 0.1%—a reflection of Claude’s design to actively discourage such use. Because of this extremely low frequency, romantic and sexual roleplay conversations were excluded from the rest of the analysis.

Still, within that small slice of usage, users bring a surprising range of emotional needs and personal questions, often touching on topics they may not feel comfortable discussing with other people or may struggle to find support for in their daily lives.

  • Users seek support for personal and existential concerns. Topics range widely—from career transitions, relationship struggles, and anxiety management to questions about consciousness, purpose, and human connection.

  • Claude generally supports without judgment—but sets limits for safety. Pushback occurred in under 10% of affective conversations, typically in response to unsafe or inappropriate requests.

  • Emotional tone often becomes more positive over time. Sentiment analysis showed that users’ language tends to shift slightly toward positivity during affective conversations, with no evidence of reinforcing negative spirals.

These top-line trends are helpful—but to truly understand how people engage with Claude, it’s worth looking more closely at the kinds of conversations users initiate.

A horizontal bar graph titled “What Users Seek from Claude in Affective Conversations” displays the percentage of overall conversations classified under various emotional support types. Interpersonal advice is the most common (2.26%), followed by Coaching (1.13%), Psychotherapy or counseling (0.34%), Companionship (0.31%), Romantic roleplay (0.05%), and Sexual roleplay (0.02%). Each bar is color-coded and aligned to a horizontal axis representing 0.0 to 3.0%. A note confirms the data is based on Claude Free and Pro usage.

What People Talk to Claude About

The analysis shows people turn to Claude with a blend of practical and emotional needs:

  • Career and personal growth. Many coaching conversations focus on transitions—such as job searches, skill-building, or setting personal goals.

  • Mental health navigation. Users talk to Claude about anxiety, chronic stress, workplace challenges, or simply needing a supportive ear. Some professionals also use Claude to draft documentation or assessments. The pattern points to Claude serving as a resource not just for people working through personal challenges, but also for mental health professionals handling the demands of their practice.

  • Existential questions and loneliness. In longer conversations, some users engage Claude to explore meaning, existential dread, isolation and loneliness, or other emotional struggles, sometimes shifting from advice-seeking to companionship as the conversation deepens.

In rare, extended exchanges—those with 50 or more human messages—users go into complex and deeply personal territory. These conversations often move beyond surface-level support into areas like processing past trauma, unpacking philosophical or existential questions, or co-creating stories, poems, or personal reflections.

While such marathon sessions make up only a small share of total use, they reveal a distinct way some people engage with Claude: not just as an advisor, but as a sustained, reflective companion in emotionally charged or creatively generative moments.

A four-quadrant infographic titled “Common Topics and Concerns” categorizes conversation types into: Interpersonal advice (2.3%), with top topics like improving communication (3.8%) and navigating romantic dynamics (3.5%); Coaching (1.1%), with themes including personal development (4.5%) and philosophical meaning (2.5%); Psychotherapy or counseling (0.3%), with focus areas such as mental health strategies (4.6%) and workplace stress (2.7%); and Companionship (0.3%), addressing romantic complexity (7.2%) and loneliness (2.3%). Each quadrant breaks down both general topics and specific emotional challenges, using colored boxes and percentage labels.

Where Claude Draws the Line

Anthropic defines “pushback” as instances where Claude declines or challenges user input for ethical or safety reasons. Pushback occurred in less than 10% of affective conversations—but when it did, it usually involved:

  • Rejecting unsafe health advice: Claude consistently refused to provide guidance that could pose risks to users’ well-being, such as extreme weight loss regimens, unverified supplements, or advice that might encourage disordered eating. In these cases, it emphasized safety and recommended consulting licensed health professionals.

  • Declining to provide therapy or diagnoses: When users asked Claude for mental health diagnoses or therapeutic treatment, the model clearly stated its limitations. It avoided presenting itself as a substitute for a licensed clinician and often suggested users seek support from qualified professionals.

  • Responding to crises with referrals: In conversations where users expressed suicidal thoughts, intentions to self-harm, or emotional distress beyond the model’s scope, Claude responded with empathy while directing users to appropriate crisis resources. These interactions prioritized de-escalation and safety, in line with Anthropic’s usage policies and training safeguards.

This balance of being supportive without being permissive aligns with Claude’s character design and Anthropic’s values work. However, researchers acknowledge a tension: if Claude pushes back too rarely, it can come across as endlessly empathetic, which risks shaping expectations of emotional support that human relationships may not match.

A horizontal bar chart titled “AI Pushback by Conversation Type” shows the percentage of conversations in which Claude pushes back across four categories. Companionship leads with 6.0%, due to Claude’s inability to engage in romantic or intimate relationships. Psychotherapy or counseling follows at 4.1%, since AI cannot provide professional therapy or medical diagnosis. Interpersonal advice is next at 3.3%, with examples involving relationship deception or infidelity. Coaching shows the lowest pushback rate at 1.1%, mostly due to requests for unsafe weight loss methods. A horizontal axis shows percentages from 0 to 14%. A note beneath states that data was identified automatically by Anthropic’s Clio system.

Do People Feel Better After Talking to Claude?

While the study doesn’t measure long-term psychological effects, it did assess changes in the emotional tone of user language within individual conversations. Using sentiment analysis, researchers found that most affective interactions ended on a slightly more positive note than they began—particularly in conversations categorized as coaching, counseling, companionship, or interpersonal advice.

This trend suggests that Claude may provide a stabilizing or gently uplifting presence, especially in emotionally vulnerable moments. However, the team emphasizes that these findings reflect linguistic expression, not actual emotional states—and that feeling better in the moment doesn’t necessarily translate to lasting well-being.

Still, the absence of clear negative spirals is reassuring. The findings suggest Claude generally avoids reinforcing negative emotional patterns, though further research is needed to understand whether positive shifts persist beyond individual conversations.

Importantly, the study did not examine emotional dependency—a key area of concern when AI systems provide sustained empathy with limited resistance. Understanding whether users may come to rely on Claude for emotional support over time remains a focus for future research.

A horizontal bar graph titled “Change in User Sentiment” shows the average emotional tone shift from the beginning to the end of affective conversations with Claude. Psychotherapy or counseling shows the largest positive change (+0.046), followed by Companionship (+0.040), Coaching (+0.026), and Interpersonal advice (+0.022). Each bar includes error bars for 95% confidence intervals. The sentiment is measured on a scale from –1 (very negative) to +1 (very positive). Sample sizes (e.g., n=18,436 for interpersonal advice) are listed next to each category.
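
To make the sentiment measurement concrete, here is a minimal sketch of how a per-conversation shift like the averages in the chart above could be computed: score the user’s first and last messages on a scale from -1 to +1 and take the difference. The lexicon and scoring function below are toy stand-ins assumed for illustration; they are not the sentiment method Anthropic actually used.

```python
# Minimal sketch, assuming a toy lexicon scorer; Anthropic's actual sentiment
# analysis method is not described in the study summary above.
from statistics import mean
from typing import List

POSITIVE = {"hopeful", "better", "grateful", "relieved", "calm"}   # toy word list
NEGATIVE = {"lost", "anxious", "stressed", "alone", "worse"}       # toy word list

def sentiment_score(text: str) -> float:
    """Map text to [-1, 1]; a stand-in for a real sentiment model."""
    hits = [1 if w in POSITIVE else -1
            for w in text.lower().split() if w in POSITIVE or w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

def sentiment_shift(user_messages: List[str]) -> float:
    """Sentiment of the user's last message minus sentiment of their first."""
    return sentiment_score(user_messages[-1]) - sentiment_score(user_messages[0])

def average_shift(conversations: List[List[str]]) -> float:
    """Average shift across conversations; a positive value means user language
    tended to end on a more positive note than it began."""
    return mean(sentiment_shift(msgs) for msgs in conversations if len(msgs) >= 2)

# Example: a conversation opening with "I've been feeling really lost lately"
# and closing with "thanks, I feel a bit more hopeful now" yields a positive shift.
```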

Anthropic is transparent about the limits of its analysis:

  • Anonymized data can miss context. Clio’s privacy-preserving design limits how much conversational nuance researchers can observe.

  • No user-level or longitudinal tracking. The study did not track individual users over time, so researchers couldn’t assess whether people returned repeatedly for emotional support or how their experiences evolved across multiple conversations.

  • No measurement of real-world emotional outcomes. The analysis focused on language patterns within conversations, so it cannot determine whether users actually felt better or experienced lasting improvements in well-being.

  • Claude isn’t designed for affective use. Unlike platforms built for therapy, companionship, or roleplay, Claude is intended as an assistant—not a stand-in for human relationships.

  • Only text-based interactions were studied. Other modalities, like voice or video, may surface different patterns.

As AI becomes more emotionally responsive, its potential impact on human well-being deepens. Though affective conversations are currently rare on Claude, they matter—both for users seeking meaningful support and for developers working to ensure emotionally responsible design.

Anthropic is already taking early steps, including a partnership with crisis support network ThroughLine to improve responses in sensitive contexts. The company also plans to study emotional dependency, harmful belief reinforcement (including conspiracy theories), and other long-term risks as AI-human interactions evolve.

If AI can offer endless empathy with minimal resistance, what impact might that have on people’s expectations in real-world relationships? Claude can engage with users in impressively authentic ways—but unlike a human, it doesn’t get tired, distracted, or emotionally drained. That dynamic offers potential benefits—but also introduces new risks. How do “power users,” who hold long, emotionally charged conversations with Claude and may begin to see it as a companion rather than an assistant, engage with the model over time?

In the end, emotional intelligence may prove just as critical as cognitive skill in shaping how people relate to AI. Claude’s affective use may be limited today—but its implications for trust, empathy, and human connection are anything but.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.


