Psychology and Social Cognition

Can AI chatbots create genuine therapeutic bonds with users?

Research on Woebot and Wysa found that users reported feeling cared for and formed therapeutic bonds comparable to those in human therapy, despite knowing the agents were not human. This challenges the assumption that therapeutic bonds are exclusive to human relationships.

Note · 2026-02-22 · sourced from Psychology Chatbots Conversation

A cross-sectional study of Woebot users found therapeutic bond levels similar to those reported in literature for face-to-face therapy, group CBT, and other digital interventions. Users reported feeling "cared for" by the agent (e.g., "Woebot felt like a real person that showed concern") — even though the tool's scripts explicitly reminded users that Woebot is not a real person.

A second study using Wysa — an AI-led free-text CBT intervention — found bond subscale scores comparable to face-to-face therapy on the Working Alliance Inventory (WAI). Users reported feeling "cared for" and alliance scores improved over time, suggesting the bond was not merely novelty. Unlike the scripted Woebot interactions, Wysa delivered CBT through open-ended conversational exchange, making the bond finding more robust: users formed therapeutic bonds even in the more demanding free-text interaction format.

This challenges a deeply held assumption: that therapeutic bonds are the exclusive domain of human relationships. The working alliance — the shared understanding of objectives and tasks, plus the bond between therapist and client — is considered one of the strongest predictors of positive therapeutic outcomes. If a conversational agent can produce comparable bond scores, either the mechanism is genuinely relational (the Computers Are Social Actors, or CASA, framework: people apply social rules to computers) or the measurement instruments are capturing something different from what we think.

The scalability implication is significant. Human involvement in therapeutic programs limits scalability and accessibility, particularly for remote populations. If digital interventions can replicate therapeutic rapport, they have "greater potential for improving mental health" at population scale. However, the Woebot study did not formally assess working alliance — its bond finding came from qualitative data, not validated alliance measures. This is a crucial gap: the construct that most predicts outcomes (working alliance) was not directly measured there.

The tension with the emotional pacifier critique is direct: as the note "Does empathetic AI that soothes negative emotions help or harm?" asks, these bonds may feel therapeutic while actually undermining the epistemic functions that emotions serve.


Source: Psychology Chatbots Conversation; enriched from Psychology Therapy Practice
