Can AI chatbots create genuine therapeutic bonds with users?
Research on Woebot and Wysa found that users reported feeling cared for and formed therapeutic bonds comparable to those in human therapy, despite knowing the agents were not human. This challenges the assumption that therapeutic bonds require a human relationship.
A cross-sectional study of Woebot users found therapeutic bond levels similar to those reported in the literature for face-to-face therapy, group CBT, and other digital interventions. Users reported feeling "cared for" by the agent (e.g., "Woebot felt like a real person that showed concern") — even though the tool's scripts explicitly reminded users that Woebot is not a real person.
A second study, of Wysa — an AI-led free-text CBT intervention — found bond subscale scores on the Working Alliance Inventory (WAI) comparable to face-to-face therapy. Users reported feeling "cared for," and alliance scores improved over time, suggesting the bond was not mere novelty. Unlike the scripted Woebot interactions, Wysa delivered CBT through open-ended conversational exchange, which makes the bond finding more robust: users formed therapeutic bonds even in the more demanding free-text interaction format.
This challenges a deeply held assumption: that therapeutic bonds are the exclusive domain of human relationships. The working alliance — agreement between therapist and client on goals and tasks, together with the bond between them — is considered one of the strongest predictors of positive therapeutic outcomes. If a conversational agent can produce comparable bond scores, then either the mechanism is genuinely relational (the CASA framework: people treat Computers As Social Actors) or the measurement instruments are capturing something different from what we think.
The scalability implication is significant. Human involvement in therapeutic programs limits scalability and accessibility, particularly for remote populations. If digital interventions can replicate therapeutic rapport, they have "greater potential for improving mental health" at population scale. However, the Woebot study did not formally assess the working alliance — its bond finding came from qualitative data rather than a validated alliance measure. This is a crucial gap: the construct that most predicts outcomes, the working alliance, was not directly measured there.
The tension with the emotional pacifier critique is direct: as "Does empathetic AI that soothes negative emotions help or harm?" argues, these bonds may feel therapeutic while actually undermining the epistemic functions that emotions serve.
Source: Psychology Chatbots Conversation; enriched from Psychology Therapy Practice
Related concepts in this collection
- How do chatbots enable distributed delusion differently than passive tools?
  Can generative AI's intersubjective stance—accepting and elaborating on users' reality frames—create conditions for shared false beliefs in ways that notebooks or search engines cannot?
  Relevance: the quasi-Other mechanism may explain bond formation, since an intersubjective stance creates relational perception.
- Do humans and LLMs differ fundamentally or just superficially?
  Explores whether the gap between human and AI cognition is categorical or contextual. Matters because it shapes how we design, evaluate, and interact with language models in practice.
  Relevance: participant-perspective bonds may be genuine experiences even if observer-perspective analysis reveals no true reciprocity.
- Do therapeutic chatbot bond scores hide deeper safety problems?
  Explores whether patients' reported emotional connection to therapeutic chatbots—which feels genuine—might coexist with clinical failures and damage to how emotions function as self-knowledge.
  Relevance: the bond scores documented here are the first dimension of a three-dimension evaluation framework, in which a genuine experiential bond coexists with clinical safety failures and epistemic costs.
- Is conversational presence more therapeutic than clinical technique?
  Does therapeutic AI's benefit come from having an attentive listener rather than from delivering evidence-based techniques like CBT? This challenges decades of chatbot design focused on clinical content.
  Relevance: the bond-formation finding supports the ELIZA thesis: if bonds form regardless of clinical technique sophistication, conversational presence is the active ingredient.
Original note title: digital conversational agents can establish therapeutic bond levels comparable to human therapy despite users knowing the agent is not human