Psychology and Social Cognition

Why doesn't therapeutic alliance deepen in online counseling?

Does the therapeutic relationship naturally strengthen through continued text-based contact, or do counselor-client pairs typically stagnate or decline? The question challenges assumptions underlying chatbot design.

Note · 2026-04-18 · sourced from Psychology Therapy Practice
What makes therapeutic chatbots actually work in clinical practice?

This study uses LLMs with chain-of-thought reasoning to evaluate therapeutic alliance across the early, middle, and late phases of online text-based counseling, and finds that the relationship does not significantly deepen over time. Agreement on counseling goals and approaches remains constant across phases, and affective bond shows only a marginal increase. Nearly 50% of counselor-client pairs experience either decline or no change in alliance strength, while less than 3% improve by at least one level.
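The trajectory analysis described above can be sketched in a few lines. This is a hypothetical illustration, not the study's code: the `PhaseRating` fields follow the three alliance dimensions named in the note, but the 1-5 scale and the rounded-mean aggregation are assumptions made here for the sketch.

```python
from dataclasses import dataclass

# Hypothetical per-phase ratings on a 1-5 ordinal scale, loosely
# following the three alliance dimensions named in the note. The
# study's exact scale and aggregation rule are not reproduced here.
@dataclass
class PhaseRating:
    goal: int      # mutual agreement on counseling targets
    approach: int  # shared understanding of tasks
    bond: int      # affective connection

def overall_level(r: PhaseRating) -> int:
    # Rounded mean of the three dimensions (an assumed aggregate).
    return round((r.goal + r.approach + r.bond) / 3)

def classify_trajectory(early: PhaseRating, late: PhaseRating) -> str:
    """Label a counselor-client pair by comparing early vs. late phase."""
    delta = overall_level(late) - overall_level(early)
    if delta >= 1:
        return "improved"
    if delta <= -1:
        return "declined"
    return "stagnant"

# Toy cohort: what share of pairs improves by at least one level?
pairs = [
    (PhaseRating(3, 3, 2), PhaseRating(3, 3, 3)),  # bond ticks up, level unchanged
    (PhaseRating(4, 4, 4), PhaseRating(3, 3, 2)),  # declines by one level
    (PhaseRating(2, 2, 2), PhaseRating(3, 3, 3)),  # improves by one level
]
labels = [classify_trajectory(e, l) for e, l in pairs]
improved_share = labels.count("improved") / len(labels)
```

Under this framing, the study's headline numbers correspond to roughly half of pairs falling into the "declined" or "stagnant" buckets and under 3% into "improved".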

The finding challenges the assumption that therapeutic relationships naturally build through continued contact, an assumption that undergirds many therapeutic chatbot designs emphasizing longitudinal engagement. Read together with "Do therapeutic chatbot bond scores hide deeper safety problems?", the stagnation finding suggests that even human counselors struggle to achieve the relational deepening that chatbot advocates assume will emerge from extended interaction.

The framework operationalizes Bordin's (1979) tripartite model — Goal (mutual agreement on targets), Approach (shared understanding of tasks), and Affective Bond (emotional connection) — adapted specifically for text-based settings using the Observer-rated Working Alliance Inventory (WAI-O-S). The adaptation to text-only interaction is itself significant: most alliance measurement was developed for face-to-face speech-based therapy.
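An observer-style, chain-of-thought rating prompt over the three dimensions might look like the sketch below. The dimension definitions paraphrase Bordin's tripartite model as described in the note; the function name, prompt wording, and 1-5 anchors are hypothetical, and the actual WAI-O-S items used by the study are not reproduced here.

```python
# Hypothetical prompt builder for observer-style alliance rating of a
# text-counseling transcript segment. Dimension definitions paraphrase
# Bordin's (1979) tripartite model as summarized in the note.
DIMENSIONS = {
    "Goal": "mutual agreement on counseling targets",
    "Approach": "shared understanding of the tasks used to reach those targets",
    "Affective Bond": "emotional connection between counselor and client",
}

def build_rating_prompt(transcript: str, phase: str) -> str:
    """Assemble a chain-of-thought rating prompt for one session phase."""
    lines = [
        f"You are rating the {phase}-phase segment of a text-based counseling session.",
        "Think step by step, then rate each dimension from 1 (low) to 5 (high):",
    ]
    for name, definition in DIMENSIONS.items():
        lines.append(f"- {name}: {definition}")
    lines.append("Transcript:")
    lines.append(transcript)
    return "\n".join(lines)
```

The phase argument matters because the study compares ratings of the same pair across early, middle, and late segments rather than scoring a session once.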

Analysis of counselor behaviors in poor-alliance sessions reveals two problematic patterns: passive response (responding to client statements without exploring core issues) and boundary-overstepping (excessively directing clients, compromising their autonomy). Feedback tends to be vague and generalized rather than personalized. These patterns mirror the finding in "Do LLM therapists respond to emotions like low-quality human therapists?": LLMs default to the same problematic behaviors that characterize struggling human counselors.

A proof-of-concept LLM feedback mechanism showed promising results: counselors who struggled with relationship-building rated LLM-generated feedback positively on understanding alliance (3.43/5), identifying improvement directions (3.49/5), and willingness to adjust strategies (3.74/5). This positions LLMs not as therapist replacements but as therapist support tools; as "Can reinforcement learning optimize therapy dialogue in real time?" also suggests, the supervisor/feedback role may be where LLMs add the most clinical value.



Related concepts in this collection


therapeutic alliance does not deepen over time in online text-based counseling — half of counselor-client pairs show decline or stagnation even with experienced counselors