Why doesn't therapeutic alliance deepen in online counseling?
Does the therapeutic relationship naturally strengthen through continued text-based contact, or do counselor-client pairs typically stagnate or decline? The question challenges assumptions underlying chatbot design.
Using LLMs with chain-of-thought reasoning to evaluate therapeutic alliance across early, middle, and late phases of online text-based counseling, this study finds that the relationship does not significantly deepen over time. Agreement on counseling goals and approaches remains constant across phases. Affective bond shows only marginal increase. Nearly 50% of counselor-client pairs experience either decline or no change in alliance strength, with less than 3% improving by at least one level.
The finding challenges the assumption that therapeutic relationships naturally build through continued contact, an assumption that undergirds many therapeutic chatbot designs emphasizing longitudinal engagement. Read alongside "Do therapeutic chatbot bond scores hide deeper safety problems?", the stagnation finding suggests that even human counselors struggle to achieve the relational deepening that chatbot advocates assume will emerge from extended interaction.
The framework operationalizes Bordin's (1979) tripartite model — Goal (mutual agreement on targets), Approach (shared understanding of tasks), and Affective Bond (emotional connection) — adapted specifically for text-based settings using the Observer-rated Working Alliance Inventory (WAI-O-S). The adaptation to text-only interaction is itself significant: most alliance instruments were developed for face-to-face, speech-based therapy.
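The phase-wise scoring and the decline/no-change/improve classification described above can be sketched roughly as follows. This is a minimal illustration, not the study's actual pipeline: the three subscale names follow Bordin's model, but the rating scale and the one-level change threshold are assumptions for the example.

```python
from statistics import mean

# Bordin's three alliance dimensions, as operationalized by WAI-O-S-style
# observer ratings (subscale names from the tripartite model; the item
# lists and 1-5 scale here are illustrative assumptions).
SUBSCALES = ("goal", "approach", "bond")

def phase_score(ratings: dict[str, list[float]]) -> dict[str, float]:
    """Average observer item ratings per subscale for one counseling phase."""
    return {s: mean(ratings[s]) for s in SUBSCALES}

def classify_change(early: float, late: float, level: float = 1.0) -> str:
    """Label a counselor-client pair by whether overall alliance moved
    by at least one level between early and late phases."""
    delta = late - early
    if delta >= level:
        return "improved"
    if delta <= -level:
        return "declined"
    return "no change"
```

Under this sketch, a pair whose overall score drifts from 3.0 early to 3.4 late would count as "no change", which is how small affective-bond gains can coexist with the finding that under 3% of pairs improve by at least one level.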
Analysis of counselor behaviors in poor-alliance sessions reveals two problematic patterns: passive response (responding to client statements without exploring core issues) and boundary-overstepping (excessively directing clients, compromising their autonomy). Feedback tends to be vague and generalized rather than personalized. These patterns mirror the finding of "Do LLM therapists respond to emotions like low-quality human therapists?": LLMs default to the same problematic patterns that characterize struggling human counselors.
A proof-of-concept LLM feedback mechanism showed promising results: counselors who struggled with relationship-building rated LLM-generated feedback positively on understanding alliance (3.43/5), identifying improvement directions (3.49/5), and willingness to adjust strategies (3.74/5). This positions LLMs not as therapist replacements but as therapist support tools; as "Can reinforcement learning optimize therapy dialogue in real time?" also suggests, the supervisor/feedback role may be where LLMs add the most clinical value.
Source: Psychology Therapy Practice
Related concepts in this collection
- Do therapeutic chatbot bond scores hide deeper safety problems? Explores whether patients' reported emotional connection to therapeutic chatbots, which feels genuine, might coexist with clinical failures and damage to how emotions function as self-knowledge. (Link: if human counselors stagnate, chatbot bond scores may reflect the same ceiling.)
- Do LLM therapists respond to emotions like low-quality human therapists? Explores whether language models trained to be helpful default to problem-solving when users share emotions, and whether this behavioral pattern resembles ineffective rather than skillful therapy. (Link: LLM defaults mirror the patterns of struggling human counselors.)
- Can reinforcement learning optimize therapy dialogue in real time? Asks whether RL systems trained on working alliance scores can recommend therapy topics that improve clinical outcomes during live sessions, and whether validated clinical constructs can serve as reward signals for dialogue optimization. (Link: LLMs as therapist supervisors, not replacements.)
- Do therapists accurately perceive the working alliance with patients? Explores whether therapists' own assessments of the therapeutic relationship match what patients actually experience, especially in high-risk cases like suicidality. (Link: alliance measurement bias compounds the stagnation problem.)
- Can we measure therapist-patient alliance from dialogue turns in real time? Explores whether computational methods can detect working alliance quality at turn-level resolution during therapy sessions, enabling immediate feedback on whether the therapeutic relationship is strengthening. (Link: the automated measurement infrastructure needed for LLM-based therapist feedback.)
Original note title
therapeutic alliance does not deepen over time in online text-based counseling — half of counselor-client pairs show decline or stagnation even with experienced counselors