Do chatbot trials against waitlists measure real therapeutic value?
Explores whether comparing therapeutic chatbots only to no-treatment controls—rather than other evidence-based interventions—produces misleading evidence that obscures what actually works and why.
The claim that Woebot provides CBT "implicates a level of care beyond self-help behavioral intervention technologies — it stakes a claim that Woebot is a psychotherapy provider." But what evidence standard is required to make this claim? The field's dominant approach — comparing chatbots to waitlist or psychoeducation controls — is insufficient and potentially harmful.
The problem is structural: developers of technology-driven mental health tools are economically incentivized to conduct research aimed at marketing their interventions. The "better than nothing" RCT is the tool of choice for this purpose. Show your chatbot beats doing nothing, and you have "evidence" for marketing copy.
What is actually needed — and what is common in applied clinical research — is research that demonstrates efficacy in relation to other evidence-based interventions, not just no-treatment controls. Also necessary: research that identifies the underlying mechanisms that contribute to whatever comparative efficacy is demonstrated.
The ELIZA finding makes this concrete: when ELIZA (a non-therapeutic bot) matches Woebot (a CBT bot), the "better than nothing" RCT for Woebot was measuring conversational contact, not CBT delivery. A waitlist-controlled trial would have shown Woebot works. A comparative trial showed it works no better than a 1966 pattern-matcher.
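The trial-design logic can be sketched with a toy simulation (all effect sizes and arm names here are hypothetical, not taken from any actual Woebot or ELIZA study): if a CBT bot and a non-therapeutic bot produce the same average symptom improvement, a waitlist-controlled trial flags both as "effective," while a head-to-head comparison reveals no bot-specific effect.

```python
import random
import statistics

random.seed(0)

def simulate_arm(n, mean_change, sd=5.0):
    """Simulated per-participant symptom-score reductions (hypothetical)."""
    return [random.gauss(mean_change, sd) for _ in range(n)]

# Hypothetical trial: both bots yield the same average improvement,
# driven by conversational contact rather than CBT content.
waitlist = simulate_arm(100, mean_change=1.0)
cbt_bot  = simulate_arm(100, mean_change=4.0)
eliza    = simulate_arm(100, mean_change=4.0)

def mean_diff(a, b):
    """Difference in mean improvement between two trial arms."""
    return statistics.mean(a) - statistics.mean(b)

print(f"CBT bot vs waitlist: {mean_diff(cbt_bot, waitlist):+.1f}")  # large: looks 'effective'
print(f"ELIZA   vs waitlist: {mean_diff(eliza, waitlist):+.1f}")    # equally 'effective'
print(f"CBT bot vs ELIZA:    {mean_diff(cbt_bot, eliza):+.1f}")     # near zero: no specific effect
```

The point of the sketch: both "better than nothing" comparisons return the same positive signal, so a waitlist-only design cannot distinguish CBT delivery from mere conversational contact; only the comparative arm exposes the absence of a specific therapeutic effect.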
This extends to the broader AI therapy landscape. Internet-based psychological interventions cannot reliably detect when an individual is in crisis or needs a different treatment, a serious ethical and clinical gap. Low adherence and high dropout rates mean many users never experience the intended benefits. The "better than nothing" framing obscures both limitations.
Two additional failure modes reinforce this critique. First, LLMs default to prescriptive advice-giving rather than therapeutic exploration, telling patients what to do instead of guiding them to discover insights themselves. This is not CBT delivery; it is a fundamental misunderstanding of the therapeutic process, one that "better than nothing" trials obscure because they measure symptom change, not process quality. Second, the informed-consent gap remains unresolved: patients may not understand that they are receiving a fundamentally different kind of intervention than human therapy, and the "evidence-based" marketing enabled by waitlist-controlled trials actively obscures this difference. Since "Can language models safely provide mental health support?", the methodological critique extends beyond effectiveness to safety: these systems may actively harm through stigma expression and delusion reinforcement, harms that "better than nothing" trials are not designed to detect.
Source: Psychology Chatbots Conversation; enriched from Psychology Therapy Practice
Related concepts in this collection
- What drives chatbot therapeutic benefits, content or conversation?
  If a simple 1960s chatbot matches modern CBT-designed bots on symptom reduction, what's actually healing users? Is it therapeutic technique or just having something that listens?
  (the empirical case that makes this methodological critique concrete)
- Do users worldwide trust confident AI outputs even when wrong?
  Explores whether the tendency to over-rely on confident language model outputs transcends language and culture. Understanding this pattern is critical for designing safer human-AI interaction across diverse linguistic contexts.
  (analogous dynamic: confidence in evidence (marketing) overrides accuracy (clinical truth))
Original note title
better than nothing rcts for therapeutic chatbots create systematic misleading evidence that commercial developers exploit