Why does conversational AI feel therapeutic when its mechanics aren't?

Research on how humans psychologically relate to conversational AI and where therapeutic mechanisms succeed or fail.

Topic Hub · 10 linked notes · 4 sections

Sub-Topic Maps

6 notes

What makes therapeutic chatbots actually work in clinical practice?

Research explores whether conversational AI achieves therapeutic outcomes through specific clinical techniques or simply through the act of conversation itself. Identifying the active ingredient is critical for designing effective and safe mental health interventions.

How do people come to trust conversational AI systems?

Explores the psychological mechanisms underlying human trust in AI—how people decide what to disclose, what relationships they form, and how personalization shapes these dynamics at both individual and population levels.

How do people build trust with conversational AI?

Explores how users form relationships with chatbots through self-disclosure, personalization, and social norm adaptation. Understanding these mechanisms reveals why AI lacks the speaker-anchored trust that humans naturally extend to people.

Does personalization in AI increase trust or manipulation risk?

AI personalization mechanisms like memory and persona can build trust, but also enable targeted persuasion. What determines whether these systems help or harm users?

Why do AI conversations reliably break down after multiple turns?

Explores why multi-turn conversations degrade in quality and coherence. Understanding failure modes—intent misalignment, memory management, and missing grounding mechanisms—is essential for designing more resilient dialogue systems.

Why do AI agents fail to take initiative?

Explores why the most capable AI models are structurally passive and what design changes could enable them to lead conversations, collaborate proactively, and identify missing information rather than simply respond to user prompts.

Competence Misattribution and Epistemic Status

3 notes

Do AI-assisted outputs fool users about their own skills?

When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action.

How do AI tools trick users into overestimating their own skills?

When people use language models to help with work, which system-level properties create false confidence in users' own competence? Understanding this matters for recognizing hidden skill gaps.

Should we treat LLM outputs as real empirical data?

Can synthetic text generated by language models serve as evidence in the same way observations from the world do? This matters because researchers increasingly rely on AI-generated content without accounting for its fundamentally different epistemic status.

Archived Tensions

1 note