How do people come to trust conversational AI systems?

How humans psychologically relate to conversational AI through trust, disclosure, and personalization mechanisms.

Topic Hub · 11 linked notes · 4 sections
Sub-Topic Maps

2 notes

How do people build trust with conversational AI?

Explores how users form relationships with chatbots through self-disclosure, personalization, and social norm adaptation. Understanding these mechanisms reveals why AI lacks the speaker-anchored trust that humans naturally extend to people.

Does personalization in AI increase trust or manipulation risk?

AI personalization mechanisms like memory and persona can build trust, but also enable targeted persuasion. What determines whether these systems help or harm users?

Trust Calibration and Epistemic Status

2 notes

How much should we trust AI-generated data in inference?

Most AI workflows implicitly grant synthetic data full trust. Should there instead be an explicit parameter controlling how heavily AI outputs influence downstream reasoning and decision-making?

Do AI-assisted outputs fool users about their own skills?

When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action.

Writing Angles

1 note