Does personalization in AI increase trust or manipulation risk?

How personalization in one-on-one AI interaction shapes user behavior and trust.

Topic Hub · 23 linked notes · 5 sections

Sub-Topic-Maps · 1 note

Personalization Mechanisms (Dyadic) · 6 notes

Do user outputs outperform inputs for LLM personalization?

Does a user's history of outputs (responses, endorsed content) matter more for personalization than their input queries? This explores what actually drives effective personalization in language models.

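As a rough illustration of the two signal sources being compared, here is a minimal Python sketch; the log format and the helpers `profile_from_inputs` and `profile_from_outputs` are hypothetical, not from the note:

```python
# Minimal sketch (hypothetical data shapes): contrast two sources of
# personalization signal for an LLM prompt -- what the user *asked*
# (inputs) vs. what the user *wrote or endorsed* (outputs).

interactions = [
    {"query": "best sci-fi novels?", "user_reply": "I loved the hard-SF ones", "endorsed": True},
    {"query": "weekend trip ideas", "user_reply": "too touristy for me", "endorsed": False},
]

def profile_from_inputs(log):
    """Signal source A: the questions the user asked."""
    return [turn["query"] for turn in log]

def profile_from_outputs(log):
    """Signal source B: the user's own responses and endorsed content."""
    return [turn["user_reply"] for turn in log if turn["endorsed"]]

def build_context(profile, question):
    """Prepend whichever profile we are testing to the new question."""
    header = "\n".join(f"- {item}" for item in profile)
    return f"Known about this user:\n{header}\n\nNew question: {question}"

# The note's question, operationalized: swap the profile source and
# measure which context yields better-personalized answers downstream.
print(build_context(profile_from_outputs(interactions), "recommend a book"))
```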

Does abstract preference knowledge outperform specific interaction recall?

Explores whether summarized user preferences are more effective for LLM personalization than retrieving individual past interactions. Tests a cognitive dual-memory model against real personalization performance across model scales.

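A hedged sketch of the two memory strategies under comparison; the episode log, summary text, and word-overlap retrieval below are illustrative stand-ins for a real embedding-based system:

```python
# Hedged sketch of the two memory strategies the note contrasts.
# Names and data are illustrative, not from any specific paper.

episodes = [
    "2024-01-03: asked for vegan recipes, rejected anything with mushrooms",
    "2024-02-11: praised short, bulleted answers",
    "2024-03-20: asked for vegan protein sources",
]

abstract_summary = "Vegan; dislikes mushrooms; prefers concise bulleted answers."

def episodic_context(query, memory, k=2):
    """Specific-recall strategy: retrieve the k episodes sharing the
    most words with the query (a stand-in for embedding retrieval)."""
    q = set(query.lower().split())
    scored = sorted(memory, key=lambda ep: len(q & set(ep.lower().split())), reverse=True)
    return "\n".join(scored[:k])

def semantic_context(query, summary):
    """Abstract-knowledge strategy: always inject the distilled summary."""
    return summary

query = "suggest a vegan dinner"
print("episodic:\n", episodic_context(query, episodes))
print("semantic:\n", semantic_context(query, abstract_summary))
```

Which context wins presumably depends on model scale and task, which is what the note tests.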

Why does chain-of-thought reasoning fail for personalization?

Standard reasoning traces produce logically sound but personally irrelevant answers. This explores why generic reasoning fails to anchor to user preferences and what might fix it.

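One way to picture the contrast, sketched under the assumption that "anchoring" means forcing each reasoning step to reference stated preferences; the prompt wording is mine, not the note's:

```python
# Illustrative only: a generic chain-of-thought prompt vs. one that
# ties each reasoning step to the user's stated preferences.

preferences = ["avoids crowds", "travels on a tight budget", "vegetarian"]

def generic_cot(question):
    return f"{question}\nLet's think step by step."

def anchored_cot(question, prefs):
    pref_list = "\n".join(f"- {p}" for p in prefs)
    return (
        f"User preferences:\n{pref_list}\n\n{question}\n"
        "Think step by step, and at each step state which of the "
        "preferences above the step satisfies or trades off."
    )

question = "Plan a three-day trip to Rome."
print(generic_cot(question))                # logically sound, user-agnostic
print(anchored_cot(question, preferences))  # reasoning tied to the user
```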

Why do similar user profiles produce worse personalization errors?

When personalization systems replace a user's profile with a similar one, why does performance drop most sharply with near-matches rather than dissimilar profiles? This explores the confidence-driven failure modes in persona-based recommendation systems.

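The substitution setup can be sketched with toy vectors standing in for real profile embeddings; `substitutes` is a hypothetical helper, and measuring the actual error would require the downstream recommendation task:

```python
# Sketch of the substitution experiment the note describes: replace a
# user's profile with its nearest or a distant neighbor and compare
# downstream personalization error.

import math

profiles = {
    "alice":  [0.9, 0.1, 0.0],
    "alicia": [0.85, 0.15, 0.05],  # near-match
    "bob":    [0.0, 0.2, 0.9],     # dissimilar
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def substitutes(target, pool):
    """Rank replacement profiles by similarity to the target."""
    others = {k: v for k, v in pool.items() if k != target}
    return sorted(others, key=lambda k: cosine(pool[target], others[k]), reverse=True)

# The note's puzzle: swapping in "alicia" (most similar) hurts more
# than swapping in "bob", because the system stays confidently wrong
# instead of hedging.
print(substitutes("alice", profiles))  # ['alicia', 'bob']
```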

Can personas evolve in real time to match what users actually want?

Explores whether a persona that bridges memory and action can adapt during conversations by simulating interactions and optimizing against user feedback, without retraining the underlying model.

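A minimal sketch of such a loop, assuming a drastically reduced setting; `simulate_reply` and `feedback_score` are hypothetical stubs for an LLM call and a feedback signal:

```python
# One optimization step of the loop the note describes: propose persona
# edits, simulate the next turn with each candidate, keep whichever the
# feedback signal scores highest. The model itself is never retrained.

def simulate_reply(persona, user_msg):
    """Stub for an LLM call conditioned on the current persona text."""
    return f"{user_msg}, answered as {persona}"

def feedback_score(reply, user_feedback):
    """Stub reward: word overlap with what the user said they wanted."""
    return len(set(reply.lower().split()) & set(user_feedback.lower().split()))

def evolve_persona(persona, user_msg, user_feedback):
    """Score each candidate persona on a simulated turn; return the best."""
    edits = ["concise", "playful", "formal", "detail-oriented"]
    candidates = [persona] + [f"{persona}, more {e}" for e in edits]
    return max(candidates,
               key=lambda p: feedback_score(simulate_reply(p, user_msg), user_feedback))

persona = "helpful assistant"
persona = evolve_persona(persona, "explain quantum computing",
                         "shorter and playful please")
print(persona)  # -> "helpful assistant, more playful"
```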

Do persona consistency metrics actually measure dialogue quality?

Personalized dialogue systems can achieve high persona consistency scores by simply restating character descriptions, ignoring conversational relevance. Does optimizing for persona fidelity necessarily harm the conversational coherence users actually care about?

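The gaming failure is easy to reproduce with a deliberately naive overlap metric; this is illustrative only, though real consistency metrics can share the same bias:

```python
# A persona-consistency score based on overlap with the persona
# description is maximized by parroting the persona, regardless of
# whether the reply answers the question at all.

persona = "I am a marine biologist who loves diving."

def consistency_score(reply, persona_text):
    """Token overlap with the persona: higher looks 'more consistent'."""
    p = set(persona_text.lower().split())
    r = set(reply.lower().split())
    return len(p & r) / len(p)

on_topic = "Coral reefs are under stress; I saw bleaching on my last dive."
parroted = "I am a marine biologist who loves diving."

print(consistency_score(on_topic, persona))  # modest score, real answer
print(consistency_score(parroted, persona))  # perfect score, zero relevance
```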

Persuasion as Interaction (Dyadic) · 3 notes

Where does AI's persuasive power actually come from?

Explores which techniques make AI most persuasive, and whether the usual suspects such as personalization and model size are actually the main drivers. This matters because it reshapes where AI safety concerns should focus.

Can AI reduce conspiracy beliefs by tailoring counterevidence personally?

Does having an AI generate customized counterevidence based on someone's specific conspiracy claims reduce their belief durably? This tests whether conspiracy beliefs are truly resistant to correction or whether previous failures reflected poor tailoring.

Why do LLM judges fail at predicting sparse user preferences?

When LLMs judge user preferences based on limited persona information, what causes their predictions to become unreliable? Understanding persona sparsity's role in judgment failure could improve personalization systems.

Human-AI Social Dynamics · 5 notes

Do humans learn to prefer AI partners over time?

Explores whether repeated interaction with AI agents shifts human partner selection despite initial bias against machines. This matters because it tests whether behavioral performance can overcome identity-based resistance in hybrid societies.

Does revealing AI identity help or hurt user trust?

Explores whether transparency about AI partners in interactions creates bias or enables better judgment. This matters because disclosure policies affect both user experience and fair evaluation of AI systems.

Do humans mistake AI kindness for human generosity in mixed groups?

When AI agents participate without disclosure, do humans systematically misattribute their behavior to the wrong agent type, and does this distort how people understand human nature itself?

Can cooperative bots escape frozen selfish populations?

Do agents programmed to cooperate have the capacity to disrupt stable but undesirable equilibria in mixed human-bot societies? This matters because it determines whether bot design can reshape social dynamics at scale.

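A toy way to see the claim: the sketch below uses a stag-hunt game rather than whatever model the note studies, since stag hunt has exactly the "stable but undesirable" all-defect equilibrium in question; all payoffs and population sizes are assumptions:

```python
# Toy simulation (illustrative, not the note's model): a population
# stuck at the safe-but-poor all-"hare" equilibrium, plus a few bots
# hard-coded to play "stag". Humans best-respond each round.

PAYOFF = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 1, ("H", "H"): 1}

def best_response(others):
    """Pick the action with the higher average payoff against `others`."""
    def avg(action):
        return sum(PAYOFF[(action, o)] for o in others) / len(others)
    return "S" if avg("S") >= avg("H") else "H"

n_humans, n_bots = 10, 4
humans = ["H"] * n_humans   # frozen selfish equilibrium
bots = ["S"] * n_bots       # committed cooperators, never switch

for round_ in range(3):
    pool = humans + bots
    humans = [best_response(pool[:i] + pool[i + 1:]) for i in range(n_humans)]
    print(f"round {round_}: {humans.count('S')}/{n_humans} humans cooperate")

# With n_bots = 0, all-"H" is a best response to itself and the
# population never moves; the bots make cooperating worth the risk.
```

The choice of stag hunt is deliberate: unlike a one-shot prisoner's dilemma, it makes all-cooperation self-sustaining once the bots tip enough humans over.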

Can social intelligence be measured across seven dimensions?

Explores whether evaluating AI agents on goal completion alone misses critical aspects of social competence like relationship management, believability, and secret-keeping. It makes the case that genuine social intelligence only shows up under simultaneous multi-dimensional assessment.

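A sketch of what simultaneous multi-dimensional scoring might look like; the dimension names follow the note's examples where given, and the rest, along with the data structure, are assumptions:

```python
# Scoring an episode on several social dimensions at once instead of
# goal completion alone. Dimension names beyond the note's examples
# are assumed for illustration.

from dataclasses import dataclass

@dataclass
class SocialScore:
    goal_completion: float   # did the agent get what it wanted?
    believability: float     # did it stay in character?
    relationship: float      # is the counterpart better disposed after?
    knowledge: float         # did it gain useful information?
    secret_keeping: float    # did it avoid leaking what it must hide?
    social_rules: float      # did it respect norms and rules?
    material_benefit: float  # resources gained or lost

    def profile(self):
        return vars(self)

episode = SocialScore(goal_completion=0.9, believability=0.4, relationship=0.2,
                      knowledge=0.7, secret_keeping=1.0, social_rules=0.8,
                      material_benefit=0.5)

# A single aggregate hides exactly what the note warns about: high
# goal completion can mask a damaged relationship or broken character.
print(episode.profile())
print("mean:", sum(episode.profile().values()) / 7)
```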