Psychology and Social Cognition Conversational AI Systems

How can proactive agents avoid feeling intrusive to users?

Explores why proactive conversational agents often feel annoying rather than helpful, and what design dimensions could prevent them from violating user expectations and autonomy.

Note · 2026-02-22 · sourced from Conversation Agents

The push to make conversational agents proactive carries an underexamined risk: without thoughtful design, proactive systems read as intrusive rather than helpful. Since Does machine agency exist on a spectrum rather than binary?, the transition from reactive (level 3) to proactive (level 4) is precisely where users welcome convenience but resist ceding decision-making control. Initiative that violates user expectations produces annoyance, not engagement.

The Intelligence-Adaptivity-Civility (IAC) taxonomy frames proactive agent design across three dimensions:

- Intelligence: how accurately the agent predicts the user's needs
- Adaptivity: how well the agent's behavior adjusts to the current context and user
- Civility: whether acting now is socially appropriate

The critical insight is that Intelligence and Adaptivity without Civility produce a capable but socially blind agent. An agent that accurately predicts your needs but interrupts at the wrong moment, overrides your conversational direction, or assumes familiarity you haven't granted is worse than a passive one.
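The civility-as-precondition reading of the taxonomy can be sketched as a gate rather than a tradeoff term. All names, scores, and thresholds below are illustrative assumptions, not part of the IAC taxonomy itself:

```python
# Hypothetical sketch: civility acts as a hard gate on a proactive move,
# not a factor traded off against intelligence and adaptivity.
from dataclasses import dataclass

@dataclass
class IACScores:
    intelligence: float  # how well the agent predicted the user's need (0-1)
    adaptivity: float    # how well the move fits the current context (0-1)
    civility: float      # social appropriateness of acting right now (0-1)

def should_act(scores: IACScores, civility_gate: float = 0.7) -> bool:
    # A capable but socially blind agent fails here no matter how high its
    # intelligence and adaptivity scores are: civility is a precondition.
    if scores.civility < civility_gate:
        return False
    return (scores.intelligence + scores.adaptivity) / 2 > 0.5

# A confident prediction at the wrong moment is still suppressed:
should_act(IACScores(intelligence=0.95, adaptivity=0.9, civility=0.3))  # False
```

The design choice worth noting: because civility is a gate rather than a weighted term, no amount of predictive accuracy can buy back a socially inappropriate interruption.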

This maps to the broader tension between capability and social appropriateness that runs through the chatbot psychology research. Since Does chatbot personalization build trust or expose privacy risks?, more capable agents raise higher social expectations. Proactivity intensifies this: an agent that takes initiative implicitly claims social standing in the conversation.

The practical implication: proactive agent design is a design problem, not just an AI capabilities problem. The civility dimension requires understanding conversational norms, turn-taking expectations, and the pragmatics of initiative — domains where current systems have significant gaps.

DiscussLLM's "interruption accuracy" metric operationalizes the civility dimension directly: it measures the percentage of turns where the model correctly remains silent. A model that incorrectly interrupts a multi-party discussion has failed the civility gate — regardless of how good its contribution would have been. This is the first metric to explicitly evaluate the absence of intervention as a conversational skill.
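A minimal sketch of an interruption-accuracy style metric, assuming per-turn gold labels for whether speaking was appropriate. The function and variable names are illustrative, not DiscussLLM's actual API:

```python
# Fraction of silence turns handled correctly: the model is scored only on
# turns where the right move was to remain silent, so a good contribution
# delivered at the wrong moment still counts as a failure.

def interruption_accuracy(gold_should_speak: list[bool],
                          model_spoke: list[bool]) -> float:
    silence_turns = [i for i, g in enumerate(gold_should_speak) if not g]
    if not silence_turns:
        return 1.0  # no silence turns to evaluate
    correct = sum(1 for i in silence_turns if not model_spoke[i])
    return correct / len(silence_turns)

# Three turns call for silence; the model wrongly interrupts once:
gold  = [False, True, False, False]
model = [False, True, True,  False]
interruption_accuracy(gold, model)  # 2/3
```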

The civility dimension becomes more complex when users are non-cooperative. Since When should proactive agents push toward their goals versus accommodate users?, the I-Pro framework reveals that dissatisfied users talk about off-path topics, creating a tension between agent goals and user autonomy. A four-factor goal weight (turn progress, task difficulty, user satisfaction, cooperativeness) learns when to push toward goals vs. accommodate. Complementary evidence from ACCENTOR (adding commonsense-driven chit-chat to task-oriented dialogue) and ProsocialDialog (ensuring proactive suggestions follow prosocial norms) shows that the civility dimension is not merely about restraint — it includes knowing when and how to insert socially appropriate contributions that advance the conversation.
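The four-factor goal weight can be sketched as a scalar balancing pressure to push against signals to accommodate. The linear form, weights, and direction of each factor below are illustrative assumptions; I-Pro learns this weighting from interaction data rather than hand-coding it:

```python
# Hedged sketch of an I-Pro-style goal weight combining the four named
# factors. Higher output -> push toward the agent's goal; lower ->
# accommodate the user's off-path topic. All inputs normalized to [0, 1].

def goal_weight(turn_progress: float, task_difficulty: float,
                user_satisfaction: float, cooperativeness: float,
                w=(0.3, 0.2, 0.25, 0.25)) -> float:
    # Pushing is more attractive when progress lags and the task is hard,
    # but a dissatisfied or uncooperative user argues for accommodation.
    push_pressure = w[0] * (1 - turn_progress) + w[1] * task_difficulty
    user_pushback = w[2] * (1 - user_satisfaction) + w[3] * (1 - cooperativeness)
    return max(0.0, push_pressure - user_pushback)

# A dissatisfied, uncooperative user drives the weight to zero (accommodate):
goal_weight(0.4, 0.6, 0.2, 0.3)  # 0.0
```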

Horvitz's foundational nine design principles for proactive conversational agents (1999) provide actionable criteria for the civility dimension: the system must be (1) valuable for the user, (2) pertinent to the situation, (3) competent with respect to its abilities and knowledge, (4) unobtrusive, (5) transparent, (6) controllable, (7) deferent to the user, (8) anticipatory about current and future needs, and (9) safe. A systematic review of proactive behavior in voice assistants finds that only safety-critical and emergency situations demonstrate clear benefits for proactivity — all other scenarios produce mixed findings. Voice assistants face additional civility challenges: they are not embodied, lack non-verbal cues or tangible "presence," and presenting multiple options through speech demands more time than GUIs while basic operations like undoing or browsing are harder to perform.

The degree of proactivity should be tailored to context and use case, ranging from reactive responses (awaiting user prompts) to fully autonomous actions. Since When should human-agent systems ask for human help?, the fundamental challenge remains: there is no objective signal for when proactive intervention helps vs. hinders.
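The reactive-to-autonomous range can be made concrete as a graded scale with a simple selection rule. The level names and the rule below are assumptions for illustration; only the safety-critical exception is grounded in the review's finding:

```python
# Illustrative proactivity scale. Per the systematic review cited above,
# only safety-critical situations show a clear benefit from high
# proactivity, so the default everywhere else is deliberately conservative.
from enum import IntEnum

class Proactivity(IntEnum):
    REACTIVE = 0    # act only on explicit user prompts
    NOTIFY = 1      # surface information, take no action
    SUGGEST = 2     # propose an action, await confirmation
    AUTONOMOUS = 3  # act without asking

def choose_level(safety_critical: bool, user_opted_in: bool) -> Proactivity:
    if safety_critical:
        return Proactivity.AUTONOMOUS  # the one clearly supported case
    # Absent an objective signal that intervention helps, default low.
    return Proactivity.SUGGEST if user_opted_in else Proactivity.REACTIVE
```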


Source: Conversation Agents, Conversation Topics Dialog, Conversation Architecture Structure, Design Frameworks
