Psychology and Social Cognition · Conversational AI Systems

When should proactive agents push toward their goals versus accommodate users?

Proactive dialogue agents face a tension between reaching their objectives efficiently and keeping users satisfied. This question explores whether these two aims can coexist or must be constantly negotiated.

Note · 2026-02-22 · sourced from Conversation Architecture Structure

Most proactive dialogue research assumes cooperative users — people who follow the agent's topic transitions willingly. I-Pro introduces a more realistic paradigm: the non-cooperative user, who talks about off-path topics when dissatisfied with the agent's choices.

The core tension: reaching the goal topic quickly and maintaining high user satisfaction do not always converge, because the topics closest to the goal and the topics the user prefers may differ. An agent that pushes aggressively toward the goal topic may alienate the user; an agent that only follows the user's preferences may never reach the goal.

The solution is a learned goal weight composed of four factors:

  1. Dialogue turn — how far into the conversation (early = more flexibility, late = more urgency)
  2. Goal completion difficulty — how distant the current topic is from the goal
  3. User satisfaction estimation — real-time tracking of user engagement
  4. Cooperative degree — how willing the user is to follow the agent's lead
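A minimal sketch of how a weight over these four factors might be combined. I-Pro learns the goal weight from interaction, so the linear blend, coefficients, and signatures below are illustrative assumptions rather than the paper's formulation:

```python
def goal_weight(turn: int, max_turns: int,
                goal_distance: float,   # 1. proxy for completion difficulty: 0 = at goal, 1 = far
                satisfaction: float,    # 2. estimated user satisfaction in [0, 1]
                cooperation: float) -> float:  # 3. cooperative degree in [0, 1]
    """Return a value in [0, 1]: how hard to push toward the goal topic now.

    Hand-rolled blend for illustration only; I-Pro learns this weighting.
    """
    urgency = turn / max_turns                  # factor 1: later turns -> more urgency
    difficulty = goal_distance                  # factor 2: farther goal -> more pressure
    # Factors 3 and 4: a dissatisfied or uncooperative user lowers the push.
    receptiveness = 0.5 * satisfaction + 0.5 * cooperation
    raw = 0.4 * urgency + 0.3 * difficulty + 0.3 * receptiveness
    return max(0.0, min(1.0, raw))              # clamp to [0, 1]
```

With these (made-up) coefficients, an early turn with an unreceptive user yields a low weight (accommodate), while a late turn with a receptive user yields a high one (push).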

This adds an important dimension to the passivity problem. Since "Why can't advanced AI models take initiative in conversation?", the research focus has been on making agents MORE proactive. But I-Pro shows that proactivity itself creates a new problem: when should the agent push toward its goal vs. accommodate the user's preference? The answer is neither "always push" nor "always accommodate" — it's a dynamic trade-off that shifts throughout the conversation.

Building on "How can proactive agents avoid feeling intrusive to users?", I-Pro provides a concrete mechanism for implementing the civility dimension: the goal weight modulates how aggressively the agent pursues its objective based on user receptiveness.
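One way such modulation could work in topic selection: score each candidate next topic as a convex blend of goal proximity and user preference, with the goal weight as the mixing coefficient. The scoring rule and the `goal_sim`/`user_pref` inputs here are hypothetical, not I-Pro's exact mechanism:

```python
def choose_topic(candidates: list[str],
                 goal_sim: dict[str, float],   # similarity of each topic to the goal topic
                 user_pref: dict[str, float],  # estimated user preference per topic
                 w: float) -> str:
    """Pick the topic maximizing w * goal proximity + (1 - w) * user preference.

    w is the goal weight in [0, 1]: high w pushes toward the goal,
    low w accommodates the user. Illustrative sketch only.
    """
    return max(candidates, key=lambda t: w * goal_sim[t] + (1 - w) * user_pref[t])
```

The same candidate set yields different choices as the goal weight changes over the dialogue: a high weight favors goal-adjacent topics, a low weight favors what the user wants to talk about.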


Source: Conversation Architecture Structure


Proactive agents face a goal-satisfaction divergence: topics close to the agent's goal and topics the user prefers may not align, requiring a learned four-factor trade-off.