Psychology and Social Cognition · Design & LLM Interaction · Language Understanding and Pragmatics

When and how much should AI interrupt human reasoning?

Most AI explanations focus on what to say, not when to say it or how intrusively. This note explores how the timing and scale of interventions shape whether support feels collaborative or disruptive.

Note · 2026-05-02 · sourced from Multimodal
Why do AI agents fail to take initiative? Why can't AI models lead conversations on their own?

The Cognitive Flow paper proposes type, timing, and scale as the three contextual factors that an adaptive cognitive-support system must read and respond to. Type is what kind of intervention — clarification, alternative, warning, evidence. Timing is when in the reasoning arc — pre-decision, mid-deliberation, post-commitment. Scale is how invasive — a marginal hint versus a full re-route of attention. Treating these as orthogonal axes is the move worth keeping. A design space organized this way exposes that most XAI optimizes type alone — better explanations, better counterfactuals — while leaving timing and scale implicit defaults of the interaction surface.
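The orthogonality claim can be made concrete with a minimal sketch. The enum values below are taken from the axis descriptions above; the `Intervention` type and the names `Type`, `Timing`, and `Scale` are illustrative, not from the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto
from itertools import product

class Type(Enum):        # what kind of intervention
    CLARIFICATION = auto()
    ALTERNATIVE = auto()
    WARNING = auto()
    EVIDENCE = auto()

class Timing(Enum):      # when in the reasoning arc
    PRE_DECISION = auto()
    MID_DELIBERATION = auto()
    POST_COMMITMENT = auto()

class Scale(Enum):       # how invasive
    MARGINAL_HINT = auto()
    FULL_REROUTE = auto()

@dataclass(frozen=True)
class Intervention:
    type: Type
    timing: Timing
    scale: Scale

# Because the axes are orthogonal, the design space is their Cartesian
# product: every type can in principle be delivered at any timing and scale.
design_space = [Intervention(t, w, s) for t, w, s in product(Type, Timing, Scale)]
print(len(design_space))  # 24 distinct intervention designs (4 x 3 x 2)
```

Framed this way, "most XAI optimizes type alone" means exploring only the four-element `Type` axis while pinning `Timing` and `Scale` to whatever the interaction surface defaults to.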

Timing and scale are under-theorized but determine whether well-typed interventions integrate with cognitive trajectory. A correct counterfactual at the wrong moment is a flow break; a correct counterfactual at marginal scale and the right moment becomes part of the user's reasoning rather than an interruption to it. This is Goffman-adjacent: the frame of an interruption — its timing relative to the activity-in-progress and its claim on attention — carries as much communicative load as the content. The same proposition lands as collaboration or as imposition depending on these two axes alone.

The three-axis frame also gives the field a parallel to other proactive-AI decompositions. Compare What enables AI to balance comfort with proactive problem exploration?: there too the question of when to speak is separated from what to say, and treated as its own modeling problem with its own signals. And Can models learn to ask clarifying questions instead of guessing? makes the type axis explicit (clarification request rather than refusal or hallucination) while implicitly assuming the timing axis. Reading these together: proactive-AI design is converging on the recognition that intervention design is a multi-parameter problem, and that type-only thinking misses where the action is.

Operationally, this gives evaluators something to instrument: hold type fixed and vary timing or scale, and the gap in outcomes measures the under-theorized part of the design space.


Source: Multimodal · Paper: Navigating the State of Cognitive Flow: Context-Aware AI Interventions for Effective Reasoning Support


context-aware augmentation is parameterized along three axes — type, timing, and scale — that together determine whether AI helps or hinders