When and how much should AI interrupt human reasoning?
Most AI explanations focus on what to say, not when to say it or how intrusively. This note explores how the timing and scale of interventions shape whether support feels collaborative or disruptive.
The Cognitive Flow paper proposes type, timing, and scale as the three contextual factors that an adaptive cognitive-support system must read and respond to. Type is what kind of intervention — clarification, alternative, warning, evidence. Timing is when in the reasoning arc — pre-decision, mid-deliberation, post-commitment. Scale is how invasive — a marginal hint versus a full re-route of attention. Treating these as orthogonal axes is the move worth keeping. A design space organized this way exposes that most XAI optimizes type alone — better explanations, better counterfactuals — while leaving timing and scale implicit defaults of the interaction surface.
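The orthogonality claim can be made concrete with a minimal sketch. The encoding below is illustrative, not from the paper: axis values are invented names, and the point is only that two interventions with identical content (type) are still distinct points in the design space when timing or scale differs.

```python
from dataclasses import dataclass

# Hypothetical encoding of the three axes; the string values are
# illustrative labels, not an API defined by the paper.
@dataclass(frozen=True)
class Intervention:
    type: str    # clarification | alternative | warning | evidence
    timing: str  # pre_decision | mid_deliberation | post_commitment
    scale: str   # marginal_hint | attention_reroute

# Same proposition (a warning), two different design points:
a = Intervention("warning", "mid_deliberation", "marginal_hint")
b = Intervention("warning", "post_commitment", "attention_reroute")
assert a.type == b.type and a != b  # content equal, interventions distinct
```

Holding `type` fixed while varying the other two fields is exactly the degree of freedom that type-only XAI evaluation leaves unexplored.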
Timing and scale are under-theorized but determine whether well-typed interventions integrate with cognitive trajectory. A correct counterfactual at the wrong moment is a flow break; a correct counterfactual at marginal scale and the right moment becomes part of the user's reasoning rather than an interruption to it. This is Goffman-adjacent: the frame of an interruption — its timing relative to the activity-in-progress and its claim on attention — carries as much communicative load as the content. The same proposition lands as collaboration or as imposition depending on these two axes alone.
The three-axis frame also gives the field a parallel to other proactive-AI decompositions. Compare "What enables AI to balance comfort with proactive problem exploration?": there, too, the question of when to speak is separated from what to say and treated as its own modeling problem with its own signals. And "Can models learn to ask clarifying questions instead of guessing?" makes the type axis explicit (a clarification request rather than a refusal or a hallucination) while leaving the timing axis implicit. Read together, these suggest that proactive-AI design is converging on the recognition that intervention design is a multi-parameter problem, and that type-only thinking misses where the action is.
Operationally, this gives evaluators something to instrument: hold type fixed and vary timing or scale, and the gap measures the under-theorized part of the design space.
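The instrumentation idea above can be sketched as a small ablation grid. Everything here is hypothetical scaffolding, assuming two scale levels and three timing levels and using invented performance numbers: fix one intervention type, enumerate the timing-by-scale cells, and report the spread in outcome across cells, which by construction is attributable to timing and scale alone.

```python
from itertools import product

# Illustrative axis values (not defined by the paper).
TIMINGS = ["pre_decision", "mid_deliberation", "post_commitment"]
SCALES = ["marginal_hint", "attention_reroute"]

def conditions():
    """All timing x scale cells for one fixed intervention type."""
    return list(product(TIMINGS, SCALES))

def timing_scale_gap(scores):
    """Spread in outcome across cells; type is held fixed, so this
    gap is driven by timing and scale alone."""
    return max(scores.values()) - min(scores.values())

# Invented per-cell task-performance numbers, purely for illustration.
scores = dict(zip(conditions(), [0.82, 0.74, 0.88, 0.61, 0.70, 0.55]))
gap = timing_scale_gap(scores)
```

A nonzero gap under fixed type is the measurable footprint of the under-theorized part of the design space.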
Source: Multimodal Paper: Navigating the State of Cognitive Flow: Context-Aware AI Interventions for Effective Reasoning Support
Related concepts in this collection
- What enables AI to balance comfort with proactive problem exploration?
  How can emotional support systems know when to actively guide conversations versus when to simply reflect feelings? This matters because getting the balance wrong leads to either passive mirroring or pushy advice-giving.
  Relation: parallel three-capability decomposition of proactive intervention

- Can models learn to ask clarifying questions instead of guessing?
  Exploring whether large language models can be trained to detect incomplete queries and actively request missing information rather than hallucinating answers or refusing to respond. This matters because conversational agents today remain passive, responding only when prompted.
  Relation: complements this note; specifies type-axis options while leaving timing implicit

- Does AI assistance always help reasoning or does it carry hidden costs?
  When AI systems intervene during human reasoning tasks, do they uniformly improve performance, or does the disruption to cognitive focus create a hidden tax that could offset their benefits?
  Relation: sibling insight; the flow cost is what these three axes are jointly tuned against
Original note title: context-aware augmentation is parameterized along three axes — type, timing, and scale — that together determine whether AI helps or hinders