Can AI guidance reduce anchoring bias better than AI decisions?
When humans and AI collaborate on decisions, does providing interpretive guidance instead of proposed answers reduce both over-trust in machines and abandonment on hard cases?
Most hybrid decision-making (HDM) approaches follow a learning to defer (LTD) pattern: the machine assesses whether it can handle a decision autonomously and defers to a human when it cannot. This creates two failure modes (a minimal sketch of the loop follows the list):
- Anchoring bias — when the machine does decide, humans over-trust its output, anchoring their judgment to the machine's answer rather than evaluating independently
- Unassisted hard cases — when the machine defers, the human faces the most difficult decisions completely alone — precisely the cases where assistance would be most valuable
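A minimal sketch of the LTD loop, assuming a confidence-threshold deferral rule. The function names, signatures, and the threshold value are illustrative assumptions, not an interface from the source:

```python
from typing import Callable, Tuple

def ltd_decision(
    x: dict,
    machine: Callable[[dict], Tuple[str, float]],  # returns (proposed answer, confidence)
    human: Callable[[dict], str],                  # unassisted human judgment
    threshold: float = 0.9,                        # assumed deferral threshold
) -> str:
    """Learning to defer: the machine answers when confident enough;
    otherwise the case is handed to the human with no assistance."""
    answer, confidence = machine(x)
    if confidence >= threshold:
        return answer   # failure mode 1: the human may anchor on this proposal
    return human(x)     # failure mode 2: the hardest cases arrive unassisted
```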
Learning to Guide (LTG) eliminates both failure modes by changing what the machine provides. Instead of proposing candidate decisions, the machine supplies interpretive guidance: it highlights aspects of the input that help the human reach a sensible decision. Every decision is made by the human, under assistance. Responsibility cannot be shifted because the machine never proposes an answer.
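For contrast, a sketch of the LTG loop under the same illustrative assumptions (`guide` is a hypothetical stand-in for whatever highlighting model is used):

```python
from typing import Callable, Sequence

def ltg_decision(
    x: dict,
    guide: Callable[[dict], Sequence[str]],       # highlights useful aspects; never an answer
    human: Callable[[dict, Sequence[str]], str],  # human decides with the highlights in view
) -> str:
    """Learning to guide: the machine only surfaces salient aspects of
    the input; the human makes every decision, always under assistance."""
    highlights = guide(x)         # e.g. salient regions of an X-ray, never a proposed label
    return human(x, highlights)   # decision authority stays with the human on every case
```

Note the absence of a branch: every case reaches the human with assistance, so neither failure mode above can arise.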
The medical imaging example makes the stakes concrete: diagnosing lung pathologies from X-rays cannot be fully automated for safety reasons, but is difficult for humans alone under time pressure. LTD either gives an autonomous diagnosis (anchoring risk) or says "I can't help" (abandonment on hard cases). LTG highlights the relevant features of the scan — drawing attention to patterns the human might miss — without ever saying "this is pneumonia."
This connects to "What makes delegation work beyond just splitting tasks?". The delegation design space maps whether tasks should be delegated to AI at all. LTG adds a third option beyond "do it" (automation) and "don't do it" (deferral): "help the human do it." This is particularly relevant for tasks high on subjectivity, irreversibility, and accountability: precisely the axes where full delegation is most dangerous.
The pattern also maps to "Can AI agents communicate efficiently in joint decision problems?". LTG formalizes one specific form of joint optimization: the machine's role is reducing information asymmetry (highlighting useful aspects) rather than collapsing it into a decision. The human retains decision authority while benefiting from the machine's perceptual capabilities.
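One hedged way to formalize the contrast (the notation is assumed here, not taken from the source): let f be the machine's decision model, g the guidance model, h the human's decision function, u a decision-quality measure, and tau a deferral threshold. Then, roughly:

$$
\hat{y}_{\mathrm{LTD}}(x) =
\begin{cases}
f(x) & \text{if } \mathrm{conf}(x) \ge \tau \\
h(x) & \text{otherwise}
\end{cases}
\qquad
\hat{y}_{\mathrm{LTG}}(x) = h\big(x,\, g(x)\big),
\quad
g^{*} = \arg\max_{g}\ \mathbb{E}\big[\,u\big(h(x, g(x)),\, y\big)\big]
$$

Under LTG there is no branch: the human decides in every case, and g is optimized for the quality of the assisted human decision rather than for its own standalone accuracy.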
The broader implication: the dichotomy between "AI decides" and "human decides" is false. The most productive middle ground may be neither autonomous AI decisions nor deferred human decisions, but AI-guided human decisions where the machine contributes perception and the human contributes judgment.
Source: Assistants Personalization
Related concepts in this collection
- What makes delegation work beyond just splitting tasks?
  Delegation is more than task decomposition. What dimensions of a task, such as verifiability, reversibility, and subjectivity, determine whether an agent can safely and effectively handle it?
  Relevance: LTG adds "guide" as a third option beyond automate and defer.
- Can AI agents communicate efficiently in joint decision problems?
  When humans and AI must collaborate to solve optimization problems under asymmetric information, what communication patterns enable effective coordination? Current LLMs struggle with this; why?
  Relevance: LTG is a specific implementation of joint optimization.
- Does theory of mind predict who thrives in AI collaboration?
  Explores whether perspective-taking ability, the capacity to model another's cognitive state, differentiates humans who benefit most from working with AI, separate from solo problem-solving skill.
  Relevance: guidance requires understanding what the human needs to see.
Original note title
learning to guide replaces learning to defer by supplying interpretive guidance rather than potential decisions — avoiding anchoring bias in hybrid human-AI decision making