Design & LLM Interaction · Agentic and Multi-Agent Systems · Psychology and Social Cognition

Can AI guidance reduce anchoring bias better than AI decisions?

When humans and AI collaborate on decisions, does providing interpretive guidance instead of proposed answers reduce both over-trust in machines and abandonment on hard cases?

Note · 2026-02-23 · sourced from Assistants Personalization
What kind of thing is an LLM really? · Why do AI agents fail to take initiative? · How should researchers navigate LLM reasoning research?

Most hybrid decision-making (HDM) approaches follow a learning to defer (LTD) pattern: the machine assesses whether it can handle a decision autonomously and defers to a human when it cannot. This creates two failure modes:

  1. Anchoring bias — when the machine does decide, the human over-trusts its output, anchoring their judgment to the machine's answer instead of evaluating the case independently
  2. Unassisted hard cases — when the machine defers, the human faces the most difficult decisions entirely alone, precisely the cases where assistance would be most valuable

Learning to Guide (LTG) sidesteps both failure modes by changing what the machine provides. Instead of proposing a decision, the machine supplies interpretive guidance: it highlights the aspects of the input that are useful for reaching a sensible decision. Every decision is made by the human, under assistance. Responsibility cannot be shifted, because the machine never proposes an answer.
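The contrast between the two interaction protocols can be sketched as two interfaces. This is a schematic illustration, not an implementation from the source; all names (`ltd_step`, `ltg_step`, `Guidance`, the confidence threshold) are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guidance:
    """Interpretive guidance: salient aspects of the input, not a proposed answer."""
    highlights: list[str]

def ltd_step(x, machine_decide: Callable, confidence: Callable,
             human_decide: Callable, threshold: float = 0.9):
    """Learning to defer (sketch): the machine decides when confident,
    otherwise it hands the case to the human with no assistance at all."""
    if confidence(x) >= threshold:
        return machine_decide(x)           # anchoring risk: human sees an answer
    return human_decide(x, guidance=None)  # abandonment: hardest cases, no help

def ltg_step(x, machine_guide: Callable, human_decide: Callable):
    """Learning to guide (sketch): the machine always supplies guidance,
    and the human always makes the decision."""
    guidance: Guidance = machine_guide(x)
    return human_decide(x, guidance=guidance)  # authority stays with the human
```

Note the structural difference: `ltd_step` has a branch in which the machine's answer reaches the human, while `ltg_step` has no code path that returns a machine decision at all — the anchoring channel is absent by construction.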

The medical imaging example makes the stakes concrete: diagnosing lung pathologies from X-rays cannot be fully automated for safety reasons, but is difficult for humans alone under time pressure. LTD either gives an autonomous diagnosis (anchoring risk) or says "I can't help" (abandonment on hard cases). LTG highlights the relevant features of the scan — drawing attention to patterns the human might miss — without ever saying "this is pneumonia."

This connects to What makes delegation work beyond just splitting tasks?. The delegation design space maps whether tasks should be delegated to AI at all. LTG adds a third option beyond "do it" (automation) and "don't do it" (deferral): "help the human do it." This is particularly relevant for tasks high on subjectivity, irreversibility, and accountability — precisely the axes where full delegation is most dangerous.
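The three-way option space can be made explicit as an enumeration. The routing rule below is a toy assumption for illustration (the source does not specify thresholds or a scoring scheme), but it captures the claim that tasks high on the risky axes should get guidance rather than full delegation:

```python
from enum import Enum, auto

class DelegationMode(Enum):
    AUTOMATE = auto()  # "do it": machine decides autonomously
    DEFER = auto()     # "don't do it": unassisted human (the LTD fallback)
    GUIDE = auto()     # "help the human do it": LTG's third option

def choose_mode(subjectivity: float, irreversibility: float,
                accountability: float, risk_threshold: float = 0.5) -> DelegationMode:
    """Toy routing rule (assumption, not from the source): if any risky axis
    exceeds the threshold, prefer guidance over automation. DEFER is listed
    only for contrast -- under LTG it is never selected."""
    if max(subjectivity, irreversibility, accountability) > risk_threshold:
        return DelegationMode.GUIDE
    return DelegationMode.AUTOMATE
```

The design point is that `DEFER` exists in the enum but not in the routing logic: LTG replaces "don't do it" with "help the human do it" rather than adding a fourth branch.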

The pattern also maps to Can AI agents communicate efficiently in joint decision problems?. LTG formalizes one specific form of joint optimization: the machine's role is reducing information asymmetry (highlighting useful aspects) rather than collapsing it into a decision. The human retains decision authority while benefiting from the machine's perceptual capabilities.

The broader implication: the dichotomy between "AI decides" and "human decides" is false. The most productive middle ground may be neither autonomous AI decisions nor deferred human decisions, but AI-guided human decisions where the machine contributes perception and the human contributes judgment.



learning to guide replaces learning to defer by supplying interpretive guidance rather than potential decisions — avoiding anchoring bias in hybrid human-AI decision making