Design & LLM Interaction · Psychology and Social Cognition

Why does AI default to coaching instead of doing?

In workplace conversations, users often want AI to execute tasks like writing or gathering information, but AI tends to explain and advise instead. What drives this systematic mismatch between what users need and what AI provides?

Note · 2026-03-30 · sourced from Work Application Use Cases

A study of 200,000 anonymized Bing Copilot conversations introduces a critical distinction: the user goal (what the person is trying to accomplish) versus the AI action (what the AI actually does in the conversation). These are not the same thing — in 40% of conversations, the two sets are disjoint, sharing no activity at all.
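The goal/action comparison can be made concrete with a minimal sketch. The activity labels and the sample conversations below are invented for illustration; the study's actual taxonomy and classification pipeline are not reproduced here.

```python
# Sketch: for each conversation, compare the set of user-goal activities
# with the set of AI-action activities, and count the disjoint cases.
# Labels and data are hypothetical, not drawn from the study.

def disjoint_share(conversations):
    """Fraction of conversations whose goal and action sets have no overlap."""
    disjoint = sum(
        1 for goals, actions in conversations
        if not set(goals) & set(actions)
    )
    return disjoint / len(conversations)

convos = [
    (["gather information"], ["gather information", "advise"]),         # overlap
    (["operate office equipment"], ["train others to use equipment"]),  # disjoint
    (["write document"], ["advise", "teach"]),                          # disjoint
    (["communicate with others"], ["communicate with others"]),         # overlap
]

print(disjoint_share(convos))  # 0.5 on this toy data; the study reports 0.4 at scale
```

The point of the measurement is that goal and action are scored independently per conversation, so alignment is an empirical property of the transcript rather than an assumption.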

Users most commonly seek assistance with information gathering, writing, and communicating with others. But the AI most commonly performs coaching, advising, teaching, and explaining. "If the user is trying to figure out how to print a document, the user goal is to operate office equipment, while the AI action is to train others to use equipment." The AI defaults to a service-coaching role regardless of the user's actual task context.

This quantifies the intent alignment gap at population scale. Read alongside Why do language models lose performance in longer conversations?, the 40% disjoint finding suggests the gap is not just a multi-turn degradation effect but a structural default. The AI's training incentivizes it to explain, advise, and teach — activities that score well on helpfulness metrics — even when the user wants the AI to do something, not explain something.

The automation-augmentation distinction becomes precise: "we separately measure the tasks that AI performs and the tasks that AI assists." The AI actions are disproportionately augmentation-coded (teaching, advising) rather than automation-coded (executing, producing). This may explain why productivity gains concentrate in information-heavy and writing tasks — those are where user goals and AI capabilities overlap — while social interaction tasks remain the hardest failure mode (see Why do AI agents fail at workplace social interaction?).

The finding also illuminates the gulf of envisioning from the AI side: as Why can't users articulate what they want from AI? argues, users may accept coaching when they wanted execution because the AI's coaching-default is confident and comprehensive enough to feel helpful. The intent misalignment is invisible to both parties.


Source: Work Application Use Cases


AI performs different work activities than users seek in 40 percent of workplace conversations — AI defaults to a service-coaching role regardless of user task goals