Language Understanding and Pragmatics · Psychology and Social Cognition

Can AI replicate the communicative work experts do?

Expert judgment isn't just knowing facts—it's anticipating what specific audiences will find acceptable. Does AI have mechanisms to perform this social calibration, or is it fundamentally limited to pattern-matching?

Note · 2026-03-26
What grounds language understanding in systems without embodiment? Why do AI systems fail at social and cultural interpretation?

The standard framing of expertise treats it as a knowledge problem: experts know more, and AI can know more still. But this misses the communicative dimension entirely. Expert knowledge is not the possession of information — it is the selection of relevant information, and relevance is always audience-relative. The expert doesn't just know things. The expert knows what will land, with whom, and why.

This selection is an act of communication even before the expert speaks. When an expert makes a recommendation, they are already anticipating: will this claim be acceptable to this audience? Do they have the background to receive it? Will it contradict something they hold dear? The recommendation is shaped by this anticipation — it is not a neutral report of facts but a socially calibrated judgment about what will be valid in context.

The validity dimension is crucial. Expert claims are not simply true or false. They are valid — meaning they meet the implicit standards of the community they address. A claim can be factually accurate but socially invalid (wrong audience, wrong framing, wrong timing). A claim can be somewhat imprecise but socially valid (it captures what matters and skips what doesn't). The expert navigates this distinction constantly, and it is invisible to anyone who treats expertise as information retrieval.

AI cannot perform this navigation. As argued in "Do language models actually build shared understanding in conversation?", the system has no mechanism for anticipating what a specific audience will find acceptable. It can estimate the probability that a response will match a general preference distribution, but that is statistical approximation, not social intelligence. The difference matters because expertise is particular: the same knowledge, applied to two different audiences, requires two different framings, and the expert knows this.

This connects to a deeper problem. As "Why do language models sound fluent without grounding?" argues, the fluency of AI-generated expertise is precisely what makes it misleading. The output reads as expert judgment: it has the form, the confidence, the structural markers. But the communicative work of anticipating audience reception was never performed. What looks like judgment is pattern-matching against how judgment has been expressed in text.

Trust in AI is epistemically different from trust in experts, because the underlying technology is unstable. Most technologies we trust are stable in their capacities: we know what a bridge or a stethoscope does, and the trust we invest in it is anchored to a settled body of demonstrated performance. Expertise works the same way: trust in an expert is anchored to a stable record of judgment accumulated over time.

AI is not stable in this sense. Model capabilities shift with each release; behavioral patterns migrate with training changes; what the technology can and cannot do varies across versions carrying the same name. Trust in AI output therefore cannot anchor to a stable body of demonstrated expertise; it floats on impressions of the current system and is revised with each version. This is a structural property, not a transition artifact: trust stability is a requirement AI lacks in principle as long as the substrate keeps changing, which means AI cannot stand in for expert trust even when its outputs happen to be correct.

The Habermasian dimension is worth making explicit: expert claims function as validity claims in the sociological sense. They are assertions that carry an implicit "and here is why you should accept this" — directed at a specific community with specific standards. AI can reproduce the assertion but not the implicit warrant, because the warrant lives in the expert's social knowledge of the audience, not in the text of the claim itself.

This has practical consequences for how we evaluate AI-generated expertise. The question is not "is this factually correct?" but "does this reflect judgment about what matters and why?" The first question is answerable by verification. The second requires understanding the communicative situation that the expertise is meant to serve.


Source: inbox/Knowledge Custodians.md


expertise is inherently communicative — expert judgment always anticipates audience acceptability in ways AI cannot replicate