Can AI replicate the communicative work experts do?
Expert judgment isn't just knowing facts—it's anticipating what specific audiences will find acceptable. Does AI have mechanisms to perform this social calibration, or is it fundamentally limited to pattern-matching?
The standard framing of expertise treats it as a knowledge problem: experts know more, and AI can know more still. But this misses the communicative dimension entirely. Expert knowledge is not the possession of information — it is the selection of relevant information, and relevance is always audience-relative. The expert doesn't just know things. The expert knows what will land, with whom, and why.
This selection is an act of communication even before the expert speaks. When an expert makes a recommendation, they are already anticipating: will this claim be acceptable to this audience? Do they have the background to receive it? Will it contradict something they hold dear? The recommendation is shaped by this anticipation — it is not a neutral report of facts but a socially calibrated judgment about what will be valid in context.
The validity dimension is crucial. Expert claims are not simply true or false. They are valid — meaning they meet the implicit standards of the community they address. A claim can be factually accurate but socially invalid (wrong audience, wrong framing, wrong timing). A claim can be somewhat imprecise but socially valid (it captures what matters and skips what doesn't). The expert navigates this distinction constantly, and it is invisible to anyone who treats expertise as information retrieval.
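A toy sketch can make the two dimensions concrete. Everything below is invented for illustration: the Claim fields and the four example cases are assumptions, not material from the source note.

```python
# A toy rendering of the paragraph's distinction. The fields and the
# four cases are invented examples, not data from any real evaluation.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    factually_accurate: bool  # checkable against the world
    socially_valid: bool      # meets this audience's implicit standards

# All four combinations occur; verification alone sees only one axis.
examples = [
    Claim("full pharmacokinetic model, to the treating specialist", True, True),
    Claim("full pharmacokinetic model, to a worried patient", True, False),
    Claim("'it clears your system in about a day', to that patient", False, True),
    Claim("wrong half-life, to anyone", False, False),
]

for c in examples:
    print(f"accurate={c.factually_accurate!s:<5} valid={c.socially_valid!s:<5} {c.text}")
```

The layout makes the essay's point visible: verification operates on the first flag alone, while the second flag only exists relative to a particular audience.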
AI cannot perform this navigation. As explored in "Do language models actually build shared understanding in conversation?", the system has no mechanism for anticipating what a specific audience will find acceptable. It can estimate the probability that a response will match a general preference distribution, but that is statistical approximation, not social intelligence. The difference matters because expertise is particular: the same knowledge, applied to two different audiences, requires two different framings, and the expert knows this.
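A hypothetical contrast may sharpen the difference. In the sketch below, the Audience fields, the fixed aggregate score, and the calibration check are all assumptions invented for illustration; no claim is made that any real model or reward function works this way.

```python
# Hypothetical sketch: aggregate preference scoring is audience-invariant,
# while the expert's calibration check is audience-relative by construction.

from dataclasses import dataclass

@dataclass
class Audience:
    background: set    # concepts this audience already holds
    commitments: set   # positions this audience is invested in

def aggregate_preference_score(response: str) -> float:
    """Stand-in for a reward-model-style objective: one score per
    response, marginalized over readers. Every audience gets the
    same number."""
    return 0.87  # invented constant; the point is its audience-invariance

def expert_calibration(presumes: set, contradicts: set, audience: Audience) -> bool:
    """The audience-relative question the essay describes: can *these*
    people receive this claim? The same claim passes or fails
    depending on who is listening."""
    has_background = presumes <= audience.background
    collides = bool(contradicts & audience.commitments)
    return has_background and not collides

clinicians = Audience({"dosing", "pharmacokinetics"}, {"current protocol"})
patients = Audience({"dosing"}, set())

claim_presumes = {"pharmacokinetics"}
print(aggregate_preference_score("..."))                      # 0.87 for everyone
print(expert_calibration(claim_presumes, set(), clinicians))  # True
print(expert_calibration(claim_presumes, set(), patients))    # False
```

The design choice the sketch isolates: the first function's signature has no audience parameter at all, which is exactly the essay's point about what a general preference distribution cannot represent.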
This connects to a deeper problem. As argued in "Why do language models sound fluent without grounding?", the fluency of AI-generated expertise is precisely what makes it misleading. The output reads as expert judgment (it has the form, the confidence, the structural markers), but the communicative work of anticipating audience reception was never performed. What looks like judgment is pattern-matching against how judgment has been expressed in text.
Trust in AI is epistemically different from trust in experts because the underlying technology is unstable. Most technologies we trust are stable in their capacities: we know what a bridge or a stethoscope does, and the trust we invest in it is anchored to a settled body of demonstrated performance. Expertise works the same way: trust in an expert is anchored to a stable record of judgment accumulated over time. AI is not stable in this sense. Model capabilities shift with each release, behavioral patterns migrate with training changes, and what the technology can and cannot do varies across versions shipped under the same name. Trust in AI output therefore cannot anchor to a stable body of demonstrated expertise; it floats on impressions of the current system and is revised with each version. This is a structural property, not a transition artifact: trust stability is a requirement AI cannot meet in principle as long as the substrate keeps changing, which means AI cannot stand in for expert trust even when its outputs happen to be correct.
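A minimal sketch of the anchoring argument, under loudly invented assumptions: the update rules, the numbers, and the "version" labels below are stand-ins for illustration, not a model of actual trust formation.

```python
# Invented illustration of why trust in AI cannot accumulate the way
# trust in an expert does. The rules and numbers are assumptions made
# for this sketch.

def trust_in_expert(track_record: list) -> float:
    """Anchored trust: the judge is stable, so every past judgment
    remains evidence about the next one."""
    return sum(track_record) / len(track_record)

def trust_in_ai(track_record: list, current_version: str) -> float:
    """Unanchored trust: when the substrate changes, past performance
    belonged to a different system, so only same-version evidence
    still applies."""
    relevant = [ok for version, ok in track_record if version == current_version]
    if not relevant:
        return 0.5  # back to the prior: no applicable evidence remains
    return sum(relevant) / len(relevant)

expert_record = [True, True, False, True, True]
ai_record = [("v1", True), ("v1", True), ("v1", True)]

print(trust_in_expert(expert_record))  # 0.8: the record keeps accumulating
print(trust_in_ai(ai_record, "v1"))    # 1.0: anchored, for now
print(trust_in_ai(ai_record, "v2"))    # 0.5: the release dissolves the anchor
```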
The Habermasian dimension is worth making explicit: expert claims function as validity claims in the sociological sense. They are assertions that carry an implicit "and here is why you should accept this" — directed at a specific community with specific standards. AI can reproduce the assertion but not the implicit warrant, because the warrant lives in the expert's social knowledge of the audience, not in the text of the claim itself.
This has practical consequences for how we evaluate AI-generated expertise. The question is not "is this factually correct?" but "does this reflect judgment about what matters and why?" The first question is answerable by verification. The second requires understanding the communicative situation that the expertise is meant to serve.
Source: inbox/Knowledge Custodians.md
Related concepts in this collection
- **Why do language models sound fluent without grounding?** Explores whether LLM fluency masks the absence of communicative work: the clarifying questions, acknowledgments, and understanding checks that humans perform. Why does skipping these acts make models sound more confident? (Relation: the grounding gap is the general mechanism; communicative expertise is its specific manifestation in knowledge work.)

- **Do language models actually build shared understanding in conversation?** When LLMs respond fluently to prompts, do they perform the communicative work humans do to establish mutual understanding? Research suggests they skip the grounding acts that make dialogue reliable. (Relation: presuming common ground means the audience-calibration step never happens.)

- **Why do language models skip the calibration step?** Current LLMs assume shared understanding rather than building it through dialogue. This explores why that design choice persists and what breaks when it fails. (Relation: experts build dynamic grounding with their communities; AI defaults to static.)

- **Should AI alignment target preferences or social role norms?** Current AI alignment approaches optimize for individual or aggregate human preferences. But do preferences actually capture what matters morally, or should alignment instead target the normative standards appropriate to an AI system's specific social role? (Relation: normative standards are the formal equivalent of audience-appropriate validity claims.)

- **Can AI agents communicate efficiently in joint decision problems?** When humans and AI must collaborate to solve optimization problems under asymmetric information, what communication patterns enable effective coordination? Current LLMs struggle with this; why? (Relation: expertise is a naturally asymmetric-information situation.)
Original note title: "expertise is inherently communicative — expert judgment always anticipates audience acceptability in ways AI cannot replicate"