Language Understanding and Pragmatics

Can AI anticipate whether expert claims will be socially valid?

Expert knowledge involves more than correctness—it requires predicting whether fellow experts will accept a claim as valid. Can AI systems make this social judgment, or are they limited to statistical accuracy?

Note · 2026-03-26
What grounds language understanding in systems without embodiment? Why do LLMs fail at understanding what remains unsaid? Why do AI systems fail at social and cultural interpretation?

Expert claims are not just statements of fact. They are validity claims — assertions that carry an implicit "and here is why you should accept this." The implicit dimension is critical: the expert, in making a claim, is simultaneously performing a social calculation about whether this claim will be received as valid by the audience that matters.

This is not the same as being correct. A factually accurate claim can be socially invalid — wrong audience, wrong framing, wrong level of abstraction, wrong moment. And a simplified or imprecise claim can be socially valid — it captures what the audience needs to hear, in a form they can receive. The expert navigates this gap constantly, and the navigation is part of what makes them expert.

The circularity is structural, not incidental. Claims are valid because they are acceptable to the community of experts, and acceptable because they are valid by the community's standards. This is not a logical defect — it is how knowledge works in practice. Expert communities develop shared standards of what counts as a good argument, what evidence is sufficient, what framings are productive. New claims are evaluated against these standards, and the standards evolve through the accumulation of claims. The expert who makes a validity claim is invoking this entire apparatus — and the audience who evaluates it is operating within the same apparatus.

AI cannot perform this operation. When an LLM generates a response to a domain-specific question, it can estimate the probability that its output matches the distribution of "correct" answers in its training data. But this is a different calculation from anticipating whether a claim will be valid in the social sense. As argued in "Should AI alignment target preferences or social role norms?", the normative-standards approach to alignment acknowledges this gap: the system should behave according to role-appropriate norms, not just preference-maximized outputs. But even role-alignment does not replicate the expert's anticipation of audience response, because role-alignment is a general policy, not a contextual judgment about a specific audience in a specific moment.

The practical stakes are highest in soft, interpretive domains. In formal domains (mathematics, logic, parts of engineering), the validity criteria are relatively explicit and standardized. An AI can check a proof against known rules. But in domains where expertise is more hermeneutic — law, medicine, strategic consulting, policy — the validity criteria are deeply contextual. What counts as a compelling argument in one jurisdiction, one clinical context, or one political climate may not count in another. The expert knows this because they are embedded in the context. The AI does not know this because it is embedded in a training distribution.

This connects to the problem of presupposition. As argued in "Can LLMs identify the hidden assumptions that make arguments work?", LLMs can reproduce the surface structure of an argument without having access to the implicit warrants that make the argument valid for a specific audience. The validity claim is the warrant, the implicit "and this is why you should accept this," and the warrant is audience-specific, context-dependent, and almost never stated in the text that the LLM was trained on.

The consequence for AI-generated expertise is that it can produce claims that look valid — that have the structural markers of expert claims — without being valid in the social sense. The output may be factually accurate, well-structured, and confidently stated, but it may fail the validity test when presented to the expert community because it doesn't account for what that community currently considers important, contested, or settled.


Source: inbox/Knowledge Custodians.md
