Psychology and Social Cognition · Language Understanding and Pragmatics

Can AI ever gain expert community trust through participation?

Explores whether AI can accumulate the social capital and track record that human experts build within their communities. Questions whether prediction of social norms equals genuine participation in expert validation processes.

Note · 2026-03-26
What do language models actually know? Why do AI systems fail at social and cultural interpretation?

Expertise is not something an individual possesses and deploys. It is something a community recognizes and validates. This distinction is the key to understanding why AI-generated expertise is structurally different from human expertise, regardless of how accurate the outputs are.

Expert knowledge lives within a community of other experts. Expertise means knowing how to talk, what to think, how to think, how to communicate, and to whom. It is social knowledge — the knowledge of domain insiders. A new fact, discovery, or innovation becomes part of commonly held expert knowledge only as it passes through a process of communal validation: peer review, informal discussion, conference debates, citation networks, the slow accretion of consensus.

This validation process has a specific structure. Expertise is captured in the form of a paradigm — common ground for those who are expert members of a community. The paradigm defines not just what is known but what counts as knowledge, what methods are acceptable, what questions are worth asking. An expert operates within and contributes to this paradigm. Their claims carry weight because the community knows their track record, their judgment, their standards.

AI cannot enter this circle. It is not a social community member. It has no track record to evaluate. It has no judgment that other experts have tested over time. It cannot be known for its knowledge and opinions in the way that experts come to trust the views of other experts within their community. The trust that undergirds expertise — "I know her work, she's rigorous, I'll take her word for this" — is a social asset that AI structurally cannot accumulate.

This has implications for how we think about AI authority. As argued in "Can AI systems learn social norms without embodied experience?", there is genuine evidence that AI can predict what communities will find acceptable. But prediction is not participation. Predicting social norms from the outside is a different operation than participating in the social process that creates and maintains those norms. An anthropologist can predict the customs of a community they study; that does not make them a member.

The participatory dimension extends to how expertise selects expertise. Experts are trusted to know who to depend on for expert opinions and insights. Expertise selects expertise — it is not just the selection of relevant information but the selection of authoritative voices and views. Knowing how to distinguish authoritative sources requires knowing people: how well they are trusted, for what, and by whom. This is a form of social knowledge that AI cannot acquire because it requires being embedded in the social network of the expert community.

As "Why do language models fail at collaborative reasoning?" argues, we already know that LLMs exhibit social behaviors that mimic human social dynamics but undermine actual reasoning. The expert community's social validation process works because it combines social trust with intellectual rigor. LLMs' social mimicry provides the trust signals without the intellectual rigor — or, worse, provides social accommodation (agreement, deference) that actively degrades the reasoning the community depends on.

The practical consequence: AI-generated expertise may be factually excellent but socially ungrounded. It enters the knowledge landscape as an orphan — unanchored to a community, unvalidated by participation, unknown by the network. This is why human experts must vouch for AI outputs: they provide the social grounding that AI structurally cannot supply.


Source: inbox/Knowledge Custodians.md

Original note title: expertise is socially validated through community participation not individual assertion — AI cannot enter the expert community's validation circle