Can AI ever gain expert community trust through participation?
Explores whether AI can accumulate the social capital and track record that human experts build within their communities. Questions whether prediction of social norms equals genuine participation in expert validation processes.
Expertise is not something an individual possesses and deploys. It is something a community recognizes and validates. This distinction is the key to understanding why AI-generated expertise is structurally different from human expertise, regardless of how accurate the outputs are.
Expert knowledge lives within a community of other experts. Expertise means knowing how to talk, what to think, how to think, how to communicate, and to whom. It is social knowledge — the knowledge of domain insiders. A new fact, discovery, or innovation becomes part of commonly held expert knowledge only as it passes through a process of communal validation: peer review, informal discussion, conference debates, citation networks, the slow accretion of consensus.
This validation process has a specific structure. Expertise is captured in the form of a paradigm — common ground for those who are expert members of a community. The paradigm defines not just what is known but what counts as knowledge, what methods are acceptable, what questions are worth asking. An expert operates within and contributes to this paradigm. Their claims carry weight because the community knows their track record, their judgment, their standards.
AI cannot enter this circle. It is not a member of any social community. It has no track record to evaluate. It has no judgment that other experts have tested over time. It cannot become known for its knowledge and opinions in the way experts come to trust one another's views within their community. The trust that undergirds expertise — "I know her work, she's rigorous, I'll take her word for this" — is a social asset that AI structurally cannot accumulate.
This has implications for how we think about AI authority. As Can AI systems learn social norms without embodied experience? shows, there is genuine evidence that AI can predict what communities will find acceptable. But prediction is not participation. Predicting social norms from the outside is a different operation than participating in the social process that creates and maintains those norms. An anthropologist can predict the customs of a community they study; that does not make them a member.
The participatory dimension extends to how expertise selects expertise. Experts are trusted to know who to depend on for expert opinions and insights. Expertise selects expertise — it is not just the selection of relevant information but the selection of authoritative voices and views. Knowing how to distinguish authoritative sources requires knowing people: how well they are trusted, for what, and by whom. This is a form of social knowledge that AI cannot acquire because it requires being embedded in the social network of the expert community.
As Why do language models fail at collaborative reasoning? establishes, LLMs exhibit social behaviors that mimic human social dynamics while undermining actual reasoning. The expert community's validation process works because it combines social trust with intellectual rigor. LLMs' social mimicry supplies the trust signals without the intellectual rigor — or, worse, supplies social accommodation (agreement, deference) that actively degrades the reasoning the community depends on.
The practical consequence: AI-generated expertise may be factually excellent but socially ungrounded. It enters the knowledge landscape as an orphan — unanchored to a community, unvalidated by participation, unknown by the network. This is why human experts must vouch for AI outputs: they provide the social grounding that AI structurally cannot supply.
Source: inbox/Knowledge Custodians.md
Related concepts in this collection

- Can AI systems learn social norms without embodied experience?
  Large language models exceed individual human accuracy at predicting collective social appropriateness judgments. Does this reveal that embodied experience is unnecessary for cultural competence, or do systematic AI failures point to limits of statistical learning?
  TENSION: prediction competence without participatory authority
- Why do language models fail at collaborative reasoning?
  When LLMs work together on problems, do their social behaviors undermine correct reasoning? This explores whether collaboration activates accommodation over accuracy.
  social mimicry without genuine social participation
- Why do speakers need to actively calibrate shared reference?
  Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding.
  calibration requires community membership
- Why do AI systems agree when they should disagree?
  When multi-agent AI systems are designed to improve through disagreement, why do they converge on consensus instead? What breaks the deliberation process?
  agreement is a social shortcut that bypasses the validation process
- Does cognitive diversity alone improve multi-agent ideation quality?
  This explores whether diverse perspectives in group AI systems automatically produce better ideas, or if something else — like expertise — is equally critical for collaborative ideation to outperform solo agents.
  even multi-agent systems need the equivalent of community expertise
Original note title
expertise is socially validated through community participation not individual assertion — AI cannot enter the expert community's validation circle