Does AI reshape expert work into knowledge management?
As AI generates knowledge at scale, does expert work shift from creating new understanding to curating and validating machine outputs? This matters because curation and creation demand different cognitive skills.
The most common narrative about AI and expertise is replacement: AI will do what experts do, but faster and cheaper. The more insidious reality is transformation. Experts are not being replaced — they are being repositioned. The expert who once spent their days thinking, reading, arguing, and discovering now spends their days curating, filtering, validating, and managing the outputs of AI systems.
This is the custodial shift. The knowledge worker becomes a custodian of quantities of knowledge rather than a producer of quality thought. The distinction matters because curation is a fundamentally different cognitive activity than creation. Curation asks: "Is this good enough?" Creation asks: "What is true, and how do I know?" One evaluates existing material. The other is the generative act of producing new understanding.
The shift happens gradually and almost invisibly. When an expert uses deep research tools, the library available to them is vast, instantaneous, and arrives fully formed, packaged for use. But what comes back obscures the kinds of connections that motivate a human expert, because it was assembled from query matches and probabilities rather than from live, unanswered questions. The knowledge worker's own state of mind is sidelined by pre-packaged content, content that is silent about, and sidesteps, the conversational testing and advancing of ideas that characterizes genuine expertise.
This connects to "Does incremental AI replacement erode human influence over society?" Gradual disempowerment describes the macro process; the custodial shift is its specific mechanism in knowledge work. The expert's participation in knowledge production, the labor of thinking, arguing, testing, and revising, is precisely what kept the expert aligned with the state of knowledge. Remove that labor, and the expert becomes a manager of a process they no longer fully understand.
Borrowed and simulated authority. The custodian's check is needed because AI's authority is structurally borrowed rather than earned. Genuine expert authority is earned through judgment exercised over time — through arguments defended, errors corrected, and predictions tested within a community. AI-generated expertise is only a reading of written knowledge; its "authority" is a surface property — the form of expert speech, the tone of settled knowledge, the rhythm of considered conclusion — imported from the training distribution rather than accumulated through any process the system itself undergoes. The surface markers of authority are present without the underlying work that produces them. This is what the custodian must detect: where the marks of authority are borrowed rather than earned, and where the simulation is load-bearing for a claim that could not survive the earning.
The custodial role has its own new demands. Part of the work is management of search queries and parameters. Since prompting becomes the control mechanism, the expert must develop a new literacy: understanding how to steer AI systems toward useful outputs. This is not trivial — it requires internalizing how LLMs handle topics and their relationships, how alignment shapes responses, how embedding spaces capture conceptual proximity. But it is a different skill than domain expertise. The expert who excels at querying an LLM is not necessarily the expert who excels at understanding the domain.
There is a temptation to see this as efficiency. The expert is "freed" from drudge work to focus on higher-level judgment. But this misidentifies what the "drudge work" was. The slow reading, the following of citations, the stumbling upon unexpected connections, the frustration of not finding what you expected: these were not inefficiencies. They were the process by which understanding developed. As "Why can't users articulate what they want from AI?" argues, the expert's own intent matures through the process of inquiry. Skip the process, and the intent never matures. The expert arrives at a destination without having traveled there.
The lodestone, an explicit historical analog. In monetary history, assayers tested coins for genuine metal content using lodestones (which reveal ferrous fillers in supposedly precious-metal coins) and chemical tests, performing a backing-check that ordinary receivers could not perform themselves. The Knowledge Custodian is the lodestone for intelligence-tokens: testing AI-generated expertise for genuine backing in reasoning, evidence, or understanding. This is not metaphor but structural identity. The assayer and the custodian occupy the same position in their respective economies: a specialized role that emerges because the currency circulates faster than individual receivers can verify it. The historical precedent also predicts the failure mode: assayers could be bribed, and their lodestones could be switched. As "Can we still verify AI knowledge if verification itself is AI-generated?" argues, the implosion problem is the lodestone-switching problem: the instruments available for testing intelligence-tokens are themselves produced by the same process whose outputs they are meant to test.
The Knowledge Custodian as currency validator. The tokenization analogy sharpens what this role structurally is. When money circulates widely and receivers cannot individually test each coin for genuine metal content, an economic role emerges to perform the backing-check on their behalf: the assayer. As "Can AI ever gain expert community trust through participation?" argues, the Knowledge Custodian performs this assayer function for intelligence-tokens, testing whether AI-generated expertise is backed by genuine reasoning, evidence, or understanding, because the broader market of receivers cannot perform that check themselves. This reframes custodianship from a quality-control role (are the outputs good?) to a monetary-validator role with a structural economic function (is the currency genuine?). The difference matters: quality control is a feature of a pipeline; validation is a feature of a currency system. The custodian emerges not because AI outputs need polish, but because unbacked tokens would otherwise circulate as if backed.
The implications for less experienced thinkers are especially severe. Senior experts have a reservoir of judgment accumulated through years of the old process. They can evaluate AI outputs against a backdrop of deep understanding. Junior knowledge workers who enter the field in the custodial era never build that reservoir. They learn to manage AI outputs from day one, without having developed the capacity to evaluate what those outputs are missing. The custodial shift is not just a change in what experts do — it may be a break in how expertise is transmitted between generations.
Source: inbox/Knowledge Custodians.md
Related concepts in this collection
- Does incremental AI replacement erode human influence over society?
  Explores whether gradual AI adoption, without dramatic breakthroughs, can silently degrade human agency by removing the labor that kept institutions implicitly aligned with human needs.
  Connection: the custodial shift is the specific knowledge-work mechanism of gradual disempowerment.
- Why can't users articulate what they want from AI?
  Explores the cognitive gap between imagining possibilities and expressing them as prompts, and why language interfaces create a harder envisioning task than traditional UI affordances.
  Connection: intent maturation requires the process of inquiry; custodial work skips the process.
- How should users control systems with unpredictable outputs?
  When generative AI produces different outputs from identical inputs, how do interaction design principles help users maintain control and develop effective mental models for stochastic systems?
  Connection: custodial work operates within generative variability.
- Why do AI agents fail at workplace social interaction?
  Explores why current AI agents struggle most with communicating and coordinating with colleagues in realistic workplace settings, despite strong reasoning capabilities in other domains.
  Connection: the 70% that still requires humans is increasingly custodial.
- Does theory of mind predict who thrives in AI collaboration?
  Explores whether perspective-taking ability (the capacity to model another's cognitive state) differentiates humans who benefit most from working with AI, separate from solo problem-solving skill.
  Connection: custodial competence may be a distinct ability from domain expertise.
Original note title: AI transforms experts from producers of knowledge to custodians of AI-generated knowledge — the expert role shifts from quality of thinking to quantity of managing