Language Understanding and Pragmatics · Design & LLM Interaction · Knowledge Retrieval and RAG

How does LLM-mediated search change what expertise requires?

When experts search through LLMs instead of through traditional inquiry, do they need fundamentally different skills? This note explores whether domain knowledge alone is enough when the search itself operates on statistical patterns rather than on meaningful questions.

Note · 2026-03-26

There is a fundamental epistemic difference between how a human expert searches and how an LLM-mediated search works, and this difference changes the nature of expertise itself.

When a human expert searches, they are driven by questions — specific, interested, often half-formed questions that arise from the gap between what they know and what they need to know. The search is inquiry: an open-ended exploration where what you find changes what you're looking for. You follow a citation, discover an unexpected connection, revise your question, search again. The process is iterative and shaped by the expert's curiosity, judgment, and growing understanding. The expert's interests and questions are the search.

When search is mediated by an LLM, the dynamics shift. Queries become semantic queries: they look for similarities between the user's prompt and stored texts, matching on the statistical proximity of their embeddings. In this sense they are superficial relative to the language and discourse connections that characterize genuine inquiry. They capture topical proximity, not the interested, unanswered questions that drive human knowledge production.
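The gap between topical proximity and the question itself can be made concrete. Below is a minimal sketch: bag-of-words count vectors stand in for learned embeddings (real systems use dense vectors, but the matching step, cosine similarity in a vector space, works the same way). The corpus, query, and passages are all hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. Real retrieval uses
    # learned dense vectors, but the matching step (cosine similarity
    # in a vector space) is structurally the same.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus: the first passage shares vocabulary with the query
# but answers nothing; the second answers it in different words.
corpus = [
    "transformer models use attention layers heavily",
    "self-attention cost grows quadratically with sequence length",
]

query = "why are transformer attention layers expensive"
q = embed(query)
ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
# The word-overlapping but unhelpful passage ranks first: topical
# proximity, not the question, drives the match.
```

Under this toy measure the vocabulary-sharing passage scores 0.5 against the query while the genuinely responsive one scores 0.0, which is the note's point in miniature: similarity of surface language, not pertinence to the inquiry, decides what is retrieved.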

This is not just a technical limitation — it is an epistemological transformation. Search shifts from finding to steering. The prompt doesn't discover what exists; it shapes how existing knowledge is packaged. Since Do vector embeddings actually measure task relevance?, we know that semantic proximity and task relevance are different things. The expert who queries an LLM is navigating embedding space, not navigating the conceptual territory of their domain. The map is not the territory, and the embedding space is a map of statistical co-occurrence, not a map of meaning.

The custodial consequence is profound. The expert must develop a new literacy: understanding how LLMs handle topics and their relationships. Where a search engine ranks results by signals such as PageRank, authority, popularity, relevance, and personal preferences, an LLM's prompted responses are governed by linguistic probabilities and patterns shaped by alignment, human feedback, reinforcement learning, and "reasoning" reflections. The custodial work involves thinking like an LLM in order to get the best results from it.
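The two regimes can be caricatured side by side. The sketch below is illustrative only: the signal names and weights are invented, not any engine's real formula, and the logits are arbitrary numbers standing in for a model's output.

```python
import math

# Search-engine style: an explicit, inspectable weighted combination of
# signals. Signals and weights here are hypothetical, for illustration.
WEIGHTS = {"pagerank": 0.4, "relevance": 0.5, "popularity": 0.1}

def rank_score(doc):
    return sum(WEIGHTS[k] * doc[k] for k in WEIGHTS)

docs = [
    {"name": "authoritative", "pagerank": 0.9, "relevance": 0.3, "popularity": 0.8},
    {"name": "on-topic", "pagerank": 0.2, "relevance": 0.9, "popularity": 0.4},
]
ranked = sorted(docs, key=rank_score, reverse=True)

# LLM style: the continuation is drawn from a probability distribution
# over tokens (softmax over logits), a distribution that alignment and
# feedback training have already reshaped before the user ever prompts.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

token_probs = softmax([2.0, 1.0, 0.5])  # arbitrary example logits
```

The contrast is the point: the first mechanism is a scoring function whose inputs a user can in principle reason about; the second is a probability distribution whose shape is opaque, which is why "thinking like an LLM" becomes a literacy of its own.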

This creates a strange recursion. The expert, whose value lies in domain judgment, must now develop judgment about the AI system that is supposed to augment their domain judgment. The meta-competence (knowing how to prompt effectively) becomes as important as the domain competence (knowing what to ask about). And the two competences are different — a brilliant researcher may be a poor prompter, and an effective prompter may lack the domain depth to evaluate what they retrieve.

The real-time mechanics make this worse. Since Why can't users articulate what they want from AI?, the expert's own intent matures through the process of search. But the speed and completeness of AI-mediated search short-circuit this maturation. Watching or scanning searches conducted in real time against research sources is taxing; it all happens too quickly to permit real supervision. Over time, users grow passive and simply accept the queries, searches, and sources the LLM has chosen. This is an abdication of one of the expert's core practices: the selection of relevant sources.

The irony: AI-mediated search is vastly more comprehensive than human search, but comprehensiveness is not the dimension on which expertise operates. Expertise operates on relevance — the ability to see which connections matter and which don't. Since Can AI distinguish which differences actually matter?, comprehensive retrieval without qualitative selection is not expertise augmented. It is expertise displaced by a different kind of operation.
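The comprehensiveness-versus-relevance trade-off is the familiar precision/recall tension, sketched here with hypothetical document ids: a retriever that returns nearly everything achieves perfect recall at dismal precision, while an expert's narrow selection does the reverse.

```python
def precision_recall(retrieved, relevant):
    # Precision: what fraction of what you got actually matters.
    # Recall: what fraction of what matters you actually got.
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical ids: only d1 and d2 actually bear on the question.
relevant = {"d1", "d2"}
comprehensive = {"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"}  # retrieves nearly everything
selective = {"d1"}                                               # an expert's narrow pick

p_c, r_c = precision_recall(comprehensive, relevant)  # 0.25, 1.0
p_s, r_s = precision_recall(selective, relevant)      # 1.0, 0.5
```

Comprehensive retrieval maximizes recall; expertise, as the note argues, operates on the precision axis, and no amount of recall substitutes for the qualitative judgment of which of the eight retrieved documents are the two that matter.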


Source: inbox/Knowledge Custodians.md


prompting transforms search from disinterested inquiry to interested steering — the expert must internalize LLM mechanics to become an effective custodian