Language Understanding and Pragmatics

Do classical knowledge definitions apply to AI systems?

Classical definitions of knowledge assume truth-correspondence and a human knower. Do these assumptions hold for LLMs and distributed neural knowledge systems, or do they need fundamental revision?

Note · 2026-02-21 · sourced from Discourses
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The classical definition of knowledge — justified true belief, from Plato — embeds two assumptions that LLMs systematically violate:

  1. Truth-correspondence: knowledge must correspond to reality; a belief is knowledge only if it is true in a world-matching sense
  2. Human necessity: knowledge requires a knower; the epistemic agent is necessarily a human (or human-like) subject

The "Theory of Knowledge Based on Discursive Space" paper argues both assumptions are now untenable. First, the correspondence relationship between knowledge and world has been progressively dismantled — knowledge is increasingly understood as practice-dependent, contextual, and negotiated rather than as a fixed correspondence to an objective world. Second, the removal of man as a "necessary instance in the description of the existence of knowledge" follows once knowledge-holding entities can be artificial cognitive systems.

The non-symbolic, distributed manner of knowledge representation in neural AI — where no single location holds a fact, where there is no single knower, and where "truth" is replaced by probabilistic co-occurrence — makes Plato's definition not just wrong but structurally inapplicable.
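As a toy illustration (mine, not from the paper), the contrast can be made concrete: in a distributional system no proposition like "Paris is the capital of France" is stored anywhere — the association exists only as graded co-occurrence statistics spread across many pair counts. The corpus and scoring below are entirely hypothetical, a minimal sketch of the idea:

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each "document" is a bag of tokens (hypothetical data).
corpus = [
    ["paris", "capital", "france"],
    ["paris", "city", "france"],
    ["berlin", "capital", "germany"],
]

# Build pairwise co-occurrence counts. Note that no single entry
# holds a fact; the "knowledge" is distributed across the counts.
cooc = defaultdict(int)
for doc in corpus:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

def associates(word):
    """Return graded associations for a word, strongest first --
    a probabilistic answer, not a true/false proposition."""
    scores = {}
    for (a, b), n in cooc.items():
        if a == word:
            scores[b] = n
        elif b == word:
            scores[a] = n
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(associates("paris"))  # "france" ranks highest, by count, not by truth
```

Asking this system whether it "knows" the capital of France is the category error the note describes: the strongest response it can give is that "france" co-occurs with "paris" more often than other tokens do.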

This has a practical consequence: asking whether an LLM "knows" something, or testing it for "factual accuracy" as if those standards apply straightforwardly, is a category error. The probe is designed for a different kind of knowledge system. What LLMs have is more like a probabilistic discursive space — a dynamic manifold where concepts relate to each other by their distributional trajectories, not by their correspondence to external facts.

The implication is not that LLMs have no knowledge, but that we need a different framework to describe what they have — one that doesn't presuppose truth or a human holder.

The intersubjective production dimension: The Knowledge Custodians analysis adds a social-production framing that complements the discursive-space argument. Human knowledge is not merely subjective — it is intersubjectively produced through conversation and consensus building among experts. AI is trained on the linguistic expression of this knowledge (the validity claims, arguments, and statements) but not on the process of arguing and communicating that produced it. AI was not a participant in the creation and stabilization of linguistic knowledge. It lacks context for which claims were used together to support various arguments, what was contested, what was settled, and what implicit agreements underwrote explicit statements. As the related question "Can AI ever gain expert community trust through participation?" highlights, the knowledge that AI reproduces was validated by a social process it cannot access. The practical consequence: AI can select from documented validity claims but cannot produce new ones, because producing a validity claim requires the social intelligence to anticipate how it will be received — intelligence that requires being a participant in, not just an observer of, the knowledge community.


Source: Discourses; enriched from inbox/Knowledge Custodians.md


ai knowledge systems require abandoning truth-correspondence and human necessity as epistemic preconditions