Do vector embeddings actually measure task relevance?
Vector embeddings rank semantic similarity, but RAG systems need topical relevance. When these diverge—as with king/queen versus king/ruler—does similarity-based retrieval fail in production?
The king/queen/ruler problem illustrates a fundamental misconception embedded in RAG architecture. Vector embeddings trained on language co-occurrence measure semantic association — how often concepts appear in related contexts. King and queen appear frequently together in discussions of royalty, so they are highly similar (92%). King and ruler appear in the same conceptual category but co-occur less, so similarity is lower (83%).
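A minimal sketch of how such similarity scores are computed with an off-the-shelf bi-encoder, assuming the sentence-transformers library (the model name is an arbitrary example, and the exact numbers will differ from the illustrative 92%/83% figures above):

```python
# Minimal sketch: what a bi-encoder actually scores for king/queen/ruler.
# Assumes the sentence-transformers library; the model choice is illustrative
# and the printed numbers will not match the 92%/83% figures exactly.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(["king", "queen", "ruler"], convert_to_tensor=True)

print(f"king vs queen: {util.cos_sim(emb[0], emb[1]).item():.2f}")  # typically higher: strong co-occurrence
print(f"king vs ruler: {util.cos_sim(emb[0], emb[2]).item():.2f}")  # typically lower: same category, less co-occurrence
```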
But the relevant criterion for a RAG system is not semantic association — it is whether a chunk answers the query. For a query about "king," chunks discussing "ruler" are more relevant than chunks discussing "queen" even though queen is more similar by embedding distance. Semantic similarity and task relevance diverge whenever concepts are closely associated but play different roles.
This divergence is not a calibration problem or a model quality problem — it is structural. Embeddings cannot know what role a concept plays in a query without understanding the query's intent. They can only return what is semantically nearby. For many RAG use cases this is a sufficient approximation. For others — precision-critical domains, complex queries, queries where highly associated concepts would be wrong answers — it fails.
The production failure pattern: RAG demos work because demo queries are carefully chosen to favor semantic retrieval (simple, unique topics, clear information needs). Production queries are messy — underspecified, multi-intent, and aimed at concepts whose closest semantic neighbors are wrong answers. The semantic association measure that works in demos becomes a noise source in production.
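One way to catch this before launch is to score retrieval against relevance labels on a sample of production-style queries rather than demo queries. A minimal sketch, assuming a sentence-transformers bi-encoder and a small hand-labeled query set (model and helper names are illustrative, not any specific framework's API):

```python
# Sketch of an offline check for the association-relevance gap. Assumes a small
# hand-labeled set of production-style queries; names are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def precision_at_k(query: str, chunks: dict[str, str], relevant_ids: set[str], k: int = 5) -> float:
    """Fraction of the top-k retrieved chunks that are actually labeled relevant."""
    ids = list(chunks)
    query_emb = model.encode(query, convert_to_tensor=True)
    chunk_emb = model.encode([chunks[i] for i in ids], convert_to_tensor=True)
    scores = util.cos_sim(query_emb, chunk_emb)[0]
    top = [ids[int(i)] for i in scores.argsort(descending=True)[:k]]
    return len(set(top) & relevant_ids) / k

# Compare this metric on polished demo queries vs sampled production queries;
# a large drop signals semantic association substituting for task relevance.
```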
Re-ranking, advanced chunking, and other "Advanced RAG" techniques address symptoms. They do not fix the fundamental mismatch between what embeddings measure and what retrieval needs to optimize.
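A typical two-stage setup, sketched below with an illustrative cross-encoder model, makes the limitation concrete: the re-ranker only reorders the candidates that similarity search already surfaced, so a relevant chunk the embedding model never retrieved stays invisible.

```python
# Sketch of a standard two-stage pipeline: embedding search for recall,
# cross-encoder for re-ranking. The model name is illustrative. Structurally,
# stage 2 can only reorder what stage 1's similarity search returned; it cannot
# recover a relevant chunk that was never in the candidate pool.
from sentence_transformers import CrossEncoder

def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[str]:
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = reranker.predict([(query, chunk) for chunk in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]
```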
LLM attention on graph-structured data reveals a parallel mismatch. When LLMs are fine-tuned on graph data, their attention patterns shift toward node tokens — they learn to recognize graph entities. But shuffling node connectivity has no effect on performance, meaning the model attends to nodes without modeling the relationships between them. The same structural limitation appears in both embedding retrieval (association without relevance) and LLM graph processing (recognition without relational modeling). See Can language models actually use graph structure information?.
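The connectivity-shuffle ablation behind that finding can be sketched roughly as follows (a hypothetical harness, not the original study's code): keep the node set fixed, rewire the edges, and check whether downstream performance moves.

```python
# Sketch of a connectivity-shuffle ablation (hypothetical harness): keep the
# same nodes, randomize which pairs are connected, and compare task accuracy.
# If accuracy is unchanged, the model is attending to node identity, not to the
# relations between nodes.
import random

def shuffle_connectivity(edges: list[tuple[str, str]], seed: int = 0) -> list[tuple[str, str]]:
    rng = random.Random(seed)
    sources = [s for s, _ in edges]
    targets = [t for _, t in edges]
    rng.shuffle(targets)                # break the source -> target pairings
    return list(zip(sources, targets))  # same nodes, different structure

original = [("A", "B"), ("B", "C"), ("C", "D")]
shuffled = shuffle_connectivity(original)
# Evaluate the fine-tuned model on prompts built from `original` and from
# `shuffled`; equal scores indicate recognition without relational modeling.
```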
Source: RAG; enriched from Knowledge Graphs
Related concepts in this collection
- When do graph databases outperform vector embeddings for retrieval?
  Vector similarity struggles with aggregate and relational queries that require traversing multiple entity connections. Can graph-oriented databases with deterministic queries solve this failure mode in enterprise domain applications?
  Complementary failure mode: relational/aggregate queries generate too many candidates; this note adds the simpler case where semantic proximity ≠ topical relevance
- Do LLMs predict entailment based on what they memorized?
  Explores whether language models make entailment decisions by recognizing memorized facts about the hypothesis rather than reasoning through the logical relationship between premise and hypothesis.
  Analogous structural problem: what the model is trained to measure (surface attestation) diverges from what the task requires (logical inference)
- Can language models actually use graph structure information?
  After fine-tuning on graph data, do LLMs learn to use actual connectivity patterns, or just recognize that graphs exist? This matters for understanding whether transformers can handle structured reasoning tasks.
  Parallel limitation: LLMs recognize graph entities without modeling relationships, mirroring embedding retrieval's association-without-relevance problem
- Why do time-based queries fail in conversational retrieval systems?
  Conversational memory systems struggle with questions that reference when something was discussed rather than what was said. Standard vector databases lack temporal indexing to retrieve by metadata like date, speaker, or session order.
  Conversational retrieval is a concrete domain where semantic association fails: "what did we discuss Tuesday?" requires temporal metadata, not semantic proximity
- How do logic units preserve procedural coherence better than chunks?
  Can structured retrieval units with prerequisites, headers, bodies, and linkers maintain step-by-step coherence in how-to answers where fixed-size chunks fail? This matters because procedural questions require sequential logic and conditional branching that chunk-based RAG cannot support.
  Logic units address the task-relevance gap directly: headers enable intent-based retrieval (matching queries to purpose rather than surface similarity), replacing semantic association with task-relevant indexing
- Does semantic grounding in language models come in degrees?
  Rather than asking whether LLMs truly understand meaning, this explores whether grounding is actually a multi-dimensional spectrum. The question matters because it reframes the sterile understand/don't-understand debate into measurable, distinct capacities.
  The association-vs-relevance failure is the retrieval instantiation of the functional-vs-causal grounding gap: embeddings capture functional grounding (coherent internal associations) but lack causal grounding (what concept actually serves a query's purpose); king/queen similarity is high because functional association is strong, not because queen is relevant to a king-governance query
- Do embedding dimensions fundamentally limit retrievable document combinations?
  Can single-vector embeddings represent any top-k document subset a user might need? Research using communication complexity theory suggests there are hard geometric limits independent of training data or model architecture.
  Compounds the semantic problem with a mathematical one: even if embeddings measured relevance rather than association, dimension constraints limit the document combinations representable in the space
- Why do decoder-only models underperform as text encoders?
  Decoder-only LLMs use causal attention, which limits each token to seeing only prior context. This explores whether removing this constraint could make them competitive universal encoders without architectural redesign.
  LLM2Vec's contrastive learning step is relevant because it aligns representations for similarity rather than association, potentially addressing the semantic-vs-relevance gap at the encoder level; causal masking may contribute to association-over-relevance by limiting each token's representation to preceding context only
- Does reasoning ability actually degrade with longer inputs?
  Explores whether modern language models can maintain reasoning performance when processing long contexts, and whether technical capacity translates to practical reasoning capability over extended text.
  Downstream consequence of the association-relevance mismatch: when retrieval returns semantically associated but task-irrelevant documents, it creates the irrelevant padding that FLenQA shows degrades reasoning from 0.92 to 0.68 at just 3000 tokens
- Can rationale-driven selection beat similarity re-ranking for evidence?
  Can LLMs generate search guidance that outperforms traditional similarity-based evidence ranking? This matters because current re-ranking lacks interpretability and fails against adversarial attacks.
  A direct architectural response: replaces similarity scoring with rationale-driven criteria evaluation, achieving 33% better accuracy with 50% fewer chunks by measuring task relevance instead of semantic association
- Why do speakers need to actively calibrate shared reference?
  Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding.
  Embedding retrieval has the same problem as LLM grounding failures: semantic proximity (word co-occurrence) does not equal shared reference (what concept actually serves a communicative purpose); the king/queen/ruler problem is a retrieval-level instantiation of referential grounding failure
- Do language models actually build shared understanding in conversation?
  When LLMs respond fluently to prompts, do they perform the communicative work humans do to establish mutual understanding? Research suggests they skip the grounding acts that make dialogue reliable.
  Retrieval systems share the same presumption failure: vector DBs presume semantic similarity is equivalent to query intent without verifying relevance; both systems proceed as if surface-level alignment (word proximity / lexical match) guarantees communicative success
Original note title
vector embeddings measure semantic association not task relevance — causing production RAG failures