Can RAG systems refuse to answer without reliable evidence?
Explores whether retrieval-augmented generation can be designed to abstain from answering when sources are corrupted or insufficient, rather than filling gaps with plausible-sounding guesses. This matters for historical text where OCR errors and language drift are common.
A hybrid multilingual RAG system for question answering over noisy historical newspapers handles two kinds of corruption that modern RAG benchmarks largely ignore: OCR errors that scramble surface text, and language drift, where vocabulary and orthography shift across centuries within the same corpus. Its defense against both is structural rather than corrective: it never tries to denoise the text itself. The pipeline uses semantic query expansion to widen what counts as a match, multi-query retrieval with Reciprocal Rank Fusion to consolidate evidence across query variants, and, most importantly, a grounded generation prompt that produces answers only when evidence is actually retrieved.
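A minimal sketch of the multi-query retrieval and Reciprocal Rank Fusion step. The `search` and `expand` callables are illustrative assumptions, not the system's actual API; the constant `k=60` is the value from the original RRF paper (Cormack et al., 2009).

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each ranking votes 1/(k + rank) per document.

    k=60 follows Cormack et al. (2009); it damps the top ranks so that
    no single query variant can dominate the fused list.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def multi_query_retrieve(query, search, expand, per_query_k=20):
    """Widen recall on noisy text: run every semantic variant of the
    query, then consolidate the per-variant rankings with RRF."""
    variants = [query] + expand(query)              # semantic query expansion
    rankings = [search(v, per_query_k) for v in variants]
    return rrf_fuse(rankings)
```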
The grounded-refusal step is what distinguishes this from a typical noisy-RAG approach. When sources are corrupted, the generator is tempted to fill the gaps from prior knowledge, which produces plausible-sounding but ungrounded answers. The grounded prompt makes refusal the default when retrieval fails, preserving answer integrity at the cost of coverage. Combined with the semantic and multi-query expansion that improves recall on degraded text, the system trades hallucination for honest "I cannot find this" responses. The trade is not free to sustain: "Does reasoning fine-tuning make models worse at declining to answer?" shows that recent training trends actively work against this kind of refusal posture.
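The refusal policy itself reduces to a guard plus a constrained prompt. A sketch under assumed names: `retrieve`, `generate`, and the `min_score` threshold are illustrative placeholders, not details from the source.

```python
REFUSAL = "I cannot find this in the available sources."

GROUNDED_PROMPT = """Answer ONLY from the passages below. If the passages do not
contain the answer, reply exactly: "{refusal}"

Passages:
{passages}

Question: {question}
Answer:"""

def grounded_answer(question, retrieve, generate, min_score=0.35):
    # Refuse before generation: no evidence means no answer, not a guess.
    hits = [(doc, score) for doc, score in retrieve(question) if score >= min_score]
    if not hits:
        return REFUSAL
    passages = "\n---\n".join(doc for doc, _ in hits)
    prompt = GROUNDED_PROMPT.format(
        refusal=REFUSAL, passages=passages, question=question
    )
    # Even with evidence present, the prompt keeps refusal as the default
    # whenever the passages do not actually contain the answer.
    return generate(prompt)
```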
The general principle is that corruption-tolerant RAG should expand retrieval aggressively while constraining generation conservatively — recall up, but only generate when grounded. This inverts the implicit policy of most RAG systems, which is to retrieve narrowly and generate freely. For high-noise corpora the inversion is the correct trade.
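Composing the two halves above shows the inverted policy directly: aggressive expansion and fusion on the retrieval side, a refusal guard on the generation side. `fetch_text` is an assumed lookup from document ID to passage text.

```python
def answer(question, search, expand, generate, fetch_text, top_n=8):
    """Recall up, generation constrained: fuse wide retrieval,
    then generate only if evidence survived; otherwise refuse."""
    doc_ids = multi_query_retrieve(question, search, expand)[:top_n]
    if not doc_ids:
        return REFUSAL
    passages = "\n---\n".join(fetch_text(d) for d in doc_ids)
    prompt = GROUNDED_PROMPT.format(
        refusal=REFUSAL, passages=passages, question=question
    )
    return generate(prompt)
```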
Source: 12 types of RAG
Related concepts in this collection
- Does reasoning fine-tuning make models worse at declining to answer?
  When models are trained to reason better, do they lose the ability to say "I don't know"? This matters for high-stakes applications like medical and legal AI that depend on appropriate uncertainty.
  extends: documents the model-side obstacle to grounded refusal; recent fine-tuning regimes actively suppress the abstention capacity this RAG primitive depends on
- Does training objective determine which direction models fail at abstention?
  Calibration failures might not be universal: different training approaches could push models toward opposite extremes of refusing or overconfidently answering. Understanding whether the training objective, not just model capability, drives these failures could reshape how we think about fixing them.
  extends: explains why the grounded-refusal prompt has to be explicit; without it, the underlying model's training objective biases generation away from "I don't know"
- Can any computable LLM truly avoid hallucinating?
  Explores whether formal theorems prove hallucination is mathematically inevitable for all computable language models, regardless of their design or training approach.
  supports: gives the formal reason grounded refusal is the right RAG primitive for noisy corpora; confabulation cannot be eliminated at the model level, only mediated by retrieval-time policy
- Why do queries and documents occupy different embedding spaces?
  Queries and documents express the same information in fundamentally different ways: short and interrogative versus long and declarative. Understanding this mismatch is key to why direct embedding retrieval often fails.
  extends: the same retrieval-side widening move (semantic query expansion ≈ HyDE), but coupled with grounded refusal rather than open generation
Original note title: grounded generation that refuses to answer without evidence is the noise-tolerant RAG primitive — OCR errors and language drift do not justify confabulation