The correct document does not exist in the corpus, yet the model fabricates a plausible answer anyway.
rag-axis response: EmptyRetrievalError with corpus coverage signal.
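A minimal sketch of how an EmptyRetrievalError carrying a corpus coverage signal might look. The exception name comes from the text above; the constructor shape, the `retrieve` helper, and its keyword-match scoring are illustrative assumptions, not the real rag-axis API.

```python
class EmptyRetrievalError(Exception):
    """Raised when nothing in the corpus matches, instead of returning nothing."""

    def __init__(self, query: str, corpus_size: int):
        super().__init__(f"no match for {query!r} in corpus of {corpus_size} docs")
        self.query = query
        # Coverage signal: how much of the corpus was actually searched,
        # so the caller can distinguish "empty index" from "real miss".
        self.corpus_size = corpus_size


def retrieve(query: str, index: dict[str, str]) -> str:
    """Toy retriever: substring match stands in for vector search."""
    hits = [doc for doc in index.values() if query.lower() in doc.lower()]
    if not hits:
        # Fail loudly instead of handing the LLM an empty context to
        # fabricate around.
        raise EmptyRetrievalError(query, corpus_size=len(index))
    return hits[0]
```

The point of the signal is that the caller can branch on `corpus_size` (e.g. alert on an unexpectedly small index) rather than silently falling through to generation.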
The correct chunk is indexed but scores too low to survive the top-k cutoff, so the system behaves as if the data is missing.
rag-axis response: BelowThresholdRetrieval with highest_score exposed.
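A sketch of the near-miss case, assuming `BelowThresholdRetrieval` exposes the `highest_score` named above. The `top_k` function and its parameters are hypothetical scaffolding around that idea.

```python
class BelowThresholdRetrieval(Exception):
    """The corpus had candidates, but none survived the score cutoff."""

    def __init__(self, highest_score: float, threshold: float):
        super().__init__(
            f"best candidate scored {highest_score:.2f}, "
            f"below threshold {threshold:.2f}"
        )
        # Exposed so callers can tell a near miss (0.49 vs a 0.50
        # threshold) from a truly absent document (0.02).
        self.highest_score = highest_score
        self.threshold = threshold


def top_k(scored: list[tuple[str, float]], k: int, threshold: float) -> list[str]:
    """Return up to k docs above threshold, or raise with the best score seen."""
    survivors = [(doc, s) for doc, s in scored if s >= threshold]
    if not survivors:
        raise BelowThresholdRetrieval(max(s for _, s in scored), threshold)
    survivors.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in survivors[:k]]
```

A monitoring layer can then treat a high `highest_score` just under the cutoff as a tuning problem rather than a coverage problem.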
The chunk is retrieved, but the chunking algorithm has severed its semantic dependencies, and the LLM misinterprets the isolated fragment.
rag-axis response: Chunk provenance (position, parent doc) mandatory via I1.
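A sketch of what mandatory chunk provenance could look like: every fragment carries its parent document and position, so a consumer can re-fetch neighbors when a chunk arrives semantically cut off. The `Chunk` dataclass and the fixed-size splitter are illustrative; only the provenance fields (position, parent doc) come from the text.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Chunk:
    text: str
    parent_doc: str  # which document this fragment came from
    position: int    # index within the parent, so neighbors can be re-fetched


def chunk_document(doc_id: str, text: str, size: int) -> list[Chunk]:
    """Naive fixed-size splitter that attaches provenance to every fragment."""
    return [
        Chunk(text[i:i + size], parent_doc=doc_id, position=i // size)
        for i in range(0, len(text), size)
    ]
```

Given a retrieved `Chunk`, `(parent_doc, position - 1)` and `(parent_doc, position + 1)` identify the surrounding context that the splitter may have severed.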
The relevant context is retrieved and injected, but the LLM ignores it: lost-in-the-middle syndrome.
rag-axis response: Context ordering strategy in rag_axis.context.
The most dangerous class: HTTP 200, normal latency, and a confident but wrong answer. Caused by relevance drift, knowledge staleness, or embedding model mismatch.
rag-axis response: corpus_version, content_age_days, EmbeddingModelMismatchError, and ScoreCollapseWarning. All are mandatory signals, never optional.
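A sketch tying those four signals together. The signal names come from the text; the `Retrieval` container, the `retrieve_with_signals` function, and the model names are illustrative assumptions about how such a contract could be enforced.

```python
import warnings
from dataclasses import dataclass


class EmbeddingModelMismatchError(Exception):
    """The query was embedded with a different model than the index."""


class ScoreCollapseWarning(UserWarning):
    """All candidate scores are nearly identical: relevance has drifted."""


@dataclass
class Retrieval:
    chunks: list[str]
    corpus_version: str    # mandatory: which corpus snapshot answered
    content_age_days: int  # mandatory: staleness signal for the caller


def retrieve_with_signals(
    query_model: str,
    index_model: str,
    scored: list[tuple[str, float]],
    corpus_version: str,
    content_age_days: int,
) -> Retrieval:
    """Return results only alongside the mandatory health signals."""
    if query_model != index_model:
        # Mixed embedding spaces produce plausible-looking garbage scores.
        raise EmbeddingModelMismatchError(f"{query_model!r} != {index_model!r}")
    scores = [s for _, s in scored]
    if max(scores) - min(scores) < 0.01:  # illustrative flatness cutoff
        warnings.warn("flat score distribution", ScoreCollapseWarning)
    return Retrieval([c for c, _ in scored], corpus_version, content_age_days)
```

Because the signals live on the return type and the error path, a caller cannot get chunks without also getting the version and staleness metadata, which is what makes a silent HTTP-200 failure visible.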