RAG Failure Modes Guide

The 10 most common RAG failure modes — how to identify, diagnose, and fix them in production.

RAG (Retrieval-Augmented Generation) systems fail in predictable patterns. After analyzing hundreds of production RAG pipelines, we've identified 10 recurring failure modes grouped into three layers: retrieval, ranking, and generation.

🔍

Diagnose Your RAG Failure Automatically

Paste your RAG trace or describe the problem. Get instant failure mode classification and copy-paste code fixes.

Try RAG Failure Debugger — Free

3 free analyses/month · Pro unlimited at $9/mo

Retrieval Layer Failures

Chunk Boundary Mismatch

Your text splitter cuts across semantic boundaries — a question about 'payment refund policy' retrieves a chunk that starts mid-sentence about something else. Fix: Use RecursiveCharacterTextSplitter with 10-15% overlap, or switch to semantic chunking.
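In production you'd likely use LangChain's RecursiveCharacterTextSplitter, but the core idea is just a sliding window whose step is smaller than its size. A minimal plain-Python sketch of 12% character overlap (the function name and sizes are illustrative, not a library API):

```python
def chunk_with_overlap(text: str, chunk_size: int = 500, overlap: int = 60) -> list[str]:
    """Fixed-size character chunks with ~12% overlap, so a sentence cut
    at one chunk's boundary also appears intact in the neighboring chunk."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk's last 60 characters reappear at the start of the next chunk, so a query matching a boundary sentence still retrieves it whole.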

Embedding Model Drift

You updated the embedding model but didn't re-index. Query embeddings now live in a different vector space than document embeddings — similarity scores are meaningless. Fix: Always re-index after changing embedding models. Store model name + version in index metadata.
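A minimal guard for this, assuming you follow the advice above and store the embedding model name in the index metadata (the function name and metadata key here are illustrative choices):

```python
def assert_same_embedding_model(index_meta: dict, query_model: str) -> None:
    """Fail fast when the query-time and index-time embedding models differ:
    their vectors live in different spaces and similarity is meaningless.
    index_meta is whatever metadata dict you stored alongside the index."""
    indexed = index_meta.get("embedding_model")
    if indexed != query_model:
        raise RuntimeError(
            f"Index built with {indexed!r}, querying with {query_model!r}: "
            "re-index before serving queries."
        )
```

Call it once at startup and on every index load, so a mismatch surfaces as a loud error instead of silently degraded retrieval.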

Similarity Threshold Too Low

Your retriever returns 20 chunks when you only need 5 relevant ones. The noise overwhelms the signal in the context window. Fix: Set score_threshold=0.7 as a starting point. Monitor average chunk count per query.
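The threshold-plus-cap filter can be sketched in a few lines; 0.7 and k=5 are the starting points suggested above, not universal constants, and the right values depend on your embedding model's score distribution:

```python
def top_relevant(hits, score_threshold: float = 0.7, max_k: int = 5):
    """Drop low-similarity hits before they reach the context window.
    hits: (chunk, score) pairs, assumed sorted by score, descending."""
    return [(chunk, score) for chunk, score in hits if score >= score_threshold][:max_k]
```

Logging `len(top_relevant(hits))` per query gives you the average-chunk-count metric to monitor.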

Query Preprocessing Gap

Users ask 'how do I cancel?' but your index contains 'subscription termination procedure'. No lexical overlap, low semantic similarity. Fix: Add query expansion or HyDE (Hypothetical Document Embeddings) to bridge vocabulary gaps.
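The shape of HyDE is simple: generate a hypothetical answer, then search with *its* embedding rather than the question's. A sketch with pluggable callables — `generate`, `embed`, and `search` stand in for your own LLM, embedding model, and vector store, not any specific library's API:

```python
def hyde_query(question, generate, embed, search, k: int = 5):
    """HyDE: embed a hypothetical *answer* instead of the raw question,
    bridging the vocabulary gap between user phrasing and the corpus."""
    hypothetical = generate(f"Write a short passage that answers: {question}")
    return search(embed(hypothetical), k=k)
```

The generated passage tends to use corpus-like vocabulary ("subscription termination") even when the question doesn't ("how do I cancel?"), which is exactly what closes the gap.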

Ranking Layer Failures

Re-ranker Model Mismatch

You're using a general-purpose cross-encoder on a domain-specific corpus (legal, medical, financial). Re-ranker confidence scores are unreliable. Fix: Use domain-fine-tuned re-rankers, or fall back to bi-encoder similarity for specialized domains.
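One way to implement the bi-encoder fallback is to detect when the cross-encoder barely separates the candidates — a nearly flat score range is a common symptom of domain mismatch. The `min_spread` cutoff below is an illustrative heuristic we're assuming for the sketch, not a standard value:

```python
def rerank_with_fallback(bi_scores, cross_scores, min_spread: float = 0.1):
    """Return candidate indices, best first. If the cross-encoder's scores
    are nearly flat (it can't tell the candidates apart), fall back to
    the bi-encoder ordering instead. Score lists are parallel, one
    entry per candidate."""
    spread = max(cross_scores) - min(cross_scores)
    scores = cross_scores if spread >= min_spread else bi_scores
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)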

Context Window Truncation

You retrieve 20 chunks but only 8 fit in the LLM context. The most relevant chunk happens to be #15 — it gets silently dropped. Fix: Track token count before sending. Prioritize by re-rank score, not retrieval order.
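The fix amounts to packing a token budget in re-rank order rather than retrieval order. A sketch, using a rough chars/4 token estimate as a stand-in for your real tokenizer:

```python
def pack_context(ranked_chunks, budget: int = 8000, count_tokens=lambda t: len(t) // 4):
    """Greedily fill the token budget in re-rank order (best first), so
    the top-scoring chunk can never be silently truncated away."""
    packed, used = [], 0
    for chunk in ranked_chunks:
        n = count_tokens(chunk)
        if used + n > budget:
            continue  # this chunk doesn't fit; smaller ones still might
        packed.append(chunk)
        used += n
    return packed
```

Because iteration follows the re-rank scores, chunk #15 from the example above is packed before lower-scoring chunks that merely happened to come back earlier.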

Metadata Filtering Conflict

A user asks about 'Q3 2024 pricing' but your metadata filter excludes documents from 2024. Zero results returned silently. Fix: Always log filter parameters. Return a 'no results with these filters' message rather than hallucinating.
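Both halves of the fix — log the filters, and surface empty results explicitly — fit in one small wrapper. `search` is a placeholder for your vector-store query callable; the return shape is an illustrative convention:

```python
import logging

log = logging.getLogger("rag.retrieval")

def retrieve_with_filters(search, query, filters):
    """Log the active filter parameters on every call, and return an
    explicit empty-result note instead of handing the LLM nothing to cite."""
    log.info("retrieval query=%r filters=%r", query, filters)
    hits = search(query, filters)
    if not hits:
        return {"hits": [], "note": f"No documents matched filters {filters}; try relaxing them."}
    return {"hits": hits, "note": None}
```

Downstream, a non-empty `note` should short-circuit generation with a "no results with these filters" reply rather than letting the model improvise.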

Generation Layer Failures

System Prompt Drift

In a multi-turn conversation, injected context from turn 3 bleeds into the system prompt of turn 4. The LLM behaves inconsistently. Fix: Never mutate the system prompt. Reconstruct messages fresh each turn from an immutable core prompt.
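A sketch of the immutable-core pattern: the system prompt is a constant, retrieved context lives only in the current user message, and the message list is rebuilt from scratch each turn (the prompt wording is an example, not a prescription):

```python
SYSTEM_PROMPT = "You are a support assistant. Answer only from the provided context."

def build_messages(history, context, question):
    """Rebuild the message list from an immutable system prompt each turn.
    Retrieved context goes in the user message, never the system prompt,
    so turn N's context cannot bleed into turn N+1."""
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user},
    ]
```

Because nothing is ever appended to `SYSTEM_PROMPT`, turn 4's system message is byte-identical to turn 3's regardless of what was retrieved in between.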

Hallucination from Sparse Context

The retrieved chunks don't contain the answer. The LLM invents a plausible-sounding response rather than saying 'I don't know'. Fix: Add explicit fallback instruction: 'If the answer is not in the provided context, say so.'
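One way to wire in the fallback instruction is a prompt template like the sketch below; the exact refusal wording is an illustrative choice, not a magic phrase, but pinning it to an exact string makes the refusal easy to detect downstream:

```python
def grounded_prompt(context: str, question: str) -> str:
    """Wrap retrieved context with an explicit refusal instruction so
    sparse context produces 'not found' instead of a fabricated answer."""
    return (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "If the answer is not in the provided context, reply exactly: "
        '"I can\'t find that in the provided documents."'
    )
```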

Instruction Following Failure

The LLM ignores the 'answer in bullet points' or 'cite sources' instruction. Usually caused by long retrieved context diluting the model's attention to the system prompt. Fix: Repeat critical instructions at the end of the user message, not just in the system prompt.
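The repeat-at-the-end fix can be sketched as a small message builder (the default instruction string is just an example):

```python
def build_user_message(context: str, question: str,
                       instructions: str = "Answer in bullet points and cite sources.") -> str:
    """State critical formatting instructions before the context, then
    repeat them after it, so they sit near the end of a long prompt
    where the model is least likely to lose track of them."""
    return (
        f"{instructions}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        f"Reminder: {instructions}"
    )
```

However long the retrieved context grows, the reminder stays adjacent to the question at the end of the message.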

Automate Your RAG Diagnosis

Manually working through this checklist for every RAG failure is time-consuming. The RAG Failure Debugger automates the classification step — paste your trace or describe the problem, and get an instant failure mode diagnosis with copy-paste code fixes.

Recommended Hosting for AI/ML Projects

  • DigitalOcean — $200 free credit. GPU droplets for LLM inference, managed vector DBs coming soon.
  • Hostinger — From $2.99/mo. Fast VPS for RAG API servers.
  • Namecheap — Budget hosting + free domain for your AI projects.