A Comprehensive Guide to RAG Implementations

There are many types!

Welcome to the 465 new members this week! This newsletter now has 48,965 subscribers.


Retrieval-Augmented Generation (RAG) is a rapidly evolving concept in AI-driven applications. As a freelancer specializing in building RAG applications, I've seen firsthand how versatile and powerful this architecture can be. This blog post will delve into various types of RAG implementations, providing insights into their structures, use cases, and advantages.

Today, I'll cover the following:

  • Understanding RAG

  • Simple RAG

  • Simple RAG with Memory

  • Branched RAG

  • HyDe (Hypothetical Document Embedding)

  • Adaptive RAG

  • Corrective RAG (CRAG)

  • Self-RAG

  • Agentic RAG

  • Integrating Vector Stores

Let's dive in 🤿

1. Understanding RAG

RAG isn't a machine learning algorithm; it's a pattern of software architecture. It leverages a generative AI system (such as a Large Language Model, LLM) and a data source to power AI applications. The primary goal is to enhance the output's relevance and quality, reduce hallucinations, and utilize proprietary data without training custom models.

Diagram of How RAG Works

I wrote a guide with the basic concepts of RAG a few months ago, you can access it here:

2. Simple RAG


  1. Input Reception: The application receives user input.

  2. Data Retrieval: The input is used to fetch relevant data from a database.

  3. Prompt Generation: The retrieved data is injected into a prompt for the LLM.

  4. Response Generation: The LLM generates a response, which is returned to the user.

Use Case: Ideal for straightforward applications where user queries directly relate to stored data.
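The four steps above can be sketched in a few lines of Python. This is a minimal sketch, not a production pipeline: `DOCS`, `retrieve`, and `generate` are hypothetical stand-ins for a real document store, a vector-store query, and an LLM call.

```python
# Minimal Simple RAG sketch. The document store, retriever, and LLM
# below are illustrative stubs, not real services.

DOCS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship within 24 hours of payment.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval; a real system would query a vector store."""
    for key, text in DOCS.items():
        if key in query.lower():
            return text
    return ""

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; here it just echoes the prompt."""
    return f"LLM response based on: {prompt}"

def simple_rag(query: str) -> str:
    context = retrieve(query)                          # 2. data retrieval
    prompt = f"Context: {context}\nQuestion: {query}"  # 3. prompt generation
    return generate(prompt)                            # 4. response generation

answer = simple_rag("How long do refunds take?")
```

The whole pattern is just those three calls in sequence; everything that follows in this post varies one or more of these steps.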

3. Simple RAG with Memory


  1. Input Reception and Memory Check: The application receives user input and checks previous conversations.

  2. Query Transformation: The query is transformed based on conversation memory to include relevant context.

  3. Data Retrieval and Prompt Generation: Similar to Simple RAG, with added context for better relevance.

  4. Response Generation: The LLM generates a contextually aware response.

Use Case: Suitable for applications where maintaining context over extended interactions is crucial, such as customer support.
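Here is one way the memory step could look. The query transformation below is a deliberately naive concatenation; in practice an LLM rewrites the query against the conversation history, and `retrieve`/`generate` are again hypothetical stubs.

```python
# Minimal sketch of Simple RAG with memory. Retrieval and generation are
# stubbed out; a real system would rewrite the query with an LLM.

def retrieve(query: str) -> str:
    """Hypothetical retrieval step; a real system would hit a vector store."""
    return f"docs about: {query}"

def generate(prompt: str) -> str:
    """Hypothetical LLM call; here it just echoes the prompt."""
    return f"answer({prompt})"

class MemoryRAG:
    def __init__(self):
        self.history: list[str] = []            # 1. conversation memory

    def transform_query(self, query: str) -> str:
        # 2. naive query transformation: prefix the previous user turn so a
        # follow-up like "And for business accounts?" keeps its context
        if self.history:
            return f"{self.history[-1]} {query}"
        return query

    def ask(self, query: str) -> str:
        contextual_query = self.transform_query(query)
        context = retrieve(contextual_query)     # 3. retrieval with context
        self.history.append(query)
        return generate(f"Context: {context}\nQuestion: {query}")

bot = MemoryRAG()
bot.ask("What are the refund rules?")
follow_up = bot.ask("And for business accounts?")
```

Note that the second question is unanswerable on its own; the memory step is what carries "refund rules" into the retrieval for the follow-up.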

4. Branched RAG


  1. Input Reception: The application receives user input.

  2. Source Determination: Determines which data source(s) should be queried based on the input.

  3. Data Retrieval and Prompt Generation: Fetches data from the selected source and generates a prompt for the LLM.

  4. Response Generation: The LLM generates a response based on the specific data source.

Use Case: Effective in applications requiring data from multiple distinct sources, such as research or multi-domain knowledge systems.
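The branching step can be sketched as a router in front of per-source retrieval. The two sources and the keyword router below are illustrative assumptions; real systems often use an LLM classifier for the source-determination step.

```python
# Minimal Branched RAG sketch: pick a data source first, then retrieve
# only from it. Sources and routing rules here are toy examples.

SOURCES = {
    "hr": {"vacation": "Employees get 25 vacation days per year."},
    "it": {"vpn": "Use the corporate VPN for remote access."},
}

def pick_source(query: str) -> str:
    # 2. source determination (a real system might use an LLM classifier)
    return "hr" if any(w in query.lower() for w in ("vacation", "leave")) else "it"

def retrieve(source: str, query: str) -> str:
    # 3. retrieval restricted to the chosen source
    for key, text in SOURCES[source].items():
        if key in query.lower():
            return text
    return ""

def branched_rag(query: str) -> str:
    source = pick_source(query)
    context = retrieve(source, query)
    return f"[{source}] {context} | Q: {query}"  # stands in for the LLM call

answer = branched_rag("How many vacation days do I get?")
```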

5. HyDE (Hypothetical Document Embedding)


  1. Input Reception: The application receives user input.

  2. Hypothetical Answer Generation: The LLM generates a hypothetical answer to the query.

  3. Data Retrieval: Uses the hypothetical answer to fetch relevant documents from a similarity-based system, like a vector store.

  4. Prompt Generation and Response: Generates a prompt with the fetched documents and returns the LLMโ€™s response.

Use Case: Useful when the query itself isn't sufficient for effective data retrieval, enhancing the relevance of retrieved information.
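The HyDE steps can be sketched as follows. To stay self-contained, the "embedding" is a bag-of-words set with Jaccard similarity and the hypothetical answer is hard-coded; a real implementation would call an LLM for the draft answer and an embedding model for the vectors.

```python
# Minimal HyDE sketch: retrieve by similarity to a hypothetical answer
# rather than to the raw query. Embeddings and the LLM are toy stand-ins.

DOCS = [
    "The Eiffel Tower is 330 metres tall and stands in Paris.",
    "Photosynthesis converts sunlight into chemical energy.",
]

def hypothetical_answer(query: str) -> str:
    # 2. a real system asks the LLM to draft a plausible answer first
    return "The tower in Paris is roughly 300 metres tall."

def embed(text: str) -> set[str]:
    """Toy 'embedding': bag of lowercase words."""
    return set(text.lower().replace(".", "").split())

def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / max(len(a | b), 1)

def hyde_retrieve(query: str) -> str:
    # 3. retrieve using the hypothetical answer, not the query itself
    hypo = embed(hypothetical_answer(query))
    return max(DOCS, key=lambda d: similarity(hypo, embed(d)))

doc = hyde_retrieve("How tall is it?")
```

Notice that the raw query "How tall is it?" shares almost no vocabulary with the target document; the hypothetical answer is what bridges that gap.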

Advanced RAG Strategies

6. Adaptive RAG

Concept: Adaptive RAG combines query analysis with active/self-corrective RAG. The system routes queries through different strategies based on their nature.


  1. Query Analysis: Determines the appropriate retrieval strategy (e.g., no retrieval, single-shot RAG, iterative RAG).

  2. Strategy Execution: Executes the determined strategy, adjusting as needed for optimal relevance and accuracy.

Use Case: Suitable for dynamic environments where queries vary widely, such as search engines or AI assistants.

Adaptive RAG
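The routing idea can be sketched as a classifier dispatching to strategy handlers. The keyword rules and the three stub handlers below are assumptions for illustration; in practice the query analysis step is usually a small LLM classifier.

```python
# Minimal Adaptive RAG sketch: analyze the query, then dispatch to one of
# several retrieval strategies. Rules and handlers are illustrative stubs.

def classify(query: str) -> str:
    # 1. query analysis
    if query.lower().strip("!? ") in {"hi", "hello", "thanks"}:
        return "no_retrieval"     # chit-chat needs no documents
    if " and " in query.lower():
        return "iterative"        # multi-part questions need several hops
    return "single_shot"

def run(query: str) -> str:
    # 2. strategy execution
    handlers = {
        "no_retrieval": lambda q: f"direct: {q}",
        "single_shot": lambda q: f"one retrieval pass for: {q}",
        "iterative": lambda q: f"multiple retrieval passes for: {q}",
    }
    return handlers[classify(query)](query)
```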

7. Corrective RAG (CRAG)

Concept: CRAG incorporates self-reflection and self-grading to enhance retrieval accuracy.


  1. Initial Retrieval: Retrieves documents based on the input query.

  2. Knowledge Refinement: Partitions documents into "knowledge strips" and grades each for relevance.

  3. Supplementary Retrieval: If necessary, performs additional retrieval using web search or other sources.

  4. Prompt Generation and Response: Uses refined knowledge for prompt generation and response.

Use Case: Effective in high-stakes environments where accuracy is critical, such as legal or medical applications.

Corrective RAG diagram
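The knowledge-refinement loop can be sketched like this. Strips here are just sentences, the grader is naive word overlap, and the web search is a stub; a real CRAG grader is a trained relevance model.

```python
# Minimal CRAG sketch: partition retrieved documents into knowledge
# strips, grade each, and fall back to (stubbed) web search if needed.

RETRIEVED = [
    "RAG reduces hallucinations. The weather was nice yesterday.",
]

def strips(doc: str) -> list[str]:
    # 2. knowledge refinement: partition into strips (sentences here)
    return [s.strip() for s in doc.split(".") if s.strip()]

def grade(strip: str, query: str) -> float:
    """Toy relevance grade: fraction of query words present in the strip."""
    q, s = set(query.lower().split()), set(strip.lower().split())
    return len(q & s) / max(len(q), 1)

def web_search(query: str) -> list[str]:
    # 3. hypothetical supplementary retrieval
    return [f"web result for: {query}"]

def crag(query: str, threshold: float = 0.2) -> list[str]:
    graded = [s for d in RETRIEVED for s in strips(d)
              if grade(s, query) >= threshold]
    return graded if graded else web_search(query)  # 4. refined knowledge

knowledge = crag("does RAG reduce hallucinations")
```

The payoff is visible in the example: the irrelevant weather sentence is graded out before it can pollute the prompt.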

8. Self-RAG

Concept: Self-RAG includes self-reflection and self-grading on both retrieved documents and generated responses.


  1. Decision to Retrieve: Determines if retrieval is necessary based on the input query and previous generations.

  2. Relevance Check: Assesses the relevance of retrieved passages.

  3. Generation Verification: Verifies that the LLM's generation is supported by the retrieved documents.

  4. Response Utility: Ensures the generated response is useful and relevant.

Use Case: Ideal for applications requiring high reliability and minimal hallucination, such as automated research assistants or knowledge base systems.

Self-RAG Diagram
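A toy version of the reflect-and-verify loop might look like the following. The retrieval decision, generator, and support check are crude word-level stand-ins for the special reflection tokens a trained Self-RAG model emits.

```python
# Minimal Self-RAG sketch: decide whether to retrieve, then verify the
# answer against the retrieved passage before returning it. All steps
# are naive stand-ins for a trained self-reflective model.

KB = {"python": "Python was created by Guido van Rossum."}

def needs_retrieval(query: str) -> bool:
    # 1. decision to retrieve: factual questions do, greetings don't
    return query.rstrip("!?").lower() not in {"hi", "hello"}

def generate(query: str, context: str) -> str:
    """Hypothetical LLM call: answers from context when it exists."""
    return context or f"chat: {query}"

def supported(answer: str, context: str) -> bool:
    # 3. generation verification via naive word overlap
    return bool(set(answer.lower().split()) & set(context.lower().split()))

def self_rag(query: str) -> str:
    if not needs_retrieval(query):
        return generate(query, "")
    context = KB.get(query.lower().split()[-1].rstrip("?"), "")
    answer = generate(query, context)
    if context and not supported(answer, context):
        return "I cannot verify an answer from the retrieved documents."
    return answer

out = self_rag("Who created python?")
```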

9. Agentic RAG

Agentic RAG is an advanced, agent-based approach to answering questions over multiple documents in a coordinated way. It can compare different documents, summarize a specific document, or compare summaries across documents. Agentic RAG is a flexible framework that supports complex tasks requiring planning, multi-step reasoning, tool use, and learning over time.

๐—ž๐—ฒ๐˜† ๐—–๐—ผ๐—บ๐—ฝ๐—ผ๐—ป๐—ฒ๐—ป๐˜๐˜€ ๐—ฎ๐—ป๐—ฑ ๐—”๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ

- Document Agents: Each document is assigned a dedicated agent capable of answering questions and summarizing within its own document.

- Meta-Agent: A top-level agent manages all the document agents, orchestrating their interactions and integrating their outputs to generate a coherent and comprehensive response.

Agentic RAG Diagram
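The document-agent / meta-agent split can be sketched with two small classes. Each agent here answers only from its own document via naive word overlap, and the meta-agent merges non-empty answers; in a real system each agent runs RAG over its own index and the meta-agent is an LLM planner.

```python
# Minimal Agentic RAG sketch: one agent per document, plus a meta-agent
# that orchestrates them. Agent logic is a toy word-overlap check.

class DocumentAgent:
    def __init__(self, name: str, document: str):
        self.name = name
        self.document = document

    def answer(self, question: str) -> str:
        # a real agent would run RAG over its own document index
        overlap = set(question.lower().split()) & set(self.document.lower().split())
        return f"{self.name}: {self.document}" if overlap else ""

class MetaAgent:
    def __init__(self, agents: list[DocumentAgent]):
        self.agents = agents

    def answer(self, question: str) -> str:
        # orchestrate: query every document agent, merge non-empty answers
        parts = [agent.answer(question) for agent in self.agents]
        return " | ".join(p for p in parts if p)

meta = MetaAgent([
    DocumentAgent("contract", "payment is due within 30 days"),
    DocumentAgent("handbook", "employees accrue vacation monthly"),
])
merged = meta.answer("when is payment due")
```

Because each agent only sees its own document, the meta-agent is the single place where cross-document comparison and synthesis happen.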

10. Integrating Vector Stores

Vector stores are commonly used in RAG implementations. They transform text into embeddings (N-dimensional vectors), allowing cosine similarity to assess semantic closeness. This method significantly improves the relevance of retrieved information.
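The cosine-similarity check a vector store performs can be written in a few lines. The three-dimensional vectors below are toy values; real embeddings have hundreds or thousands of dimensions and come from an embedding model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query_vec = [0.9, 0.1, 0.0]          # toy embedding of the user query
doc_vecs = {
    "refund policy": [0.8, 0.2, 0.1],
    "holiday menu": [0.0, 0.1, 0.9],
}
# retrieval = pick the document whose embedding is closest to the query's
best = max(doc_vecs, key=lambda name: cosine_similarity(query_vec, doc_vecs[name]))
```

This is the whole trick: nearby vectors mean semantically related text, so retrieval reduces to a nearest-neighbor search in embedding space.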

Learn more about Vector Databases with this guide:


RAG implementations offer a versatile and robust framework for building AI-driven applications. Each pattern serves unique needs and use cases, from simple retrieval and generation to advanced self-corrective strategies. Developers can create more effective, accurate, and reliable generative AI systems by understanding these patterns and their applications. Whether you're a freelancer like me or part of a larger organization, leveraging RAG can enhance the capabilities and performance of your AI solutions.

Here's a recap and a quick summary:

| RAG Technique | Simple Definition | When to Use |
| --- | --- | --- |
| Simple RAG | Retrieves relevant documents based on the query and uses them to generate an answer | Basic question-answering tasks where context is needed |
| Simple RAG with Memory | Extends Simple RAG by maintaining context from previous interactions | Conversational AI where continuity between queries is important |
| Branched RAG | Routes the query to the most relevant data source(s) before retrieval | Queries that draw on multiple distinct data sources |
| HyDE (Hypothetical Document Embedding) | Generates a hypothetical ideal document before retrieval to improve search relevance | Queries that might not have exact matches in the knowledge base |
| Adaptive RAG | Dynamically adjusts retrieval and generation strategies based on the query type or difficulty | Varied query types or a diverse knowledge base |
| Corrective RAG (CRAG) | Grades retrieved information and supplements it with additional retrieval when relevance is low | High-stakes scenarios requiring increased accuracy and fact verification |
| Self-RAG | The model critiques and improves its own responses using self-reflection and retrieval | Tasks requiring high accuracy, when there's time for multiple refinement steps |
| Agentic RAG | Combines RAG with agentic behavior, allowing for more complex, multi-step problem-solving | Complex tasks requiring planning, decision-making, and external tool use |

Enjoy your Sunday, folks! Summer is here! ☀️


Whenever you're ready, learn AI with me:

  • The 15-day Generative AI course: Join my 15-day Generative AI email course, and learn with just 5 minutes a day. You'll receive concise daily lessons focused on practical business applications. It is perfect for quickly learning and applying core AI concepts. 15,000+ Business Professionals are already learning with it.
