Sharath's blog

LLM Research: Adaptive RAG for Conversational Systems

Recommended [1] | Source [2]
Supplementary reading for the research paper

RAGate: A Gating Model

Validation of RAGate
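The headings above center on RAGate, the paper's gating model: it predicts, per conversational turn, whether retrieval augmentation is actually needed before the RAG pipeline is invoked. A minimal sketch of that gating idea follows; the keyword-based `gate_score` is a crude illustrative stand-in for the learned gate, not the paper's method:

```python
def gate_score(turn: str) -> float:
    """Hypothetical stand-in for a learned gate: returns the estimated
    probability that this turn needs external knowledge. A real gate
    would be a trained classifier; this keyword heuristic is only for
    illustration."""
    knowledge_cues = ("who", "when", "where", "latest", "according")
    hits = sum(cue in turn.lower() for cue in knowledge_cues)
    return min(1.0, hits / 2)

def respond(turn: str, threshold: float = 0.5) -> str:
    """Route the turn: augment with retrieval only when the gate fires."""
    if gate_score(turn) >= threshold:
        return f"[RAG path] retrieve evidence, then answer: {turn}"
    return f"[direct path] answer from parametric knowledge: {turn}"
```

The point of the gate is efficiency and quality: skipping retrieval on chit-chat turns avoids latency and irrelevant context, while knowledge-seeking turns still get grounding.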


Integrating Large Language Models into Conversational Systems

Definition

Integrating LLMs into conversational systems means using these large, pre-trained models to power the dialogue management, response generation, and overall interaction dynamics within chatbots, virtual assistants, or any system that interacts with users through natural language.
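In code, that integration can be as simple as a loop that maintains dialogue history and delegates each response to the model. A minimal sketch, where `llm_generate` is a hypothetical stand-in for any hosted or local model call:

```python
from typing import Dict, List

def llm_generate(messages: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for an LLM call (e.g. a hosted chat API).
    Echoes the last user message so the sketch stays self-contained."""
    return f"(model reply to: {messages[-1]['content']})"

class ConversationalSystem:
    """LLM-powered dialogue: the model handles response generation,
    while the system tracks history, a minimal form of dialogue
    management."""

    def __init__(self, system_prompt: str):
        self.history = [{"role": "system", "content": system_prompt}]

    def turn(self, user_utterance: str) -> str:
        self.history.append({"role": "user", "content": user_utterance})
        reply = llm_generate(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because the full history is passed on every turn, the model sees the interaction dynamics directly; everything else (persona, tool use, retrieval) is layered on top of this loop.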

Capabilities of LLMs
Traditional Conversational Systems (Pre-LLMs)
Advantages of Traditional Systems Over LLMs
Example Use Cases for Traditional Systems


Retrieval-Augmented Generation (RAG) in Conversational Systems

Need for RAG
Case for Such Assumptions

Types of RAG Approaches

Single-Pass RAG
Iterative RAG
Knowledge-Enhanced RAG
Hybrid RAG
Memory-Augmented RAG
Cross-Attention RAG
Modular RAG
Task-Specific RAG
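Of the variants listed, single-pass RAG is the simplest baseline: retrieve once, then generate with the retrieved context prepended. A toy sketch, using bag-of-words cosine similarity as an illustrative stand-in for a real retriever and a prompt string as a stand-in for the generator:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    """Single retrieval pass: score every passage once, keep the top k."""
    q = Counter(query.lower().split())
    return sorted(
        corpus,
        key=lambda p: cosine(q, Counter(p.lower().split())),
        reverse=True,
    )[:k]

def single_pass_rag(query: str, corpus: list) -> str:
    """Retrieve once, then hand context plus query to the generator
    (stubbed here as the final prompt string)."""
    context = " ".join(retrieve(query, corpus))
    return f"Context: {context}\nQuestion: {query}"
```

The other variants in the list differ mainly in when and how often this retrieve step runs: iterative RAG repeats it as the answer develops, while memory-augmented RAG also retrieves from prior conversation state.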

Efficiency Enhancement Methods

Dense Passage Retrieval Techniques
Public Search Service for Effective Retrievers
Task-Oriented Dialogue (TOD) Systems
Subgraph Retrieval-Augmented Generation (SURGE)
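Dense passage retrieval, the first method above, replaces sparse term matching with learned embeddings ranked by inner product. A toy sketch in which `embed` is a deterministic stand-in for a trained dual encoder (a real DPR system would use separate learned query and passage encoders):

```python
import math

def embed(text: str, dim: int = 8) -> list:
    """Hypothetical stand-in for a trained encoder: buckets tokens into
    a small fixed-size dense vector so the sketch runs; no learning
    happens here."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dense_retrieve(query: str, passages: list, k: int = 1) -> list:
    """Rank passages by inner product of their dense vectors with the
    query vector, as in DPR-style retrieval."""
    q = embed(query)
    return sorted(
        passages,
        key=lambda p: -sum(a * b for a, b in zip(q, embed(p))),
    )[:k]
```

In practice the passage vectors are precomputed and indexed (e.g. with an approximate nearest-neighbour index), which is where the efficiency gain over on-the-fly scoring comes from.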

  1. Elvis from X

  2. Adaptive Retrieval-Augmented Generation for Conversational Systems

#genai