SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This grounding improves the accuracy and coherence of generated text, especially in tasks that require detailed knowledge or long-context handling.
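The retrieve-then-generate pipeline can be illustrated with a toy sketch. Everything here is illustrative: the corpus, the bag-of-words cosine retriever, and the prompt format are simplified stand-ins (real systems use dense embeddings and an actual language model for the generation step).

```python
# Toy RAG pipeline sketch: a bag-of-words retriever scores documents by
# cosine similarity, and the top passages are packed into a prompt that a
# language model would then complete. All names here are illustrative.
import math
from collections import Counter

CORPUS = [
    "RAG combines a retriever with a text generator.",
    "The retriever selects relevant passages from a corpus.",
    "The generator conditions on the retrieved passages.",
]

def tokenize(text):
    return text.lower().replace(".", "").split()

def score(query, doc):
    # Cosine similarity over raw term counts.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def retrieve(query, k=2):
    # Return the k highest-scoring passages for the query.
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, passages):
    # Prepend retrieved context so the generator can ground its answer.
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

passages = retrieve("What does the retriever do?")
prompt = build_prompt("What does the retriever do?", passages)
```

In a real system, `prompt` would be passed to a language model; the key point is that the generator sees retrieved evidence rather than relying only on its parametric memory.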

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
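Two of the metrics above, exact match (EM) and token-level F1, are straightforward to compute. The sketch below roughly follows the SQuAD-style convention (lowercasing and punctuation stripping); the normalization here is simplified for brevity and is not the official evaluation script.

```python
# Sketch of exact match (EM) and token-level F1 for QA evaluation.
# Normalization is simplified (lowercase + strip punctuation); official
# evaluation scripts also handle articles and whitespace edge cases.
import string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return text.split()

def exact_match(prediction, reference):
    # 1 if the normalized token sequences are identical, else 0.
    return normalize(prediction) == normalize(reference)

def token_f1(prediction, reference):
    # Harmonic mean of token precision and recall over the overlap.
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "the city of Paris" against the reference "Paris" fails EM but still earns partial credit under F1, which is why both metrics are usually reported together.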

Papers

Showing 1826–1850 of 2111 papers

Evaluating Transferability in Retrieval Tasks: An Approach Using MMD and Kernel Methods
Evaluation of Attribution Bias in Retrieval-Augmented Large Language Models
Evaluation of RAG Metrics for Question Answering in the Telecom Domain
Evaluation of Semantic Search and its Role in Retrieved-Augmented-Generation (RAG) for Arabic Language
EventChat: Implementation and user-centric evaluation of a large language model-driven conversational recommender system for exploring leisure events in an SME context
Everything Can Be Described in Words: A Simple Unified Multi-Modal Framework with Semantic and Temporal Alignment
Evidence Contextualization and Counterfactual Attribution for Conversational QA over Heterogeneous Data with RAG Systems
EvidenceMap: Learning Evidence Analysis to Unleash the Power of Small Language Models for Biomedical Question Answering
EvoPat: A Multi-LLM-based Patents Summarization and Analysis Agent
EvoWiki: Evaluating LLMs on Evolving Knowledge
Experiments with Large Language Models on Retrieval-Augmented Generation for Closed-Source Simulation Software
ExpertRAG: Efficient RAG with Mixture of Experts -- Optimizing Context Retrieval for Adaptive LLM Responses
Explainable Biomedical Hypothesis Generation via Retrieval Augmented Generation enabled Large Language Models
Explainable Lane Change Prediction for Near-Crash Scenarios Using Knowledge Graph Embeddings and Retrieval Augmented Generation
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training
Exploring Advanced Large Language Models with LLMsuite
Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems
Exploring Fact Memorization and Style Imitation in LLMs Using QLoRA: An Experimental Study and Quality Assessment Methods
Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment
Exploring the Capabilities and Limitations of Large Language Models in the Electric Energy Sector
Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data
Exploring the Meaningfulness of Nearest Neighbor Search in High-Dimensional Space
Exploring the Potential of Large Language Models for Automation in Technical Customer Service
Exploring the Role of Knowledge Graph-Based RAG in Japanese Medical Question Answering with Small-Scale LLMs
