SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
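The retrieve-then-generate loop above can be sketched in a few lines. The tiny corpus, the token-overlap retriever, and the prompt template below are illustrative stand-ins, not a reference implementation: a real system would use a vector index for retrieval and a language model to complete the prompt.

```python
import re

def tokenize(text):
    """Split into lowercase word tokens, ignoring punctuation."""
    return re.findall(r"\w+", text.lower())

def retrieve(query, corpus, k=2):
    """Rank passages by token overlap with the query; return the top k."""
    q = set(tokenize(query))
    scored = [(len(q & set(tokenize(p))), p) for p in corpus]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def build_prompt(query, passages):
    """Assemble retrieved passages as grounding context for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy corpus for illustration only.
corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the tallest mountain.",
]
query = "What is the capital of France?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
# `prompt` would now be passed to a language model for generation.
```

The key design point is that the generator never sees the whole corpus, only the top-k retrieved passages, which is what lets RAG scale to corpora far larger than any model's context window.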

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on knowledge memorized during training and better suited to generating responses grounded in up-to-date or domain-specific sources.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
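Exact match and token-level F1 are typically computed on normalized answer strings (lowercased, with punctuation and articles stripped, as in SQuAD-style evaluation). A minimal sketch of that scoring, assuming SQuAD-style normalization:

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles (SQuAD-style)."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    """Token-level F1 between predicted and gold answers."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

In practice these per-example scores are averaged over a dataset; F1 credits partial overlap, while exact match only rewards answers that are identical after normalization.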

Papers

Showing 701–725 of 2111 papers

Knowing When to Ask -- Bridging Large Language Models and Data
Fine-Grained Retrieval-Augmented Generation for Visual Question Answering
Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks
A Proposal for Evaluating the Operational Risk for ChatBots based on Large Language Models
Novel Preprocessing Technique for Data Embedding in Engineering Code Generation Using Large Language Model
Can LLMs Be Trusted for Evaluating RAG Systems? A Survey of Methods and Datasets
Evaluating Students' Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large
Evaluating Self-Generated Documents for Enhancing Retrieval-Augmented Generation with Large Language Models
FineFilter: A Fine-grained Noise Filtering Mechanism for Retrieval-Augmented Large Language Models
Fine-grained Analysis of In-context Linear Estimation: Data, Architecture, and Beyond
Evaluating Retrieval Augmented Generative Models for Document Queries in Transportation Safety
Can Language Models Enable In-Context Database?
Evaluating Quality of Answers for Retrieval-Augmented Generation: A Strong LLM Is All You Need
Evaluating the Effect of Retrieval Augmentation on Social Biases
Evaluating the Impact of Advanced LLM Techniques on AI-Lecture Tutors for a Robotics Course
Evaluating the Performance of RAG Methods for Conversational AI in the Airport Domain
Evaluating the Retrieval Component in LLM-Based Question Answering Systems
Evaluating Transferability in Retrieval Tasks: An Approach Using MMD and Kernel Methods
Can GPT Redefine Medical Understanding? Evaluating GPT on Biomedical Machine Reading Comprehension
Evaluation of Attribution Bias in Retrieval-Augmented Large Language Models
Evaluation of RAG Metrics for Question Answering in the Telecom Domain
Application of NotebookLM, a Large Language Model with Retrieval-Augmented Generation, for Lung Cancer Staging
Evaluation of Semantic Search and its Role in Retrieved-Augmented-Generation (RAG) for Arabic Language
EventChat: Implementation and user-centric evaluation of a large language model-driven conversational recommender system for exploring leisure events in an SME context
Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions
Page 29 of 85

No leaderboard results yet.