
RAG

Retrieval-Augmented Generation (RAG) is a task that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then uses the retrieved information to generate a response. This grounding improves the accuracy and coherence of the generated text, especially in tasks that require detailed knowledge or long-context handling.
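
As a rough illustration of this retrieve-then-generate flow, the sketch below uses a toy word-overlap retriever over a small in-memory corpus; the corpus, the scoring rule, and the generate_answer stand-in are illustrative assumptions, not a production RAG pipeline (a real system would use a dense or BM25 retriever and call an actual language model).

```python
# Toy retrieve-then-generate sketch; all names here are illustrative assumptions.

CORPUS = [
    "RAG combines a retriever with a text generator.",
    "The retriever selects relevant passages from a large corpus.",
    "The generator conditions on the retrieved passages to produce a response.",
]

def retrieve(query, corpus, k=2):
    """Score each passage by word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate_answer(query, passages):
    """Hypothetical generation step: a real system would pass the query and the
    retrieved passages to a language model as prompt context."""
    context = " ".join(passages)
    return f"Answer to '{query}', grounded in: {context}"

if __name__ == "__main__":
    question = "How does the retriever select passages?"
    print(generate_answer(question, retrieve(question, CORPUS)))
```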

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step helps the model to access and incorporate external information, making it less reliant on memorized knowledge and better suited for generating responses based on the latest or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
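
As a concrete illustration of two of these metrics, the snippet below sketches exact match and token-level F1 for a single prediction/reference pair, in the spirit of SQuAD-style QA scoring; this is a simplified assumption about the evaluation setup, since real scripts also normalize answers (articles, punctuation, whitespace) and take the maximum score over multiple reference answers.

```python
# Simplified exact-match and token-level F1 scoring for one prediction/reference pair.
from collections import Counter

def exact_match(prediction, reference):
    """1 if the prediction matches the reference after trivial normalization, else 0."""
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Harmonic mean of token precision and recall between prediction and reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                              # 1
print(round(token_f1("the city of Paris", "Paris France"), 2))    # 0.33
```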

Papers

Showing 781–790 of 2111 papers

Title | Status | Hype
ACoRN: Noise-Robust Abstractive Compression in Retrieval-Augmented Language Models | - | 0
Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild | - | 0
FreshStack: Building Realistic Benchmarks for Evaluating Retrieval on Technical Documents | - | 0
Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization | Code | 0
InstructRAG: Leveraging Retrieval-Augmented Generation on Instruction Graphs for LLM-Based Task Planning | - | 0
A Human-AI Comparative Analysis of Prompt Sensitivity in LLM-Based Relevance Judgment | Code | 0
ARCeR: an Agentic RAG for the Automated Definition of Cyber Ranges | - | 0
On the Feasibility of Using MultiModal LLMs to Execute AR Social Engineering Attacks | - | 0
Towards Conversational AI for Human-Machine Collaborative MLOps | - | 0
A Visual RAG Pipeline for Few-Shot Fine-Grained Product Classification | - | 0

No leaderboard results yet.