SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
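The retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a production pipeline: the corpus, the bag-of-words cosine scorer (standing in for a learned dense retriever), and the prompt format are all assumptions for the example; a real system would use embedding-based retrieval and pass the prompt to a language model.

```python
from collections import Counter
import math

def score(query, doc):
    """Cosine similarity over bag-of-words counts (stand-in for a dense retriever)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Return the top-k passages most similar to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, passages):
    """Concatenate retrieved passages with the question for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy corpus for illustration only.
corpus = [
    "RAG combines a retriever with a neural text generator.",
    "BLEU measures n-gram overlap between candidate and reference text.",
    "The retriever selects relevant passages from a large corpus.",
]
prompt = build_prompt("How does RAG work?", retrieve("How does RAG work?", corpus))
```

The generator never sees the whole corpus, only the few passages the retriever selects, which is what lets the prompt stay short while the knowledge source stays large.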

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access external information at inference time, making it less reliant on knowledge memorized during training and better suited to answering questions about recent or domain-specific information.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular evaluation datasets include Natural Questions, MS MARCO, TriviaQA, and SQuAD.

Papers

Showing 741–750 of 2,111 papers

- Patchwork: A Unified Framework for RAG Serving
- HalluMix: A Task-Agnostic, Multi-Domain Benchmark for Real-World Hallucination Detection
- EnronQA: Towards Personalized RAG over Private Documents
- Empowering Agentic Video Analytics Systems with Video Language Models
- A Multi-Granularity Retrieval Framework for Visually-Rich Documents
- Homa at SemEval-2025 Task 5: Aligning Librarian Records with OntoAligner for Subject Tagging
- Traceback of Poisoning Attacks to Retrieval-Augmented Generation
- Optimization of embeddings storage for RAG systems using quantization and dimensionality reduction techniques
- Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA (code available)
- ARCS: Agentic Retrieval-Augmented Code Synthesis with Iterative Refinement
