SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. This grounding improves the accuracy and coherence of the generated output, especially in tasks that require detailed factual knowledge or long-context handling.
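The retrieve-then-generate pipeline can be sketched in a few lines. This is a minimal illustration, not a production system: the corpus, the bag-of-words cosine retriever, and the prompt-assembling `generate` function are all toy stand-ins (real systems use dense or learned retrievers and a neural language model for generation).

```python
import math
import re
from collections import Counter

# Toy corpus standing in for a large document collection.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language for machine learning.",
    "Retrieval-augmented generation combines search with text generation.",
]

def _bow(text):
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = _bow(query)
    ranked = sorted(corpus, key=lambda d: _cosine(q, _bow(d)), reverse=True)
    return ranked[:k]

def generate(query, passages):
    """Stand-in for the generator: build a context-grounded prompt.

    A real RAG system would feed this prompt to a language model.
    """
    context = " ".join(passages)
    return f"Context: {context}\nQuestion: {query}\nAnswer using the context above."

query = "Where is the Eiffel Tower?"
passages = retrieve(query, CORPUS)
prompt = generate(query, passages)
```

Here the retriever surfaces the Paris passage for the Eiffel Tower question, and the "generator" merely packages it into a prompt; swapping in a real index and language model preserves the same two-stage structure.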

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific sources.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.

Papers

Showing 226–250 of 2111 papers

Title | Status | Hype
MTRAG: A Multi-Turn Conversational Benchmark for Evaluating Retrieval-Augmented Generation Systems | Code | 2
The Power of Noise: Redefining Retrieval for RAG Systems | Code | 2
Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation | Code | 2
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" | Code | 2
LLM-based SPARQL Query Generation from Natural Language over Federated Knowledge Graphs | Code | 2
FedRAG: A Framework for Fine-Tuning Retrieval-Augmented Generation Systems | Code | 2
LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering | Code | 2
Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts | Code | 2
Improving Medical Reasoning through Retrieval and Self-Reflection with Retrieval-Augmented Large Language Models | Code | 2
Empowering Large Language Models to Set up a Knowledge Retrieval Indexer via Self-Learning | Code | 2
Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models | Code | 2
MMLongBench: Benchmarking Long-Context Vision-Language Models Effectively and Thoroughly | Code | 2
UMBRELA: UMbrela is the (Open-Source Reproduction of the) Bing RELevance Assessor | Code | 2
MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Code | 2
ActiveRAG: Autonomously Knowledge Assimilation and Accommodation through Retrieval-Augmented Agents | Code | 2
Benchmarking Large Language Models in Retrieval-Augmented Generation | Code | 2
AiSAQ: All-in-Storage ANNS with Product Quantization for DRAM-free Information Retrieval | Code | 2
Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation | Code | 2
Evaluating Very Long-Term Conversational Memory of LLM Agents | Code | 1
Evaluating Retrieval Quality in Retrieval-Augmented Generation | Code | 1
ImageRAG: Enhancing Ultra High Resolution Remote Sensing Imagery Analysis with ImageRAG | Code | 1
ERAGent: Enhancing Retrieval-Augmented Language Models with Improved Accuracy, Efficiency, and Personalization | Code | 1
Enhancing Speech-to-Speech Dialogue Modeling with End-to-End Retrieval-Augmented Generation | Code | 1
LotusFilter: Fast Diverse Nearest Neighbor Search via a Learned Cutoff Table | Code | 1
MacRAG: Compress, Slice, and Scale-up for Multi-Scale Adaptive Context RAG | Code | 1
Page 10 of 85

No leaderboard results yet.