
RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, conditions on the retrieved information to produce a response. Grounding generation in retrieved evidence improves the accuracy and coherence of the output, especially in tasks requiring detailed knowledge or long-context handling.
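The two-stage retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a production system: the corpus, the keyword-overlap retriever, and the prompt format are all hypothetical stand-ins for a real dense retriever and an LLM generator.

```python
import re

# Tiny illustrative corpus (hypothetical; a real system indexes millions of passages).
CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Retrieval-Augmented Generation combines a retriever with a generator.",
    "Mount Everest is the highest mountain above sea level.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens; real retrievers use learned embeddings instead."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query and return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the generator by prepending retrieved passages to the question."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

passages = retrieve("Where is the Eiffel Tower located?", CORPUS)
prompt = build_prompt("Where is the Eiffel Tower located?", passages)
```

In a real pipeline, `prompt` would be sent to a language model; the retrieval step is what lets the model answer from external text rather than from memorized knowledge alone.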

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step helps the model to access and incorporate external information, making it less reliant on memorized knowledge and better suited for generating responses based on the latest or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
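Exact match and token-level F1, as used for SQuAD-style answer evaluation, can be computed directly. A minimal sketch follows; note that official evaluation scripts additionally strip articles and punctuation during normalization, which this simplified version omits.

```python
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase and split into tokens (official scripts also drop articles/punctuation)."""
    return text.lower().split()

def exact_match(prediction: str, reference: str) -> bool:
    """True if the normalized prediction equals the normalized reference."""
    return normalize(prediction) == normalize(reference)

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall between prediction and reference."""
    pred, ref = Counter(normalize(prediction)), Counter(normalize(reference))
    overlap = sum((pred & ref).values())  # shared token count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "in Paris France" against the reference "Paris" fails exact match but scores a token F1 of 0.5 (precision 1/3, recall 1).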

Papers

Showing 101–125 of 2111 papers

Title | Status | Hype
RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing | Code | 3
RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation | Code | 3
Fact, Fetch, and Reason: A Unified Evaluation of Retrieval-Augmented Generation | Code | 3
Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models | Code | 3
PathRAG: Pruning Graph-based Retrieval Augmented Generation with Relational Paths | Code | 3
PDL: A Declarative Prompt Programming Language | Code | 3
OpenResearcher: Unleashing AI for Accelerated Scientific Research | Code | 3
Panza: Design and Analysis of a Fully-Local Personalized Text Writing Assistant | Code | 3
A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning | Code | 3
CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models | Code | 3
Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity | Code | 3
Parametric Retrieval Augmented Generation | Code | 3
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation | Code | 3
MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries | Code | 3
MoC: Mixtures of Text Chunking Learners for Retrieval-Augmented Generation System | Code | 3
Multi-Head RAG: Solving Multi-Aspect Problems with LLMs | Code | 3
Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception | Code | 3
MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models | Code | 3
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences | Code | 3
MMSearch-R1: Incentivizing LMMs to Search | Code | 3
ReasonIR: Training Retrievers for Reasoning Tasks | Code | 3
MedAgent-Pro: Towards Evidence-based Multi-modal Medical Diagnosis via Reasoning Agentic Workflow | Code | 2
EfficientRAG: Efficient Retriever for Multi-Hop Question Answering | Code | 2
RGL: A Graph-Centric, Modular Framework for Efficient Retrieval-Augmented Generation on Graphs | Code | 2
Page 5 of 85

No leaderboard results yet.