
RAG

Retrieval-Augmented Generation (RAG) is a task that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. This grounding improves the accuracy and factual consistency of generated text, especially in tasks that require detailed knowledge or long-context handling.
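The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a toy illustration, not any particular system: the corpus, the word-overlap retriever, and the prompt template are all hypothetical stand-ins (a real system would use a dense or sparse retriever over a large corpus and pass the prompt to a neural language model).

```python
# Minimal sketch of the RAG pipeline: score documents against the query,
# keep the top-k, and prepend them to the generator's prompt.
# Corpus and prompt format are hypothetical placeholders.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Concatenate retrieved passages with the question for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "RAG combines a retriever with a text generator.",
    "Mount Everest is the highest mountain on Earth.",
]
passages = retrieve("Where is the Eiffel Tower?", corpus)
prompt = build_prompt("Where is the Eiffel Tower?", passages)
# In a real system, `prompt` would now be sent to a neural language model.
print(prompt)
```

The key property the sketch shows is that the generator's input is conditioned on retrieved evidence rather than on parametric memory alone.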

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses from up-to-date or domain-specific sources.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
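Two of the metrics listed above, exact match and token-level F1, are commonly computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped) before comparison. The sketch below is a simplified version of that convention, not the official evaluation script of any of the listed benchmarks.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace
    (SQuAD-style answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))   # 1.0 after normalization
print(f1_score("in Paris, France", "Paris"))             # 0.5
```

Official benchmark scripts add details (e.g. scoring against multiple gold answers and taking the maximum), but the normalization-then-overlap structure is the same.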

Papers

Showing 251–275 of 2111 papers

Title | Status | Hype
MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory | Code | 1
Merging-Diverging Hybrid Transformer Networks for Survival Prediction in Head and Neck Cancer | Code | 1
MIRAGE-Bench: Automatic Multilingual Benchmark Arena for Retrieval-Augmented Generation Systems | Code | 1
Med-R^2: Crafting Trustworthy LLM Physicians via Retrieval and Reasoning of Evidence-Based Medicine | Code | 1
AT-RAG: An Adaptive RAG Model Enhancing Query Efficiency with Topic Filtering and Iterative Reasoning | Code | 1
AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1
MacRAG: Compress, Slice, and Scale-up for Multi-Scale Adaptive Context RAG | Code | 1
MBA-RAG: a Bandit Approach for Adaptive Retrieval-Augmented Generation through Question Complexity | Code | 1
LotusFilter: Fast Diverse Nearest Neighbor Search via a Learned Cutoff Table | Code | 1
Long Context vs. RAG for LLMs: An Evaluation and Revisits | Code | 1
Long-Context Inference with Retrieval-Augmented Speculative Decoding | Code | 1
LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization | Code | 1
LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics | Code | 1
LLMs Know What They Need: Leveraging a Missing Information Guided Framework to Empower Retrieval-Augmented Generation | Code | 1
LexRAG: Benchmarking Retrieval-Augmented Generation in Multi-Turn Legal Consultation Conversation | Code | 1
GPIoT: Tailoring Small Language Models for IoT Program Synthesis and Development | Code | 1
Logic-RAG: Augmenting Large Multimodal Models with Visual-Spatial Knowledge for Road Scene Understanding | Code | 1
MedPix 2.0: A Comprehensive Multimodal Biomedical Data set for Advanced AI Applications | Code | 1
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks | Code | 1
Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA | Code | 1
L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? | Code | 1
Know Or Not: a library for evaluating out-of-knowledge base robustness | Code | 1
KnowTrace: Bootstrapping Iterative Retrieval-Augmented Generation with Structured Knowledge Tracing | Code | 1
AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Code | 1
Knowledge graph enhanced retrieval-augmented generation for failure mode and effects analysis | Code | 1
Page 11 of 85
