SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved passages to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
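
The retrieve-then-generate flow can be sketched in a few lines. The corpus, query, and term-overlap scorer below are illustrative assumptions, not a real retriever; production systems typically use dense (embedding-based) retrieval and send the assembled prompt to an LLM:

```python
# Minimal RAG sketch: toy term-overlap retrieval plus a prompt that grounds
# the generator in the retrieved passages.
from collections import Counter

CORPUS = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
    "Mount Everest is the highest mountain on Earth.",
]

def tokenize(text):
    return [tok.strip(".,?").lower() for tok in text.split()]

def retrieve(query, corpus, k=2):
    """Rank documents by shared-term count with the query (toy retriever)."""
    q = Counter(tokenize(query))
    return sorted(corpus, key=lambda d: -sum((q & Counter(tokenize(d))).values()))[:k]

def build_prompt(query, passages):
    """Pack the retrieved passages into the prompt so the answer is grounded."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

query = "What is the capital of France?"
passages = retrieve(query, CORPUS)
prompt = build_prompt(query, passages)  # this prompt would go to the generator
```

Swapping `retrieve` for an embedding-based ranker and the prompt for an LLM call yields the standard RAG pipeline.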

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access external information at inference time, making it less reliant on knowledge memorized during training and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU score, and exact match. Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
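
For instance, exact match and token-level F1 can be computed as follows; the normalization steps (lowercasing, stripping punctuation and articles) follow the common SQuAD-style convention and are a sketch rather than any one benchmark's official scorer:

```python
# SQuAD-style answer scoring: exact match after normalization, and token-level
# F1 between a predicted answer and a gold answer.
import re
import string
from collections import Counter

def normalize(s):
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop articles
    return " ".join(s.split())

def exact_match(pred, gold):
    return int(normalize(pred) == normalize(gold))

def token_f1(pred, gold):
    p, g = normalize(pred).split(), normalize(gold).split()
    common = sum((Counter(p) & Counter(g)).values())  # multiset overlap
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))        # 1
print(token_f1("Eiffel Tower in Paris", "the Eiffel Tower"))  # 2/3 ≈ 0.667
```

Exact match is strict and binary, while token F1 gives partial credit for overlapping answer spans, which is why QA benchmarks usually report both.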

Papers

Showing 251–275 of 2111 papers

| Title | Status | Hype |
| --- | --- | --- |
| Extracting polygonal footprints in off-nadir images with Segment Anything Model | Code | 1 |
| MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks | Code | 1 |
| FaithfulRAG: Fact-Level Conflict Modeling for Context-Faithful Retrieval-Augmented Generation | Code | 1 |
| Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | Code | 1 |
| VulScribeR: Exploring RAG-based Vulnerability Augmentation with LLMs | Code | 1 |
| AT-RAG: An Adaptive RAG Model Enhancing Query Efficiency with Topic Filtering and Iterative Reasoning | Code | 1 |
| AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1 |
| Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1 |
| Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation | Code | 1 |
| Merging-Diverging Hybrid Transformer Networks for Survival Prediction in Head and Neck Cancer | Code | 1 |
| EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation | Code | 1 |
| Evaluating Very Long-Term Conversational Memory of LLM Agents | Code | 1 |
| MetaGen Blended RAG: Higher Accuracy for Domain-Specific Q&A Without Fine-Tuning | Code | 1 |
| Evaluating Retrieval Quality in Retrieval-Augmented Generation | Code | 1 |
| Med-R^2: Crafting Trustworthy LLM Physicians via Retrieval and Reasoning of Evidence-Based Medicine | Code | 1 |
| MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory | Code | 1 |
| ImageRAG: Enhancing Ultra High Resolution Remote Sensing Imagery Analysis with ImageRAG | Code | 1 |
| Enhancing Speech-to-Speech Dialogue Modeling with End-to-End Retrieval-Augmented Generation | Code | 1 |
| ERAGent: Enhancing Retrieval-Augmented Language Models with Improved Accuracy, Efficiency, and Personalization | Code | 1 |
| GPIoT: Tailoring Small Language Models for IoT Program Synthesis and Development | Code | 1 |
| Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems | Code | 1 |
| MedPix 2.0: A Comprehensive Multimodal Biomedical Data set for Advanced AI Applications | Code | 1 |
| MIRAGE-Bench: Automatic Multilingual Benchmark Arena for Retrieval-Augmented Generation Systems | Code | 1 |
| Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks | Code | 1 |
| End-to-End Training of Neural Retrievers for Open-Domain Question Answering | Code | 1 |
Page 11 of 85
