
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less dependent on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific sources.
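The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not a production system: the TF-IDF retriever stands in for the dense retriever a real RAG pipeline would use, and the `generate` callable stands in for an LLM call; all names here are hypothetical.

```python
import math
from collections import Counter

class TfidfRetriever:
    """Toy bag-of-words TF-IDF retriever over a small in-memory corpus."""

    def __init__(self, corpus):
        self.corpus = corpus
        docs = [d.lower().split() for d in corpus]
        n = len(docs)
        df = Counter(t for toks in docs for t in set(toks))
        # Smoothed IDF so terms present in every document keep a small weight.
        self.idf = {t: math.log((1 + n) / (1 + c)) + 1.0 for t, c in df.items()}
        self.doc_vecs = [self._vectorize(toks) for toks in docs]

    def _vectorize(self, tokens):
        tf = Counter(tokens)
        return {t: f * self.idf.get(t, 0.0) for t, f in tf.items()}

    @staticmethod
    def _cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=2):
        """Return the k passages most similar to the query."""
        qv = self._vectorize(query.lower().split())
        scored = sorted(
            ((self._cosine(qv, dv), doc) for dv, doc in zip(self.doc_vecs, self.corpus)),
            key=lambda pair: pair[0],
            reverse=True,
        )
        return [doc for _, doc in scored[:k]]

def rag_answer(query, retriever, generate):
    """Retrieve-then-generate: prepend retrieved passages to the prompt."""
    passages = retriever.retrieve(query)
    prompt = "Context:\n" + "\n".join(passages) + f"\nQuestion: {query}\nAnswer:"
    return generate(prompt)

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Retrieval-augmented generation combines retrieval with text generation.",
]
retriever = TfidfRetriever(corpus)
# Identity "generator" so we can inspect the grounded prompt an LLM would see.
answer = rag_answer("Where is the Eiffel Tower?", retriever, generate=lambda p: p)
```

Swapping the identity lambda for a real model call (and the retriever for a dense-embedding index) yields the standard RAG pipeline; the key design point is that the generator only ever sees the query plus the retrieved context.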

RAG systems are usually evaluated with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular evaluation datasets include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
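Two of the metrics above, exact match and token-level F1, are simple enough to show concretely. The sketch below follows the common SQuAD-style convention (lowercase, strip punctuation and articles before comparing); the normalization details are an assumption, as evaluation scripts differ on them.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    """Harmonic mean of token-overlap precision and recall."""
    pred = normalize(prediction).split()
    ref = normalize(reference).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "Paris, France" against the reference "Paris" scores 0 on exact match but 2/3 on token F1 (precision 1/2, recall 1), which is why F1 is usually reported alongside exact match.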

Papers

Showing 376–400 of 2,111 papers

Title (Status, Hype)
- Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation (Code, 1)
- DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation (Code, 1)
- Generation of Asset Administration Shell with Large Language Model Agents: Toward Semantic Interoperability in Digital Twins in the Context of Industry 4.0 (Code, 1)
- Dubo-SQL: Diverse Retrieval-Augmented Generation and Fine Tuning for Text-to-SQL (Code, 1)
- NUDGE: Lightweight Non-Parametric Fine-Tuning of Embeddings for Retrieval (Code, 1)
- Docopilot: Improving Multimodal Models for Document-Level Understanding (Code, 1)
- Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks (Code, 1)
- Multi-modal Retrieval Augmented Multi-modal Generation: A Benchmark, Evaluate Metrics and Strong Baselines (Code, 1)
- Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata (Code, 1)
- Neuro-Symbolic Query Compiler (Code, 1)
- Developing Retrieval Augmented Generation (RAG) based LLM Systems from PDFs: An Experience Report (Code, 1)
- GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question Answering (Code, 1)
- mmRAG: A Modular Benchmark for Retrieval-Augmented Generation over Text, Tables, and Knowledge Graphs (Code, 1)
- Graphusion: A RAG Framework for Knowledge Graph Construction with a Global Perspective (Code, 1)
- NeuSym-RAG: Hybrid Neural Symbolic Retrieval with Multiview Structuring for PDF Question Answering (Code, 1)
- One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models (Code, 1)
- MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks (Code, 1)
- MIRAGE-Bench: Automatic Multilingual Benchmark Arena for Retrieval-Augmented Generation Systems (Code, 1)
- Chronocept: Instilling a Sense of Time in Machines (Code, 1)
- Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation (Code, 1)
- HEAL: Hierarchical Embedding Alignment Loss for Improved Retrieval and Representation Learning (Code, 1)
- MetaGen Blended RAG: Higher Accuracy for Domain-Specific Q&A Without Fine-Tuning (Code, 1)
- Rationale-Guided Retrieval Augmented Generation for Medical Question Answering (Code, 1)
- RAGSynth: Synthetic Data for Robust and Faithful RAG Component Optimization (Code, 1)
- DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking (Code, 1)
Page 16 of 85

No leaderboard results yet.