
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, conditions on the retrieved information to produce a response. This method improves the factual accuracy and coherence of generated text, especially in tasks that require detailed knowledge or long-context handling.

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.
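The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a toy illustration, not any real library's API: the corpus, the bag-of-words retriever, and the placeholder `generate` function are all assumptions made for the example; a production system would use a dense retriever and an actual language model.

```python
# Minimal RAG sketch: a toy lexical retriever plus a stand-in "generator".
# Everything here is illustrative; real systems use dense embeddings and an LLM.
import math
from collections import Counter

CORPUS = [
    "RAG combines a retriever with a text generator.",
    "The retriever selects relevant passages from a corpus.",
    "The generator conditions on the retrieved passages.",
]

def _vec(text):
    # Bag-of-words term counts as a crude document representation.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Step 1: rank passages by similarity to the query, keep the top k.
    q = _vec(query)
    ranked = sorted(corpus, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def generate(query, passages):
    # Step 2: stand-in for a neural LM; a real system would build a prompt
    # from the query plus retrieved passages and decode a response.
    return f"Answer to '{query}' grounded in: " + " ".join(passages)

passages = retrieve("what does the retriever select?", CORPUS, k=1)
print(generate("what does the retriever select?", passages))
```

Swapping `_cosine` over word counts for a learned embedding model, and `generate` for an LLM call, turns this skeleton into the standard RAG architecture.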

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
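Two of the metrics named above, exact match (EM) and token-level F1, are simple to compute. The sketch below is a minimal illustration with deliberately light text normalization (lowercasing and whitespace splitting only); official evaluation scripts for datasets like SQuAD also strip punctuation and articles.

```python
# Toy exact-match and token-level F1, as used in extractive QA evaluation.
# Normalization here is minimal and illustrative.
from collections import Counter

def normalize(text):
    return text.lower().strip().split()

def exact_match(prediction, reference):
    # 1 if the normalized token sequences are identical, else 0.
    return int(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    # Harmonic mean of token precision and recall over the overlap.
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                 # 1
print(token_f1("in Paris France", "Paris"))          # 0.5 (P=1/3, R=1)
```

EM rewards only verbatim answers, while token F1 gives partial credit for overlapping tokens, which is why the two are usually reported together.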

Papers

Showing 451–475 of 2111 papers

Title | Status | Hype
Efficient Dynamic Clustering-Based Document Compression for Retrieval-Augmented-Generation | Code | 1
ELITE: Embedding-Less retrieval with Iterative Text Exploration | Code | 1
ECoRAG: Evidentiality-guided Compression for Long Context RAG | Code | 1
ECG Semantic Integrator (ESI): A Foundation ECG Model Pretrained with LLM-Enhanced Cardiological Text | Code | 1
Effective and Transparent RAG: Adaptive-Reward Reinforcement Learning for Decision Traceability | Code | 1
Efficient and Reproducible Biomedical Question Answering using Retrieval Augmented Generation | Code | 1
Dubo-SQL: Diverse Retrieval-Augmented Generation and Fine Tuning for Text-to-SQL | Code | 1
DRAGged into Conflicts: Detecting and Addressing Conflicting Sources in Search-Augmented LLMs | Code | 1
Dynamic Retrieval Augmented Generation of Ontologies using Artificial Intelligence (DRAGON-AI) | Code | 1
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering | Code | 1
Docopilot: Improving Multimodal Models for Document-Level Understanding | Code | 1
Do RAG Systems Cover What Matters? Evaluating and Optimizing Responses with Sub-Question Coverage | Code | 1
Emotional RAG: Enhancing Role-Playing Agents through Emotional Retrieval | Code | 1
Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | Code | 1
InteractiveSurvey: An LLM-based Personalized and Interactive Survey Paper Generation System | Code | 1
Toward Conversational Agents with Context and Time Sensitive Long-term Memory | Code | 1
Plancraft: an evaluation dataset for planning with LLM agents | Code | 1
Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation | - | 0
AutoFLUKA: A Large Language Model Based Framework for Automating Monte Carlo Simulations in FLUKA | - | 0
AIPatient: Simulating Patients with EHRs and LLM Powered Agentic Workflow | - | 0
Augmenting Textual Generation via Topology Aware Retrieval | - | 0
AI-native Memory: A Pathway from LLMs Towards AGI | - | 0
ACoRN: Noise-Robust Abstractive Compression in Retrieval-Augmented Language Models | - | 0
AI Legal Companion: Enhancing Access to Justice and Legal Literacy for the Public | - | 0
Page 19 of 85

No leaderboard results yet.