SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a task that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This improves the accuracy and factual grounding of generated text, especially in tasks that require detailed knowledge or long-context handling.

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access external information at inference time, making it less reliant on memorized (parametric) knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.
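The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a toy illustration, not any particular system's implementation: the retriever is a simple bag-of-words cosine-similarity ranker (production systems typically use dense embeddings or BM25), and the final LLM call is left out — the sketch stops at the augmented prompt that would be passed to the generator.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = Counter(tokenize(query))
    ranked = sorted(corpus, key=lambda d: cosine(q, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Augment the query with retrieved context for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Use the context to answer.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\nAnswer:")


corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts sunlight into chemical energy.",
    "Paris is the capital city of France.",
]
query = "Where is the Eiffel Tower?"
prompt = build_prompt(query, retrieve(query, corpus, k=2))
# `prompt` would now be sent to a language model for generation.
print(prompt)
```

In a full RAG system the prompt would be handed to a language model; the retrieval step is what allows the answer to reflect the corpus rather than only the model's parametric knowledge.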

The performance of RAG systems is usually measured with metrics such as precision, recall, F1 score, BLEU, and exact match (EM). Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
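As a rough illustration of two of these metrics, the following is a simplified, SQuAD-style implementation of exact match and token-level F1. It follows the common convention of normalizing answers (lowercasing, stripping punctuation and articles) before comparison; this is a sketch, not the official evaluation script of any of the datasets above.

```python
import re
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))


def f1(pred: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)


# Example: EM forgives articles/case; F1 rewards partial token overlap.
print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
print(f1("Paris France", "in Paris"))                   # 0.5
```

Per-example scores like these are averaged over a dataset; BLEU and retrieval precision/recall are computed analogously at the n-gram and document level respectively.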

Papers

Showing 1701–1750 of 2111 papers

PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel PennyLane-Centric Dataset
Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering
PERC: Plan-As-Query Example Retrieval for Underrepresented Code Generation
WeQA: A Benchmark for Retrieval Augmented Generation in Wind Energy Domain
PersianRAG: A Retrieval-Augmented Generation System for Persian Language
PersonaAI: Leveraging Retrieval-Augmented Generation and Personalized Context for AI-Driven Digital Avatars
Personalization Toolkit: Training Free Personalization of Large Vision Language Models
Personalized Education with Generative AI and Digital Twins: VR, RAG, and Zero-Shot Sentiment Analysis for Industry 4.0 Workforce Development
Personalized Text Generation with Contrastive Activation Steering
Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation
PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design
Pirates of the RAG: Adaptively Attacking LLMs to Leak Knowledge Bases
PISCO: Pretty Simple Compression for Retrieval-Augmented Generation
Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback
PlanRAG: Planning-guided Retrieval Augmented Generation
Plan with Code: Comparing approaches for robust NL to DSL generation
Poisoned LangChain: Jailbreak LLMs by LangChain
Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation
Political Events using RAG with LLMs
POLYRAG: Integrating Polyviews into Retrieval-Augmented Generation for Medical Applications
Poly-Vector Retrieval: Reference and Content Embeddings for Legal Documents
Position Engineering: Boosting Large Language Models through Positional Information Manipulation
Post-training an LLM for RAG? Train on Self-Generated Demonstrations
Practical Design and Benchmarking of Generative AI Applications for Surgical Billing and Coding
Practical Poisoning Attacks against Retrieval-Augmented Generation
P-RAG: Progressive Retrieval Augmented Generation For Planning on Embodied Everyday Task
PRAGyan -- Connecting the Dots in Tweets
PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Privacy-Aware RAG: Secure and Isolated Knowledge Retrieval
Privacy-Preserving Customer Support: A Framework for Secure and Scalable Interactions
Privacy-Preserving Retrieval-Augmented Generation with Differential Privacy
Probing Causality Manipulation of Large Language Models
Probing-RAG: Self-Probing to Guide Language Models in Selective Document Retrieval
Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting
Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering
Prompt Perturbation in Retrieval-Augmented Generation based Large Language Models
Prompt-RAG: Pioneering Vector Embedding-Free Retrieval-Augmented Generation in Niche Domains, Exemplified by Korean Medicine
Provenance: A Light-weight Fact-checker for Retrieval Augmented LLM Generation Output
Provence: efficient and robust context pruning for retrieval-augmented generation
Public Discourse Sandbox: Facilitating Human and AI Digital Communication Research
QE-RAG: A Robust Retrieval-Augmented Generation Benchmark for Query Entry Errors
QHackBench: Benchmarking Large Language Models for Quantum Code Generation Using PennyLane Hackathon Challenges
QualBench: Benchmarking Chinese LLMs with Localized Professional Qualifications for Vertical Domain Evaluation
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis
QUB-Cirdan at "Discharge Me!": Zero shot discharge letter generation by open-source LLM
QueEn: A Large Language Model for Quechua-English Translation
Query Optimization for Parametric Knowledge Refinement in Retrieval-Augmented Large Language Models
Query Performance Explanation through Large Language Model for HTAP Systems
Query pipeline optimization for cancer patient question answering systems
Page 35 of 43

No leaderboard results yet.