
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
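The retrieve-then-generate loop described above can be sketched in a few lines. The toy corpus, the word-overlap scoring, and the `generate` stub below are illustrative stand-ins (in practice retrieval uses dense or sparse vector search and generation uses a neural language model), not any particular framework's API:

```python
# Minimal sketch of the RAG pattern: retrieve the passages most relevant
# to a query, then condition generation on them. All names here are
# illustrative stand-ins, not a real library's API.

def _tokens(text):
    """Lowercase and split, dropping basic punctuation."""
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query; return the top k."""
    q = _tokens(query)
    return sorted(corpus, key=lambda p: len(q & _tokens(p)), reverse=True)[:k]

def generate(query, passages):
    """Stand-in for a neural generator: assemble a grounded prompt."""
    context = " ".join(passages)
    return f"Based on: {context}\nAnswer the question: {query}"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
retrieved = retrieve("Where is the Eiffel Tower?", corpus)
print(generate("Where is the Eiffel Tower?", retrieved))
```

Because the generator only sees the retrieved passages, swapping in a fresh corpus changes the answers without retraining anything, which is the core appeal of the approach.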

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on knowledge memorized in its parameters and better suited to generating responses grounded in up-to-date or domain-specific sources.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
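Two of those metrics, exact match and token-level F1, can be sketched directly. Normalization here is simplified to lowercasing and whitespace splitting; real evaluation scripts (e.g. SQuAD-style evaluation) also strip punctuation and articles:

```python
# Sketch of exact match and token-level F1 for answer evaluation.
# Simplified normalization: lowercase + whitespace split only.
from collections import Counter

def exact_match(pred, gold):
    """1 if the normalized prediction equals the normalized reference."""
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    """Harmonic mean of token precision and recall against the reference."""
    p_toks, g_toks = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                          # 1
print(round(token_f1("in paris france", "paris france"), 2))  # 0.8
```

F1 gives partial credit when the prediction overlaps the reference ("in paris france" vs. "paris france" scores 0.8), while exact match is all-or-nothing, which is why the two are usually reported together.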

Papers

Showing 1726–1750 of 2111 papers

Title | Status | Hype
Practical Poisoning Attacks against Retrieval-Augmented Generation | | 0
P-RAG: Progressive Retrieval Augmented Generation For Planning on Embodied Everyday Task | | 0
PRAGyan -- Connecting the Dots in Tweets | | 0
PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization | | 0
Privacy-Aware RAG: Secure and Isolated Knowledge Retrieval | | 0
Privacy-Preserving Customer Support: A Framework for Secure and Scalable Interactions | | 0
Privacy-Preserving Retrieval-Augmented Generation with Differential Privacy | | 0
Probing Causality Manipulation of Large Language Models | | 0
Probing-RAG: Self-Probing to Guide Language Models in Selective Document Retrieval | | 0
Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting | | 0
Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering | | 0
Prompt Perturbation in Retrieval-Augmented Generation based Large Language Models | | 0
Prompt-RAG: Pioneering Vector Embedding-Free Retrieval-Augmented Generation in Niche Domains, Exemplified by Korean Medicine | | 0
Provenance: A Light-weight Fact-checker for Retrieval Augmented LLM Generation Output | | 0
Provence: efficient and robust context pruning for retrieval-augmented generation | | 0
Public Discourse Sandbox: Facilitating Human and AI Digital Communication Research | | 0
QE-RAG: A Robust Retrieval-Augmented Generation Benchmark for Query Entry Errors | | 0
QHackBench: Benchmarking Large Language Models for Quantum Code Generation Using PennyLane Hackathon Challenges | | 0
QualBench: Benchmarking Chinese LLMs with Localized Professional Qualifications for Vertical Domain Evaluation | | 0
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis | | 0
QUB-Cirdan at "Discharge Me!": Zero shot discharge letter generation by open-source LLM | | 0
QueEn: A Large Language Model for Quechua-English Translation | | 0
Query Optimization for Parametric Knowledge Refinement in Retrieval-Augmented Large Language Models | | 0
Query Performance Explanation through Large Language Model for HTAP Systems | | 0
Query pipeline optimization for cancer patient question answering systems | | 0
