
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the accuracy and coherence of the output, especially in tasks requiring detailed knowledge or long-context handling.

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step helps the model to access and incorporate external information, making it less reliant on memorized knowledge and better suited for generating responses based on the latest or domain-specific information.
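The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration, not code from any RAG system: the retriever ranks documents by simple term overlap (a real system would use dense or TF-IDF retrieval), and the "generator" is a stand-in that merely assembles a grounded prompt for a language model.

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by number of query terms they share."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query, passages):
    """Stand-in for a neural generator: build a prompt grounded in the
    retrieved passages, which would then be fed to a language model."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris.",
    "RAG combines retrieval and generation.",
    "Cats sleep a lot.",
]
top = retrieve("where is the eiffel tower", corpus, k=1)
prompt = generate("where is the eiffel tower", top)
```

Because the generator only sees the top-k passages, retrieval quality directly bounds answer quality — which is why RAG evaluation measures both retrieval metrics (precision, recall) and answer metrics.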

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
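Of the metrics listed above, exact match and token-level F1 are the standard answer-quality measures on datasets like SQuAD and Natural Questions. A minimal sketch of both, using SQuAD-style normalization (lowercasing, stripping punctuation and articles); the details of normalization vary between benchmarks:

```python
import re
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)          # strip punctuation
    text = re.sub(r"\b(a|an|the)\b", " ", text)   # strip articles
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token precision and recall between answers."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the cat sat", "cat sat down")` yields 0.8 (precision 1.0, recall 2/3), while exact match on the same pair is 0.0 — F1 gives partial credit where exact match does not.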

Papers

Showing 776–800 of 2111 papers

Title — Hype

Federated Learning and RAG Integration: A Scalable Approach for Medical Large Language Models — 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities — 0
Federated Retrieval-Augmented Generation: A Systematic Mapping Study — 0
Federated Retrieval Augmented Generation for Multi-Product Question Answering — 0
Generative AI Is Not Ready for Clinical Use in Patient Education for Lower Back Pain Patients, Even With Retrieval-Augmented Generation — 0
Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification — 0
Chats-Grid: An Iterative Retrieval Q&A Optimization Scheme Leveraging Large Model and Retrieval Enhancement Generation in Smart Grid — 0
GeoCoder: Solving Geometry Problems by Generating Modular Code through Vision-Language Models — 0
GenDFIR: Advancing Cyber Incident Timeline Analysis Through Retrieval Augmented Generation and Large Language Models — 0
Chinese SafetyQA: A Safety Short-form Factuality Benchmark for Large Language Models — 0
FinBERT2: A Specialized Bidirectional Encoder for Bridging the Gap in Finance-Specific Deployment of Large Language Models — 0
Augmenting Large Language Models with Static Code Analysis for Automated Code Quality Improvements — 0
FIND: Fine-grained Information Density Guided Adaptive Retrieval-Augmented Generation for Disease Diagnosis — 0
Evaluating and Enhancing Large Language Models Performance in Domain-specific Medicine: Osteoarthritis Management with DocOA — 0
FineFilter: A Fine-grained Noise Filtering Mechanism for Retrieval-Augmented Large Language Models — 0
CALLM: Understanding Cancer Survivors' Emotions and Intervention Opportunities via Mobile Diaries and Context-Aware Language Models — 0
A Pilot Empirical Study on When and How to Use Knowledge Graphs as Retrieval Augmented Generation — 0
Fine-Grained Retrieval-Augmented Generation for Visual Question Answering — 0
GEM-RAG: Graphical Eigen Memories For Retrieval Augmented Generation — 0
Establishing Performance Baselines in Fine-Tuning, Retrieval-Augmented Generation and Soft-Prompting for Non-Specialist LLM Users — 0
ESGReveal: An LLM-based approach for extracting structured data from ESG reports — 0
Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together — 0
Fine-Tuning Large Language Models and Evaluating Retrieval Methods for Improved Question Answering on Building Codes — 0
Calibrated Decision-Making through LLM-Assisted Retrieval — 0
ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources — 0
Page 32 of 85

No leaderboard results yet.