SOTA Verified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This grounding improves the factual accuracy and coherence of the generated text, especially in tasks that require detailed or up-to-date knowledge.
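The retrieve-then-generate pipeline described above can be sketched in a few lines. The corpus, the word-overlap scorer, and the template "generator" below are illustrative stand-ins, not any particular system's API: a real system would use a BM25 or dense-vector retriever and a neural language model.

```python
# Minimal sketch of the retrieve-then-generate pattern behind RAG.
# The scorer and "generator" are assumptions for illustration only.

def _tokens(text):
    """Lowercase bag of words with basic punctuation stripped."""
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query -- a stand-in
    for a BM25 or dense-vector retriever."""
    q = _tokens(query)
    ranked = sorted(corpus, key=lambda p: len(q & _tokens(p)), reverse=True)
    return ranked[:k]

def generate(query, passages):
    """Stand-in for the neural generator: condition the response on
    the retrieved passages by prepending them as context."""
    context = " ".join(passages)
    return f"Context: {context}\nAnswer to: {query}"

corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Berlin is the capital of Germany.",
]
top = retrieve("What is the capital of France?", corpus, k=1)
print(top[0])  # -> Paris is the capital of France.
```

The key design point is the separation of concerns: the retriever narrows a large corpus to a few relevant passages, and only those passages enter the generator's context.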

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access external information at inference time, making it less reliant on knowledge memorized during training and better suited to generating responses based on recent or domain-specific information.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
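Two of the metrics named above, exact match and token-level F1, can be computed as follows. This is a simplified version of SQuAD-style answer scoring (it lowercases but omits the official answer-normalization rules, such as stripping articles and punctuation):

```python
# Sketch of exact match (EM) and token-level F1 for QA evaluation,
# simplified relative to the official SQuAD normalization.
from collections import Counter

def exact_match(prediction, reference):
    """1 if the answers match after trimming and lowercasing, else 0."""
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Harmonic mean of token precision and recall between the
    predicted and reference answers."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                    # -> 1
print(round(token_f1("in Paris France", "Paris"), 2))   # -> 0.5
```

Exact match is strict and rewards only verbatim answers; token F1 gives partial credit when the prediction overlaps the reference, which is why both are usually reported together.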

Papers

Showing 1801–1825 of 2111 papers

Title | Status | Hype
Enhancing Scientific Reproducibility Through Automated BioCompute Object Creation Using Retrieval-Augmented Generation from Publications | | 0
Enhancing Software-Related Information Extraction via Single-Choice Question Answering with Large Language Models | | 0
Enhancing Talent Employment Insights Through Feature Extraction with LLM Finetuning | | 0
Enhancing Thyroid Cytology Diagnosis with RAG-Optimized LLMs and Pathology Foundation Models | | 0
Enhancing Tourism Recommender Systems for Sustainable City Trips Using Retrieval-Augmented Generation | | 0
Enhancing tutoring systems by leveraging tailored promptings and domain knowledge with Large Language Models | | 0
EnronQA: Towards Personalized RAG over Private Documents | | 0
ENWAR: A RAG-empowered Multi-Modal LLM Framework for Wireless Environment Perception | | 0
ERATTA: Extreme RAG for Table To Answers with Large Language Models | | 0
ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources | | 0
ESGReveal: An LLM-based approach for extracting structured data from ESG reports | | 0
Establishing Performance Baselines in Fine-Tuning, Retrieval-Augmented Generation and Soft-Prompting for Non-Specialist LLM Users | | 0
Evaluating and Enhancing Large Language Models Performance in Domain-specific Medicine: Osteoarthritis Management with DocOA | | 0
Bias Evaluation and Mitigation in Retrieval-Augmented Medical Question-Answering Systems | | 0
Evaluating Consistencies in LLM responses through a Semantic Clustering of Question Answering | | 0
Evaluating Knowledge Graph Based Retrieval Augmented Generation Methods under Knowledge Incompleteness | | 0
Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions | | 0
Evaluating Quality of Answers for Retrieval-Augmented Generation: A Strong LLM Is All You Need | | 0
Evaluating Retrieval Augmented Generative Models for Document Queries in Transportation Safety | | 0
Evaluating Self-Generated Documents for Enhancing Retrieval-Augmented Generation with Large Language Models | | 0
Evaluating Students' Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large | | 0
Evaluating the Effect of Retrieval Augmentation on Social Biases | | 0
Evaluating the Impact of Advanced LLM Techniques on AI-Lecture Tutors for a Robotics Course | | 0
Evaluating the Performance of RAG Methods for Conversational AI in the Airport Domain | | 0
Evaluating the Retrieval Component in LLM-Based Question Answering Systems | | 0

No leaderboard results yet.