
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based models and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This method improves the accuracy and coherence of generated text, especially in tasks requiring detailed knowledge or long-context handling.
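The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a minimal illustration, not a production system: the retriever is a simple bag-of-words cosine-similarity ranker standing in for a real dense or sparse retriever, and the generation step is represented only by the prompt that an LLM would receive; all function names here are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank corpus passages by similarity to the query and keep the top-k.
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # The generator (an LLM, not included here) would condition on this prompt.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG combines a retriever with a generator.",
    "BLEU measures n-gram overlap with references.",
    "The retriever selects passages from a large corpus.",
]
query = "How does RAG work?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a real system the retriever would be swapped for a vector index over embeddings, but the contract is the same: the query goes in, the top-k passages come back, and they are injected into the generator's context.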

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
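For the extractive-QA metrics mentioned above, exact match and token-level F1 are typically computed after normalizing answers (lowercasing, dropping articles and punctuation), as in SQuAD-style evaluation. A sketch of that common formulation:

```python
import re
from collections import Counter

def normalize(s: str) -> str:
    # SQuAD-style normalization: lowercase, drop articles and punctuation.
    s = s.lower()
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    s = re.sub(r"[^a-z0-9 ]", " ", s)
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> bool:
    # 1 if the normalized strings are identical, else 0.
    return normalize(pred) == normalize(gold)

def token_f1(pred: str, gold: str) -> float:
    # Harmonic mean of token-level precision and recall.
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, "The Eiffel Tower" exactly matches the gold answer "eiffel tower" after normalization, while a longer prediction like "eiffel tower in paris" gets full recall but only partial precision, yielding an F1 of 2/3.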

Papers

Showing 151–160 of 2,111 papers

| Title | Status | Hype |
| --- | --- | --- |
| MA-RAG: Multi-Agent Retrieval-Augmented Generation via Collaborative Chain-of-Thought Reasoning | | 0 |
| Anveshana: A New Benchmark Dataset for Cross-Lingual Information Retrieval On English Queries and Sanskrit Documents | | 0 |
| NeuSym-RAG: Hybrid Neural Symbolic Retrieval with Multiview Structuring for PDF Question Answering | Code | 1 |
| DoctorRAG: Medical RAG Fusing Knowledge with Patient Analogy through Textual Gradients | | 0 |
| DGRAG: Distributed Graph-based Retrieval-Augmented Generation in Edge-Cloud Systems | | 0 |
| LLM-Agent-Controller: A Universal Multi-Agent Large Language Model System as a Control Engineer | | 0 |
| Investigating Pedagogical Teacher and Student LLM Agents: Genetic Adaptation Meets Retrieval Augmented Generation Across Learning Style | | 0 |
| RankLLM: A Python Package for Reranking with LLMs | | 0 |
| Hypercube-RAG: Hypercube-Based Retrieval-Augmented Generation for In-domain Scientific Question-Answering | Code | 0 |
| Retrieval-Augmented Generation for Service Discovery: Chunking Strategies and Benchmarking | | 0 |

No leaderboard results yet.