
RAG

Retrieval-Augmented Generation (RAG) is a task that combines the strengths of retrieval-based and generation-based models. In this approach, a retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then uses the retrieved information to produce a response. This combination improves the accuracy and coherence of generated text, especially in tasks that require detailed knowledge or long-context handling.
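As a rough illustration of this retrieve-then-generate loop, the sketch below pairs a toy TF-IDF retriever with a placeholder generator. The corpus, the query, and the generate() stand-in are illustrative assumptions, not any particular system covered on this page.

```python
# Minimal RAG pipeline sketch: TF-IDF retrieval + a placeholder generator.
# Corpus, query, and generate() are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG combines a retriever with a text generator.",
    "BLEU measures n-gram overlap between candidate and reference text.",
    "Natural Questions is an open-domain question answering dataset.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(corpus)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_ids = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top_ids]

def generate(query: str, passages: list[str]) -> str:
    """Stand-in for a neural language model: here it only echoes the grounded prompt."""
    context = "\n".join(passages)
    return f"Answer to '{query}' grounded in:\n{context}"

if __name__ == "__main__":
    question = "What does RAG combine?"
    print(generate(question, retrieve(question)))
```

In a real system the retriever would typically be a dense or hybrid index over millions of passages and the generator a large language model conditioned on the retrieved context.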

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
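For concreteness, the snippet below shows one simplified way to compute two of these answer metrics, exact match and token-level F1, between a predicted answer and a reference. The normalization used here (lowercasing plus whitespace tokenization) is an assumption and is simpler than the official evaluation scripts for these datasets.

```python
# Simplified exact-match and token-level F1 metrics for RAG answers.
# Normalization is a deliberate simplification, not an official script.
from collections import Counter

def normalize(text: str) -> list[str]:
    return text.lower().split()

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens, ref_tokens = normalize(prediction), normalize(reference)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                    # 1.0
print(round(f1_score("in Paris France", "Paris"), 3))   # 0.5
```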

Papers

Showing 131–140 of 2111 papers

Title | Status | Hype
Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework | Code | 2
MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Code | 2
Empowering Large Language Models to Set up a Knowledge Retrieval Indexer via Self-Learning | Code | 2
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse | Code | 2
LumberChunker: Long-Form Narrative Document Segmentation | Code | 2
Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models | Code | 2
MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search | Code | 2
MedAgent-Pro: Towards Evidence-based Multi-modal Medical Diagnosis via Reasoning Agentic Workflow | Code | 2
OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain | Code | 2
LongEmbed: Extending Embedding Models for Long Context Retrieval | Code | 2

No leaderboard results yet.