
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, conditions on the retrieved text to produce a response. This method improves the accuracy and coherence of generated text, especially in tasks requiring detailed knowledge or long-context handling.
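The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not a production system: the corpus, the word-overlap scoring (a stand-in for a real dense or BM25 retriever), and the prompt format are all hypothetical, and the generation step is reduced to building the prompt that a language model would consume.

```python
from collections import Counter

# Hypothetical mini-corpus; a real system would index millions of passages.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "RAG combines a retriever with a generator model.",
    "Mount Everest is the highest mountain on Earth.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query.

    A crude stand-in for a dense-embedding or BM25 retriever.
    """
    q_tokens = Counter(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: sum((q_tokens & Counter(doc.lower().split())).values()),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Concatenate retrieved passages with the question for the generator."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

top = retrieve("Where is the Eiffel Tower?", CORPUS)
prompt = build_prompt("Where is the Eiffel Tower?", top)
```

Because the answer is grounded in the retrieved passage rather than in the generator's parameters alone, swapping in an updated corpus changes the model's answers without retraining.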

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step helps the model to access and incorporate external information, making it less reliant on memorized knowledge and better suited for generating responses based on the latest or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
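Two of the metrics above, exact match and token-level F1, are simple enough to state directly. The sketch below follows the common SQuAD-style convention of comparing lowercased, whitespace-split tokens; the normalization shown here (lowercasing and stripping only) is a simplification, since official scripts also strip punctuation and articles.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "the capital is Paris" against the reference "Paris" has precision 1/4 and recall 1/1, giving F1 = 0.4, while its exact match score is 0.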

Papers

Showing 701–725 of 2111 papers

Title | Code | Hype
Are Large Language Models In-Context Graph Learners? | — | 0
RGAR: Recurrence Generation-augmented Retrieval for Factual-aware Medical Question Answering | — | 0
RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision | — | 0
HawkBench: Investigating Resilience of RAG Methods on Stratified Information-Seeking Tasks | — | 0
In-Place Updates of a Graph Index for Streaming Approximate Nearest Neighbor Search | — | 0
TrustRAG: An Information Assistant with Retrieval Augmented Generation | Code | 5
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis | — | 0
DH-RAG: A Dynamic Historical Context-Powered Retrieval-Augmented Generation Method for Multi-Turn Dialogue | — | 0
HopRAG: Multi-Hop Reasoning for Logic-Aware Retrieval-Augmented Generation | — | 0
PathRAG: Pruning Graph-based Retrieval Augmented Generation with Relational Paths | Code | 3
Towards an automated workflow in materials science for combining multi-modal simulative and experimental information using data mining and large language models | — | 0
Oreo: A Plug-in Context Reconstructor to Enhance Retrieval-Augmented Generation | — | 0
MomentSeeker: A Task-Oriented Benchmark For Long-Video Moment Retrieval | — | 0
Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing | — | 0
RAPID: Retrieval Augmented Training of Differentially Private Diffusion Models | Code | 0
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering? | — | 0
Language Models are Few-Shot Graders | — | 0
Agentic Medical Knowledge Graphs Enhance Medical Question Answering: Bridging the Gap Between LLMs and Evolving Medical Knowledge | — | 0
REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark | — | 0
SmartLLM: Smart Contract Auditing using Custom Generative AI | — | 0
Cognitive-Aligned Document Selection for Retrieval-augmented Generation | — | 0
FineFilter: A Fine-grained Noise Filtering Mechanism for Retrieval-Augmented Large Language Models | — | 0
Revisiting Robust RAG: Do We Still Need Complex Robust Training in the Era of Powerful LLMs? | — | 0
Does RAG Really Perform Bad For Long-Context Processing? | — | 0
RAG vs. GraphRAG: A Systematic Evaluation and Key Insights | — | 0
