
RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
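The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: TF-IDF cosine similarity stands in for a dense retriever, and a caller-supplied `generate` function stands in for the language model. All function names here are illustrative.

```python
import math
import re
from collections import Counter

def tf_idf_retrieve(query, corpus, k=2):
    """Rank passages against the query by TF-IDF cosine similarity.
    A simple stand-in for the retrieval stage of a RAG pipeline."""
    def tokens(text):
        return re.findall(r"[a-z0-9]+", text.lower())

    n = len(corpus)
    # Document frequency of each term, for the IDF weight
    df = Counter(t for doc in corpus for t in set(tokens(doc)))

    def vec(text):
        tf = Counter(tokens(text))
        return {t: c * math.log((1 + n) / (1 + df.get(t, 0)))
                for t, c in tf.items()}

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vec(d)), reverse=True)
    return ranked[:k]

def rag_answer(query, corpus, generate):
    """Retrieve supporting passages, then condition the generator on them."""
    passages = tf_idf_retrieve(query, corpus)
    prompt = ("Context:\n" + "\n".join(passages)
              + f"\n\nQuestion: {query}\nAnswer:")
    return generate(prompt)
```

In practice the retriever is usually a dense embedding model over a vector index and `generate` is an LLM call, but the control flow is the same: retrieve, assemble a grounded prompt, generate.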

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured with metrics such as precision, recall, F1 score, BLEU, and exact match (EM). Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
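The exact match and token-level F1 metrics mentioned above are typically computed on normalized answer strings. The sketch below follows the common SQuAD-style convention (lowercase, strip punctuation and articles); it is an illustration of the pattern, not the official scoring script of any benchmark.

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace,
    as commonly done in SQuAD-style answer evaluation."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    """Harmonic mean of token-level precision and recall."""
    pred = normalize(prediction).split()
    ref = normalize(reference).split()
    common = Counter(pred) & Counter(ref)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Retrieval quality is often scored separately (e.g. recall@k over gold passages), while BLEU is more common for free-form generation than for short-answer QA.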

Papers

Showing 1801–1825 of 2111 papers

Title | Status | Hype
AI-native Memory: A Pathway from LLMs Towards AGI | — | 0
"Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models | — | 0
Evaluating Quality of Answers for Retrieval-Augmented Generation: A Strong LLM Is All You Need | — | 0
Multi-step Inference over Unstructured Data | — | 0
RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems | — | 0
Attention Instruction: Amplifying Attention in the Middle via Prompting | Code | 0
On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models | — | 0
Context-augmented Retrieval: A Novel Framework for Fast Information Retrieval based Response Generation using Large Language Model | — | 0
Graph-Augmented LLMs for Personalized Health Insights: A Case Study in Sleep Analysis | — | 0
FS-RAG: A Frame Semantics Based Approach for Improved Factual Accuracy in Large Language Models | Code | 0
Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization | — | 0
Retrieve-Plan-Generation: An Iterative Planning and Answering Framework for Knowledge-Intensive LLM Generation | Code | 0
LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs | — | 0
TemPrompt: Multi-Task Prompt Learning for Temporal Relation Extraction in RAG-based Crowdsourcing Systems | — | 0
Integrating Knowledge Retrieval and Large Language Models for Clinical Report Correction | — | 0
Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback | — | 0
Towards Retrieval Augmented Generation over Large Video Libraries | — | 0
A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems | Code | 0
TTQA-RS- A break-down prompting approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization | — | 0
Relation Extraction with Fine-Tuned Large Language Models in Retrieval Augmented Generation Frameworks | — | 0
DIRAS: Efficient LLM Annotation of Document Relevance in Retrieval Augmented Generation | Code | 0
QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs | Code | 0
Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation | — | 0
Improving Zero-shot LLM Re-Ranker with Risk Minimization | — | 0
WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia | — | 0
Page 73 of 85

No leaderboard results yet.