SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step lets the model access and incorporate external information, making it less dependent on knowledge memorized in its parameters and better suited to generating responses based on up-to-date or domain-specific information.
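The two-stage retrieve-then-generate pipeline described above can be sketched in a few lines. This is a minimal illustration, not any specific library's API: the toy corpus, the word-overlap scoring (a stand-in for BM25 or a dense retriever), and the prompt template are all assumptions for the example; in a real system the final prompt would be sent to a generator LLM.

```python
import re

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query.

    A deliberately simple stand-in for a real retriever such as
    BM25 or a dense (embedding-based) index.
    """
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        corpus,
        key=lambda p: len(q & set(re.findall(r"\w+", p.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Assemble retrieved passages and the question into a generation prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain on Earth.",
]

query = "What is the capital of France?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
# In a real RAG system, `prompt` would now be passed to a generator LLM,
# which answers from the retrieved context rather than from memory alone.
```

The retrieval step is what makes the generator's answer auditable: the passages that informed the response are explicit in the prompt rather than hidden in model weights.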

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
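Of the metrics listed above, exact match and token-level F1 (as used in SQuAD-style evaluation) are the most common for extractive answers. The sketch below uses simplified normalization (lowercasing and whitespace tokenization only); official evaluation scripts additionally strip punctuation and articles.

```python
from collections import Counter

def exact_match(pred, gold):
    """1 if the prediction equals the reference after light normalization, else 0."""
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    p = pred.lower().split()
    g = gold.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both prediction and reference.
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision = common / len(p)
    recall = common / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, against the reference "Paris", the prediction "the capital is Paris" scores 0 on exact match but 0.4 on F1 (precision 1/4, recall 1), which is why F1 is usually reported alongside exact match: it gives partial credit for verbose but correct answers.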

Papers

Showing 1801–1850 of 2111 papers

Title | Status | Hype
AI-native Memory: A Pathway from LLMs Towards AGI | | 0
"Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models | | 0
Evaluating Quality of Answers for Retrieval-Augmented Generation: A Strong LLM Is All You Need | | 0
Multi-step Inference over Unstructured Data | | 0
RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems | | 0
Attention Instruction: Amplifying Attention in the Middle via Prompting | Code | 0
On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models | | 0
Context-augmented Retrieval: A Novel Framework for Fast Information Retrieval based Response Generation using Large Language Model | | 0
Graph-Augmented LLMs for Personalized Health Insights: A Case Study in Sleep Analysis | | 0
FS-RAG: A Frame Semantics Based Approach for Improved Factual Accuracy in Large Language Models | Code | 0
Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization | | 0
Retrieve-Plan-Generation: An Iterative Planning and Answering Framework for Knowledge-Intensive LLM Generation | Code | 0
LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs | | 0
TemPrompt: Multi-Task Prompt Learning for Temporal Relation Extraction in RAG-based Crowdsourcing Systems | | 0
Integrating Knowledge Retrieval and Large Language Models for Clinical Report Correction | | 0
Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback | | 0
Towards Retrieval Augmented Generation over Large Video Libraries | | 0
A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems | Code | 0
TTQA-RS- A break-down prompting approach for Multi-hop Table-Text Question Answering with Reasoning and Summarization | | 0
Relation Extraction with Fine-Tuned Large Language Models in Retrieval Augmented Generation Frameworks | | 0
DIRAS: Efficient LLM Annotation of Document Relevance in Retrieval Augmented Generation | Code | 0
QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs | Code | 0
Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation | | 0
Improving Zero-shot LLM Re-Ranker with Risk Minimization | | 0
WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia | | 0
FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering | | 0
From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries | | 0
Query Routing for Homogeneous Tools: An Instantiation in the RAG Scenario | | 0
RichRAG: Crafting Rich Responses for Multi-faceted Queries in Retrieval-Augmented Generation | | 0
Retrieval-Augmented Generation for Generative Artificial Intelligence in Medicine | | 0
Debate as Optimization: Adaptive Conformal Prediction and Diverse Retrieval for Event Extraction | | 0
Identifying Performance-Sensitive Configurations in Software Systems through Code Analysis with LLM Agents | | 0
Retrieval Meets Reasoning: Dynamic In-Context Editing for Long-Text Understanding | | 0
Intermediate Distillation: Data-Efficient Distillation from Black-Box LLMs for Information Retrieval | | 0
Iterative Utility Judgment Framework via LLMs Inspired by Relevance in Philosophy | | 0
CrAM: Credibility-Aware Attention Modification in LLMs for Combating Misinformation in RAG | Code | 0
Fine-Tuning or Fine-Failing? Debunking Performance Myths in Large Language Models | | 0
Retrieval-Augmented Feature Generation for Domain-Specific Classification | | 0
Evaluating the Efficacy of Open-Source LLMs in Enterprise-Specific RAG Systems: A Comparative Study of Performance and Scalability | Code | 0
Refiner: Restructure Retrieval Content Efficiently to Advance Question-Answering Capabilities | Code | 0
Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG | | 0
SeRTS: Self-Rewarding Tree Search for Biomedical Retrieval-Augmented Generation | | 0
Satyrn: A Platform for Analytics Augmented Generation | Code | 0
RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information | Code | 0
Current state of LLM Risks and AI Guardrails | | 0
Automating Pharmacovigilance Evidence Generation: Using Large Language Models to Produce Context-Aware SQL | | 0
HIRO: Hierarchical Information Retrieval Optimization | Code | 0
ClimRetrieve: A Benchmarking Dataset for Information Retrieval from Corporate Climate Disclosures | Code | 0
Battling Botpoop using GenAI for Higher Education: A Study of a Retrieval Augmented Generation Chatbots Impact on Learning | | 0
Exploring Fact Memorization and Style Imitation in LLMs Using QLoRA: An Experimental Study and Quality Assessment Methods | | 0
Page 37 of 43

No leaderboard results yet.