SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus; a generation model, typically a neural language model, then conditions on the retrieved text to produce a response. Grounding generation in retrieved evidence improves the factual accuracy and coherence of the output, especially in tasks that require detailed knowledge or long-context handling.
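The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a minimal illustration only: the toy corpus, the bag-of-words `embed` function (standing in for a learned dense encoder), and the `generate` stub (standing in for a neural language model) are all assumptions for the example, not part of any real RAG system.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for a large document collection.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Python was created by Guido van Rossum.",
    "RAG combines a retriever with a text generator.",
]

def embed(text):
    """Bag-of-words term-frequency vector (stand-in for a learned encoder)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(query, passages):
    """Placeholder generator: in practice the retrieved passages are
    prepended to the prompt of a neural language model."""
    context = " ".join(passages)
    return f"Answer '{query}' using context: {context}"

passages = retrieve("Where is the Eiffel Tower?")
print(generate("Where is the Eiffel Tower?", passages))
```

In production systems the retriever is usually a dense encoder with an approximate-nearest-neighbour index, but the control flow (embed query, rank passages, condition the generator on the top-k) is the same.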

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific sources.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular evaluation datasets include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
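Two of the metrics above, exact match and token-level F1, are simple to compute. The sketch below follows the common SQuAD-style convention (lowercasing, stripping punctuation and articles before comparison); the normalization details are an assumption of this example, and real evaluation scripts may differ.

```python
import re
from collections import Counter

def normalize(s):
    """Lowercase, strip punctuation and articles (SQuAD-style normalization)."""
    s = s.lower()
    s = re.sub(r"[^a-z0-9 ]", " ", s)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred, gold):
    """1.0 if prediction and gold answer are identical after normalization."""
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    """Token-level F1: harmonic mean of token precision and recall."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0 after normalization
print(f1("Paris, France", "in Paris"))                  # 0.5 (one shared token)
```

Exact match is strict and binary per example; token F1 gives partial credit, which is why both are usually reported together.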

Papers

Showing 1751–1800 of 2111 papers

Title | Status | Hype
Multi-Head RAG: Solving Multi-Aspect Problems with LLMs | Code | 3
A + B: A General Generator-Reader Framework for Optimizing LLMs to Unleash Synergy Potential | - | 0
Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices | - | 0
RAG-based Crowdsourcing Task Decomposition via Masked Contrastive Learning with Prompts | - | 0
RATT: A Thought Structure for Coherent and Correct LLM Reasoning | Code | 1
Chain of Agents: Large Language Models Collaborating on Long-Context Tasks | - | 0
Enhancing Retrieval-Augmented LMs with a Two-stage Consistency Learning Compressor | - | 0
Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding | Code | 0
UniOQA: A Unified Framework for Knowledge Graph Question Answering with Large Language Models | - | 0
Natural Language Interaction with a Household Electricity Knowledge-based Digital Twin | - | 0
Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination | - | 0
SoccerRAG: Multimodal Soccer Information Retrieval via Natural Queries | Code | 0
Demo: Soccer Information Retrieval via Natural Queries using SoccerRAG | Code | 0
BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models | - | 0
A Theory for Token-Level Harmonization in Retrieval-Augmented Generation | - | 0
Superhuman performance in urology board questions by an explainable large language model enabled for context integration of the European Association of Urology guidelines: the UroBot study | - | 0
Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost | - | 0
COS-Mix: Cosine Similarity and Distance Fusion for Improved Information Retrieval | Code | 4
Mix-of-Granularity: Optimize the Chunking Granularity for Retrieval-Augmented Generation | Code | 0
RAG Does Not Work for Enterprises | - | 0
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training | Code | 1
Retrieval Meets Reasoning: Even High-school Textbook Knowledge Benefits Multimodal Reasoning | - | 0
Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools | - | 0
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation | - | 0
Is My Data in Your Retrieval Database? Membership Inference Attacks Against Retrieval Augmented Generation | - | 0
GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning | Code | 3
One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models | Code | 1
Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts | - | 0
Designing an Evaluation Framework for Large Language Models in Astronomy Research | Code | 0
Toward Conversational Agents with Context and Time Sensitive Long-term Memory | Code | 1
Unlearning Climate Misinformation in Large Language Models | - | 0
CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control | Code | 2
Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | - | 0
Can GPT Redefine Medical Understanding? Evaluating GPT on Biomedical Machine Reading Comprehension | - | 0
A Multi-Source Retrieval Question Answering Framework Based on RAG | - | 0
Don't Forget to Connect! Improving RAG with Graph-based Reranking | - | 0
Bridging the Gap: Dynamic Learning Strategies for Improving Multilingual Performance in LLMs | - | 0
ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator | Code | 0
QUB-Cirdan at "Discharge Me!": Zero shot discharge letter generation by open-source LLM | - | 0
EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling | - | 0
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems | - | 0
Augmenting Textual Generation via Topology Aware Retrieval | - | 0
RAGSys: Item-Cold-Start Recommender as RAG System | - | 0
Video Enriched Retrieval Augmented Generation Using Aligned Video Captions | Code | 1
Empowering Large Language Models to Set up a Knowledge Retrieval Indexer via Self-Learning | Code | 2
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training | - | 0
ECG Semantic Integrator (ESI): A Foundation ECG Model Pretrained with LLM-Enhanced Cardiological Text | Code | 1
CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion | Code | 9
M-RAG: Reinforcing Large Language Model Performance through Retrieval-Augmented Generation with Multiple Partitions | - | 0
GRAG: Graph Retrieval-Augmented Generation | Code | 3
Page 36 of 43

No leaderboard results yet.