
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This improves the accuracy and coherence of generated text, especially in tasks that require detailed knowledge or long-context handling.
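The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration only: `CORPUS`, `retrieve`, and `generate` are hypothetical names, the retriever is simple word-overlap scoring rather than a learned dense retriever, and the "generator" is a template stand-in for a language model.

```python
# Toy retrieve-then-generate pipeline: word-overlap retriever + template "generator".
from collections import Counter

CORPUS = [
    "RAG combines a retriever with a neural text generator.",
    "The retriever selects relevant passages from a large corpus.",
    "The generator conditions on the retrieved passages to produce an answer.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Score each passage by word overlap with the query; return the top-k."""
    q = Counter(query.lower().split())
    scored = sorted(corpus, key=lambda p: -sum(q[w] for w in p.lower().split()))
    return scored[:k]

def generate(query: str, passages: list[str]) -> str:
    """Stand-in for a language model: prepend retrieved context to the prompt."""
    context = " ".join(passages)
    return f"[context: {context}] Answer to '{query}' grounded in the context."

passages = retrieve("what does the retriever select", CORPUS)
print(generate("what does the retriever select", passages))
```

In a real system the overlap scorer would be replaced by a sparse (e.g. BM25) or dense retriever over an indexed corpus, and `generate` would call an actual language model with the retrieved passages in its prompt.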

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
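Two of the metrics listed above, exact match and token-level F1, are easy to sketch. The normalization below (lowercasing and whitespace splitting) is deliberately simplified; official evaluation scripts such as SQuAD's also strip punctuation and articles.

```python
# Simplified exact-match and token-level F1, as used for QA-style RAG evaluation.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall between the two answers."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)   # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))                  # 1.0
print(token_f1("the capital is Paris", "Paris"))      # 0.4
```

Exact match is strict (any extra token scores 0), which is why token F1 is usually reported alongside it; BLEU and other n-gram metrics extend the same idea to longer generated text.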

Papers

Showing 1776–1800 of 2111 papers

Title | Status | Hype
GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning | Code | 3
One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models | Code | 1
Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts | — | 0
Designing an Evaluation Framework for Large Language Models in Astronomy Research | Code | 0
Toward Conversational Agents with Context and Time Sensitive Long-term Memory | Code | 1
Unlearning Climate Misinformation in Large Language Models | — | 0
CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control | Code | 2
Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | — | 0
Can GPT Redefine Medical Understanding? Evaluating GPT on Biomedical Machine Reading Comprehension | — | 0
A Multi-Source Retrieval Question Answering Framework Based on RAG | — | 0
Don't Forget to Connect! Improving RAG with Graph-based Reranking | — | 0
Bridging the Gap: Dynamic Learning Strategies for Improving Multilingual Performance in LLMs | — | 0
ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator | Code | 0
QUB-Cirdan at "Discharge Me!": Zero shot discharge letter generation by open-source LLM | — | 0
EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling | — | 0
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems | — | 0
Augmenting Textual Generation via Topology Aware Retrieval | — | 0
RAGSys: Item-Cold-Start Recommender as RAG System | — | 0
Video Enriched Retrieval Augmented Generation Using Aligned Video Captions | Code | 1
Empowering Large Language Models to Set up a Knowledge Retrieval Indexer via Self-Learning | Code | 2
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training | — | 0
ECG Semantic Integrator (ESI): A Foundation ECG Model Pretrained with LLM-Enhanced Cardiological Text | Code | 1
CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion | Code | 9
M-RAG: Reinforcing Large Language Model Performance through Retrieval-Augmented Generation with Multiple Partitions | — | 0
GRAG: Graph Retrieval-Augmented Generation | Code | 3
Page 72 of 85
