SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This method improves the factual accuracy and coherence of generated text, especially in tasks requiring detailed knowledge or long-context handling.
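The retrieve-then-generate pipeline described above can be sketched as follows. This is a minimal illustration, not any specific library's API: the word-overlap retriever is a toy stand-in for a real sparse or dense retriever, and the generation step is represented only by prompt construction, since the actual language model call depends on the system in use.

```python
# Toy retrieve-then-generate sketch. The retriever ranks documents by
# word overlap with the query; a real system would use BM25 or dense
# embeddings, and the prompt would be sent to a language model.
from collections import Counter

CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "The Great Wall of China is visible from low Earth orbit.",
    "Paris is the capital and largest city of France.",
]

def retrieve(query, corpus, k=2):
    """Return the k documents with the highest word overlap with the query."""
    q_tokens = Counter(query.lower().split())
    def score(doc):
        d_tokens = Counter(doc.lower().split())
        return sum((q_tokens & d_tokens).values())
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query, passages):
    """Concatenate retrieved passages with the question for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

passages = retrieve("Where is the Eiffel Tower?", CORPUS)
prompt = build_prompt("Where is the Eiffel Tower?", passages)
```

Because the generator only sees the retrieved context plus the question, swapping in an updated corpus changes the model's answers without retraining, which is the main practical advantage of RAG over purely parametric generation.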

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
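Two of the metrics mentioned above, exact match and token-level F1, are commonly computed with SQuAD-style answer normalization. The sketch below is an illustrative implementation of that idea, not the official evaluation script of any of the listed datasets.

```python
# SQuAD-style exact match and token F1 between a predicted answer and a
# gold answer: lowercase, strip punctuation and articles, then compare.
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, remove punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(pred) == normalize(gold))

def token_f1(pred, gold):
    """Harmonic mean of token precision and recall after normalization."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, exact_match("The Eiffel Tower", "eiffel tower") is 1 after normalization, while token_f1 gives partial credit when the prediction contains extra or missing tokens.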

Papers

Showing 1521–1530 of 2111 papers

Title | Status | Hype
Hybrid Student-Teacher Large Language Model Refinement for Cancer Toxicity Symptom Extraction | — | 0
Medical Graph RAG: Towards Safe Medical Large Language Model via Graph Retrieval-Augmented Generation | Code | 4
Towards Explainable Network Intrusion Detection using Large Language Models | — | 0
EfficientRAG: Efficient Retriever for Multi-Hop Question Answering | Code | 2
ACL Ready: RAG Based Assistant for the ACL Checklist | Code | 0
VulScribeR: Exploring RAG-based Vulnerability Augmentation with LLMs | Code | 1
MaxMind: A Memory Loop Network to Enhance Software Productivity based on Large Language Models | — | 0
Large Language Model as a Catalyst: A Paradigm Shift in Base Station Siting Optimization | — | 0
A Comparison of LLM Finetuning Methods & Evaluation Metrics with Travel Chatbot Use Case | — | 0
FLASH: Federated Learning-Based LLMs for Advanced Query Processing in Social Networks through RAG | — | 0
Page 153 of 212

No leaderboard results yet.