SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then conditions on the retrieved information to produce a response. This grounding improves the accuracy and coherence of the generated text, especially in tasks that require detailed knowledge or long-context handling.
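The two-stage pipeline described above can be sketched as follows. The corpus, the word-overlap scoring, and the `generate` stub are illustrative placeholders, not any specific library's API; a real system would use a dense or sparse retriever and an actual language model.

```python
import re

def retrieve(query, corpus, k=2):
    """Toy retriever: score each document by word overlap with the query."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    score = lambda doc: len(q_terms & set(re.findall(r"\w+", doc.lower())))
    return sorted(corpus, key=score, reverse=True)[:k]

def generate(prompt):
    """Stand-in for a neural language model; echoes the grounded prompt."""
    return f"[model output conditioned on]\n{prompt}"

corpus = [
    "RAG combines retrieval with generation.",
    "BLEU measures n-gram overlap with references.",
    "The retriever selects passages from a large corpus.",
]
question = "How does RAG retrieve passages?"
passages = retrieve(question, corpus)
prompt = "Context:\n" + "\n".join(passages) + f"\nQuestion: {question}"
answer = generate(prompt)
```

The key point is that the generator never answers from parametric memory alone: the retrieved passages are prepended to the prompt, so the response is grounded in the selected context.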

RAG is particularly useful for open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is typically measured with metrics such as precision, recall, F1 score, BLEU, and exact match. Popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
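Exact match and token-level F1, two of the metrics mentioned above, can be computed as below. This follows the style of SQuAD-like answer evaluation, but the normalization here is a simplified sketch, not the official evaluation script.

```python
import re

def normalize(text):
    """Lowercase and keep word tokens only (simplified normalization)."""
    return re.findall(r"\w+", text.lower())

def exact_match(pred, gold):
    """True if prediction and reference are identical after normalization."""
    return normalize(pred) == normalize(gold)

def f1_score(pred, gold):
    """Token-level F1: harmonic mean of token precision and recall."""
    p, g = normalize(pred), normalize(gold)
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)
```

Exact match rewards only fully correct answers, while token F1 gives partial credit for overlapping tokens, which is why the two are usually reported together.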

Papers

Showing 2001–2025 of 2111 papers

Title | Status | Hype
Causal Graphs Meet Thoughts: Enhancing Complex Reasoning in Graph-Augmented LLMs | Code | 0
Interpersonal Memory Matters: A New Task for Proactive Dialogue Utilizing Conversational History | Code | 0
Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions | Code | 0
XGraphRAG: Interactive Visual Analysis for Graph-based Retrieval-Augmented Generation | Code | 0
Satyrn: A Platform for Analytics Augmented Generation | Code | 0
Can Open-Source LLMs Compete with Commercial Models? Exploring the Few-Shot Performance of Current GPT Models in Biomedical Tasks | Code | 0
SBI-RAG: Enhancing Math Word Problem Solving for Students through Schema-Based Instruction and Retrieval-Augmented Generation | Code | 0
Refiner: Restructure Retrieval Content Efficiently to Advance Question-Answering Capabilities | Code | 0
You Only Use Reactive Attention Slice For Long Context Retrieval | Code | 0
IntellBot: Retrieval Augmented LLM Chatbot for Cyber Threat Knowledge Delivery | Code | 0
uTeBC-NLP at SemEval-2024 Task 9: Can LLMs be Lateral Thinkers? | Code | 0
ELOQ: Resources for Enhancing LLM Detection of Out-of-Scope Questions | Code | 0
Integrating A.I. in Higher Education: Protocol for a Pilot Study with 'SAMCares: An Adaptive Learning Hub' | Code | 0
Can Github issues be solved with Tree Of Thoughts? | Code | 0
Wikipedia in the Era of LLMs: Evolution and Risks | Code | 0
Information Retrieval in the Age of Generative AI: The RGB Model | Code | 0
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0
Bridging the Gap Between Open-Source and Proprietary LLMs in Table QA | Code | 0
Attribute or Abstain: Large Language Models as Long Document Assistants | Code | 0
Scholarly Question Answering using Large Language Models in the NFDI4DataScience Gateway | Code | 0
Attention Instruction: Amplifying Attention in the Middle via Prompting | Code | 0
Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models | Code | 0
Incorporating Legal Structure in Retrieval-Augmented Generation: A Case Study on Copyright Fair Use | Code | 0
TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction | Code | 0
Improving RAG for Personalization with Author Features and Contrastive Examples | Code | 0
Page 81 of 85

No leaderboard results yet.