
RAG

Retrieval-Augmented Generation (RAG) is an approach that combines the strengths of retrieval-based and generation-based models. A retrieval system first selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, then uses the retrieved information to produce a response. This method improves the accuracy and coherence of generated text, especially in tasks requiring detailed knowledge or long-context handling.
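The retrieve-then-generate pipeline described above can be sketched as follows. This is a toy illustration: the token-overlap retriever and the prompt-building "generator" are stand-ins for the dense retriever and neural language model a real system would use.

```python
# Toy RAG sketch: retrieve relevant passages, then build a grounded prompt.
# A real system would use a dense retriever and an LLM instead of these stand-ins.

def retrieve(query, corpus, k=1):
    """Rank corpus passages by simple token overlap with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_tokens & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, passages):
    """Stand-in for a neural generator: assemble a knowledge-grounded prompt."""
    context = " ".join(passages)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain on Earth.",
]
query = "Where is the Eiffel Tower?"
passages = retrieve(query, corpus)
prompt = generate(query, passages)
```

Because generation conditions on the retrieved context rather than on parameters alone, swapping in an updated corpus changes the answers without retraining the model.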

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses grounded in up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
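The exact-match and F1 metrics mentioned above are typically computed over normalized answer strings. A minimal sketch of this scoring, assuming the common convention of lowercasing and stripping punctuation and articles (individual benchmarks may normalize differently):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "Eiffel Tower in Paris" against the gold answer "the Eiffel Tower" scores 0 on exact match but 2/3 on F1 (precision 1/2, recall 1).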

Papers

Showing 1851–1875 of 2111 papers

Titles in this view ([Code] marks papers with released code):

- Blowfish: Topological and statistical signatures for quantifying ambiguity in semantic search
- Ad Auctions for LLMs via Retrieval Augmented Generation
- Leveraging Large Language Models for Web Scraping
- TelecomRAG: Taming Telecom Standards with Retrieval Augmented Generation and LLMs
- DR-RAG: Applying Dynamic Document Relevance to Retrieval-Augmented Generation for Question-Answering
- Scholarly Question Answering using Large Language Models in the NFDI4DataScience Gateway [Code]
- Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis
- Should We Fine-Tune or RAG? Evaluating Different Techniques to Adapt LLMs for Dialogue [Code]
- Evaluating the Retrieval Component in LLM-Based Question Answering Systems
- The Impact of Quantization on Retrieval-Augmented Generation: An Analysis of Small LLMs
- Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents
- RE-RAG: Improving Open-Domain QA Performance and Interpretability with Relevance Estimator in Retrieval-Augmented Generation [Code]
- Corpus Poisoning via Approximate Greedy Gradient Descent [Code]
- A + B: A General Generator-Reader Framework for Optimizing LLMs to Unleash Synergy Potential
- Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices
- RAG-based Crowdsourcing Task Decomposition via Masked Contrastive Learning with Prompts
- Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding [Code]
- UniOQA: A Unified Framework for Knowledge Graph Question Answering with Large Language Models
- Chain of Agents: Large Language Models Collaborating on Long-Context Tasks
- Enhancing Retrieval-Augmented LMs with a Two-stage Consistency Learning Compressor
- BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models
- Superhuman performance in urology board questions by an explainable large language model enabled for context integration of the European Association of Urology guidelines: the UroBot study
- Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination
- Natural Language Interaction with a Household Electricity Knowledge-based Digital Twin
- SoccerRAG: Multimodal Soccer Information Retrieval via Natural Queries [Code]
Page 75 of 85

No leaderboard results yet.