SOTAVerified

RAG

Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of retrieval-based and generation-based models. In this approach, a retrieval system selects relevant documents or passages from a large corpus, and a generation model, typically a neural language model, conditions on the retrieved information to produce a response. This method improves the factual accuracy and coherence of generated text, especially in tasks requiring detailed knowledge or long-context handling.
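The retrieve-then-generate pipeline described above can be sketched in a few lines. The snippet below is a minimal, illustrative sketch: it uses a simple bag-of-words cosine similarity as the retriever (real systems use dense embeddings or BM25), and it only assembles the prompt that would be passed to a generator; the corpus, query, and helper names are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # lowercase and keep alphanumeric tokens only
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # rank corpus passages by similarity to the query, return top-k
    q = Counter(tokenize(query))
    return sorted(corpus, key=lambda d: cosine(q, Counter(tokenize(d))), reverse=True)[:k]

def build_prompt(query, passages):
    # ground the generator in the retrieved passages
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}\nAnswer:"

# toy corpus for illustration
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts light energy into chemical energy.",
    "Paris is the capital of France.",
]
query = "Where is the Eiffel Tower?"
passages = retrieve(query, corpus, k=2)
prompt = build_prompt(query, passages)  # this prompt would be sent to the language model
```

In a full system the final line would be followed by a call to the generator (e.g. an LLM API), which answers the question using the retrieved context rather than memorized knowledge alone.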

RAG is particularly useful in open-domain question answering, knowledge-grounded dialogue, and summarization tasks. The retrieval step lets the model access and incorporate external information, making it less reliant on memorized knowledge and better suited to generating responses based on up-to-date or domain-specific information.

The performance of RAG systems is usually measured using metrics such as precision, recall, F1 score, BLEU score, and exact match. Some popular datasets for evaluating RAG models include Natural Questions, MS MARCO, TriviaQA, and SQuAD.
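Two of the metrics listed above, exact match and token-level F1, are commonly computed on datasets like SQuAD and Natural Questions after normalizing answers (lowercasing, stripping punctuation and articles). A minimal sketch of that convention, with hypothetical function names:

```python
import re
import string
from collections import Counter

def normalize(text):
    # lowercase, drop punctuation, remove the articles a/an/the, collapse whitespace
    text = text.lower()
    text = "".join(c for c in text if c not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    # 1 if the normalized strings are identical, else 0
    return int(normalize(pred) == normalize(gold))

def token_f1(pred, gold):
    # harmonic mean of token-level precision and recall
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

Because of normalization, "The Eiffel Tower" and "Eiffel Tower" count as an exact match, while a partially overlapping answer earns partial F1 credit.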

Papers

Showing 2011–2020 of 2111 papers

Title | Status | Hype
uTeBC-NLP at SemEval-2024 Task 9: Can LLMs be Lateral Thinkers? | Code | 0
ELOQ: Resources for Enhancing LLM Detection of Out-of-Scope Questions | Code | 0
Integrating A.I. in Higher Education: Protocol for a Pilot Study with 'SAMCares: An Adaptive Learning Hub' | Code | 0
Can Github issues be solved with Tree Of Thoughts? | Code | 0
Wikipedia in the Era of LLMs: Evolution and Risks | Code | 0
Information Retrieval in the Age of Generative AI: The RGB Model | Code | 0
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0
Bridging the Gap Between Open-Source and Proprietary LLMs in Table QA | Code | 0
Attribute or Abstain: Large Language Models as Long Document Assistants | Code | 0
Scholarly Question Answering using Large Language Models in the NFDI4DataScience Gateway | Code | 0
Page 202 of 212

No leaderboard results yet.