Hallucination Papers

Showing 1226–1250 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Systematic Review of Data-to-Text NLG | | 0 |
| Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0 |
| Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4 |
| PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3 |
| G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Code | 4 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding | | 0 |
| ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0 |
| ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement | Code | 3 |
| Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Code | 1 |
| An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | | 0 |
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | | 0 |
| INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Code | 1 |
| Training Language Models to Generate Text with Citations via Fine-grained Rewards | Code | 1 |
| The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Code | 0 |
| Unified Hallucination Detection for Multimodal Large Language Models | Code | 1 |
| Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation | | 0 |
| Aligner: Efficient Alignment by Learning to Correct | | 0 |
| LLM-Enhanced Data Management | Code | 4 |
| A Closer Look at the Limitations of Instruction Tuning | | 0 |
| A Survey on Large Language Model Hallucination via a Creativity Perspective | | 0 |
| CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | | 0 |
| Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models | Code | 1 |
| PokéLLMon: A Human-Parity Agent for Pokémon Battles with Large Language Models | Code | 3 |
| Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation | | 0 |
Page 50 of 73
