SOTAVerified

Hallucination Papers

Showing 1591–1600 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interpretable Zero-shot Learning with Infinite Class Concepts | | 0 |
| Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate | | 0 |
| Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation | | 0 |
| Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation | | 0 |
| Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models | | 0 |
| IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing | | 0 |
| Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection | | 0 |
| Is Your Text-to-Image Model Robust to Caption Noise? | | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | | 0 |
| It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | | 0 |

No leaderboard results yet.