SOTAVerified: Hallucination Papers

Showing 226–250 of 1816 papers

Title | Status | Hype
Distinguishing Ignorance from Error in LLM Hallucinations | Code | 1
Can Knowledge Editing Really Correct Hallucinations? | Code | 1
Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning | Code | 1
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding | Code | 1
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs | Code | 1
Search Engines in an AI Era: The False Promise of Factual and Verifiable Source-Cited Responses | Code | 1
VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding | Code | 1
Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | Code | 1
OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting | Code | 1
IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | Code | 1
CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation | Code | 1
FactAlign: Long-form Factuality Alignment of Large Language Models | Code | 1
EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Code | 1
XTRUST: On the Multilingual Trustworthiness of Large Language Models | Code | 1
FAIR GPT: A virtual consultant for research data management in ChatGPT | Code | 1
Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1
Trustworthiness in Retrieval-Augmented Generation Systems: A Survey | Code | 1
Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning | Code | 1
Towards Empathetic Conversational Recommender Systems | Code | 1
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection | Code | 1
Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models | Code | 1
Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs | Code | 1
Mitigating Multilingual Hallucination in Large Vision-Language Models | Code | 1
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Code | 1
Page 10 of 73
