
Hallucination Evaluation

Evaluate the ability of LLMs to generate text free of hallucinations, or assess their capability to recognize hallucinated content.
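As a concrete illustration of the second setting (recognition), here is a minimal sketch of an evaluation loop that scores a model on flagging hallucinated statements. The labeled examples, the prompt wording, and the `query_llm` callable are all hypothetical placeholders, not the protocol of any benchmark listed below.

```python
from typing import Callable

# Hypothetical labeled examples: (statement, is_hallucinated).
# A real benchmark would supply many such pairs.
EXAMPLES = [
    ("The Eiffel Tower is in Paris.", False),
    ("The Eiffel Tower was built in 1789.", True),
]

def evaluate_recognition(query_llm: Callable[[str], str]) -> float:
    """Return the model's accuracy at flagging hallucinated statements."""
    correct = 0
    for statement, is_hallucinated in EXAMPLES:
        prompt = (
            "Is the following statement factually accurate? "
            f"Answer YES or NO.\n\n{statement}"
        )
        answer = query_llm(prompt).strip().upper()
        # Treat a "NO" answer as the model flagging a hallucination.
        predicted_hallucinated = answer.startswith("NO")
        correct += int(predicted_hallucinated == is_hallucinated)
    return correct / len(EXAMPLES)

if __name__ == "__main__":
    # Stub model that trusts every statement; a real run would call an LLM.
    print(evaluate_recognition(lambda prompt: "YES"))  # -> 0.5
```

The first setting (hallucination-free generation) is typically scored differently, by checking generated text against references or a knowledge source rather than asking yes/no questions.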

Papers

Showing 1–10 of 49 papers

Title | Status | Hype
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation | - | 0
KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality | Code | 1
MultiHal: Multilingual Dataset for Knowledge-Graph Grounded Evaluation of LLM Hallucinations | Code | 0
Benchmarking LLM Faithfulness in RAG with Evolving Leaderboards | Code | 1
Mitigating Image Captioning Hallucinations in Vision-Language Models | - | 0
Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | - | 0
Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best? | - | 0
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | - | 0
Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1
Evaluating LLMs' Assessment of Mixed-Context Hallucination Through the Lens of Summarization | Code | 0

Leaderboard

No leaderboard results yet.