SOTAVerified

Object Hallucination

Papers

Showing 51–71 of 71 papers

| Title | Status | Hype |
| --- | --- | --- |
| KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0 |
| From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2 |
| Ferret: Refer and Ground Anything Anywhere at Any Granularity | Code | 5 |
| Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0 |
| HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1 |
| Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| TinyLVLM-eHub: Towards Comprehensive and Efficient Evaluation for Large Vision-Language Models | Code | 2 |
| Transferable Decoding with Visual Entities for Zero-Shot Image Captioning | Code | 1 |
| LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2 |
| Evaluating Object Hallucination in Large Vision-Language Models | Code | 2 |
| Simple Token-Level Confidence Improves Caption Correctness | | 0 |
| Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training | Code | 0 |
| Deep Learning Approaches on Image Captioning: A Review | | 0 |
| Relational Graph Learning for Grounded Video Description Generation | | 0 |
| Consensus Graph Representation Learning for Better Grounded Image Captioning | | 0 |
| Let there be a clock on the beach: Reducing Object Hallucination in Image Captioning | Code | 1 |
| "I've Seen Things You People Wouldn't Believe": Hallucinating Entities in GuessWhat?! | | 0 |
| HyperPocket: Generative Point Cloud Completion | Code | 1 |
| Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models | Code | 0 |
| Object Hallucination in Image Captioning | Code | 0 |
Page 3 of 3
