SOTAVerified: Hallucination Papers

Showing 111-120 of 1816 papers

Title | Status | Hype
VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2
Calibrated Self-Rewarding Vision Language Models | Code | 2
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2
Differential Transformer | Code | 2
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
Granite Guardian | Code | 2
GPT-NER: Named Entity Recognition via Large Language Models | Code | 2
Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering | Code | 2
Page 12 of 182

No leaderboard results yet.