SOTAVerified: Hallucination Papers

Showing 126–150 of 1816 papers

Title | Status | Hype
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | Code | 2
Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective | Code | 2
Reformatted Alignment | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Code | 2
A Survey on Hallucination in Large Vision-Language Models | Code | 2
LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation | Code | 2
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models | Code | 2
OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2
A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions | Code | 2
Woodpecker: Hallucination Correction for Multimodal Large Language Models | Code | 2
HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models | Code | 2
From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation | Code | 2
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation | Code | 2
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Code | 2
Benchmarking Large Language Models in Retrieval-Augmented Generation | Code | 2
MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models | Code | 2
TinyLVLM-eHub: Towards Comprehensive and Efficient Evaluation for Large Vision-Language Models | Code | 2
Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies | Code | 2
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph | Code | 2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
ToolQA: A Dataset for LLM Question Answering with External Tools | Code | 2
Page 6 of 73
