SOTAVerified: Hallucination Papers

Showing 811–820 of 1816 papers

Title | Status | Hype
Misinforming LLMs: vulnerabilities, challenges and opportunities | – | 0
Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | – | 0
Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs | Code | 1
RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework | Code | 3
Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation | – | 0
Mitigating Multilingual Hallucination in Large Vision-Language Models | Code | 1
DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model | Code | 2
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Code | 1
Cost-Effective Hallucination Detection for LLMs | – | 0
Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | – | 0
Page 82 of 182
