SOTAVerified: Hallucination Papers

Showing 276-300 of 1816 papers

Title | Status | Hype
ECKGBench: Benchmarking Large Language Models in E-commerce Leveraging Knowledge Graph | | 0
Poly-FEVER: A Multilingual Fact Verification Benchmark for Hallucination Detection in Large Language Models | | 0
R^2: A LLM Based Novel-to-Screenplay Generation Framework with Causal Plot Graphs | | 0
MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | | 0
Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | | 0
From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | | 0
Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Code | 0
RAD: Retrieval-Augmented Decision-Making of Meta-Actions with Vision-Language Models in Autonomous Driving | | 0
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0
Grounded Chain-of-Thought for Multimodal Large Language Models | Code | 1
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large language Models | Code | 2
LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | | 0
Applications of Large Language Model Reasoning in Feature Generation | | 0
RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration | | 0
LLM Agents for Education: Advances and Applications | | 0
AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation | Code | 0
Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks | Code | 0
Learning to Inference Adaptively for Multimodal Large Language Models | | 0
TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention | Code | 1
Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding | Code | 0
Conversational Gold: Evaluating Personalized Conversational Search System using Gold Nuggets | Code | 0
Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection | | 0
NVP-HRI: Zero Shot Natural Voice and Posture-based Human-Robot Interaction via Large Language Model | Code | 0
Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation | | 0
Gradient-guided Attention Map Editing: Towards Efficient Contextual Hallucination Mitigation | | 0
Page 12 of 73
