SOTAVerified: Hallucination Papers

Showing 801–825 of 1816 papers

Title | Status | Hype
GameVLM: A Decision-making Framework for Robotic Task Planning Based on Visual Language Models and Zero-sum Games | | 0
Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs | | 0
GameGPT: Multi-agent Collaborative Framework for Game Development | | 0
Hermit Kingdom Through the Lens of Multiple Perspectives: A Case Study of LLM Hallucination on North Korea | | 0
DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning | | 0
Conditional Hallucinations for Image Compression | | 0
Attention-Aware Face Hallucination via Deep Reinforcement Learning | | 0
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | | 0
HKD4VLM: A Progressive Hybrid Knowledge Distillation Framework for Robust Multimodal Hallucination and Factuality Detection in VLMs | | 0
HOB-CNN: Hallucination of Occluded Branches with a Convolutional Neural Network for 2D Fruit Trees | | 0
Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | | 0
Interpretable Zero-shot Learning with Infinite Class Concepts | | 0
Honest AI: Fine-Tuning "Small" Language Models to Say "I Don't Know", and Reducing Hallucination in RAG | | 0
Deep Visual Anomaly detection with Negative Learning | | 0
Fuse, Reason and Verify: Geometry Problem Solving with Parsed Clauses from Diagram | | 0
FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering | | 0
How to Build an AI Tutor That Can Adapt to Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG) | | 0
How to Detect and Defeat Molecular Mirage: A Metric-Driven Benchmark for Hallucination in LLM-based Molecular Comprehension | | 0
How to Explore with Belief: State Entropy Maximization in POMDPs | | 0
Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors | | 0
Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | | 0
H-POPE: Hierarchical Polling-based Probing Evaluation of Hallucinations in Large Vision-Language Models | | 0
Comparing Hallucination Detection Metrics for Multilingual Generation | | 0
From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models | | 0
A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | | 0
Page 33 of 73
