SOTAVerified

Hallucination Papers

Showing 226–250 of 1816 papers

Title | Status | Hype
Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models | Code | 1
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Code | 1
IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | Code | 1
Joint Evaluation of Answer and Reasoning Consistency for Hallucination Detection in Large Reasoning Models | Code | 1
KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection | Code | 1
LLMs Know What They Need: Leveraging a Missing Information Guided Framework to Empower Retrieval-Augmented Generation | Code | 1
FlySearch: Exploring how vision-language models explore | Code | 1
Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | Code | 1
3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior | Code | 1
Advancing TTP Analysis: Harnessing the Power of Large Language Models with Retrieval Augmented Generation | Code | 1
BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models | Code | 1
AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation | Code | 1
CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning | Code | 1
AdaPlanner: Adaptive Planning from Feedback with Language Models | Code | 1
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video Reconstruction | Code | 1
Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning | Code | 1
Automated Review Generation Method Based on Large Language Models | Code | 1
FineSurE: Fine-grained Summarization Evaluation using LLMs | Code | 1
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception | Code | 1
Can Knowledge Editing Really Correct Hallucinations? | Code | 1
Automated Multi-level Preference for MLLMs | Code | 1
Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | Code | 1
LiDAR-based 4D Occupancy Completion and Forecasting | Code | 1
PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1
Page 10 of 73

No leaderboard results yet.