SOTAVerified: Hallucination Papers

Showing 576–600 of 1816 papers

Title | Status | Hype
Ornithologist: Towards Trustworthy "Reasoning" about Central Bank Communications | | 0
Adaptive Schema-aware Event Extraction with Retrieval-Augmented Generation | | 0
Prioritizing Image-Related Tokens Enhances Vision-Language Pre-Training | Code | 0
Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification | | 0
On the Cost and Benefits of Training Context with Utterance or Full Conversation Training: A Comparative Study | | 0
SEReDeEP: Hallucination Detection in Retrieval-Augmented Models via Semantic Entropy and Context-Parameter Fusion | | 0
Critique Before Thinking: Mitigating Hallucination through Rationale-Augmented Instruction Tuning | | 0
Multimodal Survival Modeling in the Age of Foundation Models | Code | 0
TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking | | 0
Evolutionary thoughts: integration of large language models and evolutionary algorithms | Code | 0
Osiris: A Lightweight Open-Source Hallucination Detection System | | 0
Interpretable Zero-shot Learning with Infinite Class Concepts | | 0
Mitigating Image Captioning Hallucinations in Vision-Language Models | | 0
Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation | | 0
UCSC at SemEval-2025 Task 3: Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output | Code | 0
SEval-Ex: A Statement-Level Framework for Explainable Summarization Evaluation | | 0
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models | | 0
Regression is all you need for medical image translation | Code | 0
Automated Parsing of Engineering Drawings for Structured Information Extraction Using a Fine-tuned Document Understanding Transformer | | 0
Multi-agents based User Values Mining for Recommendation | | 0
SmallPlan: Leverage Small Language Models for Sequential Path Planning with Simulation-Powered, LLM-Guided Distillation | Code | 0
HalluMix: A Task-Agnostic, Multi-Domain Benchmark for Real-World Hallucination Detection | | 0
Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models | | 0
Efficient and robust 3D blind harmonization for large domain gaps | | 0
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | | 0
Page 24 of 73
