SOTAVerified

Hallucination Papers

Showing 901–925 of 1816 papers

Title | Status | Hype
VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation | Code | 0
Enabling Explainable Recommendation in E-commerce with LLM-powered Product Knowledge Graph | - | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | - | 0
Chain-of-Programming (CoP): Empowering Large Language Models for Geospatial Code Generation | - | 0
ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models | - | 0
A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery | - | 0
Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs | - | 0
Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization | - | 0
Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity | - | 0
LLM Hallucination Reasoning with Zero-shot Knowledge Test | - | 0
DAHL: Domain-specific Automated Hallucination Evaluation of Long-Form Text through a Benchmark Dataset in Biomedicine | Code | 0
On the Limits of Language Generation: Trade-Offs Between Hallucination and Mode Collapse | - | 0
Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge-Adapted Captions | Code | 0
Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Code | 0
Verbosity ≠ Veracity: Demystify Verbosity Compensation Behavior of Large Language Models | Code | 0
Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders | - | 0
DecoPrompt: Decoding Prompts Reduces Hallucinations when Large Language Models Meet False Premises | Code | 0
SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | - | 0
Evaluating the Accuracy of Chatbots in Financial Literature | - | 0
Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation | - | 0
Prompt-Efficient Fine-Tuning for GPT-like Deep Models to Reduce Hallucination and to Improve Reproducibility in Scientific Text Generation Using Stochastic Optimisation Techniques | - | 0
Mitigating Hallucination with ZeroG: An Advanced Knowledge Management Engine | - | 0
Seeing Through the Fog: A Cost-Effectiveness Analysis of Hallucination Detection Systems | - | 0
LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG | - | 0
Page 37 of 73
