SOTAVerified

Hallucination

Papers

Showing 1501-1550 of 1816 papers

Title | Status | Hype
Parameter Efficient Audio Captioning With Faithful Guidance Using Audio-text Shared Latent Representation | - | 0
What Do LLMs Need to Understand Graphs: A Survey of Parametric Representation of Graphs | - | 0
Parenting: Optimizing Knowledge Selection of Retrieval-Augmented Language Models with Parameter Decoupling and Tailored Tuning | - | 0
Partial Person Re-identification with Alignment and Hallucination | - | 0
Patch-Based Image Hallucination for Super Resolution with Detail Reconstruction from Similar Sample Images | - | 0
Patch to the Future: Unsupervised Visual Prediction | - | 0
Pelican: Correcting Hallucination in Vision-LLMs via Claim Decomposition and Program of Thought Verification | - | 0
Perception in Reflection | - | 0
PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training | - | 0
PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models | - | 0
Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis | - | 0
Plane Geometry Problem Solving with Multi-modal Reasoning: A Survey | - | 0
Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration | Code | 0
The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Code | 0
Mitigating Hallucination in Fictional Character Role-Play | Code | 0
Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information | Code | 0
Mitigating Entity-Level Hallucination in Large Language Models | Code | 0
"Merge Conflicts!" Exploring the Impacts of External Distractors to Parametric Knowledge Graphs | Code | 0
Understanding and Detecting Hallucinations in Neural Machine Translation via Model Introspection | Code | 0
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA | Code | 0
Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning | Code | 0
German also Hallucinates! Inconsistency Detection in News Summaries with the Absinth Dataset | Code | 0
AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation | Code | 0
DefAn: Definitive Answer Dataset for LLMs Hallucination Evaluation | Code | 0
Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | Code | 0
Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models | Code | 0
The Knowledge Alignment Problem: Bridging Human and External Knowledge for Large Language Models | Code | 0
Deep CNN Denoiser and Multi-layer Neighbor Component Embedding for Face Hallucination | Code | 0
The Other Side of the Coin: Exploring Fairness in Retrieval-Augmented Generation | Code | 0
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data | Code | 0
RRHF-V: Ranking Responses to Mitigate Hallucinations in Multimodal Large Language Models with Human Feedback | Code | 0
MedTSS: transforming abstractive summarization of scientific articles with linguistic analysis and concept reinforcement | Code | 0
Genetic Approach to Mitigate Hallucination in Generative IR | Code | 0
MedScore: Factuality Evaluation of Free-Form Medical Answers | Code | 0
Generating Faithful and Salient Text from Multimodal Data | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations | Code | 0
MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback | Code | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
Mixture of Decoding: An Attention-Inspired Adaptive Decoding Strategy to Mitigate Hallucinations in Large Vision-Language Models | Code | 0
MCiteBench: A Multimodal Benchmark for Generating Text with Citations | Code | 0
Chain of Visual Perception: Harnessing Multimodal Large Language Models for Zero-shot Camouflaged Object Detection | Code | 0
MAVEN-Fact: A Large-scale Event Factuality Detection Dataset | Code | 0
Catch Me if You Search: When Contextual Web Search Results Affect the Detection of Hallucinations | Code | 0
MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration | Code | 0
ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0
MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Code | 0
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models | Code | 0
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0
Page 31 of 37