SOTAVerified

Hallucination Papers

Showing 1526–1550 of 1816 papers

Title | Status | Hype
Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models | Code | 0
The Knowledge Alignment Problem: Bridging Human and External Knowledge for Large Language Models | Code | 0
Deep CNN Denoiser and Multi-layer Neighbor Component Embedding for Face Hallucination | Code | 0
The Other Side of the Coin: Exploring Fairness in Retrieval-Augmented Generation | Code | 0
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data | Code | 0
RRHF-V: Ranking Responses to Mitigate Hallucinations in Multimodal Large Language Models with Human Feedback | Code | 0
MedTSS: transforming abstractive summarization of scientific articles with linguistic analysis and concept reinforcement | Code | 0
Genetic Approach to Mitigate Hallucination in Generative IR | Code | 0
MedScore: Factuality Evaluation of Free-Form Medical Answers | Code | 0
Generating Faithful and Salient Text from Multimodal Data | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations | Code | 0
MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback | Code | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
Mixture of Decoding: An Attention-Inspired Adaptive Decoding Strategy to Mitigate Hallucinations in Large Vision-Language Models | Code | 0
MCiteBench: A Multimodal Benchmark for Generating Text with Citations | Code | 0
Chain of Visual Perception: Harnessing Multimodal Large Language Models for Zero-shot Camouflaged Object Detection | Code | 0
MAVEN-Fact: A Large-scale Event Factuality Detection Dataset | Code | 0
Catch Me if You Search: When Contextual Web Search Results Affect the Detection of Hallucinations | Code | 0
MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration | Code | 0
ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0
MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Code | 0
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models | Code | 0
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0
Page 62 of 73

No leaderboard results yet.