SOTAVerified

Hallucination Papers

Showing 1001–1050 of 1816 papers

Title (Hype)
UniFa: A unified feature hallucination framework for any-shot object detection (0)
Unleashing the potential of prompt engineering for large language models (0)
Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies (0)
Unsupervised Compressive Text Summarisation with Reinforcement Learning (0)
Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP (0)
A Comprehensive Survey of Hallucination in Large Language, Image, Video and Audio Foundation Models (0)
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation (0)
Urban Land Cover Classification with Missing Data Modalities Using Deep Convolutional Neural Networks (0)
User-Controlled Knowledge Fusion in Large Language Models: Balancing Creativity and Hallucination (0)
UserSumBench: A Benchmark Framework for Evaluating User Summarization Approaches (0)
Using Mobile Data and Deep Models to Assess Auditory Verbal Hallucinations (0)
Utilizing Large Language Models in an iterative paradigm with domain feedback for zero-shot molecule optimization (0)
Validating Network Protocol Parsers with Traceable RFC Document Interpretation (0)
VALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech (0)
Valuable Hallucinations: Realizable Non-realistic Propositions (0)
Verb Mirage: Unveiling and Assessing Verb Concept Hallucinations in Multimodal Large Language Models (0)
Verify when Uncertain: Beyond Self-Consistency in Black Box Hallucination Detection (0)
VERITAS: A Unified Approach to Reliability Evaluation (0)
ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models (0)
VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models (0)
VideoCoT: A Video Chain-of-Thought Dataset with Active Annotation Tool (0)
VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models (0)
VidHalluc: Evaluating Temporal Hallucinations in Multimodal Large Language Models for Video Understanding (0)
Vietnamese Legal Information Retrieval in Question-Answering System (0)
VILA^2: VILA Augmented VILA (0)
Virtual Accessory Try-On via Keypoint Hallucination (0)
VisAidMath: Benchmarking Visual-Aided Mathematical Reasoning (0)
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering (0)
MaCTG: Multi-Agent Collaborative Thought Graph for Automatic Programming (0)
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning (0)
Vision-Language Models under Cultural and Inclusive Considerations (0)
Vision Transformer with Attention Map Hallucination and FFN Compaction (0)
VISTA-LLAMA: Reducing Hallucination in Video Language Models via Equal Distance to Visual Tokens (0)
Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens (0)
Visual Correspondence Hallucination (0)
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation (0)
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations (0)
Visual Instruction Bottleneck Tuning (0)
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment (0)
VL-GenRM: Enhancing Vision-Language Verification via Vision Experts and Iterative Training (0)
VL-RewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models (0)
VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models (0)
Robust Graph Meta-learning for Weakly-supervised Few-shot Node Classification (0)
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis (0)
What does it take to get state of the art in simultaneous speech-to-speech translation? (0)
What External Knowledge is Preferred by LLMs? Characterizing and Exploring Chain of Evidence in Imperfect Context (0)
What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models (0)
When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems (0)
When Thinking LLMs Lie: Unveiling the Strategic Deception in Representations of Reasoning Models (0)
When to Speak, When to Abstain: Contrastive Decoding with Abstention (0)
Page 21 of 37

No leaderboard results yet.