SOTAVerified

Hallucination Papers

Showing 376–400 of 1816 papers

Title | Status | Hype
NOH-NMS: Improving Pedestrian Detection by Nearby Objects Hallucination | Code | 1
DAMO: Data- and Model-aware Alignment of Multi-modal LLMs | Code | 1
Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | Code | 1
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs | Code | 1
Deficiency-Aware Masked Transformer for Video Inpainting | Code | 1
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue | Code | 1
Dataset Distillation via Factorization | Code | 1
Federated Recommendation via Hybrid Retrieval Augmented Generation | Code | 1
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models | Code | 1
Doc2Query--: When Less is More | Code | 1
FactAlign: Long-form Factuality Alignment of Large Language Models | Code | 1
Selective Generation for Controllable Language Models | Code | 1
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View | Code | 1
Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning | Code | 1
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Code | 1
Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1
Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Code | 1
Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources | Code | 1
A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs | Code | 1
BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models | Code | 1
Face Hallucination via Split-Attention in Split-Attention Network | Code | 1
FAIR GPT: A virtual consultant for research data management in ChatGPT | Code | 1
Prevent the Language Model from being Overconfident in Neural Machine Translation | Code | 1
EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Code | 1
Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1
Page 16 of 73