SOTAVerified — Hallucination Papers

Showing 351–375 of 1,816 papers

| Title | Status | Hype |
| --- | --- | --- |
| LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation | Code | 1 |
| FactCHD: Benchmarking Fact-Conflicting Hallucination Detection | Code | 1 |
| LiDAR-based 4D Occupancy Completion and Forecasting | Code | 1 |
| RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling | Code | 1 |
| Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Code | 1 |
| Theory of Mind for Multi-Agent Collaboration via Large Language Models | Code | 1 |
| Improving Large Language Models in Event Relation Logical Prediction | Code | 1 |
| "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters | Code | 1 |
| KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection | Code | 1 |
| Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1 |
| OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models | Code | 1 |
| Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1 |
| AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Code | 1 |
| HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1 |
| BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models | Code | 1 |
| LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples | Code | 1 |
| Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1 |
| Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation | Code | 1 |
| Self-supervised Cross-view Representation Reconstruction for Change Captioning | Code | 1 |
| Lyra: Orchestrating Dual Correction in Automated Theorem Proving | Code | 1 |
| BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models | Code | 1 |
| Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data? | Code | 1 |
| Cognitive Mirage: A Review of Hallucinations in Large Language Models | Code | 1 |
| A Survey of Hallucination in Large Foundation Models | Code | 1 |
| Evaluation and Analysis of Hallucination in Large Vision-Language Models | Code | 1 |
Page 15 of 73
