SOTAVerified: Hallucination Papers

Showing 361–370 of 1816 papers

Title | Status | Hype
OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models | Code | 1
Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1
AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples | Code | 1
BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models | Code | 1
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1
Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation | Code | 1
Self-supervised Cross-view Representation Reconstruction for Change Captioning | Code | 1
Lyra: Orchestrating Dual Correction in Automated Theorem Proving | Code | 1
Page 37 of 182
