SOTAVerified

Hallucination Papers

Showing 1421–1430 of 1816 papers

Title | Status | Hype
Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1
Evaluating Hallucinations in Chinese Large Language Models | Code | 3
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation | Code | 2
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation | Code | 2
AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples | Code | 1
BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models | Code | 1
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1
AutoHall: Automated Hallucination Dataset Generation for Large Language Models | - | 0
Page 143 of 182
