SOTAVerified: Hallucination Papers

Showing 1211–1220 of 1816 papers

Title | Status | Hype
LLMDFA: Analyzing Dataflow in Code with Large Language Models | Code | 3
Measuring and Reducing LLM Hallucination without Gold-Standard Answers | | 0
Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | | 0
Towards Uncovering How Large Language Model Works: An Explainability Perspective | | 0
Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | | 0
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Uncertainty Quantification for In-Context Learning of Large Language Models | Code | 1
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | | 0
Visually Dehallucinative Instruction Generation: Know What You Don't Know | Code | 0
Into the Unknown: Self-Learning Large Language Models | Code | 1
Page 122 of 182

No leaderboard results yet.