SOTAVerified

Hallucination

Papers

Showing 591–600 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning with privileged information via adversarial discriminative modality distillation | Code | 0 |
| Confidence Estimation for LLM-Based Dialogue State Tracking | Code | 0 |
| Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Code | 0 |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Code | 0 |
| Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Code | 0 |
| Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0 |
| Large Language Models on Wikipedia-Style Survey Generation: an Evaluation in NLP Concepts | Code | 0 |
| Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models | Code | 0 |
| Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Code | 0 |
| Language Models Hallucinate, but May Excel at Fact Verification | Code | 0 |
Page 60 of 182

No leaderboard results yet.