SOTAVerified

Hallucination

Papers

Showing 1141–1150 of 1816 papers

Title | Status | Hype
Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models | Code | 2
On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization | Code | 0
Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | — | 0
Can Large Language Models Play Games? A Case Study of A Self-Play Approach | — | 0
ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models | Code | 0
ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues | — | 0
Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation | — | 0
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Code | 3
HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild | Code | 0
Federated Recommendation via Hybrid Retrieval Augmented Generation | Code | 1
Page 115 of 182

No leaderboard results yet.