SOTAVerified

Hallucination Papers

Showing 1521–1530 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation | Code | 0 |
| Language Models Hallucinate, but May Excel at Fact Verification | Code | 0 |
| Unleashing the potential of prompt engineering for large language models | | 0 |
| Hallucination Detection for Grounded Instruction Generation | | 0 |
| Chainpoll: A high efficacy method for LLM hallucination detection | Code | 0 |
| Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models | | 0 |
| MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Code | 0 |
| Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy Searcher | | 0 |
| Reliable Academic Conference Question Answering: A Study Based on Large Language Model | Code | 0 |
| ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks | | 0 |
Page 153 of 182
