SOTAVerified: Hallucination Papers

Showing 1576–1600 of 1816 papers

Title | Status | Hype
Controlling Equational Reasoning in Large Language Models with Prompt Interventions | | 0
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | | 0
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks | | 0
Facial Reenactment Through a Personalized Generator | | 0
Improving RNN-Transducers with Acoustic LookAhead | | 0
A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation | | 0
Challenges in Domain-Specific Abstractive Summarization and How to Overcome them | | 0
IERL: Interpretable Ensemble Representation Learning -- Combining Crowd-Sourced Knowledge and Distributed Semantic Representations | | 0
Evidence for Reduced Sensory Precision and Increased Reliance on Priors in Hallucination-Prone Individuals in a General Population Sample | | 0
A Survey on Multimodal Large Language Models | | 0
Hallucination is the last thing you need | | 0
Vision Transformer with Attention Map Hallucination and FFN Compaction | | 0
Pushing the Limits of ChatGPT on NLP Tasks | | 0
Explaining Legal Concepts with Augmented Large Language Models (GPT-4) | | 0
Trapping LLM Hallucinations Using Tagged Context Prompts | | 0
Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors | | 0
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning | Code | 0
Do Language Models Know When They're Hallucinating References? | Code | 0
An Investigation of Evaluation Metrics for Automated Medical Note Generation | Code | 0
Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning | Code | 0
PaD: Program-aided Distillation Can Teach Small Models Reasoning Better than Chain-of-thought Fine-tuning | Code | 0
mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations | | 0
The Knowledge Alignment Problem: Bridging Human and External Knowledge for Large Language Models | Code | 0
RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought | | 0
Appraising the Potential Uses and Harms of LLMs for Medical Systematic Reviews | Code | 0
Page 64 of 73
