SOTAVerified

Hallucination Papers

Showing 1301–1350 of 1816 papers

Title | Status | Hype
Distilling Reasoning Ability from Large Language Models with Adaptive Thinking |  | 0
Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models |  | 0
Reducing hallucination in structured outputs via Retrieval-Augmented Generation |  | 0
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
An Audit on the Perspectives and Challenges of Hallucinations in NLP |  | 0
BRAVE: Broadening the visual encoding of vision-language models |  | 0
MetaCheckGPT -- A Multi-task Hallucination Detector Using LLM Uncertainty and Meta-models |  | 0
Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports | Code | 0
SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection | Code | 0
Automating Research Synthesis with Domain-Specific Large Language Model Fine-Tuning |  | 0
Hyperbolic Learning with Synthetic Captions for Open-World Detection |  | 0
HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models |  | 0
FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback |  | 0
SLPL SHROOM at SemEval2024 Task 06: A comprehensive study on models ability to detect hallucination | Code | 0
PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics | Code | 0
On the Limitations of Large Language Models (LLMs): False Attribution |  | 0
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping |  | 0
Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations | Code | 0
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation |  | 0
SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection | Code | 0
Mitigating LLM Hallucinations via Conformal Abstention |  | 0
Scalable Model Editing via Customized Expert Networks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models |  | 0
Hallucination Diversity-Aware Active Learning for Text Summarization |  | 0
Extracting Norms from Contracts Via ChatGPT: Opportunities and Challenges |  | 0
Comparative Study of Domain Driven Terms Extraction Using Large Language Models |  | 0
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation |  | 0
AILS-NTUA at SemEval-2024 Task 6: Efficient model tuning for hallucination detection and analysis | Code | 0
On Large Language Models' Hallucination with Regard to Known Facts | Code | 0
Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning | Code | 0
Are Large Language Models Good at Utility Judgments? | Code | 0
FACTOID: FACtual enTailment fOr hallucInation Detection |  | 0
Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback |  | 0
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations | Code | 0
"Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing |  | 0
Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | Code | 0
DGoT: Dynamic Graph of Thoughts for Scientific Abstract Generation | Code | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations |  | 0
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination |  | 0
Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art |  | 0
ESREAL: Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models |  | 0
Make VLM Recognize Visual Hallucination on Cartoon Character Image with Pose Information |  | 0
Sphere Neural-Networks for Rational Reasoning |  | 0
Multi-Modal Hallucination Control by Visual Information Grounding |  | 0
DEE: Dual-stage Explainable Evaluation Method for Text Generation |  | 0
Zero-Shot Multi-task Hallucination Detection |  | 0
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors |  | 0
Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs | Code | 0
Mitigating Dialogue Hallucination for Large Vision Language Models via Adversarial Instruction Tuning |  | 0
Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection |  | 0
Page 27 of 37
