SOTAVerified

Hallucination Papers

Showing 1326–1350 of 1816 papers

Title | Status | Hype
Comparative Study of Domain Driven Terms Extraction Using Large Language Models | | 0
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation | | 0
AILS-NTUA at SemEval-2024 Task 6: Efficient model tuning for hallucination detection and analysis | Code | 0
On Large Language Models' Hallucination with Regard to Known Facts | Code | 0
Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning | Code | 0
Are Large Language Models Good at Utility Judgments? | Code | 0
FACTOID: FACtual enTailment fOr hallucInation Detection | | 0
Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback | | 0
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations | Code | 0
"Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing | | 0
Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | Code | 0
DGoT: Dynamic Graph of Thoughts for Scientific Abstract Generation | Code | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | | 0
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination | | 0
Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art | | 0
ESREAL: Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models | | 0
Make VLM Recognize Visual Hallucination on Cartoon Character Image with Pose Information | | 0
Sphere Neural-Networks for Rational Reasoning | | 0
Multi-Modal Hallucination Control by Visual Information Grounding | | 0
DEE: Dual-stage Explainable Evaluation Method for Text Generation | | 0
Zero-Shot Multi-task Hallucination Detection | | 0
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | | 0
Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs | Code | 0
Mitigating Dialogue Hallucination for Large Vision Language Models via Adversarial Instruction Tuning | | 0
Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection | | 0
Page 54 of 73