
Hallucination Papers

Showing 1201–1250 of 1816 papers

Title | Status | Hype
M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced Video-grounded Dialogue Generation | - | 0
Enabling Weak LLMs to Judge Response Reliability via Meta Ranking | - | 0
Reformatted Alignment | Code | 2
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | - | 0
EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1
LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | - | 0
Using Hallucinations to Bypass GPT4's Filter | - | 0
Comparing Hallucination Detection Metrics for Multilingual Generation | - | 0
LLMDFA: Analyzing Dataflow in Code with Large Language Models | Code | 3
Measuring and Reducing LLM Hallucination without Gold-Standard Answers | - | 0
Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | - | 0
Towards Uncovering How Large Language Model Works: An Explainability Perspective | - | 0
Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | - | 0
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Uncertainty Quantification for In-Context Learning of Large Language Models | Code | 1
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | - | 0
Visually Dehallucinative Instruction Generation: Know What You Don't Know | Code | 0
Into the Unknown: Self-Learning Large Language Models | Code | 1
Large Language Model with Graph Convolution for Recommendation | - | 0
LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | - | 0
InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Code | 2
Visually Dehallucinative Instruction Generation | Code | 0
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | - | 0
A Systematic Review of Data-to-Text NLG | - | 0
Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Code | 4
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding | - | 0
ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0
ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement | Code | 3
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Code | 1
An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | - | 0
Enhancing Retrieval Processes for Language Generation with Augmented Queries | - | 0
INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Code | 1
Training Language Models to Generate Text with Citations via Fine-grained Rewards | Code | 1
The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Code | 0
Unified Hallucination Detection for Multimodal Large Language Models | Code | 1
Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation | - | 0
Aligner: Efficient Alignment by Learning to Correct | - | 0
LLM-Enhanced Data Management | Code | 4
A Closer Look at the Limitations of Instruction Tuning | - | 0
A Survey on Large Language Model Hallucination via a Creativity Perspective | - | 0
CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | - | 0
Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models | Code | 1
PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Code | 3
Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation | - | 0
Page 25 of 37
