
Hallucination Papers

Showing 1101–1150 of 1816 papers

Title | Status | Hype
Enhanced document retrieval with topic embeddings | — | 0
CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs | — | 0
Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making | — | 0
Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | — | 0
Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | — | 0
Plan with Code: Comparing approaches for robust NL to DSL generation | — | 0
CodeMirage: Hallucinations in Code Generated by Large Language Models | — | 0
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | — | 0
Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection | — | 0
Reference-free Hallucination Detection for Large Vision-Language Models | — | 0
Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text | — | 0
FiSTECH: Financial Style Transfer to Enhance Creativity without Hallucinations in LLMs | — | 0
Order Matters in Hallucination: Reasoning Order as Benchmark and Reflexive Prompting for Large-Language-Models | Code | 0
Handwritten Code Recognition for Pen-and-Paper CS Education | Code | 0
KnowPO: Knowledge-aware Preference Optimization for Controllable Knowledge Selection in Retrieval-Augmented Language Models | — | 0
MAO: A Framework for Process Model Generation with Multi-Agent Orchestration | — | 0
Improving Zero-Shot ObjectNav with Generative Communication | — | 0
Misinforming LLMs: vulnerabilities, challenges and opportunities | — | 0
Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | — | 0
Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation | — | 0
Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | — | 0
Cost-Effective Hallucination Detection for LLMs | — | 0
Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate | — | 0
VILA^2: VILA Augmented VILA | — | 0
WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries | — | 0
LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation | — | 0
Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models | Code | 0
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models | Code | 0
Generation Constraint Scaling Can Mitigate Hallucination | — | 0
Shared Imagination: LLMs Hallucinate Alike | — | 0
Multilingual Fine-Grained News Headline Hallucination Detection | — | 0
Text2Place: Affordance-aware Text Guided Human Placement | — | 0
Developing a Reliable, Fast, General-Purpose Hallucination Detection and Mitigation Service | — | 0
MAVEN-Fact: A Large-scale Event Factuality Detection Dataset | Code | 0
Data-Centric Human Preference Optimization with Rationales | Code | 0
Retrieval-Augmented Generation for Natural Language Processing: A Survey | — | 0
BEAF: Observing BEfore-AFter Changes to Evaluate Hallucination in Vision-language Models | — | 0
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models | — | 0
ANHALTEN: Cross-Lingual Transfer for German Token-Level Reference-Free Hallucination Detection | Code | 0
Evaluating and Enhancing Trustworthiness of LLMs in Perception Tasks | — | 0
Localizing and Mitigating Errors in Long-form Question Answering | Code | 0
What's Wrong? Refining Meeting Summaries with LLM Feedback | Code | 0
Addressing Image Hallucination in Text-to-Image Generation through Factual Image Retrieval | — | 0
GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation Framework | — | 0
Look Within, Why LLMs Hallucinate: A Causal Perspective | — | 0
On Mitigating Code LLM Hallucinations with API Documentation | — | 0
Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues | — | 0
The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs | — | 0
Mitigating Entity-Level Hallucination in Large Language Models | Code | 0
DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection | — | 0
Page 23 of 37

No leaderboard results yet.