SOTAVerified

Hallucination Papers

Showing 251–300 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Code | 1 |
| NOH-NMS: Improving Pedestrian Detection by Nearby Objects Hallucination | Code | 1 |
| No-Reference Image Quality Assessment by Hallucinating Pristine Features | Code | 1 |
| Automated Multi-level Preference for MLLMs | Code | 1 |
| Into the Unknown: Self-Learning Large Language Models | Code | 1 |
| JDocQA: Japanese Document Question Answering Dataset for Generative Language Models | Code | 1 |
| Improving Simultaneous Machine Translation with Monolingual Data | Code | 1 |
| HyperPocket: Generative Point Cloud Completion | Code | 1 |
| Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges | Code | 1 |
| All in an Aggregated Image for In-Image Learning | Code | 1 |
| How Language Model Hallucinations Can Snowball | Code | 1 |
| Mitigating Multilingual Hallucination in Large Vision-Language Models | Code | 1 |
| High-resolution Face Swapping via Latent Semantics Disentanglement | Code | 1 |
| How well can a large language model explain business processes as perceived by users? | Code | 1 |
| InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated Answers | Code | 1 |
| Joint Evaluation of Answer and Reasoning Consistency for Hallucination Detection in Large Reasoning Models | Code | 1 |
| Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1 |
| HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1 |
| Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models | Code | 1 |
| Hallucinated Neural Radiance Fields in the Wild | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| Hallucination Detection in LLMs Using Spectral Features of Attention Maps | Code | 1 |
| Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration | Code | 1 |
| Grounded Chain-of-Thought for Multimodal Large Language Models | Code | 1 |
| HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1 |
| AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1 |
| A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation | Code | 1 |
| Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs | Code | 1 |
| GraphArena: Benchmarking Large Language Models on Graph Computational Problems | Code | 1 |
| HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data | Code | 1 |
| Harnessing Large Language Models for Knowledge Graph Question Answering via Adaptive Multi-Aspect Retrieval-Augmentation | Code | 1 |
| KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection | Code | 1 |
| Aladdin: Zero-Shot Hallucination of Stylized 3D Assets from Abstract Scene Descriptions | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| FlySearch: Exploring how vision-language models explore | Code | 1 |
| PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1 |
| FineSurE: Fine-grained Summarization Evaluation using LLMs | Code | 1 |
| Collaborative Large Language Model for Recommender Systems | Code | 1 |
| Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | Code | 1 |
| A Survey of Hallucination in Large Foundation Models | Code | 1 |
| Citation-Enhanced Generation for LLM-based Chatbots | Code | 1 |
| Federated Recommendation via Hybrid Retrieval Augmented Generation | Code | 1 |
| Circuit Transformer: A Transformer That Preserves Logical Equivalence | Code | 1 |
| Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding | Code | 1 |
| Cognitive Mirage: A Review of Hallucinations in Large Language Models | Code | 1 |
| FaithDial: A Faithful Benchmark for Information-Seeking Dialogue | Code | 1 |
| Filter-then-Generate: Large Language Models with Structure-Text Adapter for Knowledge Graph Completion | Code | 1 |
| AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Code | 1 |
| CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools | Code | 1 |
| Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Code | 1 |
Page 6 of 37
