SOTAVerified

Hallucination Papers

Showing 1301–1325 of 1816 papers

Title | Status | Hype
Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | — | 0
Theory of Hallucinations based on Equivariance | — | 0
Context-aware Decoding Reduces Hallucination in Query-focused Summarization | Code | 1
Reducing Hallucinations: Enhancing VQA for Flood Disaster Damage Assessment with Visual Contexts | — | 0
Experimenting with Large Language Models and vector embeddings in NASA SciX | — | 0
Quantifying Bias in Text-to-Image Generative Models | — | 0
On Early Detection of Hallucinations in Factual Question Answering | Code | 1
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA | Code | 0
"Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation | Code | 1
Retrieval-Augmented Generation for Large Language Models: A Survey | Code | 4
Silkie: Preference Distillation for Large Visual Language Models | — | 0
Towards Verifiable Text Generation with Evolving Memory and Self-Reflection | — | 0
Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens | — | 0
Improving Factual Error Correction by Learning to Inject Factual Errors | Code | 0
Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1
Evaluating ChatGPT as a Question Answering System: A Comprehensive Analysis and Comparison with Existing Models | — | 0
Context Tuning for Retrieval Augmented Generation | — | 0
Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1
DelucionQA: Detecting Hallucinations in Domain-specific Question Answering | — | 0
HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models | — | 0
Mitigating Open-Vocabulary Caption Hallucinations | Code | 1
Weakly Supervised Detection of Hallucinations in LLM Activations | Code | 5
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1
Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models | Code | 0
RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback | Code | 6
Page 53 of 73

No leaderboard results yet.