
Hallucination Papers

Showing 1176–1200 of 1816 papers

Title | Status | Hype
Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | - | 0
A Study on Effect of Reference Knowledge Choice in Generating Technical Content Relevant to SAPPhIRE Model Using Large Language Model | - | 0
BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Code | 0
PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models | - | 0
Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | - | 0
Handling Ontology Gaps in Semantic Parsing | Code | 0
From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data | Code | 0
Mitigating Hallucination in Fictional Character Role-Play | Code | 0
VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models | - | 0
Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models | Code | 0
Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination | - | 0
HIGHT: Hierarchical Graph Tokenization for Molecule-Language Alignment | - | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | - | 0
From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment | - | 0
StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation | Code | 0
What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models | - | 0
Detecting Errors through Ensembling Prompts (DEEP): An End-to-End LLM Framework for Detecting Factual Errors | Code | 0
RichRAG: Crafting Rich Responses for Multi-faceted Queries in Retrieval-Augmented Generation | - | 0
On-Policy Fine-grained Knowledge Feedback for Hallucination Mitigation | Code | 0
Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models | - | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | - | 0
Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs | Code | 0
CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation | - | 0
Mitigating Large Language Model Hallucination with Faithful Finetuning | - | 0
InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States | - | 0
Page 48 of 73
