SOTAVerified

Hallucination Papers

Showing 676–700 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models | Code | 0 |
| Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports | Code | 0 |
| Characterizing Context Influence and Hallucination in Summarization | Code | 0 |
| Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0 |
| Integrating Chemistry Knowledge in Large Language Models via Prompt Engineering | Code | 0 |
| Iterative Teaching by Data Hallucination | Code | 0 |
| Incorporating Task-specific Concept Knowledge into Script Learning | Code | 0 |
| Improving Factual Error Correction by Learning to Inject Factual Errors | Code | 0 |
| A Claim Decomposition Benchmark for Long-form Answer Verification | Code | 0 |
| Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0 |
| Image Denoising with Control over Deep Network Hallucination | Code | 0 |
| CHAIR -- Classifier of Hallucination as Improver | Code | 0 |
| Instruction Makes a Difference | Code | 0 |
| Chainpoll: A high efficacy method for LLM hallucination detection | Code | 0 |
| Explaining Graph Neural Networks with Large Language Models: A Counterfactual Perspective for Molecular Property Prediction | Code | 0 |
| Chain-of-Verification Reduces Hallucination in Large Language Models | Code | 0 |
| HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Code | 0 |
| Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework | Code | 0 |
| On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization | Code | 0 |
| How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation? | Code | 0 |
| How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0 |
| Evolutionary thoughts: integration of large language models and evolutionary algorithms | Code | 0 |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0 |
| Im2Avatar: Colorful 3D Reconstruction from a Single Image | Code | 0 |
| Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Code | 0 |
Page 28 of 73
