SOTAVerified

Hallucination Papers

Showing 1451–1475 of 1816 papers

Title | Status | Hype
Multi-Objective Alignment of Large Language Models Through Hypervolume Maximization | | 0
Multi-Stage Retrieval for Operational Technology Cybersecurity Compliance Using Large Language Models: A Railway Case Study | | 0
Multi-Task Learning with LLMs for Implicit Sentiment Analysis: Data-level and Task-level Automatic Weight Learning | | 0
MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation | | 0
Naming is framing: How cybersecurity's language problems are repeating in AI governance | | 0
Navigating Hallucinations for Reasoning of Unintentional Activities | | 0
Navigating LLM Ethics: Advancements, Challenges, and Future Directions | | 0
Navigating Uncertainty: Optimizing API Dependency for Hallucination Reduction in Closed-Book Question Answering | | 0
N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics | | 0
Negation Blindness in Large Language Models: Unveiling the NO Syndrome in Image Generation | | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0
NetSafe: Exploring the Topological Safety of Multi-agent Networks | | 0
Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence | | 0
NeuRegenerate: A Framework for Visualizing Neurodegeneration | | 0
Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving | | 0
NEXT-EVAL: Next Evaluation of Traditional and LLM Web Data Record Extraction | | 0
NoisyEQA: Benchmarking Embodied Question Answering Against Noisy Queries | | 0
Not Afraid of the Dark: NIR-VIS Face Recognition via Cross-spectral Hallucination and Low-rank Embedding | | 0
Object-Driven Multi-Layer Scene Decomposition From a Single Image | | 0
Objects As Cameras: Estimating High-Frequency Illumination From Shadows | | 0
ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models | | 0
OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens | | 0
OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting | | 0
On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation | | 0
One Wug, Two Wug+s: Transformer Inflection Models Hallucinate Affixes | | 0
Page 59 of 73
