
Hallucination Papers

Showing 1171–1180 of 1816 papers

Title | Status | Hype
Understanding Alignment in Multimodal LLMs: A Comprehensive Study | — | 0
Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Code | 0
The Need for Guardrails with Large Language Models in Medical Safety-Critical Settings: An Artificial Intelligence Application in the Pharmacovigilance Ecosystem | — | 0
LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | — | 0
Free-text Rationale Generation under Readability Level Control | — | 0
Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | — | 0
A Study on Effect of Reference Knowledge Choice in Generating Technical Content Relevant to SAPPhIRE Model Using Large Language Model | — | 0
BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Code | 0
PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models | — | 0
Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | — | 0
Page 118 of 182

No leaderboard results yet.