| Understanding Alignment in Multimodal LLMs: A Comprehensive Study | Jul 2, 2024 | Hallucination | Unverified | 0 |
| Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Jul 1, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| The Need for Guardrails with Large Language Models in Medical Safety-Critical Settings: An Artificial Intelligence Application in the Pharmacovigilance Ecosystem | Jul 1, 2024 | Hallucination, Pharmacovigilance | Unverified | 0 |
| LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | Jul 1, 2024 | Hallucination, Uncertainty Quantification | Unverified | 0 |
| Free-text Rationale Generation under Readability Level Control | Jul 1, 2024 | Hallucination, Text Generation | Unverified | 0 |
| Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | Jun 30, 2024 | Hallucination, Image Comprehension | Unverified | 0 |
| A Study on Effect of Reference Knowledge Choice in Generating Technical Content Relevant to SAPPhIRE Model Using Large Language Model | Jun 29, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Jun 29, 2024 | AI Agent, Claim Verification | Code Available | 0 |
| PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models | Jun 29, 2024 | Hallucination, Sentence | Unverified | 0 |
| Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | Jun 28, 2024 | Code Generation, Hallucination | Unverified | 0 |