| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Jul 2, 2024 | Hallucination, RAG | Code Available | 2 |
| Understanding Alignment in Multimodal LLMs: A Comprehensive Study | Jul 2, 2024 | Hallucination | Unverified | 0 |
| The Need for Guardrails with Large Language Models in Medical Safety-Critical Settings: An Artificial Intelligence Application in the Pharmacovigilance Ecosystem | Jul 1, 2024 | Hallucination, Pharmacovigilance | Unverified | 0 |
| LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | Jul 1, 2024 | Hallucination, Uncertainty Quantification | Unverified | 0 |
| Free-text Rationale Generation under Readability Level Control | Jul 1, 2024 | Hallucination, Text Generation | Unverified | 0 |
| Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Jul 1, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| FineSurE: Fine-grained Summarization Evaluation using LLMs | Jul 1, 2024 | Benchmarking, Hallucination | Code Available | 1 |
| Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | Jun 30, 2024 | Hallucination, Image Comprehension | Unverified | 0 |
| Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models | Jun 30, 2024 | Hallucination, Multimodal Interaction | Code Available | 1 |
| BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Jun 29, 2024 | AI Agent, Claim Verification | Code Available | 0 |