| Title | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Feb 10, 2024 | Diagnostic, Hallucination | Code Available | 1 |
| GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding | Feb 9, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Feb 9, 2024 | Hallucination, Natural Language Understanding | Code Available | 0 |
| ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement | Feb 9, 2024 | Hallucination, Language Modelling | Code Available | 3 |
| Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Feb 9, 2024 | Conformal Prediction, Hallucination | Code Available | 1 |
| An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | Feb 8, 2024 | Fact Verification, Fake News Detection | Unverified | 0 |
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | Feb 6, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Feb 6, 2024 | Diversity, Hallucination | Code Available | 1 |
| Training Language Models to Generate Text with Citations via Fine-grained Rewards | Feb 6, 2024 | Hallucination, Question Answering | Code Available | 1 |
| The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Feb 6, 2024 | Hallucination | Code Available | 0 |