| Title | Date | Tasks | Code | Stars |
| --- | --- | --- | --- | --- |
| Effectively Enhancing Vision Language Large Models by Prompt Augmentation and Caption Utilization | Sep 22, 2024 | Hallucination, Hallucination Evaluation | Code Available | 0 |
| Contrastive Learning for Knowledge-Based Question Generation in Large Language Models | Sep 21, 2024 | Contrastive Learning, Hallucination | Unverified | 0 |
| FIHA: Autonomous Hallucination Evaluation in Vision-Language Models with Davidson Scene Graphs | Sep 20, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| A Multiple-Fill-in-the-Blank Exam Approach for Enhancing Zero-Resource Hallucination Detection in Large Language Models | Sep 20, 2024 | Hallucination, Sentence | Unverified | 0 |
| JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images | Sep 19, 2024 | Hallucination, Image Captioning | Code Available | 0 |
| LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | Sep 19, 2024 | Autonomous Driving, Hallucination | Unverified | 0 |
| Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation | Sep 19, 2024 | Hallucination | Unverified | 0 |
| THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Sep 17, 2024 | Benchmarking, Binary Classification | Code Available | 0 |
| Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling | Sep 17, 2024 | Hallucination, Text Generation | Unverified | 0 |
| Depth-based Privileged Information for Boosting 3D Human Pose Estimation on RGB | Sep 17, 2024 | 3D Human Pose Estimation, Hallucination | Unverified | 0 |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Sep 17, 2024 | Hallucination, Instruction Following | Code Available | 0 |
| Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection | Sep 16, 2024 | Hallucination | Unverified | 0 |
| HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making | Sep 16, 2024 | Answer Generation, Decision Making | Code Available | 0 |
| SFR-RAG: Towards Contextually Faithful LLMs | Sep 16, 2024 | Counterfactual, Hallucination | Unverified | 0 |
| Confidence Estimation for LLM-Based Dialogue State Tracking | Sep 15, 2024 | Dialogue State Tracking, Hallucination | Code Available | 0 |
| Explore the Hallucination on Low-level Perception for MLLMs | Sep 15, 2024 | Hallucination, Question Answering | Unverified | 0 |
| ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models | Sep 14, 2024 | Attribute, Hallucination | Unverified | 0 |
| Winning Solution For Meta KDD Cup' 24 | Sep 13, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications | Sep 11, 2024 | Ethics, Hallucination | Unverified | 0 |
| Safety challenges of AI in medicine in the era of large language models | Sep 11, 2024 | Hallucination | Unverified | 0 |
| Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | Sep 10, 2024 | Hallucination, Image Captioning | Unverified | 0 |
| LLMs Will Always Hallucinate, and We Need to Live With This | Sep 9, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| Generating Faithful and Salient Text from Multimodal Data | Sep 6, 2024 | Hallucination, Knowledge Graphs | Code Available | 0 |
| Detecting Buggy Contracts via Smart Testing | Sep 6, 2024 | Hallucination | Unverified | 0 |
| Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering | Sep 6, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |