| Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering | Mar 3, 2024 | Claim Verification, Graph Question Answering | Unverified | 0 |
| Self-Consistent Decoding for More Factual Open Responses | Mar 1, 2024 | Hallucination, Response Generation | Code Available | 0 |
| MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection | Mar 1, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| Crimson: Empowering Strategic Reasoning in Cybersecurity through Large Language Models | Mar 1, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models | Feb 29, 2024 | Hallucination | Unverified | 0 |
| Navigating Hallucinations for Reasoning of Unintentional Activities | Feb 29, 2024 | Hallucination, Navigate | Unverified | 0 |
| Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models | Feb 28, 2024 | Benchmarking, Hallucination | Code Available | 0 |
| Collaborative decoding of critical tokens for boosting factuality of large language models | Feb 28, 2024 | Hallucination, Instruction Following | Unverified | 0 |
| Multi-FAct: Assessing Factuality of Multilingual LLMs using FActScore | Feb 28, 2024 | Diversity, Form | Code Available | 0 |
| Securing Reliability: A Brief Overview on Enhancing In-Context Learning for Foundation Models | Feb 27, 2024 | Hallucination, In-Context Learning | Unverified | 0 |