| Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | Dec 22, 2023 | Hallucination, Machine Translation | Unverified | 0 |
| Theory of Hallucinations based on Equivariance | Dec 22, 2023 | Hallucination | Unverified | 0 |
| Context-aware Decoding Reduces Hallucination in Query-focused Summarization | Dec 21, 2023 | Hallucination, Language Modelling | Code Available | 1 |
| Reducing Hallucinations: Enhancing VQA for Flood Disaster Damage Assessment with Visual Contexts | Dec 21, 2023 | Hallucination, Question Answering | Unverified | 0 |
| Experimenting with Large Language Models and vector embeddings in NASA SciX | Dec 21, 2023 | Data Augmentation, Hallucination | Unverified | 0 |
| Quantifying Bias in Text-to-Image Generative Models | Dec 20, 2023 | Hallucination, Marketing | Unverified | 0 |
| On Early Detection of Hallucinations in Factual Question Answering | Dec 19, 2023 | Hallucination, Open-Ended Question Answering | Code Available | 1 |
| MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA | Dec 19, 2023 | Document Classification, Hallucination | Code Available | 0 |
| "Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation | Dec 18, 2023 | Hallucination, Language Modelling | Code Available | 1 |
| Retrieval-Augmented Generation for Large Language Models: A Survey | Dec 18, 2023 | Hallucination, RAG | Code Available | 4 |