| Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | Feb 18, 2024 | Hallucination, Visual Question Answering | Unverified | 0 |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | Feb 16, 2024 | Hallucination, Information Retrieval | Unverified | 0 |
| Towards Uncovering How Large Language Model Works: An Explainability Perspective | Feb 16, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Comparing Hallucination Detection Metrics for Multilingual Generation | Feb 16, 2024 | Hallucination, Natural Language Inference | Unverified | 0 |
| Using Hallucinations to Bypass GPT4's Filter | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | Feb 16, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination | Unverified | 0 |
| Visually Dehallucinative Instruction Generation: Know What You Don't Know | Feb 15, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Large Language Model with Graph Convolution for Recommendation | Feb 14, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | Feb 14, 2024 | Hallucination, TruthfulQA | Unverified | 0 |
| Visually Dehallucinative Instruction Generation | Feb 13, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| A Systematic Review of Data-to-Text NLG | Feb 13, 2024 | Data-to-Text Generation, Hallucination | Unverified | 0 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | Feb 13, 2024 | Hallucination, Object Hallucination | Unverified | 0 |
| Careless Whisper: Speech-to-Text Hallucination Harms | Feb 12, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding | Feb 9, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Feb 9, 2024 | Hallucination, Natural Language Understanding | Code Available | 0 |
| An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | Feb 8, 2024 | Fact Verification, Fake News Detection | Unverified | 0 |
| The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Feb 6, 2024 | Hallucination | Code Available | 0 |
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | Feb 6, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation | Feb 4, 2024 | Hallucination, Math | Unverified | 0 |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination | Unverified | 0 |
| A Closer Look at the Limitations of Instruction Tuning | Feb 3, 2024 | Hallucination | Unverified | 0 |
| CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | Feb 2, 2024 | Answer Generation, Hallucination | Unverified | 0 |
| A Survey on Large Language Model Hallucination via a Creativity Perspective | Feb 2, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Efficient Non-Parametric Uncertainty Quantification for Black-Box Large Language Models and Decision Planning | Feb 1, 2024 | AI Agent, Decision Making | Unverified | 0 |
| Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation | Feb 1, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Instruction Makes a Difference | Feb 1, 2024 | Hallucination, Instruction Following | Code Available | 0 |
| Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing | Feb 1, 2024 | Hallucination, Logical Reasoning | Unverified | 0 |
| HiQA: A Hierarchical Contextual Augmentation RAG for Multi-Documents QA | Feb 1, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| GUMsley: Evaluating Entity Salience in Summarization for 12 English Genres | Jan 31, 2024 | Abstractive Text Summarization, Coreference Resolution | Unverified | 0 |
| From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information | Jan 31, 2024 | Hallucination, Object Detection | Unverified | 0 |
| MedTSS: transforming abstractive summarization of scientific articles with linguistic analysis and concept reinforcement | Jan 30, 2024 | Abstractive Text Summarization, Articles | Code Available | 0 |
| Learning to Trust Your Feelings: Leveraging Self-awareness in LLMs for Hallucination Mitigation | Jan 27, 2024 | Hallucination, Knowledge Probing | Unverified | 0 |
| A RAG-based Question Answering System Proposal for Understanding Islam: MufassirQAS LLM | Jan 27, 2024 | Articles, Chatbot | Unverified | 0 |
| Equipping Language Models with Tool Use Capability for Tabular Data Analysis in Finance | Jan 27, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Face to Cartoon Incremental Super-Resolution using Knowledge Distillation | Jan 27, 2024 | Hallucination, Incremental Learning | Unverified | 0 |
| VALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech | Jan 25, 2024 | Decoder, Hallucination | Unverified | 0 |
| Fine-grained Contract NER using instruction based model | Jan 24, 2024 | Few-Shot Learning, Hallucination | Code Available | 0 |
| It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | Jan 24, 2024 | Few-Shot Learning, Hallucination | Unverified | 0 |
| Symbolic Equation Solving via Reinforcement Learning | Jan 24, 2024 | Hallucination, Reinforcement Learning | Unverified | 0 |
| Towards Trustable Language Models: Investigating Information Quality of Large Language Models | Jan 23, 2024 | Hallucination | Unverified | 0 |
| Hallucination is Inevitable: An Innate Limitation of Large Language Models | Jan 22, 2024 | Hallucination, Learning Theory | Unverified | 0 |
| On the Audio Hallucinations in Large Audio-Video Language Models | Jan 18, 2024 | Hallucination, Sentence | Unverified | 0 |
| Temporal Insight Enhancement: Mitigating Temporal Hallucination in Multimodal Large Language Models | Jan 18, 2024 | Hallucination | Unverified | 0 |
| From Chat to Publication Management: Organizing your related work using BibSonomy & LLMs | Jan 17, 2024 | Hallucination, Management | Unverified | 0 |
| Hallucination Detection and Hallucination Mitigation: An Investigation | Jan 16, 2024 | Hallucination | Unverified | 0 |
| Large Language Models are Null-Shot Learners | Jan 16, 2024 | Arithmetic Reasoning, Benchmarking | Unverified | 0 |
| The Pitfalls of Defining Hallucination | Jan 15, 2024 | Hallucination, NLG Evaluation | Unverified | 0 |