| Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation | Oct 23, 2023 | Abstractive Text Summarization, Dialogue Generation | Code Available | 0 |
| Language Models Hallucinate, but May Excel at Fact Verification | Oct 23, 2023 | Fact Verification, Hallucination | Code Available | 0 |
| Unleashing the potential of prompt engineering for large language models | Oct 23, 2023 | Hallucination, Prompt Engineering | Unverified | 0 |
| Hallucination Detection for Grounded Instruction Generation | Oct 23, 2023 | Hallucination, Navigate | Unverified | 0 |
| Chainpoll: A high efficacy method for LLM hallucination detection | Oct 22, 2023 | Hallucination, Retrieval-augmented Generation | Code Available | 0 |
| Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models | Oct 20, 2023 | Form, Hallucination | Unverified | 0 |
| MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Oct 19, 2023 | Hallucination, Mathematical Reasoning | Code Available | 0 |
| Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy Searcher | Oct 19, 2023 | Hallucination, Information Retrieval | Unverified | 0 |
| Reliable Academic Conference Question Answering: A Study Based on Large Language Model | Oct 19, 2023 | Hallucination, Language Modeling | Code Available | 0 |
| ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks | Oct 19, 2023 | Hallucination, Hallucination Evaluation | Unverified | 0 |