| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | May 9, 2025 | ARC, Belebele | — | Unverified | 0 | 0 |
| Evaluating Consistencies in LLM responses through a Semantic Clustering of Question Answering | Oct 20, 2024 | Language Modelling, Large Language Model | — | Unverified | 0 | 0 |
| GRATH: Gradual Self-Truthifying for Large Language Models | Jan 22, 2024 | TruthfulQA | — | Unverified | 0 | 0 |
| Harmonic LLMs are Trustworthy | Apr 30, 2024 | Hallucination, TruthfulQA | — | Unverified | 0 | 0 |
| Investigating Data Contamination in Modern Benchmarks for Large Language Models | Nov 16, 2023 | Common Sense Reasoning, MMLU | — | Unverified | 0 | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | Oct 16, 2024 | Contrastive Learning, graph construction | — | Unverified | 0 | 0 |
| Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity | Nov 15, 2024 | Contrastive Learning, Hallucination | — | Unverified | 0 | 0 |
| LokiLM: Technical Report | Jul 10, 2024 | Knowledge Distillation, Language Modeling | — | Unverified | 0 | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | Aug 16, 2024 | Hallucination, TruthfulQA | — | Unverified | 0 | 0 |
| Maintaining Informative Coherence: Migrating Hallucinations in Large Language Models via Absorbing Markov Chains | Oct 27, 2024 | Text Generation, TruthfulQA | — | Unverified | 0 | 0 |