| Mitigating Adversarial Attacks in LLMs through Defensive Suffix Generation | Dec 18, 2024 | TruthfulQA | —Unverified | 0 | 0 |
| Model Unlearning via Sparse Autoencoder Subspace Guided Projections | May 30, 2025 | Adversarial Robustness, Feature Selection | —Unverified | 0 | 0 |
| Monty Hall and Optimized Conformal Prediction to Improve Decision-Making with LLMs | Dec 31, 2024 | Conformal Prediction, Decision Making | —Unverified | 0 | 0 |
| More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | Apr 3, 2025 | ARC, HellaSwag | —Unverified | 0 | 0 |
| Multi-Reference Preference Optimization for Large Language Models | May 26, 2024 | GSM8K, TruthfulQA | —Unverified | 0 | 0 |
| A Debate-Driven Experiment on LLM Hallucinations and Accuracy | Oct 25, 2024 | Fact Checking, Hallucination | —Unverified | 0 | 0 |
| On The Truthfulness of 'Surprisingly Likely' Responses of Large Language Models | Nov 13, 2023 | Language Modeling | —Unverified | 0 | 0 |
| PRobELM: Plausibility Ranking Evaluation for Language Models | Apr 4, 2024 | Question Answering, TruthfulQA | —Unverified | 0 | 0 |
| Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs | Sep 30, 2024 | ARC, Diversity | —Unverified | 0 | 0 |
| Reducing LLM Hallucinations using Epistemic Neural Networks | Dec 25, 2023 | TruthfulQA | —Unverified | 0 | 0 |