SOTAVerified

TruthfulQA

Papers

Showing 1–25 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| Unsupervised Elicitation of Language Models | Code | 0 |
| Model Unlearning via Sparse Autoencoder Subspace Guided Projections | | 0 |
| Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs | | 0 |
| Truth Neurons | Code | 0 |
| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | | 0 |
| DYNAMAX: Dynamic computing for Transformers and Mamba based architectures | | 0 |
| Efficient MAP Estimation of LLM Judgment Performance with Prior Transfer | | 0 |
| Sample, Don't Search: Rethinking Test-Time Alignment for Language Models | | 0 |
| Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency | | 0 |
| More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | | 0 |
| When Persuasion Overrides Truth in Multi-Agent LLM Debates: Introducing a Confidence-Weighted Persuasion Override Rate (CW-POR) | | 0 |
| DeLTa: A Decoding Strategy based on Logit Trajectory Prediction Improves Factuality and Reasoning Ability | Code | 0 |
| Obliviate: Efficient Unmemorization for Protecting Intellectual Property in Large Language Models | | 0 |
| Cost-Saving LLM Cascades with Early Abstention | | 0 |
| Truth Knows No Language: Evaluating Truthfulness Beyond English | Code | 0 |
| Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models | | 0 |
| Multi-Agent Reinforcement Learning with Focal Diversity Optimization | Code | 0 |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | | 0 |
| CHAIR -- Classifier of Hallucination as Improver | Code | 0 |
| (WhyPHI) Fine-Tuning PHI-3 for Multiple-Choice Question Answering: Methodology, Results, and Challenges | Code | 0 |
| Monty Hall and Optimized Conformal Prediction to Improve Decision-Making with LLMs | | 0 |
| Mitigating Adversarial Attacks in LLMs through Defensive Suffix Generation | | 0 |
| Uhura: A Benchmark for Evaluating Scientific Question Answering and Truthfulness in Low-Resource African Languages | | 0 |
| Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity | | 0 |
| Maintaining Informative Coherence: Migrating Hallucinations in Large Language Models via Absorbing Markov Chains | | 0 |
Page 1 of 4

No leaderboard results yet.