SOTAVerified

TruthfulQA

Papers

Showing 51–75 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | | 0 |
| Evaluating Consistencies in LLM responses through a Semantic Clustering of Question Answering | | 0 |
| GRATH: Gradual Self-Truthifying for Large Language Models | | 0 |
| Harmonic LLMs are Trustworthy | | 0 |
| Investigating Data Contamination in Modern Benchmarks for Large Language Models | | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | | 0 |
| Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity | | 0 |
| LokiLM: Technical Report | | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | | 0 |
| Maintaining Informative Coherence: Migrating Hallucinations in Large Language Models via Absorbing Markov Chains | | 0 |
| Mitigating Adversarial Attacks in LLMs through Defensive Suffix Generation | | 0 |
| Model Unlearning via Sparse Autoencoder Subspace Guided Projections | | 0 |
| Monty Hall and Optimized Conformal Prediction to Improve Decision-Making with LLMs | | 0 |
| More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | | 0 |
| Multi-Reference Preference Optimization for Large Language Models | | 0 |
| A Debate-Driven Experiment on LLM Hallucinations and Accuracy | | 0 |
| On The Truthfulness of 'Surprisingly Likely' Responses of Large Language Models | | 0 |
| PRobELM: Plausibility Ranking Evaluation for Language Models | | 0 |
| Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs | | 0 |
| Reducing LLM Hallucinations using Epistemic Neural Networks | | 0 |
| Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning | | 0 |
| Sample, Don't Search: Rethinking Test-Time Alignment for Language Models | | 0 |
| Selective Self-Rehearsal: A Fine-Tuning Approach to Improve Generalization in Large Language Models | | 0 |
| Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models | | 0 |
| Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation | | 0 |
Page 3 of 4

No leaderboard results yet.