SOTAVerified

TruthfulQA

Papers

Showing 11–20 of 80 papers

Title | Status | Hype
Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning | Code | 1
Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1
Tool-Augmented Reward Modeling | Code | 1
Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | Code | 1
RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Code | 1
TruthfulQA: Measuring How Models Mimic Human Falsehoods | Code | 1
Unsupervised Elicitation of Language Models | Code | 0
Model Unlearning via Sparse Autoencoder Subspace Guided Projections | — | 0
Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs | — | 0
Page 2 of 8

No leaderboard results yet.