SOTAVerified

TruthfulQA

Papers

Showing 1–10 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| RLHF Workflow: From Reward Modeling to Online RLHF | Code | 5 |
| Inference-Time Intervention: Eliciting Truthful Answers from a Language Model | Code | 2 |
| In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation | Code | 2 |
| TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | Code | 2 |
| Tuning Language Models by Proxy | Code | 2 |
| DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models | Code | 2 |
| Non-Linear Inference Time Intervention: Improving LLM Truthfulness | Code | 1 |
| Machine Unlearning in Large Language Models | Code | 1 |
| RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1 |
| Instruction Tuning With Loss Over Instructions | Code | 1 |
Page 1 of 8

No leaderboard results yet.