SOTAVerified

TruthfulQA

Papers

Showing 1–25 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| RLHF Workflow: From Reward Modeling to Online RLHF | Code | 5 |
| Inference-Time Intervention: Eliciting Truthful Answers from a Language Model | Code | 2 |
| In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation | Code | 2 |
| Tuning Language Models by Proxy | Code | 2 |
| TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | Code | 2 |
| DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models | Code | 2 |
| Integrative Decoding: Improve Factuality via Implicit Self-consistency | Code | 1 |
| Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | Code | 1 |
| Tool-Augmented Reward Modeling | Code | 1 |
| Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Code | 1 |
| Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning | Code | 1 |
| Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1 |
| Non-Linear Inference Time Intervention: Improving LLM Truthfulness | Code | 1 |
| Instruction Tuning With Loss Over Instructions | Code | 1 |
| RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1 |
| Machine Unlearning in Large Language Models | Code | 1 |
| TruthfulQA: Measuring How Models Mimic Human Falsehoods | Code | 1 |
| Evaluating Consistencies in LLM responses through a Semantic Clustering of Question Answering | | 0 |
| A Debate-Driven Experiment on LLM Hallucinations and Accuracy | | 0 |
| LokiLM: Technical Report | | 0 |
| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | | 0 |
| Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | | 0 |
| Efficient MAP Estimation of LLM Judgment Performance with Prior Transfer | | 0 |
| Cost-Saving LLM Cascades with Early Abstention | | 0 |
Page 1 of 4

No leaderboard results yet.