SOTAVerified

TruthfulQA

Papers

Showing 11–20 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | Code | 1 |
| Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning | Code | 1 |
| Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1 |
| Integrative Decoding: Improve Factuality via Implicit Self-consistency | Code | 1 |
| RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1 |
| Instruction Tuning With Loss Over Instructions | Code | 1 |
| Tool-Augmented Reward Modeling | Code | 1 |
| DeLTa: A Decoding Strategy based on Logit Trajectory Prediction Improves Factuality and Reasoning Ability | Code | 0 |
| Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding | Code | 0 |
| NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models | Code | 0 |
Page 2 of 8

No leaderboard results yet.