SOTAVerified

TruthfulQA

Papers

Showing 71–80 of 80 papers

| Title | Status | Hype |
| --- | --- | --- |
| RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1 |
| Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | Code | 1 |
| DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models | Code | 2 |
| Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Code | 1 |
| Semantic Consistency for Assuring Reliability of Large Language Models | — | 0 |
| Inference-Time Intervention: Eliciting Truthful Answers from a Language Model | Code | 2 |
| Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | — | 0 |
| Measuring Reliability of Large Language Models through Semantic Consistency | Code | 0 |
| Teaching language models to support answers with verified quotes | — | 0 |
| TruthfulQA: Measuring How Models Mimic Human Falsehoods | Code | 1 |

No leaderboard results yet.