SOTAVerified

TruthfulQA

Papers

Showing 21–30 of 80 papers

Title | Status | Hype
LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models | Code | 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | Code | 0
A test suite of prompt injection attacks for LLM-based machine translation | Code | 0
Steering Without Side Effects: Improving Post-Deployment Control of Language Models | Code | 0
NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models | Code | 0
SaGE: Evaluating Moral Consistency in Large Language Models | Code | 0
PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics | Code | 0
Instruction Tuning with Human Curriculum | Code | 0
CHAIR -- Classifier of Hallucination as Improver | Code | 0
Measuring Reliability of Large Language Models through Semantic Consistency | Code | 0
Page 3 of 8

No leaderboard results yet.