SOTAVerified

Hallucination Papers

Showing 921–930 of 1816 papers

| Title | Status | Hype |
|-------|--------|------|
| Self-training Large Language Models through Knowledge Detection | Code | 0 |
| Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector | Code | 1 |
| Mitigating Large Language Model Hallucination with Faithful Finetuning | | 0 |
| Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs | Code | 0 |
| Hallucination Mitigation Prompts Long-term Video Understanding | Code | 0 |
| CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation | | 0 |
| MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts | Code | 1 |
| Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models | Code | 2 |
| mDPO: Conditional Preference Optimization for Multimodal Large Language Models | Code | 2 |
| Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals | | 0 |
Page 93 of 182
