SOTAVerified

Hallucination Papers

Showing 491–500 of 1816 papers

Title | Status | Hype
Do Language Models Know When They're Hallucinating References? | Code | 0
MedScore: Factuality Evaluation of Free-Form Medical Answers | Code | 0
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA | Code | 0
BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Code | 0
Diving Deep into Modes of Fact Hallucinations in Dialogue Systems | Code | 0
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Mitigating Hallucination in Fictional Character Role-Play | Code | 0
Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling | Code | 0
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models | Code | 0
Page 50 of 182
