SOTAVerified

Hallucination Papers

Showing 301–310 of 1816 papers

Title | Status | Hype
A Survey of Hallucination in Large Foundation Models | Code | 1
Label Hallucination for Few-Shot Classification | Code | 1
AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Code | 1
CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools | Code | 1
Accuracy and Political Bias of News Source Credibility Ratings by Large Language Models | Code | 1
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy | Code | 1
Detecting Hallucinated Content in Conditional Neural Sequence Generation | Code | 1
Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Page 31 of 182

No leaderboard results yet.