SOTAVerified

Hallucination Papers

Showing 726-750 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection | | 0 |
| Parse Trees Guided LLM Prompt Compression | Code | 0 |
| A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? | | 0 |
| Enhancing Scientific Reproducibility Through Automated BioCompute Object Creation Using Retrieval-Augmented Generation from Publications | | 0 |
| Effectively Enhancing Vision Language Large Models by Prompt Augmentation and Caption Utilization | Code | 0 |
| Contrastive Learning for Knowledge-Based Question Generation in Large Language Models | | 0 |
| FAIR GPT: A virtual consultant for research data management in ChatGPT | Code | 1 |
| A Multiple-Fill-in-the-Blank Exam Approach for Enhancing Zero-Resource Hallucination Detection in Large Language Models | | 0 |
| FIHA: Autonomous Hallucination Evaluation in Vision-Language Models with Davidson Scene Graphs | | 0 |
| JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images | Code | 0 |
| Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation | | 0 |
| LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | | 0 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| Depth-based Privileged Information for Boosting 3D Human Pose Estimation on RGB | | 0 |
| Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling | | 0 |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Code | 0 |
| THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0 |
| Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection | | 0 |
| SFR-RAG: Towards Contextually Faithful LLMs | | 0 |
| Trustworthiness in Retrieval-Augmented Generation Systems: A Survey | Code | 1 |
| HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making | Code | 0 |
| Confidence Estimation for LLM-Based Dialogue State Tracking | Code | 0 |
| Explore the Hallucination on Low-level Perception for MLLMs | | 0 |
| ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models | | 0 |
| Winning Solution For Meta KDD Cup' 24 | | 0 |
Page 30 of 73