SOTAVerified: Hallucination Papers

Showing 1651–1675 of 1816 papers

Title | Status | Hype
Lifelong Neural Topic Learning in Contextualized Autoregressive Topic Models of Language via Informative Transfers | | 0
Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | | 0
LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | | 0
LLM Agents for Education: Advances and Applications | | 0
LLM-Align: Utilizing Large Language Models for Entity Alignment in Knowledge Graphs | | 0
INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | | 0
LLM Hallucination Reasoning with Zero-shot Knowledge Test | | 0
LLM-Powered Agents for Navigating Venice's Historical Cadastre | | 0
LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG | | 0
LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | | 0
LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought | | 0
LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | | 0
LLMs Prompted for Graphs: Hallucinations and Generative Capabilities | | 0
LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | | 0
LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | | 0
LLMs Will Always Hallucinate, and We Need to Live With This | | 0
LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | | 0
LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models | | 0
Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | | 0
Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs | | 0
Logical Consistency of Large Language Models in Fact-checking | | 0
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | | 0
Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | | 0
Look Within, Why LLMs Hallucinate: A Causal Perspective | | 0
Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | | 0
Page 67 of 73
