SOTAVerified

Hallucination

Papers

Showing 1526–1550 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models | | 0 |
| MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Code | 0 |
| Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy Searcher | | 0 |
| Reliable Academic Conference Question Answering: A Study Based on Large Language Model | Code | 0 |
| ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks | | 0 |
| Flow Dynamics Correction for Action Recognition | | 0 |
| Towards reducing hallucination in extracting information from financial reports using Large Language Models | | 0 |
| Metric Ensembles For Hallucination Detection | | 0 |
| Assessing the Reliability of Large Language Model Knowledge | Code | 0 |
| Configuration Validation with Large Language Models | | 0 |
| GameGPT: Multi-agent Collaborative Framework for Game Development | | 0 |
| GraphextQA: A Benchmark for Evaluating Graph-Enhanced Large Language Models | Code | 0 |
| A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection | Code | 0 |
| Towards Mitigating Hallucination in Large Language Models via Self-Reflection | | 0 |
| Teaching Language Models to Hallucinate Less with Synthetic Tasks | | 0 |
| Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0 |
| The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations | | 0 |
| Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning | | 0 |
| AutoHall: Automated Hallucination Dataset Generation for Large Language Models | | 0 |
| Self-Specialization: Uncovering Latent Expertise within Large Language Models | | 0 |
| Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving | | 0 |
| Hallucination Reduction in Long Input Text Summarization | Code | 0 |
| Augmenting LLMs with Knowledge: A survey on hallucination prevention | | 0 |
| Aligning Large Multimodal Models with Factually Augmented RLHF | | 0 |
| Chain-of-Verification Reduces Hallucination in Large Language Models | Code | 0 |
